# The Decimation Scheme for Symmetric Matrix Factorization

Francesco Camilli, Marc Mézard

Published: 2023-07-31. Link: http://arxiv.org/abs/2307.16564v1
###### Abstract
Matrix factorization is an inference problem that has acquired importance due to its vast range of applications that go from dictionary learning to recommendation systems and machine learning with deep networks. The study of its fundamental statistical limits represents a true challenge, and despite a decade-long history of efforts in the community, there is still no closed formula able to describe its optimal performances in the case where the rank of the matrix scales linearly with its size. In the present paper, we study this extensive rank problem, extending the alternative 'decimation' procedure that we recently introduced, and carry out a thorough study of its performance. Decimation aims at recovering one column/line of the factors at a time, by mapping the problem into a sequence of neural network models of associative memory at a tunable temperature. Though being sub-optimal, decimation has the advantage of being theoretically analyzable. We extend its scope and analysis to two families of matrices. For a large class of compactly supported priors, we show that the replica symmetric free entropy of the neural network models takes a universal form in the low temperature limit. For sparse Ising prior, we show that the storage capacity of the neural network models diverges as sparsity in the patterns increases, and we introduce a simple algorithm based on a ground state search that implements decimation and performs matrix factorization, with no need of an informative initialization.
###### Contents
* 1 Introduction
* 2 Decimation
* 2.1 An assumption on retrieval accuracy
* 3 Decimation free entropies
* 3.1 Fixed point equations
* 3.2 Remarks
* 4 Low temperature limits
* 4.1 Sparse prior
* 4.2 Continuous priors
* 5 Phase diagrams for the first decimation step
* 6 Numerical tests
* 6.1 Testing the saddle point equations with AMP
* 6.2 Expected decimation performance
* 6.3 A ground state oracle for sparse Ising priors
* 6.4 Reversed decimation
* 7 Related works
* 7.1 Unlearning and dreaming
* 7.2 Sub-linear rank
* 7.3 Channel universality properties
* 8 Conclusion and outlooks
## 1 Introduction
The factorization of a matrix into two, or more, factors represents a building block for many machine learning and inference problems. A well-known instance of it is _dictionary learning_ [1, 2, 3, 4], which aims at representing a matrix as a product of two factor matrices, where the first, called _dictionary_, is very sparse, and the second, called _feature matrix_, has columns that form an over-complete basis of a Euclidean space. As a result, each vector stored in the initial matrix is represented as a linear combination of a few elements of the feature matrix. Matrix factorization is also at the basis of recommendation systems [5], and in general it proves to be very effective whenever we want to reconstruct missing elements in a matrix of data, be it an image, a correlation matrix, or a matrix of preferences [6, 7, 8]. Other applications of matrix factorization include, but are not limited to, sparse principal component analysis [9], blind source separation [10], matrix completion [11, 12], and robust principal component analysis [13].
In more specific terms, matrix factorization is the problem of reconstructing the two factors \(\mathbf{A}\), \(\mathbf{B}\) of a matrix \(\mathbf{AB}\) from a potentially noisy observation of the latter, say \(\mathbf{Y}\). One would like to answer two main questions: _(i)_ in what regimes of sizes of \(\mathbf{A}\), \(\mathbf{B}\) and noise is it possible to reconstruct the two factors (up to a permutation of the columns of \(\mathbf{A}\) and of the rows of \(\mathbf{B}\))? _(ii)_ Do there exist efficient algorithms that achieve a good performance?
In the present paper we focus on symmetric matrix factorization in which the two factors to retrieve are identical. Consider an \(N\times P\) matrix \((\xi_{i}^{\mu})_{i\leq N}^{\mu\leq P}=\boldsymbol{\xi}\in\mathbb{R}^{N\times P}\) whose elements are independently and identically distributed according to a given prior probability \(P_{\xi}\), that we suppose to be symmetric, with unit variance and compact support: \(\mathbb{E}\xi=0\), \(\mathbb{E}\xi^{2}=1\), \(|\xi|\leq C\) for some \(C>0\). Secondly, let \((Z_{ij})_{i,j\leq N}=(Z_{ji})_{i,j\leq N}=\mathbf{Z}\) be a Wigner matrix, that is \(Z_{ij}=Z_{ji}\stackrel{{ iid}}{{\sim}}\mathcal{N}(0,1+\delta_{ij})\). Symmetric matrix factorization can thus be formulated as an inference problem: a Statistician needs to recover \(\boldsymbol{\xi}\) given the noisy observations
\[\mathbf{Y}=\frac{\boldsymbol{\xi}\boldsymbol{\xi}^{\intercal}}{\sqrt{N}}+ \sqrt{\Delta}\mathbf{Z}\,. \tag{1}\]
The strength of the noise \(\mathbf{Z}\) w.r.t. that of the signal is tuned by \(\Delta\geq 0\). In the following we will need to single out the \(P\) column vectors inside \(\boldsymbol{\xi}\), denoted by \(\boldsymbol{\xi}^{\mu}\), and we shall refer to them as _patterns_. Although the model is presented here in a stylized way, i.e. with the two factors being identical and with a completely factorized prior, we believe this setting represents a fundamental first step in the understanding of the general problem. Concerning in particular the assumption of a factorized prior, this is often used also in concrete situations. Indeed, for instance, the \(L^{2}\) norm regularizers appearing in the empirical risk used to train neural networks are inherited from a zero temperature limit of a Statistical Mechanics problem that has the empirical risk as a Hamiltonian with a factorized prior on the weights of the network, as clarified by [14].
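To make the setting concrete, here is a minimal sketch (not taken from the paper's code) of how observations distributed as in (1) can be generated, using a Rademacher prior as one admissible compactly supported choice; the system size, aspect ratio and noise level are illustrative values only.

```python
import numpy as np

# Minimal sketch: synthetic observations Y = xi xi^T / sqrt(N) + sqrt(Delta) Z, Eq. (1).
rng = np.random.default_rng(0)

N, alpha, Delta = 1000, 0.05, 0.1      # illustrative values
P = int(alpha * N)

xi = rng.choice([-1.0, 1.0], size=(N, P))        # i.i.d. symmetric, unit-variance prior
Z = rng.normal(size=(N, N))
Z = (Z + Z.T) / np.sqrt(2)                       # Wigner noise: off-diagonal variance 1, diagonal 2

Y = xi @ xi.T / np.sqrt(N) + np.sqrt(Delta) * Z  # noisy observation, Eq. (1)
```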
A very popular setting to tackle an inference problem is the Bayes-optimal one, in which the Statistician tasked with the reconstruction of \(\boldsymbol{\xi}\) knows the generating process of the observations \(\mathbf{Y}\), namely they know that \(\mathbf{Z}\) is Gaussian, they know \(N,P,\Delta\) and the probability distribution of factors \(P_{\xi}\). This Bayes-optimal setting is of utmost relevance as it provides the information-theoretic optimal performance. Indeed, the posterior mean
estimator \(\mathbb{E}[\mathbf{XX^{\intercal}}|\mathbf{Y}]\), where
\[dP(\boldsymbol{\xi}=\mathbf{X}\mid\mathbf{Y})=\frac{1}{\mathcal{Z}(\mathbf{Y})} \prod_{i\leq N,\mu\leq P}dP_{\xi}(X_{i}^{\mu})\exp\left[\frac{1}{2\sqrt{N} \Delta}\mathrm{Tr}\mathbf{Y}\mathbf{XX^{\intercal}}-\frac{1}{4\Delta N} \mathrm{Tr}(\mathbf{XX^{\intercal}})^{2}\right], \tag{2}\]
is the one that minimizes the mean square error loss on the reconstruction of \(\boldsymbol{\xi}\boldsymbol{\xi}^{\intercal}\). The normalization of the distribution \(\mathcal{Z}(\mathbf{Y})\) is called _partition function_ and the associated _free entropy_ is defined as
\[\Phi_{N,P}=\frac{1}{NP}\mathbb{E}\log\mathcal{Z}(\mathbf{Y})\,. \tag{3}\]
The free entropy has a central role. In fact, from the thermodynamic point of view, it can be used to identify which macrostates dominate the probability measure and are thus selected at thermodynamic equilibrium. These macrostates are usually identified by the values of some global order parameters, such as \(\mathrm{Tr}\mathbf{XX^{\intercal}}\boldsymbol{\xi}\boldsymbol{\xi}^{\intercal}/N^{2}\), which measures the average alignment of a sample from the posterior with the ground truth \(\boldsymbol{\xi}\) we want to estimate. On the other hand, the free entropy is in close relationship with the _mutual information_ \(I(\boldsymbol{\xi};\mathbf{Y})\) between the data and the ground truth. This information theoretic quantity measures the amount of residual information about the ground truth that is still available in the data after they have been corrupted by the noise.
If the rank \(P\) is finite, the model (1) is typically referred to as the _spiked Wigner model_, first introduced as a model for Principal Component Analysis (PCA) [15]. The spectral properties of low rank perturbations of high-rank matrices (such as the Wigner matrix \(\mathbf{Z}\)) are by now largely understood in random matrix theory, and they can give rise to the celebrated BBP transition [16], further studied and extended in [17, 18, 19, 20, 21, 22, 23, 24]. Thanks to the effort of a wide interdisciplinary community, we also have control over the asymptotic behaviour of the posterior measure (2) and an exact formula for the free entropy associated to the low-rank problem [25, 26, 27, 28, 29, 30, 31, 32] (recently extended to rotational invariant noise [33]), which yields the Bayes-optimal limit of the noise allowing the reconstruction of the low-rank spike. Finally, a particular class of algorithms, known as _Approximate Message Passing_ (AMP) [34, 35, 36, 37, 38], is able to perform factorization up to this Bayes-optimal limit.
Here we are interested in the extensive rank regime where \(P,N\to\infty\) with fixed ratio \(P/N=\alpha\). In the hypothesis of a rotationally invariant noise \(\mathbf{Z}\), the spectral properties of \(\mathbf{Y}\) are governed by the free convolution [39] of the spectral densities of \(\mathbf{Z}\) and \(\boldsymbol{\xi}\boldsymbol{\xi}^{\intercal}\). On the information theoretic side, instead, there is still no accepted closed formula that expresses \(\Phi_{N,P}\). Hence, the information theoretic limits are currently out of reach, and the Minimum Mean Square Error (MMSE) for this estimation problem is not known. Among the past attempts, we must mention the line of works [40, 41, 42, 43, 44], whose proposed solution, as pointed out in [45, 46], provides only an approximation of the correct limit. In fact, the authors of [46] built a perturbative approach that highlights the presence of relevant correlations neglected in the previous works. A further attempt to produce a closed replica formula was put forward in [47], but, like [40], it involves uncontrolled approximations.
The main obstacle in the computation of the asymptotics of (3) is the fact that it is a matrix model, and, in particular, the term \(\mathrm{Tr}(\mathbf{XX^{\intercal}})^{2}\) couples both the "rank, or pattern, indices" \(\mu\) and the "dimension, or particle site, indices" \(i\). We will use here a different approach that we introduced and studied recently [48] in the simplest case where the factors' elements \(\xi_{i}^{\mu}\) are independent binary variables. Instead of the Bayes-optimal setting we use a simpler procedure, that we call _decimation_. At the cost of giving up on Bayes-optimality, decimation sidesteps this problem: it provides an iterative scheme that estimates \(\boldsymbol{\xi}\) one pattern at a time, through a sequential estimation of its columns, and, more importantly, its asymptotic performance turns out to be completely analyzable. In the case of binary patterns we could thus show that matrix factorization is possible in a part of the phase diagram where \(\alpha\) and \(\Delta\) are small enough. Here we generalize this approach to arbitrary distributions of the patterns' elements.
**Organization of the paper and main contributions.** In Section 2 we define the decimation scheme, laying the groundwork for the replica computation of Section 3. In Section 4, we compute the low temperature limits for two classes of priors: sparse Ising, and a generic absolutely continuous, symmetric prior with bounded support.
Surprisingly, the free entropies of the neural network models arising from decimation, evaluated at the equilibrium values of the order parameters, take a universal form, although in general not the same numerical value.
As we shall argue in the following, the starting point of the decimation procedure, i.e. the initial value of the parameters \(\alpha\) and \(\Delta\), is of crucial importance for its success. Therefore, in Section 5 we analyze the phase diagrams for the initial step of decimation. For the sparse Ising prior, we show that as sparsity increases, the storage capacity of the sequential neural network models of decimation diverges. For the class of continuous priors we highlight the presence of a thermodynamic transition, where there is a non-trivial overlap between a sample from the Gibbs measure and the sought pattern, and a performance transition, where Gibbs sampling can outperform the null-estimator.
In Section 6 we provide numerical evidence in support of the replica theory. We introduce the Decimated AMP algorithm (DAMP), in order to verify the predictions of the replica theory, and we relate the replica symmetric order parameters to the mean square error on the reconstruction of the patterns, as well as to the matrix mean square error for matrix denoising, showing that decimation can outperform Rotational Invariant Estimators (RIEs) [49, 50, 51] in this task. Furthermore, this Section contains the pseudo-code of a ground state oracle, an algorithm that is indeed able to find all the patterns one by one, with no need of informative initialization, contrary to DAMP.
Section 7 contains a comparison with recent relevant works that are related to the present one. Finally, Section 8 gathers the conclusions and future perspectives.
## 2 Decimation
Let us take a closer look at the probability distribution (2). For the purpose of the theoretical analysis we can replace \(Y_{ij}\) with the r.h.s. of (1), getting
\[dP(\boldsymbol{\xi}=\mathbf{X}\mid\mathbf{Y})=\frac{1}{\mathcal{Z}(\mathbf{Y})}\prod_{i\leq N,\mu\leq P}\left[dP_{\xi}(X_{i}^{\mu})\right]\mathrm{e}^{-\beta\left[\sum_{\mu}(E_{1}(\mathbf{X}^{\mu})+E_{2}(\mathbf{X}^{\mu})+E_{3}(\mathbf{X}^{\mu}))+\sum_{\mu<\nu}E_{4}(\mathbf{X}^{\mu},\mathbf{X}^{\nu})\right]} \tag{4}\]
where \(\beta=\frac{1}{\Delta}\), \(\mathbf{X}^{\mu}=(X_{i}^{\mu})_{i\leq N}\) and
\[E_{1}(\mathbf{x}) =-\sum_{i,j=1}^{N}J_{ij}x_{i}x_{j}\ \ ;\ \ J_{ij}=\frac{1}{N}\sum_{\nu}\xi_{i}^{\nu}\xi_{j}^{\nu} \tag{5}\] \[E_{2}(\mathbf{x}) =-\sum_{i,j=1}^{N}\frac{\sqrt{\Delta}}{2\sqrt{N}}Z_{ij}x_{i}x_{j}\] (6) \[E_{3}(\mathbf{x}) =\frac{1}{4N}\Big{[}\sum_{i}x_{i}^{2}\Big{]}^{2}\] (7) \[E_{4}(\mathbf{x},\mathbf{x}^{\prime}) =\frac{1}{2N}\Big{[}\sum_{i}x_{i}x_{i}^{\prime}\Big{]}^{2}\,. \tag{8}\]
Here one should be careful not to confuse \(\xi_{i}^{\mu}\) which is the 'ground-truth' matrix from which the signal \(\mathbf{Y}\) was generated, and \(X_{i}^{\mu}\) which is a random variable distributed according to the measure \(dP(\boldsymbol{\xi}=\mathbf{X}\mid\mathbf{Y})\), so that the expectation value of \(X_{i}^{\mu}\) gives the best possible approximation to \(\xi_{i}^{\mu}\).
Looking at the above decomposition, we notice that, if we could drop the term \(E_{4}(\mathbf{X}^{\mu},\mathbf{X}^{\nu})\), we would have a system of \(P\) decoupled problems, one for each value of \(\mu\), described by an energy \(E_{1}(\mathbf{X}^{\mu})+E_{2}(\mathbf{X}^{\mu})+E_{3}(\mathbf{X}^{\mu})\). The energy \(E_{1}\) is that of a spin glass with \(N\) variables \(x_{i}\), each with an a-priori measure \(P_{\xi}(x_{i})\), interacting by pairs through a matrix of couplings \(J_{ij}\) which has a Hebbian form determined by the ground-truth patterns \(\boldsymbol{\xi}\). The energy \(E_{2}\) is a random spin glass term created by measurement noise. The energy \(E_{3}\) is a global penalty that ensures that the norm of \(\mathbf{X}\) does not get too large; one can also incorporate it into the local measure using a Lagrange multiplier. Altogether, the system described by \(E_{1}+E_{2}+E_{3}\) is a spin glass Hamiltonian with an
interaction which is a noisy version of a Hebbian interaction. This is typical of problems that have been studied as neural networks for associative memory, following the seminal work by Hopfield [52]. The present one is a generalization of the Hopfield model, where the stored pattern components \(\xi_{i}^{\mu}\) are no longer binary but have a more general distribution, which can be continuous. Based on our knowledge of associative memories, one can expect that, when the noise strength \(\Delta\) and the number of patterns per variable \(\alpha=P/N\) are small enough, there can exist a 'retrieval' phase, in which the configurations \({\bf x}\) that minimize \(E_{1}({\bf x})+E_{2}({\bf x})+E_{3}({\bf x})\) are close to the stored patterns \(\boldsymbol{\xi}^{\mu}\). This is certainly the case for binary patterns as shown in [48]. Assuming that such a retrieval phase exists, one can understand the use of the fourth energy term, \(E_{4}\). In fact one can interpret (2) as follows: we start from \(P\) replicas of an associative memory, each with energy \(E_{1}({\bf X}^{\mu})+E_{2}({\bf X}^{\mu})+E_{3}({\bf X}^{\mu})\). These copies interact by pairs through the term \(E_{4}({\bf X}^{\mu},{\bf X}^{\nu})\), which is repulsive. If one works in the retrieval phase of the associative memory, then at low temperature the ground state will be found when each replica \({\bf X}^{\mu}\) is close to one of the patterns \(\boldsymbol{\xi}^{\pi(\mu)}\). As there are \(P\) retrieval states and \(P\) replicas, all the \(\pi(\mu)\) must be distinct from one another, and therefore \(\pi\) is a permutation. In such a scenario, one would have found a phase where the factors can be reconstructed.
Decimation is based precisely on this idea. It works as a sequence of \(P\) estimations, each one studying a probability distribution which is that of a neural network model of associative memory. More precisely, one looks for one column \(\mathbf{\xi}^{\mu}\) of \(\xi\) at a time.
To fix ideas, let us start by discussing the search of a first pattern, using a Gibbs measure in the form
\[dP({\bf x}\mid{\bf Y})=\frac{dP_{\xi}({\bf x})}{{\cal Z}_{0}({\bf Y})}\exp \left(\beta\Big{[}\frac{1}{2N}\sum_{\mu=1}^{P}\Big{(}\sum_{i=1}^{N}\xi_{i}^{ \mu}x_{i}\Big{)}^{2}+\frac{\sqrt{\Delta}}{2\sqrt{N}}\sum_{i,j=1}^{N}Z_{ij}x_{ i}x_{j}-\frac{\|{\bf x}\|^{4}}{4N}\Big{]}\right). \tag{9}\]
Here we have introduced a factor \(\beta\) that plays the role of an inverse absolute temperature for this Boltzmann-Gibbs measure. We could use \(\beta=1/\Delta\) as in the Bayes-optimal approach, but as we shall see taking the large \(\beta\) limit can also be a good choice.
When using this approach with variables \(x_{i}\) that are not constrained on the hypercube \(\{-1,1\}^{N}\) or in general on a sphere, it is also useful to introduce another term in the exponential that favours \({\bf x}\)-configurations with square norm equal to \(N\), as we know that the original signal is centered and with unit variance. Hence, the Boltzmann-Gibbs measure that we use to find a first pattern is actually \(dP_{\xi}({\bf x})e^{-\beta E({\bf x}\mid{\bf Y})}/{\cal Z}_{0}\) with an energy function
\[-E({\bf x}|{\bf Y})=\frac{\sqrt{\Delta}}{2\sqrt{N}}\sum_{i,j=1}^{N}Z_{ij}x_{i} x_{j}+\frac{N}{2}\sum_{\mu=1}^{P}(m^{\mu}({\bf x}))^{2}-\frac{\|{\bf x}\|^{4}}{4N}- \frac{\lambda}{4N}(\|{\bf x}\|^{2}-N)^{2} \tag{10}\]
where we have introduced the _Mattis magnetization_
\[m^{\mu}({\bf x})=\frac{1}{N}\sum_{i=1}^{N}\xi_{i}^{\mu}x_{i}\,. \tag{11}\]
\(\lambda\) is a parameter penalizing (if positive) configurations with \(\|{\bf x}\|^{2}\neq N\), as mentioned before. If \(\lambda\to+\infty\) then the spins are constrained on a sphere. Let us now assume that we are able to sample a configuration \(\mathbf{\eta}^{P}\) from the Boltzmann-Gibbs measure with energy (10) that, without loss of generality (we shall relabel the patterns in such a way that the permutation \(\pi\) is the identity), we take as an estimate of \(\mathbf{\xi}^{P}\). How do we find the estimate of the other \(\mathbf{\xi}^{\mu}\), \(\mu<P\)?
If \(\mathbf{\eta}^{P}\) is a good estimate of \(\mathbf{\xi}^{P}\), the corresponding rank one contribution \(\mathbf{\eta}^{P}\mathbf{\eta}^{P\intercal}\) should be close (in Frobenius norm) to \(\mathbf{\xi}^{P}\mathbf{\xi}^{P\intercal}\). Then, if we subtract it from the Hebbian coupling \(E_{1}(\mathbf{X})\), we can hope that the new associative memory problem will now have only \(P-1\) ground states, each close to one of the patterns \(\mathbf{\xi}^{\mu}\), \(\mu=1,...,P-1\). This new associative memory problem therefore has \(P-1\) stored patterns instead of \(P\), so that the well known phenomenon of _pattern interference_ [53, 54], which limits the storage capacity, will be reduced.
Based on this intuition, we define the decimation procedure as follows: after having found the first estimate of a pattern, we modify the coupling matrix as
\[\mathbf{Y}_{1}=\mathbf{Y}-\frac{\boldsymbol{\eta}^{P}\boldsymbol{\eta}^{P\intercal }}{\sqrt{N}}\,, \tag{12}\]
which gives a modified energy function
\[-E(\mathbf{x}|\mathbf{Y}_{1})=\frac{\sqrt{\Delta}}{2\sqrt{N}}\sum_{i,j=1}^{N}Z _{ij}x_{i}x_{j}+\frac{N}{2}\sum_{\mu=1}^{P}(m^{\mu}(\mathbf{x}))^{2}-\frac{N}{ 2}(p^{P}(\mathbf{x}))^{2}-\frac{\|\mathbf{x}\|^{4}}{4N}-\frac{\lambda}{4N}(\| \mathbf{x}\|^{2}-N)^{2} \tag{13}\]
where, here and in the following
\[p^{\mu}(\mathbf{x})=\frac{1}{N}\sum_{i=1}^{N}\eta_{i}^{\mu}x_{i}\,. \tag{14}\]
The same reasoning as above applies to this second step.
In general, if the first \(R\) (\(=0,1,2,\ldots,P-1\)) patterns have already been estimated, decimation produces the estimate of the \((R+1)\)-th pattern by sampling from the Boltzmann-Gibbs measure
\[d\mu_{R}(\mathbf{x})=\frac{dP_{\xi}(\mathbf{x})}{\mathcal{Z}_{R}}\exp\big{(}- \beta E(\mathbf{x}|\mathbf{Y}_{R})\big{)} \tag{15}\]
where
\[\mathbf{Y}_{R}=\mathbf{Y}-\sum_{\mu=P-R+1}^{P}\frac{\boldsymbol{\eta}^{\mu} \boldsymbol{\eta}^{\mu\intercal}}{\sqrt{N}} \tag{16}\]
and
\[-E(\mathbf{x}|\mathbf{Y}_{R})=\frac{\sqrt{\Delta}}{2\sqrt{N}}\sum_{i,j=1}^{N}Z _{ij}x_{i}x_{j}+\frac{N}{2}\sum_{\mu=1}^{P}(m^{\mu}(\mathbf{x}))^{2}-\frac{N}{ 2}\sum_{\mu=P-R+1}^{P}(p^{\mu}(\mathbf{x}))^{2}-\frac{\|\mathbf{x}\|^{4}}{4N} -\frac{\lambda}{4N}(\|\mathbf{x}\|^{2}-N)^{2}\,. \tag{17}\]
The energy function above has some desirable features. First, the summation of the squared Mattis magnetizations attracts the mass of the distribution towards those configurations that are most aligned with one of the columns of \(\boldsymbol{\xi}\), which are our goal. Secondly, if the \(R\) estimates \(\boldsymbol{\eta}^{\mu}\), with \(\mu=P-R+1,\ldots,P\), are reliable, in a sense we shall specify later, the summation containing the \((p^{\mu}(\mathbf{x}))^{2}\) terms repels the mass of the probability distribution from those configurations that are similar to previously estimated patterns, preventing the sampling from finding a pattern more than once.
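The following sketch illustrates the outer loop of the scheme defined by (12)-(17): each step calls a sampler or ground-state search for the measure (15), left abstract here as the placeholder `estimate_pattern` (not an object defined in the paper), and subtracts the corresponding rank one contribution from the couplings.

```python
import numpy as np

def decimate(Y, P, estimate_pattern):
    """Outer decimation loop, Eqs. (12)-(17).

    `estimate_pattern(Y_R)` is a placeholder for any sampler or ground-state
    oracle of the associative-memory measure (15); it must return a length-N
    estimate eta of one pattern, which is then removed from the couplings.
    """
    N = Y.shape[0]
    Y_R = Y.copy()
    estimates = []
    for _ in range(P):
        eta = estimate_pattern(Y_R)                 # estimate of one column of xi
        Y_R -= np.outer(eta, eta) / np.sqrt(N)      # Eqs. (12)/(16): subtract rank-one term
        estimates.append(eta)
    return np.column_stack(estimates)               # estimate of xi, up to permutations and signs
```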
We notice at this point that there are three noise sources in this procedure:
* (a) the original Wigner matrix \(\mathbf{Z}\);
* (b) the pattern interference, whose strength, as discussed above, increases with the ratio \(\alpha=P/N\);
* (c) the imperfect retrieval of patterns in the previous steps of decimation.

Source (c) is perhaps the least obvious one. At each step, we subtract a rank one contribution \(\boldsymbol{\eta}^{\mu}\boldsymbol{\eta}^{\mu\intercal}/\sqrt{N}\) that is not exactly \(\boldsymbol{\xi}^{\mu}\boldsymbol{\xi}^{\mu\intercal}/\sqrt{N}\). This introduces an additional form of noise that depends on the quality of the previous reconstructions.
In order to monitor the strength of this third noise, we introduce the _retrieval accuracy_ of a pattern \(\boldsymbol{\xi}^{\mu}\):
\[m^{\mu}=\frac{\boldsymbol{\xi}^{\mu}\cdot\boldsymbol{\eta}^{\mu}}{N}\,,\quad \mu=P-R+1,\ldots,P\,. \tag{18}\]
These quantities turn out to be order parameters of the previous decimation steps. Indeed, they are nothing but the Mattis magnetizations between typical samples from (15) and the corresponding patterns. Hence, each decimation step has its own free entropy, and we will determine the new retrieval accuracy via consistency equations arising from its maximization, namely we look for those macrostates that dominate the probability measure in the \(N\to\infty\) limit. In addition to \(m^{\mu}\), other order parameters will appear. In particular, there will be one, denoted by \(r\), tuning the amplitude of the overall noise, that, according to the considerations above, must comprise the three contributions coming from sources (a), (b) and (c).
### An assumption on retrieval accuracy
In order to carry out the computations we need some information on the statistics of the retrieved configurations \(\boldsymbol{\eta}^{\mu}\). We assume that an "oracle" algorithm will produce \(\boldsymbol{\eta}^{\mu}\) with an asymptotic measure given by
\[\eta^{\mu}_{i}\,\sim\,\langle\cdot\rangle_{\xi^{\mu}_{i},Z}=\frac{\int dP_{ \xi}(x)e^{(Z\sqrt{r}+\beta m^{\mu}\xi^{\mu}_{i})x-\frac{r+u}{2}x^{2}}(\cdot)}{ \int dP_{\xi}(x)e^{(Z\sqrt{r}+\beta m^{\mu}\xi^{\mu}_{i})x-\frac{r+u}{2}x^{2}} }\,,\quad\xi^{\mu}_{i}\sim P_{\xi}\,,Z\sim\mathcal{N}(0,1)\text{ independent of other noises}\,, \tag{19}\]
where \(m^{\mu}\), _i.e._ the retrieval accuracy for \(\boldsymbol{\eta}^{\mu}\), and \(\,r,\,u\) must be determined self-consistently. (19) amounts to requiring that, asymptotically, the sites are decoupled and they feel an effective external random magnetic field, that is Gaussian with a mean shifted by the ground truth \(\xi^{\mu}_{i}\). Define for later convenience the quantities
\[\mathbb{E}_{\boldsymbol{\eta}|\boldsymbol{\xi}}[\eta^{\mu}_{i}]=m^{\mu}_{i}\,, \quad\mathbb{E}_{\boldsymbol{\eta}|\boldsymbol{\xi}}[(\eta^{\mu}_{i})^{2}]=v^{ \mu}_{i}\,. \tag{20}\]
Then (19) has the following implications:
\[\mathbb{E}_{\boldsymbol{\xi}}[\eta^{\mu}_{i}]=\mathbb{E}_{\boldsymbol{\xi}} \mathbb{E}_{\boldsymbol{\eta}|\boldsymbol{\xi}}[\eta^{\mu}_{i}]=0\,,\quad \mathbb{E}_{\boldsymbol{\xi}}[\xi^{\mu}_{i}m^{\nu}_{i}]=m^{\mu}\delta_{\mu, \nu}\,,\quad\mathbb{E}_{\boldsymbol{\xi}}[v^{\mu}_{i}]=v^{\mu} \tag{21}\]
that will be self-consistent with the fixed point equations for each decimation step. We shall see from the replica computation that this assumption holds inductively: if it is true at the \(R\)-th decimation step, then we are able to decouple the site indices also for the step \(R+1\), and the resulting spin-glass model has an effective random magnetic field of the same form.
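As a concrete illustration of the assumption (19)-(21), the following sketch draws configurations \(\eta^{\mu}_{i}\) from the single-site measure (19) for the sparse Ising prior of Section 4.1 and evaluates the resulting statistics empirically. The parameters \((m^{\mu},r,u,\beta,\rho)\) are free inputs here; only at a fixed point of the theory do the measured statistics reproduce the input \(m^{\mu}\).

```python
import numpy as np

def oracle_samples(m_mu, r, u, beta, rho, N=100_000, seed=0):
    """Draws eta_i from the assumed single-site measure (19) for the sparse
    Ising prior P_xi = (1-rho) delta_0 + rho/2 (delta_{+-1/sqrt(rho)}) and
    returns the empirical statistics appearing in (20)-(21). Sketch only."""
    rng = np.random.default_rng(seed)
    xi = rng.choice([0.0, 1.0, -1.0], p=[1 - rho, rho / 2, rho / 2], size=N) / np.sqrt(rho)
    Z = rng.normal(size=N)
    h = Z * np.sqrt(r) + beta * m_mu * xi                 # effective field in (19)
    w = np.exp(-(r + u) / (2 * rho))                      # weight e^{-(r+u)x^2/2} at x = +-1/sqrt(rho)
    probs = np.stack([(1 - rho) * np.ones(N),             # x = 0
                      0.5 * rho * np.exp(+h / np.sqrt(rho)) * w,   # x = +1/sqrt(rho)
                      0.5 * rho * np.exp(-h / np.sqrt(rho)) * w],  # x = -1/sqrt(rho)
                     axis=1)
    probs /= probs.sum(axis=1, keepdims=True)
    cdf = np.cumsum(probs, axis=1)
    choice = (rng.random((N, 1)) < cdf).argmax(axis=1)    # sample each site independently
    eta = np.array([0.0, 1.0, -1.0])[choice] / np.sqrt(rho)
    # statistics implied by the chosen (m_mu, r, u): E[eta], E[xi eta], E[eta^2], cf. (21)
    return eta.mean(), (xi * eta).mean(), (eta ** 2).mean()
```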
## 3 Decimation free entropies
In this section we compute the large \(N\) limit of the free entropy
\[\Phi=\lim_{N\to\infty}\frac{1}{N}\mathbb{E}\log\int dP_{\xi}(\mathbf{x})\exp \left[-\beta E(\mathbf{x}|\mathbf{Y}_{R})\right]\,, \tag{22}\]
where \(\mathbb{E}\) is taken w.r.t. all the disorder: \(\mathbf{Z},\boldsymbol{\xi},\boldsymbol{\eta}\), and recall that \(R\) is the number of patterns that were already estimated. This is done using the _replica method_[55]. We thus introduce
\[\mathbb{E}\mathcal{Z}^{n}_{N}:=\mathbb{E}_{\mathbf{Z}}\mathbb{E}_{\boldsymbol{ \xi},\boldsymbol{\eta}}\int\prod_{a=1}^{n}dP_{\xi}(\mathbf{x}_{a})\exp\left[- \beta\sum_{a=1}^{n}E(\mathbf{x}_{a}|\mathbf{Y}_{\mathbf{R}})\right]\,. \tag{23}\]
We decompose this computation and start with the first noise terms in (17), and the related \(\mathbb{E}_{\mathbf{Z}}\) average
\[\mathbb{E}_{\mathbf{Z}}\exp\left(\frac{\beta\sqrt{\Delta}}{2\sqrt{N}}\sum_{i,j=1}^{N}Z_{ij}\sum_{a=1}^{n}x_{a,i}x_{a,j}\right)=\exp\left(\frac{\beta^{2}\Delta}{4N}\sum_{i,j=1}^{N}\sum_{a,b=1}^{n}x_{a,i}x_{a,j}x_{b,i}x_{b,j}\right)=\\ =\exp\left(\frac{N\beta^{2}\Delta}{4}\sum_{a\neq b}^{n}Q^{2}(\mathbf{x}_{a},\mathbf{x}_{b})+\frac{\beta^{2}\Delta}{4N}\sum_{a=1}^{n}\|\mathbf{x}_{a}\|^{4}\right)\,. \tag{24}\]
where \(Q({\bf x},{\bf x}^{\prime})=(1/N)\sum_{i}x_{i}x_{i}^{\prime}\). For future convenience, we introduce the "decimation time" \(t=R/P\), i.e. the fraction of patterns already estimated. Now we take care of the penalizing \(p\)-terms in (17). After replicating, their contribution to the partition function is
\[A:=\prod_{\mu=P(1-t)+1}^{P}\prod_{a=1}^{n}e^{-\frac{N\beta}{2}(p^{\mu}({\bf x}_{a}))^{2}}=\prod_{\mu=P(1-t)+1}^{P}\prod_{a=1}^{n}\int\frac{ds_{a}^{\mu}}{\sqrt{2\pi}}e^{-\frac{(s_{a}^{\mu})^{2}}{2}+i\sqrt{\frac{\beta}{N}}s_{a}^{\mu}\sum_{j=1}^{N}\eta_{j}^{\mu}x_{a,j}}\,. \tag{25}\]
Notice that, thanks to the introduction of the auxiliary Gaussian variables \((s_{a}^{\mu})_{a\leq n,P(1-t)<\mu\leq P}\), the exponential is now decoupled over the particle indices \(j\). Consider then the expectation of \(A\) w.r.t. \(\eta\), given \(\xi\) with the assumptions (21):
\[\mathbb{E}_{\boldsymbol{\eta}|\boldsymbol{\xi}}[A]=\prod_{\mu=P(1-t)+1}^{P}\int\prod_{a=1}^{n}\frac{ds_{a}^{\mu}}{\sqrt{2\pi}}\exp\left(-\sum_{a=1}^{n}\frac{(s_{a}^{\mu})^{2}}{2}+\sum_{i=1}^{N}\log\mathbb{E}_{\eta_{i}^{\mu}|\xi_{i}^{\mu}}e^{i\sqrt{\frac{\beta}{N}}\eta_{i}^{\mu}\sum_{a=1}^{n}s_{a}^{\mu}x_{a,i}}\right)\,. \tag{26}\]
Now we can expand the exponential inside the log up to second order; the remaining terms are of sub-leading order and will thus be neglected in the following:
\[\mathbb{E}_{\boldsymbol{\eta}|\boldsymbol{\xi}}[A]=\prod_{\mu=P(1 -t)+1}^{P}\prod_{a=1}^{n}\int\frac{ds_{a}^{\mu}}{\sqrt{2\pi}}\exp\left(-\frac{ (s_{a}^{\mu})^{2}}{2}+\sum_{a=1}^{n}is_{a}^{\mu}\sqrt{\frac{\beta}{N}}\sum_{i= 1}^{N}m_{i}^{\mu}x_{a,i}-\frac{\beta}{2}\sum_{a,b=1}^{n}s_{a}^{\mu}s_{b}^{\mu} \sum_{i=1}^{N}\frac{(v_{i}^{\mu}-(m_{i}^{\mu})^{2})}{N}x_{a,i}x_{b,i}\right)\] \[=\prod_{\mu=P(1-t)+1}^{P}\prod_{a=1}^{n}\int\frac{ds_{a}^{\mu}}{ \sqrt{2\pi}}\exp\left[-\frac{1}{2}\sum_{a,b=1}^{n}s_{a}^{\mu}s_{b}^{\mu}\left( \delta_{ab}+\beta\sum_{i=1}^{N}\frac{(v_{i}^{\mu}-(m_{i}^{\mu})^{2})}{N}x_{a,i }x_{b,i}\right)+\sum_{a=1}^{n}is_{a}^{\mu}\sqrt{\frac{\beta}{N}}\sum_{i=1}^{N }m_{i}^{\mu}x_{a,i}\right]\,. \tag{27}\]
To continue, we assume condensation on a finite number of patterns, say the first \(k\). We focus now on the remaining ones, namely for \(\mu>k\):
\[B:=\exp\left[\frac{\beta N}{2}\sum_{a=1}^{n}\sum_{\mu=k+1}^{P}(m^{\mu}({\bf x }_{a}))^{2}\right]=\int\prod_{\mu=k+1}^{P}\prod_{a=1}^{n}\frac{dz_{a}^{\mu}}{ \sqrt{2\pi}}\exp\left[-\sum_{a=1}^{n}\sum_{\mu=k+1}^{P}\frac{(z_{a}^{\mu})^{2} }{2}+\sqrt{\frac{\beta}{N}}\sum_{a=1}^{n}\sum_{\mu=k+1}^{P}z_{a}^{\mu}\sum_{i =1}^{N}x_{a,i}\xi_{i}^{\mu}\right]\,. \tag{28}\]
Putting \(A\) and \(B\) together, their overall average over \((\boldsymbol{\xi}^{\mu})_{\mu>k}\) takes the form
\[\mathbb{E}_{(\boldsymbol{\xi}^{\mu})_{\mu>k}}[AB]=\int\prod_{\mu=P(1-t)+1}^{P}\prod_{a=1}^{n}\frac{ds_{a}^{\mu}}{\sqrt{2\pi}}\int\prod_{\mu=k+1}^{P}\prod_{a=1}^{n}\frac{dz_{a}^{\mu}}{\sqrt{2\pi}}e^{-\sum_{a=1}^{n}\left(\sum_{\mu=P(1-t)+1}^{P}\frac{(s_{a}^{\mu})^{2}}{2}+\sum_{\mu=k+1}^{P}\frac{(z_{a}^{\mu})^{2}}{2}\right)}\] \[\exp\left[\sum_{i=1}^{N}\sum_{\mu=k+1}^{P}\log\mathbb{E}_{\xi_{i}^{\mu}}e^{\sqrt{\frac{\beta}{N}}\sum_{a=1}^{n}x_{a,i}(\xi_{i}^{\mu}z_{a}^{\mu}+i\theta(\mu-P+R)m_{i}^{\mu}s_{a}^{\mu})-\theta(\mu-P+R)\sum_{a,b=1}^{n}s_{a}^{\mu}s_{b}^{\mu}\frac{\beta(v_{i}^{\mu}-(m_{i}^{\mu})^{2})x_{a,i}x_{b,i}}{2N}}\right]\,, \tag{29}\]
where \(\theta\) is Heaviside's step function. If we call \(\mathbb{E}_{\boldsymbol{\xi}}m_{i}^{\mu\,2}=:\bar{M}^{\mu\,2}\), a further expansion of the exponential yields:
\[\mathbb{E}_{(\boldsymbol{\xi}^{\mu})_{\mu>k}}[AB]=\int\prod_{\mu=P(1-t)+1}^{P}\prod_{a=1}^{n}\frac{ds_{a}^{\mu}}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}\sum_{\mu=P(1-t)+1}^{P}{\bf s}^{\mu}\cdot\left(\mathbb{1}+\beta(v_{\tau^{\mu}}-\bar{M}^{\mu\,2})Q\right){\bf s}^{\mu}\right]\] \[\int\prod_{\mu=k+1}^{P}\prod_{a=1}^{n}\frac{dz_{a}^{\mu}}{\sqrt{2\pi}}\exp\left\{-\sum_{\mu=k+1}^{P}\sum_{a=1}^{n}\frac{(z_{a}^{\mu})^{2}}{2}+\frac{\beta}{2}\sum_{\mu=k+1}^{P}\sum_{a,b=1}^{n}z_{a}^{\mu}z_{b}^{\mu}Q({\bf x}_{a},{\bf x}_{b})+\right. \tag{30}\] \[\left.+i\beta\sum_{\mu=P(1-t)+1}^{P}\mathbb{E}_{\boldsymbol{\xi}}[\xi_{1}^{\mu}m_{1}^{\mu}]\sum_{a,b=1}^{n}z_{a}^{\mu}s_{b}^{\mu}Q({\bf x}_{a},{\bf x}_{b})-\frac{\beta}{2}\sum_{\mu=P(1-t)+1}^{P}\sum_{a,b=1}^{n}(\bar{M}^{\mu})^{2}s_{a}^{\mu}s_{b}^{\mu}Q({\bf x}_{a},{\bf x}_{b})\right\}\]
We can now perform a Gaussian integration over the variables \(\mathbf{z}^{\mu}=(z_{a}^{\mu})_{a\leq n}\):
\[\begin{split}\mathbb{E}_{(\mathbf{\xi}^{\mu})_{\mu>k}}[AB]&=\int\prod_{\mu=P(1-t)+1}^{P}\prod_{a=1}^{n}\frac{ds_{a}^{\mu}}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}\sum_{\mu=P(1-t)+1}^{P}\mathbf{s}^{\mu}\cdot\left(\mathbbm{1}+\beta v^{\mu}Q+\beta^{2}Q\frac{\mathbb{E}_{\mathbf{\xi}}^{2}[\xi_{1}^{\mu}m_{1}^{\mu}]}{\mathbbm{1}-\beta Q}Q\right)\mathbf{s}^{\mu}\right]\\ &\times\exp\left[-\frac{\alpha N}{2}\log\det\left(\mathbbm{1}-\beta Q\right)\right]\,.\end{split} \tag{31}\]
Finally, after an integration over the remaining Gaussian variables \(\mathbf{s}^{\mu}\), and using (21), we get
\[\begin{split}\mathbb{E}_{(\mathbf{\xi}^{\mu})_{\mu>k}}[AB]& =\exp\left[-\frac{\alpha(1-t)N}{2}\log\det\left(\mathbbm{1}-\beta Q \right)-\frac{1}{2}\sum_{\mu=P(1-t)+1}^{P}\log\det\left(\mathbbm{1}+\beta Q(v _{\tau^{\mu}}-1)-(v_{\tau^{\mu}}-m_{\tau^{\mu}}^{2})\beta^{2}Q^{2}\right) \right],\end{split} \tag{32}\]
where \(\tau^{\mu}=(1-(\mu-1)/P)\), and \(m_{\tau^{\mu}}=m^{\mu}\) are the previous retrieval accuracies. It remains to analyze the contribution given by \((\mathbf{\xi}^{\mu})_{\mu\leq k}\):
\[\begin{split} C:=\exp\left[\frac{\beta N}{2}\sum_{a=1}^{n}\sum_{ \mu=1}^{k}(m^{\mu}(\mathbf{x}_{a}))^{2}\right]&=\int\prod_{a=1}^ {n}\prod_{\mu=1}^{k}dm_{a}^{\mu}\sqrt{\frac{\beta N}{2\pi}}\exp\left[\sum_{a= 1}^{n}\sum_{\mu=1}^{k}\left(-N\beta\frac{(m_{a}^{\mu})^{2}}{2}+\beta m_{a}^{ \mu}\sum_{i=1}^{N}\xi_{i}^{\mu}x_{a,i}\right)\right]\,.\end{split} \tag{33}\]
Before plugging the contributions coming from \(A\), \(B\) and \(C\) into \(\mathbb{E}\mathcal{Z}_{N}^{n}\) we need to introduce a collection of Dirac deltas to fix the desired order parameters, that are organized in the overlap matrix \((Q(\mathbf{x}_{a},\mathbf{x}_{b}))_{a,b=1}^{n}\):
\[1=\int\prod_{a\leq b\leq n}dq_{ab}\delta(Q(\mathbf{x}_{a},\mathbf{x}_{b})-q_{ ab})=\int\prod_{a\leq b\leq n}\frac{Ndr_{ab}dq_{ab}}{4\pi i}\exp\left[-\frac{1}{2} \sum_{a,b=1}^{n}r_{ab}(Nq_{ab}-\sum_{i}x_{a,i}x_{b,i})\right]\,. \tag{34}\]
Hence, the averaged replicated partition function, at leading exponential order in \(N\), takes the form
\[\begin{split}\mathbb{E}\mathcal{Z}_{N}^{n}&=\int \prod_{a\leq b\leq n}\frac{Ndr_{ab}dq_{ab}}{4\pi i}\int\prod_{a=1}^{n}\prod_{ \mu=1}^{k}dm_{a}^{\mu}\sqrt{\frac{N\beta}{2\pi}}\exp\left[-\frac{N}{2}\sum_{a,b}r_{ab}q_{ab}-\frac{\beta N}{2}\sum_{a=1}^{n}\sum_{\mu=1}^{k}(m_{a}^{\mu})^{ 2}\right]\\ &\times\exp\left[-\frac{1}{2}\sum_{\mu=P(1-t)+1}^{P}\log\det\left( \mathbbm{1}+\beta Q(v_{\tau^{\mu}}-1)-(v_{\tau^{\mu}}-m_{\tau^{\mu}}^{2})\beta^ {2}Q^{2}\right)\right]\\ &\times\exp\left[-\frac{\alpha(1-t)N}{2}\log\det\left(\mathbbm{1}- \beta Q\right)+N\beta^{2}\Delta\sum_{a\neq b,1}^{n}\frac{q_{ab}^{2}}{4}+N\beta \sum_{a=1}^{n}\Big{(}-\frac{\lambda}{4}(1-q_{aa})^{2}+\frac{\beta\Delta-1}{4} q_{aa}^{2}\Big{)}\right]\\ &\times\left(\int\prod_{\mu=1}^{k}dP_{\xi}(\xi^{\mu})\prod_{a=1}^ {n}dP_{\xi}(x_{a})\exp\left[\frac{1}{2}\sum_{a,b=1}^{n}r_{ab}x_{a}x_{b}+\beta \sum_{\mu=1}^{k}\sum_{a=1}^{n}m_{a}^{\mu}\xi^{\mu}x_{a}\right]\right)^{N}\,, \end{split} \tag{35}\]
where we denote \(Q=(q_{ab})_{a,b=1}^{n}\). We can finally express the replicated free entropy with a variational principle
coming from a saddle point argument applied to the formula above:
\[\Phi_{n}:=\lim_{N\to\infty}\Phi_{N,n}=\frac{1}{n}\text{Extr}\Big\{-\frac{1}{2}\sum_{a,b}r_{ab}q_{ab}-\frac{\beta}{2}\sum_{a=1}^{n}\sum_{\mu=1}^{k}(m_{a}^{\mu})^{2}-\frac{\alpha(1-t)}{2}\log\det\left(\mathbb{1}-\beta Q\right)\] \[+\beta\sum_{a=1}^{n}\Big{(}\frac{\beta\Delta-1}{4}q_{aa}^{2}-\frac{\lambda}{4}(1-q_{aa})^{2}\Big{)}-\frac{\alpha t}{2R}\sum_{\mu=P(1-t)+1}^{P}\!\log\det\left[\mathbb{1}+\beta Q(v_{\tau^{\mu}}-1)-(v_{\tau^{\mu}}-m_{\tau^{\mu}}^{2})\beta^{2}Q^{2}\right] \tag{36}\] \[+\beta^{2}\Delta\sum_{a\neq b}^{n}\frac{q_{ab}^{2}}{4}+\log\int\prod_{\mu=1}^{k}\mathbb{E}_{\xi^{\mu}}\int\prod_{a=1}^{n}dP_{\xi}(x_{a})\exp\left[\frac{1}{2}\sum_{a,b=1}^{n}r_{ab}x_{a}x_{b}+\beta\sum_{\mu=1}^{k}\sum_{a=1}^{n}m_{a}^{\mu}\xi^{\mu}x_{a}\right]\Big{\}}\,.\]
The normalized sum over \(\mu=P(1-t)+1,\ldots,P\) on the second line can be turned into an integral \(\int_{0}^{t}\,d\tau\dots\) in the large \(N\) limit. The extremization is taken w.r.t. the collection of parameters \((r_{ab},q_{ab})_{a,b=1}^{n}\), \((m_{a}^{\mu})_{a=1,\mu=1}^{n,k}\). Within the replica symmetric ansatz
\[\begin{cases}r_{ab}=r\,,\quad a\neq b\\ r_{aa}=-u\end{cases}\quad\begin{cases}q_{ab}=q\,,\quad a\neq b\\ q_{aa}=v\end{cases}\quad m_{a}^{\mu}=m^{\mu}\,,\quad Q=\begin{pmatrix}v&q&q&\dots&q\\ q&v&q&\dots&q\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ q&q&q&\dots&v\end{pmatrix}\in\mathbb{R}^{n\times n}\,. \tag{37}\]
The determinants of \(\mathbb{1}-\beta Q\) and \(\mathbb{1}+\beta Q(v_{\tau^{\mu}}-1)-(v_{\tau^{\mu}}-m_{\tau^{\mu}}^{2})\beta ^{2}Q^{2}\) are easily computed:
\[\det\left(\mathbb{1}-\beta Q\right)=\left(1-\beta(v-q)\right)^{ n}\left[1-n\frac{\beta q}{1-\beta(v-q)}\right] \tag{38}\] \[\det\left(\mathbb{1}+\beta Q(v_{\tau^{\mu}}-1)-(v_{\tau^{\mu}}-m _{\tau^{\mu}}^{2})\beta^{2}Q^{2}\right)=\left[1+\beta(v_{\tau^{\mu}}-1)(v-q)-( v_{\tau^{\mu}}-m_{\tau^{\mu}}^{2})\beta^{2}(v-q)^{2}\right]^{n-1}\] (39) \[\qquad\times\left[1+\beta(v_{\tau^{\mu}}-1)(v-q+nq)-(v_{\tau^{ \mu}}-m_{\tau^{\mu}}^{2})\beta^{2}\left(v-q+nq\right)^{2}\right]\,.\]
Further simplifications occur for the other terms in the replicated free entropy. In particular the remaining log integral is:
\[\int\prod_{\mu=1}^{k}\mathbb{E}_{\xi^{\mu}}\int\prod_{a=1}^{n}dP _{\xi}(x_{a})\exp\left[\frac{r}{2}\sum_{a\neq b,1}^{n}x_{a}x_{b}-\frac{u}{2} \sum_{a=1}^{n}x_{a}^{2}+\beta\sum_{\mu=1}^{k}m^{\mu}\xi^{\mu}\sum_{a=1}^{n}x_ {a}\right]=\\ =\mathbb{E}_{Z}\int\prod_{\mu=1}^{k}\mathbb{E}_{\xi^{\mu}}\prod _{a=1}^{n}\int dP_{\xi}(x_{a})\exp\left[\sqrt{r}Zx_{a}-\frac{u+r}{2}x_{a}^{2}+ \beta\sum_{\mu=1}^{k}m^{\mu}\xi^{\mu}x_{a}\right]=\\ =\mathbb{E}_{Z}\mathbb{E}_{\mathbf{\xi}}\left[\int dP_{\xi}(x)\exp \left(\left(Z\sqrt{r}+\beta\mathbf{m}\cdot\mathbf{\xi}\right)x-\frac{u+r}{2}x^{2} \right)\right]^{n} \tag{40}\]
where \(Z\sim\mathcal{N}(0,1)\), \(\mathbf{\xi}=(\xi^{1},\dots,\xi^{k})\), \(\mathbf{m}=(m^{1},\dots,m^{k})\). Finally, expanding at first order in \(n\) one has:
\[\Phi:=\text{Extr}\Big{\{}\frac{rq+uv}{2}-\beta\sum_{\mu=1}^{k} \frac{(m^{\mu})^{2}}{2}-\frac{\beta^{2}\Delta q^{2}}{4}-\frac{\alpha(1-t)}{2} \left[\log\left(1-\beta(v-q)\right)-\frac{\beta q}{1-\beta(v-q)}\right]\] \[-\frac{\alpha t}{2}\int_{0}^{t}d\tau\left[\log\left(1+\beta(v_{ \tau}-1)(v-q)-(v_{\tau}-m_{\tau}^{2})\beta^{2}(v-q)^{2}\right)+\frac{\beta q(v _{\tau}-1)-2\beta^{2}q(v-q)(v_{\tau}-m_{\tau}^{2})}{1+\beta(v_{\tau}-1)(v-q)-( v_{\tau}-m_{\tau}^{2})\beta^{2}(v-q)^{2}}\right] \tag{41}\]
The correct stationary parameters \(v,m,q,u,r\) will be those that maximize the free entropy. Hence it is clear that if \(\lambda\to\infty\) we recover the constraint \(v=1\).
### Fixed point equations
Let us introduce the following notation:
\[\langle\cdot\rangle_{t,\boldsymbol{\xi}}\equiv\langle\cdot\rangle_{t}:=\frac{ \int dP_{\xi}(x)\exp\big{(}(Z\sqrt{r}+\beta\mathbf{m}\cdot\boldsymbol{\xi})x- \frac{r+u}{2}x^{2}\big{)}(\cdot)}{\int dP_{\xi}(y)\exp\big{(}(Z\sqrt{r}+\beta \mathbf{m}\cdot\boldsymbol{\xi})y-\frac{r+u}{2}y^{2}\big{)}}\,, \tag{42}\]
where the subscript \(t\) emphasizes that we have already reconstructed \(R=tP\) patterns. The stationarity conditions coming from (41) are
\[v =\mathbb{E}_{\boldsymbol{\xi}}\langle X^{2}\rangle_{t} \tag{43}\] \[m^{\mu} =\mathbb{E}_{\boldsymbol{\xi}}\xi^{\mu}\langle X\rangle_{t}\,, \quad\mu=1,\ldots,k\] (44) \[q =\mathbb{E}_{\boldsymbol{\xi}}\langle X\rangle_{t}^{2}\] (45) \[r =\frac{\alpha(1-t)\beta^{2}q}{(1-\beta(v-q))^{2}}+\beta^{2} \Delta q+\alpha t\int_{0}^{t}\,d\tau\Big{[}\frac{2q\beta^{2}(v_{\tau}-m_{\tau} ^{2})}{1+\beta(v_{\tau}-1)(v-q)-(v_{\tau}-m_{\tau}^{2})\beta^{2}(v-q)^{2}}\] (46) \[\qquad\qquad+q\frac{\beta^{2}[v_{\tau}-1-2\beta(v-q)(v_{\tau}-m_{ \tau}^{2})]^{2}}{[1+\beta(v_{\tau}-1)(v-q)-(v_{\tau}-m_{\tau}^{2})\beta^{2}(v- q)^{2}]^{2}}\Big{]}\] \[u =\beta\lambda(v-1)+\beta(1-\beta\Delta)v-\alpha(1-t)\beta\frac{1- \beta(v-2q)}{(1-\beta(v-q))^{2}}-\alpha t\int_{0}^{t}\,d\tau\Big{[}\frac{2v \beta^{2}(v_{\tau}-m_{\tau}^{2})-\beta(v_{\tau}-1)}{1+\beta(v_{\tau}-1)(v-q)- (v_{\tau}-m_{\tau}^{2})\beta^{2}(v-q)^{2}}\] \[\qquad\qquad+q\frac{\beta^{2}[v_{\tau}-1-2\beta(v-q)(v_{\tau}-m_ {\tau}^{2})]^{2}}{[1+\beta(v_{\tau}-1)(v-q)-(v_{\tau}-m_{\tau}^{2})\beta^{2}(v -q)^{2}]^{2}}\Big{]}\,. \tag{47}\]
Notice that the effect of decimation is visible only in the variables \(u\) and \(r\) that affect the local measure (19). With a closer look at the expression of \(r\) we can recognize the three predicted independent noise contributions. The first term is due to pattern interference (noise (b)), and we see that it decreases as \(t\) approaches \(1\). The second term can be identified with the noise contribution (a), which is due to the original Gaussian noise \(\mathbf{Z}\). The decimation noise contribution (noise (c)) is instead given by the third term, expressed in integral form, which correctly takes into account all the history of the process. As anticipated above, the success of decimation is determined by the interplay between noises (b) and (c). Since, as we shall see in Section 6, the retrieval accuracies remain close to one in the range of parameters \(\alpha,\Delta\) where the first step of decimation is feasible, the noise contribution (c) will be small. In addition, solving the previous equations for each decimation step shows that the benefit we gain from the reduction of pattern interference is higher than the penalty we pay for the noise introduced by decimation. As a consequence, decimation proves to be a viable strategy for matrix factorization.
For all practical purposes, we will run finite size simulations and use the discretized form, given in (36), of the integral accounting for the decimation contributions, starting from step \(0\), when no pattern has been retrieved yet. Finally, notice that mixed-state solutions are possible, with the estimates aligning with more than one pattern, _i.e._ several \(m^{\mu}\)'s in (44) are non-vanishing. This is not desirable in inference, since one wants to estimate one pattern at a time with the best possible performance.
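As an illustration, the fixed point equations (43)-(47) at the first decimation step (\(t=0\)) can be iterated numerically; the sketch below does so for the sparse Ising prior of Section 4.1, evaluating the \((\boldsymbol{\xi},Z)\) averages by Monte Carlo and using a simple damping. Parameter values, the damping scheme and the Monte Carlo sample size are illustrative choices, not the procedure used to produce the paper's figures.

```python
import numpy as np

def fixed_point_t0(alpha, Delta, beta, rho, lam=1.0, m0=0.8,
                   iters=2000, damping=0.5, n_mc=100_000, seed=1):
    """Damped iteration of the RS equations (43)-(47) at t = 0 for the sparse
    Ising prior. Sketch only: Monte-Carlo averages, no convergence checks,
    and it assumes beta*(v - q) stays below 1 along the iteration."""
    rng = np.random.default_rng(seed)
    xi = rng.choice([0.0, 1.0, -1.0], p=[1 - rho, rho / 2, rho / 2], size=n_mc) / np.sqrt(rho)
    Z = rng.normal(size=n_mc)

    m, q, v = m0, 0.5, 1.0
    for _ in range(iters):
        C = beta * (v - q)
        r = alpha * beta**2 * q / (1 - C)**2 + beta**2 * Delta * q           # Eq. (46), t = 0
        u = beta * lam * (v - 1) + beta * (1 - beta * Delta) * v \
            - alpha * beta * (1 - beta * (v - 2 * q)) / (1 - C)**2           # Eq. (47), t = 0

        h = Z * np.sqrt(r) + beta * m * xi                                   # field in (42)
        w = np.exp(-(r + u) / (2 * rho))
        den = (1 - rho) + rho * np.cosh(h / np.sqrt(rho)) * w
        mean_x  = np.sqrt(rho) * np.sinh(h / np.sqrt(rho)) * w / den         # <X>_t
        mean_x2 = np.cosh(h / np.sqrt(rho)) * w / den                        # <X^2>_t

        m_new, q_new, v_new = np.mean(xi * mean_x), np.mean(mean_x**2), np.mean(mean_x2)  # (43)-(45)
        m = (1 - damping) * m + damping * m_new
        q = (1 - damping) * q + damping * q_new
        v = (1 - damping) * v + damping * v_new
    return m, q, v
```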
### Remarks
First of all, we clarify the relation between our formula and the low-rank formula for the spiked Wigner model. To this end, let us set \(\beta=1/\Delta\), \(P=1\), which means \(\alpha=0\), and \(\lambda=0\). In this case the free entropy reads
\[\Phi:=\text{Extr}\Big{\{}\frac{rq+uv}{2}-\frac{m^{2}}{2\Delta}-\frac{q^{2}}{4 \Delta}+\mathbb{E}_{Z,\boldsymbol{\xi}}\log\int dP_{\xi}(x)\exp\left(\left(Z \sqrt{r}+\frac{m}{\Delta}\xi\right)x-\frac{u+r}{2}x^{2}\right)\Big{\}} \tag{48}\]
Extremizing w.r.t. \(q\) and \(v\) we readily find:
\[r=\frac{q}{\Delta}\,,\quad u=0\,. \tag{49}\]
Plugging this result inside the free entropy yields
\[\Phi:=\text{Extr}\Big{\{}\frac{q^{2}}{4\Delta}-\frac{m^{2}}{2\Delta}+\mathbb{E }_{Z,\mathbf{\xi}}\log\int dP_{\xi}(x)\exp\left(\left(Z\sqrt{\frac{q}{\Delta}}+ \frac{m\xi}{\Delta}\right)x-\frac{q}{2\Delta}x^{2}\right)\Big{\}}\,. \tag{50}\]
Finally, extremization w.r.t. \(q\) and \(m\) yields two coupled equations
\[m=\mathbb{E}_{\xi}\xi\left.\left\langle X\right\rangle_{t}\right|_{r=\frac{q} {\Delta},u=0}\,,\quad q=\mathbb{E}_{\xi}\left.\left\langle X\right\rangle_{t}^ {2}\right|_{r=\frac{q}{\Delta},u=0} \tag{51}\]
that admit a self consistent solution satisfying a single equation
\[m=q=\mathbb{E}_{\xi}\xi\left.\left\langle X\right\rangle_{t}\right|_{r=\frac{ m}{\Delta},u=0} \tag{52}\]
which is exactly the known fixed point equation for the overlap in the spiked Wigner model.
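For a concrete check of this limit, the scalar fixed point (52) can be iterated directly; the sketch below does so for a Rademacher prior, for which \(\langle X\rangle=\tanh(Z\sqrt{r}+m\xi/\Delta)\), and is only meant to illustrate the \(\alpha=0\) consistency, with illustrative values of \(\Delta\).

```python
import numpy as np

def spiked_wigner_overlap(Delta, iters=200, n_mc=500_000, seed=0):
    """Iterates m = E_{xi,Z}[ xi <X> ] at r = m/Delta, u = 0 (Eq. (52)) for a
    Rademacher prior, where <X> = tanh(Z sqrt(m/Delta) + m xi / Delta).
    Sketch only; by the xi -> -xi symmetry one can set xi = +1 in the average."""
    rng = np.random.default_rng(seed)
    Z = rng.normal(size=n_mc)
    m = 0.5                                   # informative initialization
    for _ in range(iters):
        m = max(float(np.mean(np.tanh(m / Delta + np.sqrt(m / Delta) * Z))), 0.0)
    return m

# The overlap is non-trivial for Delta < 1 and vanishes (up to Monte-Carlo noise) for Delta > 1.
print(spiked_wigner_overlap(0.5), spiked_wigner_overlap(1.5))
```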
Secondly, we need to ensure a proper scaling w.r.t. \(\beta\). In particular the limit \(\lim_{\beta\to\infty}\frac{\Phi}{\beta}\) must be well defined at any decimation step. The only terms in the free entropy that could give rise to overscalings in \(\beta\) are
\[\frac{rq+uv}{2}-\frac{\beta^{2}\Delta q}{4}+\frac{\beta^{2}\Delta v}{4}\,, \quad\frac{r+u}{2}\,. \tag{53}\]
The latter in particular appears in the exponent of the gas free entropy in the last line of (41). Both the fixed point equations for \(u\) and \(r\) contain terms proportional to \(\beta^{2}\). This issue, though, is only apparent, and the fixed point remains well defined. To show this, let us rewrite the first problematic term as follows:
\[\frac{rq+uv}{2}-\frac{\beta^{2}\Delta q}{4}+\frac{\beta^{2}\Delta v}{4}=\frac {-r(v-q)+(u+r)v}{2}+\frac{\beta^{2}\Delta(v-q)}{4}. \tag{54}\]
In the limit \(\beta\to\infty\) the term
\[-\frac{\beta q}{1-\beta(v-q)} \tag{55}\]
arising from the square bracket in the first line of (41) forces \(q\to v\) in such a way that \(\beta(v-q)<1\) remains of order \(O(1)\). Hence \(\frac{\beta^{2}\Delta(v-q)}{4}\) and \(r(v-q)=(r/\beta)\beta(v-q)\) are at most of order \(O(\beta)\) as they should. It remains to verify that \(u+r=O(\beta)\):
\[u+r=\beta\lambda(v-1)+\beta v-\beta^{2}\Delta(v-q)-\frac{\alpha(1-t)\beta}{1-\beta(v-q)}-\alpha t\int_{0}^{t}d\tau\Big{[}\frac{2\beta^{2}(v-q)(v_{\tau}-m_{\tau}^{2})-\beta(v_{\tau}-1)}{1+\beta(v_{\tau}-1)(v-q)-(v_{\tau}-m_{\tau}^{2})\beta^{2}(v-q)^{2}}\Big{]}\,. \tag{56}\]
Again, thanks to the fact that \(\beta(v-q)<1\), the correct scaling occurs.
Thirdly, we notice that for the Gaussian prior, when patterns are generated from \(P_{\xi}=\mathcal{N}(0,1)\), retrieval is impossible if \(\alpha>0\). In fact, from the fixed point equation for \(m^{\mu}\), one can perform a Gaussian integration by parts on \(\xi^{\mu}\), obtaining:
\[m^{\mu}=m^{\mu}\beta\big{(}\mathbb{E}\langle X^{2}\rangle_{t}-\mathbb{E}\langle X\rangle_{t}^{2}\big{)}=m^{\mu}\beta(v-q) \tag{57}\]
which entails \(m^{\mu}=0\) or \(\beta(v-q)=1\). The latter though is not possible because it would cause the free entropy to diverge to minus infinity. Hence, the only possibility is to have negligible alignment with all the patterns, \(m^{\mu}=0\). On the contrary if \(\alpha=0\), the diverging contribution disappears, and setting \(\beta=1/\Delta\) yields the usual PCA estimator overlap \(m=q=1-\Delta\).
## 4 Low temperature limits
### Sparse prior
Let us express the \(\beta\to\infty\) limit of the free entropy with a prior of the form
\[P_{\xi}=(1-\rho)\delta_{0}+\frac{\rho}{2}\left[\delta_{-1/\sqrt{\rho}}+\delta_{1/ \sqrt{\rho}}\right]\,,\quad\rho\in(0,1)\,. \tag{58}\]
The case \(\rho=1\) shall be discussed separately at the end. For future convenience we introduce the notations
\[C:=\beta(v-q)\,\in[0,1)\,,\quad\bar{r}:=r/\beta^{2}\,,\quad U:=\frac{u+r}{\beta} \tag{59}\]
where \(q\) is intended as the stationary value of the overlap solving the fixed point equations. Denote \(\mathbf{m}=(m^{\mu})_{\mu=1}^{k}\), where \(k\) is the maximum number of condensed patterns. In the low temperature limit the free entropy, re-scaled by \(\beta\), and evaluated at the stationary values of the parameters involved has the form
\[\frac{1}{\beta}\Phi=-\frac{\lambda(v-1)^{2}}{4}-\frac{\bar{r}C}{2 }+\frac{Uv}{2}+\frac{\alpha(1-t)v}{2(1-C)}-\frac{v^{2}}{4}-\frac{\mathbf{m}^{2 }}{2}+\frac{\Delta Cv}{2}+\psi+\frac{\alpha tv}{2}\int_{0}^{t}d\tau\frac{2C(v _{\tau}-m_{\tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^ {2}} \tag{60}\]
where
\[\psi=\frac{1}{\beta}\mathbb{E}_{\boldsymbol{\xi},Z}\log\left[1-\rho+\rho\cosh \frac{\beta}{\sqrt{\rho}}\left(Z\sqrt{\bar{r}}+\mathbf{m}\cdot\boldsymbol{\xi }\right)\exp\left(-\frac{\beta U}{2\rho}\right)\right]\,. \tag{61}\]
When \(\beta\to\infty\) we have to distinguish two cases in the \(Z\) average:
\[\psi=O\Big{(}\frac{1}{\beta}\Big{)}+\frac{1}{\beta}\mathbb{E}_{ \boldsymbol{\xi}}\left(\int_{-\mathbf{m}\cdot\boldsymbol{\xi}/\sqrt{\bar{r}}+ U/2\sqrt{\bar{r}\rho}}^{\infty}+\int_{-\infty}^{-\mathbf{m}\cdot \boldsymbol{\xi}/\sqrt{\bar{r}}-U/2\sqrt{\bar{r}\rho}}\right)\frac{dz\,e^{- \frac{z^{2}}{2}}}{\sqrt{2\pi}}\log\left[1-\rho+\rho\cosh\frac{\beta}{\sqrt{ \rho}}\left(z\sqrt{\bar{r}}+\mathbf{m}\cdot\boldsymbol{\xi}\right)e^{-\frac{ \beta U}{2\rho}}\right]. \tag{62}\]
The \(O(\beta^{-1})\) instead comes from integration on the interval \([-\mathbf{m}\cdot\boldsymbol{\xi}/\sqrt{\bar{r}}-U/2\sqrt{\bar{r}\rho},- \mathbf{m}\cdot\boldsymbol{\xi}/\sqrt{\bar{r}}+U/2\sqrt{\bar{r}\rho}]\) of the same integrand, that can be easily bounded.
Let us now focus on the first integral in (62). The hyperbolic cosine and the exponential in \(U\) dominate over the other terms in the log. Taking into account the exponential growth in the selected range of \(z\)-values, the first integral can be approximated by:
\[\mathbb{E}_{\boldsymbol{\xi}}\int_{-\mathbf{m}\cdot\boldsymbol{\xi}/\sqrt{\bar{r}}+U/2\sqrt{\bar{r}\rho}}^{\infty}\frac{dz}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}\left(\frac{z\sqrt{\bar{r}}+\mathbf{m}\cdot\boldsymbol{\xi}}{\sqrt{\rho}}-\frac{U}{2\rho}\right)=\sqrt{\frac{\bar{r}}{2\pi\rho}}\mathbb{E}_{\boldsymbol{\xi}}e^{-\frac{1}{2\bar{r}}\left(\frac{U}{2\sqrt{\rho}}-\mathbf{m}\cdot\boldsymbol{\xi}\right)^{2}}+\\ +\mathbb{E}_{\boldsymbol{\xi}}\left(\frac{\mathbf{m}\cdot\boldsymbol{\xi}}{\sqrt{\rho}}-\frac{U}{2\rho}\right)\int_{-\mathbf{m}\cdot\boldsymbol{\xi}/\sqrt{\bar{r}}+U/2\sqrt{\bar{r}\rho}}^{\infty}\frac{dz}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}\,. \tag{63}\]
The second integral in (62) can be treated similarly. Putting all the terms together one gets
\[\frac{1}{\beta}\Phi=-\frac{\bar{r}C}{2}+\frac{\Delta Cv}{2}+\frac{Uv}{2}+\frac{\alpha(1-t)v}{2(1-C)}-\frac{v^{2}+\lambda(v-1)^{2}}{4}-\frac{\mathbf{m}^{2}}{2}+\sqrt{\frac{2\bar{r}}{\pi\rho}}\mathbb{E}_{\boldsymbol{\xi}}e^{-\frac{1}{2\bar{r}}\left(\frac{U}{2\sqrt{\rho}}-\mathbf{m}\cdot\boldsymbol{\xi}\right)^{2}}\\ +\mathbb{E}_{\boldsymbol{\xi}}\frac{\mathbf{m}\cdot\boldsymbol{\xi}}{\sqrt{\rho}}\mathrm{erf}\left(\frac{\mathbf{m}\cdot\boldsymbol{\xi}+\frac{U}{2\sqrt{\rho}}}{\sqrt{2\bar{r}}}\right)-\frac{U}{2\rho}\mathbb{E}_{\boldsymbol{\xi}}\left[1-\mathrm{erf}\left(\frac{\mathbf{m}\cdot\boldsymbol{\xi}+\frac{U}{2\sqrt{\rho}}}{\sqrt{2\bar{r}}}\right)\right]+\frac{\alpha tv}{2}\int_{0}^{t}d\tau\frac{2C(v_{\tau}-m_{\tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^{2}}\,. \tag{64}\]
Using the fact that all the parameters are evaluated at their stationary values, the previous formula can be further simplified by looking at the limiting version of the fixed point equations. In particular we have that
\[C=\sqrt{\frac{2}{\pi\rho\bar{r}}}\mathbb{E}_{\boldsymbol{\xi}}\exp\left(-\left( \frac{U/2\sqrt{\rho}-\mathbf{m}\cdot\boldsymbol{\xi}}{\sqrt{2\bar{r}}}\right)^ {2}\right)\,. \tag{65}\]
The value of \(\bar{r}\) can be found directly from (46) by multiplying it by \(\beta^{-2}\):
\[\bar{r}=\frac{\alpha(1-t)v}{(1-C)^{2}}+\Delta v+\alpha tv\int_{0}^{t}\,d\tau \left[\frac{2(v_{\tau}-m_{\tau}^{2})}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C ^{2}}+\frac{[v_{\tau}-1-2C(v_{\tau}-m_{\tau}^{2})]^{2}}{[1+(v_{\tau}-1)C-(v_{ \tau}-m_{\tau}^{2})C^{2}]^{2}}\right]\,. \tag{66}\]
Differentiating w.r.t. \(v\) we get the equation for \(U=\frac{u+r}{\beta}\):
\[U=-\Delta C+v+\lambda(v-1)-\frac{\alpha(1-t)}{(1-C)}-\alpha t\int_{0}^{t}d\tau \frac{2C(v_{\tau}-m_{\tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{ \tau}^{2})C^{2}}\,. \tag{67}\]
From a derivative w.r.t. \(U\) we get an equation for \(v\):
\[v=\frac{1}{\rho}\mathbb{E}_{\boldsymbol{\xi}}\left[1-\mathrm{erf}\left(\frac{ \mathbf{m}\cdot\boldsymbol{\xi}+\frac{U}{2\sqrt{\rho}}}{\sqrt{2\bar{r}}} \right)\right]\,. \tag{68}\]
We can solve this equation in order to get \(U\) as a function of \(v\), for instance by bisection. Finally, from (44) and (61)
\[\mathbf{m}=\mathbb{E}\boldsymbol{\xi}\langle X\rangle_{Z,\boldsymbol{\xi}}= \frac{\partial\psi}{\partial\mathbf{m}}=\mathbb{E}_{\boldsymbol{\xi}}\frac{ \boldsymbol{\xi}}{\sqrt{\rho}}\mathrm{erf}\left(\frac{\mathbf{m}\cdot \boldsymbol{\xi}-U/2\sqrt{\rho}}{\sqrt{2\bar{r}}}\right)\,. \tag{69}\]
If we insert these conditions in (64) we get
\[\frac{\Phi}{\beta}=\frac{\alpha(1-t)v}{2(1-C)^{2}}+\Delta Cv-\frac{v^{2}+ \lambda(v-1)^{2}}{4}+\frac{\mathbf{m}^{2}}{2}+\frac{\alpha tv}{2}\int_{0}^{t} d\tau\frac{4C(v_{\tau}-m_{\tau}^{2})-(v_{\tau}-1)[1-(v_{\tau}-m_{\tau}^{2})C^{2}]}{ [1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^{2}]^{2}}\,. \tag{70}\]
A numerical procedure to find a solution to the previous system of equations is to solve (65) and (68) simultaneously, plugging into them the definitions of \(\bar{r}\) and \(U\), for a fixed \(\mathbf{m}\), and then to iterate (69).
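A possible implementation of this procedure at the first decimation step (\(t=0\)), for a single condensed pattern (\(k=1\)), is sketched below: (65) and (68) are iterated jointly with \(\bar{r}\) and \(U\) given by (66)-(67), and (69) is then used to update \(m\). The damping, the fixed loop counts and the parameter values are illustrative choices, not the authors' solver.

```python
import numpy as np
from scipy.special import erf

def low_T_first_step(alpha, Delta, rho, lam=1.0, m0=0.9,
                     inner=500, outer=200, damping=0.3):
    """Damped iteration of the zero-temperature equations (65)-(69) at t = 0,
    for one condensed pattern and the sparse Ising prior (58). Sketch only."""
    xi_vals = np.array([0.0, 1.0, -1.0]) / np.sqrt(rho)        # support of xi
    xi_w = np.array([1 - rho, rho / 2, rho / 2])                # prior weights
    E = lambda f: float(np.dot(xi_w, f(xi_vals)))               # expectation over xi

    m, C, v = m0, 0.5, 1.0
    for _ in range(outer):
        for _ in range(inner):                                  # solve (65), (68) at fixed m
            rbar = alpha * v / (1 - C) ** 2 + Delta * v                     # Eq. (66), t = 0
            U = -Delta * C + v + lam * (v - 1) - alpha / (1 - C)            # Eq. (67), t = 0
            C_new = np.sqrt(2 / (np.pi * rho * rbar)) * E(
                lambda x: np.exp(-(U / (2 * np.sqrt(rho)) - m * x) ** 2 / (2 * rbar)))    # Eq. (65)
            v_new = (1 / rho) * E(
                lambda x: 1 - erf((m * x + U / (2 * np.sqrt(rho))) / np.sqrt(2 * rbar)))  # Eq. (68)
            C = (1 - damping) * C + damping * min(C_new, 0.999)             # keep C < 1
            v = (1 - damping) * v + damping * v_new
        m = E(lambda x: x / np.sqrt(rho) *
              erf((m * x - U / (2 * np.sqrt(rho))) / np.sqrt(2 * rbar)))    # Eq. (69)
    return m, C, v
```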
Notice that, when \(\lambda\) is finite, the problem is not continuous at \(\rho=1\), namely sending \(\beta\to+\infty\) before or after setting \(\rho=1\) gives different results. This can be seen as a consequence of the non-commutation of the two limits \(\lim_{\beta\to\infty}\) and \(\lim_{\rho\to 1}\) for the quantity \((1-\rho)^{1/\beta}\). In fact, for \(\rho=1\) the \(O(\beta^{-1})\) contribution in \(\psi\) that was discarded before is no longer negligible. Considering that contribution too would yield a free entropy of the form:
\[\frac{1}{\beta}\Phi=-\frac{\bar{r}C}{2}+\frac{\Delta Cv}{2}+\frac{Uv}{2}+\frac{\alpha(1-t)v}{2(1-C)}-\frac{v^{2}+\lambda(v-1)^{2}}{4}-\frac{\mathbf{m}^{2}}{2}+\sqrt{\frac{2\bar{r}}{\pi\rho}}\mathbb{E}_{\boldsymbol{\xi}}e^{-\frac{1}{2\bar{r}}\left(\theta(1-\rho)\frac{U}{2\sqrt{\rho}}-\mathbf{m}\cdot\boldsymbol{\xi}\right)^{2}}\\ +\mathbb{E}_{\boldsymbol{\xi}}\frac{\mathbf{m}\cdot\boldsymbol{\xi}}{\sqrt{\rho}}\mathrm{erf}\left(\frac{\mathbf{m}\cdot\boldsymbol{\xi}+\theta(1-\rho)\frac{U}{2\sqrt{\rho}}}{\sqrt{2\bar{r}}}\right)-\frac{U}{2\rho}\mathbb{E}_{\boldsymbol{\xi}}\left[1-\mathrm{erf}\left(\frac{\mathbf{m}\cdot\boldsymbol{\xi}+\theta(1-\rho)\frac{U}{2\sqrt{\rho}}}{\sqrt{2\bar{r}}}\right)\right]\\ +\frac{\alpha tv}{2}\int_{0}^{t}d\tau\frac{2C(v_{\tau}-m_{\tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^{2}}\,, \tag{71}\]
where we set \(\theta(0)=0\). We see quickly that now, if \(\rho=1\), \(v=1\) is automatically enforced, whereas it was not so before. This discontinuous behaviour disappears if one sends \(\lambda\to+\infty\) from the very beginning, as studied in [48].
### Continuous priors
Consider the same definitions of \(\bar{r},C,U\) as above. In this section we deal with priors that are symmetric and absolutely continuous w.r.t. the Lebesgue measure, with density \(p(x)\). We require the density to be finite at the boundaries of the support \([-a,a]\), or to go to zero with at most polynomial speed, and to be non-vanishing in the interior of the support. An example is the uniform distribution over \([-\sqrt{3},\sqrt{3}]\). The prior dependent part of the free entropy is still
\[\psi:=\frac{1}{\beta}\mathbb{E}_{Z,\mathbf{\xi}}\log\int dP_{\xi}(x)e^{\beta(Z \sqrt{\bar{r}}+\mathbf{m}\cdot\mathbf{\xi})x-\frac{\beta U}{2}x^{2}}\,. \tag{72}\]
We separate the quenched Gaussian integral from the expectation w.r.t. \(\mathbf{\xi}\), and we perform the following changes of variables: \(z\mapsto z/\sqrt{\bar{r}}\), \(z\mapsto z-\mathbf{m}\cdot\mathbf{\xi}\). This yields
\[\psi=\frac{1}{\beta}\mathbb{E}_{\mathbf{\xi}}\int\frac{dz}{\sqrt{2\pi\bar{r}}}e^{-\frac{(z-\mathbf{m}\cdot\mathbf{\xi})^{2}}{2\bar{r}}}\log\int_{-a}^{a}dx\,p(x)e^{-\frac{\beta U}{2}\left(x-\frac{z}{U}\right)^{2}+\frac{\beta z^{2}}{2U}}=\\ =\frac{\bar{r}+\mathbf{m}^{2}}{2U}+\frac{1}{\beta}\mathbb{E}_{\mathbf{\xi}}\int\frac{dz}{\sqrt{2\pi\bar{r}}}e^{-\frac{(z-\mathbf{m}\cdot\mathbf{\xi})^{2}}{2\bar{r}}}\log\int_{-a}^{a}dx\,p(x)e^{-\frac{\beta U}{2}\left(x-\frac{z}{U}\right)^{2}}=:\frac{\bar{r}+\mathbf{m}^{2}}{2U}+\bar{\psi}\,. \tag{73}\]
The integral inside the logarithm in \(\bar{\psi}\) can be computed by Laplace's approximation when \(\beta\) is large. However, the location of the maximum of the exponent depends on the value of \(z\). In particular if \(z\in[-Ua,Ua]\) then the maximum point falls inside the support of \(p(x)\). Otherwise, given the quadratic nature of the exponent, the maximum in \(x\) will be attained at the boundaries of the support \(-a\) and \(a\). Hence the \(z\)-integral must be divided into three segments. Let us first consider:
\[\mathrm{I}=\frac{1}{\beta}\mathbb{E}_{\mathbf{\xi}}\int_{-Ua}^{Ua}\frac{dz}{\sqrt {2\pi\bar{r}}}e^{-\frac{(z-\mathbf{m}\cdot\mathbf{\xi})^{2}}{2\bar{r}}}\log\int_{ -a}^{a}dxp(x)e^{-\frac{\beta U}{2}\left(x-\frac{z}{U}\right)^{2}}\xrightarrow{ \beta\to\infty}0 \tag{74}\]
because the exponent equals \(0\) at the maximum. Hence no exponential contribution in \(\beta\) arises that could contrast the \(1/\beta\) prefactor.
Let us turn to a second contribution:
\[\mathrm{II}=\frac{1}{\beta}\mathbb{E}_{\mathbf{\xi}}\int_{Ua}^{+\infty}\frac{dz}{ \sqrt{2\pi\bar{r}}}e^{-\frac{(z-\mathbf{m}\cdot\mathbf{\xi})^{2}}{2\bar{r}}}\log \int_{-a}^{a}dxp(x)e^{-\frac{\beta U}{2}\left(x-\frac{z}{U}\right)^{2}} \xrightarrow{\beta\to\infty}-\frac{U}{2}\mathbb{E}_{\mathbf{\xi}}\int_{Ua}^{+ \infty}\frac{dz}{\sqrt{2\pi\bar{r}}}e^{-\frac{(z-\mathbf{m}\cdot\mathbf{\xi})^{2} }{2\bar{r}}}\left(a-\frac{z}{U}\right)^{2} \tag{75}\]
From the square in the integrand we get three sub-contributions.
\[\mathrm{IIA}=-\frac{Ua^{2}}{2}\mathbb{E}_{\mathbf{\xi}}\int_{Ua}^{+\infty}\frac{ dz}{\sqrt{2\pi\bar{r}}}e^{-\frac{(z-\mathbf{m}\cdot\mathbf{\xi})^{2}}{2\bar{r}}}=- \frac{Ua^{2}}{4}\mathrm{erfc}\Big{(}\frac{Ua-\mathbf{m}\cdot\mathbf{\xi}}{\sqrt{2 \bar{r}}}\Big{)} \tag{76}\]
where the last step follows from a simple change of variables. The second one, with a shift in the integration variable, is
\[\mathrm{IIB}=a\mathbb{E}_{\mathbf{\xi}}\int_{Ua-\mathbf{m}\cdot\mathbf{\xi}}^{+\infty} \frac{dz}{\sqrt{2\pi\bar{r}}}e^{-\frac{z^{2}}{2\bar{r}}}(z+\mathbf{m}\cdot\bm {\xi})=a\sqrt{\frac{\bar{r}}{2\pi}}\mathbb{E}_{\mathbf{\xi}}e^{-\frac{(Ua-\mathbf{ m}\cdot\mathbf{\xi})^{2}}{2\bar{r}}}+a\mathbb{E}_{\mathbf{\xi}}\mathbf{m}\cdot\mathbf{\xi} \,\mathrm{erfc}\Big{(}\frac{Ua-\mathbf{m}\cdot\mathbf{\xi}}{\sqrt{2\bar{r}}} \Big{)}\,. \tag{77}\]
Finally, with the same shift in the integration variable, we get a third contribution:
\[\mathrm{IIC}=-\frac{1}{2U}\mathbb{E}_{\mathbf{\xi}}\int_{Ua-\mathbf{m }\cdot\mathbf{\xi}}^{+\infty}\frac{dz}{\sqrt{2\pi\bar{r}}}e^{-\frac{z^{2}}{2\bar{r }}}(z^{2}+2z\mathbf{m}\cdot\mathbf{\xi}+(\mathbf{m}\cdot\mathbf{\xi})^{2})=-\frac{1}{2U }\sqrt{\frac{\bar{r}}{2\pi}}\mathbb{E}_{\mathbf{\xi}}(Ua+\mathbf{m}\cdot\mathbf{\xi})e ^{-\frac{(Ua-\mathbf{m}\cdot\mathbf{\xi})^{2}}{2\bar{r}}}\\ -\frac{1}{4U}\mathbb{E}_{\mathbf{\xi}}(\mathbf{m}\cdot\mathbf{\xi})^{2} \,\mathrm{erfc}\Big{(}\frac{Ua-\mathbf{m}\cdot\mathbf{\xi}}{\sqrt{2\bar{r}}}\Big{)} -\frac{\bar{r}}{4U}\mathbb{E}_{\mathbf{\xi}}\mathrm{erfc}\Big{(}\frac{Ua-\mathbf{ m}\cdot\mathbf{\xi}}{\sqrt{2\bar{r}}}\Big{)}\,. \tag{78}\]
Now, it remains to compute the last Gaussian integral:
\[\text{III}=\frac{1}{\beta}\mathbb{E}_{\boldsymbol{\xi}}\int_{-\infty}^{-Ua}\frac{dz}{\sqrt{2\pi\bar{r}}}e^{-\frac{(z-\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2\bar{r}}}\log\int_{-a}^{a}dx\,p(x)e^{-\frac{\beta U}{2}\left(x-\frac{z}{U}\right)^{2}}\,. \tag{79}\]
Thanks to the parity of \(p(x)\), if we perform the changes of variables \(z\mapsto-z\), \(\boldsymbol{\xi}\mapsto-\boldsymbol{\xi}\), \(x\mapsto-x\) we find that II=III. Hence we can finally recompose \(\psi\):
\[\psi=\frac{\bar{r}+\mathbf{m}^{2}}{2U}+2\text{II}=-\frac{Ua^{2}}{2}+\frac{1}{U }\sqrt{\frac{\bar{r}}{2\pi}}\mathbb{E}_{\boldsymbol{\xi}}(Ua-\mathbf{m}\cdot \boldsymbol{\xi})e^{-\frac{(Ua-\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2r}}+ \mathbb{E}_{\boldsymbol{\xi}}\frac{\bar{r}+(Ua-\mathbf{m}\cdot\boldsymbol{ \xi})^{2}}{2U}\text{erf}\Big{(}\frac{Ua-\mathbf{m}\cdot\boldsymbol{\xi}}{ \sqrt{2\bar{r}}}\Big{)}\,. \tag{80}\]
and the final form of the asymptotic free entropy is
\[\frac{\Phi}{\beta}\xrightarrow{\beta\to\infty}-\frac{\bar{r}C}{2 }+\frac{U(v-a^{2})}{2}-\frac{\mathbf{m}^{2}}{2}+\frac{\alpha(1-t)v}{2(1-C)}+ \frac{\Delta Cv}{2}-\frac{v^{2}+\lambda(v-1)^{2}}{4}+\frac{1}{U}\sqrt{\frac{ \bar{r}}{2\pi}}\mathbb{E}_{\boldsymbol{\xi}}(Ua-\mathbf{m}\cdot\boldsymbol{ \xi})e^{-\frac{(Ua-\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2r}}\\ +\mathbb{E}_{\boldsymbol{\xi}}\frac{\bar{r}+(Ua-\mathbf{m}\cdot \boldsymbol{\xi})^{2}}{2U}\text{erf}\Big{(}\frac{Ua-\mathbf{m}\cdot\boldsymbol {\xi}}{\sqrt{2\bar{r}}}\Big{)}+\frac{\alpha tv}{2}\int_{0}^{t}d\tau\frac{2C(v _{\tau}-m_{\tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^ {2}}\,. \tag{81}\]
The saddle point equations can be obtained by deriving the previous formula. The gradient w.r.t. \(\mathbf{m}\) yields:
\[\mathbf{m}=\mathbb{E}_{\boldsymbol{\xi}}\frac{\boldsymbol{\xi}}{U}\Big{[}-\sqrt{\frac{2\bar{r}}{\pi}}e^{-\frac{(Ua-\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2\bar{r}}}+(Ua-\mathbf{m}\cdot\boldsymbol{\xi})\text{erf}\Big{(}\frac{\mathbf{m}\cdot\boldsymbol{\xi}-Ua}{\sqrt{2\bar{r}}}\Big{)}\Big{]}\,. \tag{82}\]
The derivative w.r.t. \(\bar{r}\) gives the equation for \(C\):
\[C=\frac{1}{U}\mathbb{E}_{\boldsymbol{\xi}}\text{erf}\Big{(}\frac{Ua-\mathbf{ m}\cdot\boldsymbol{\xi}}{\sqrt{2\bar{r}}}\Big{)}\,. \tag{83}\]
Deriving w.r.t. \(U\) yields an equation for \(v\):
\[\frac{a^{2}-v}{2}=\frac{1}{U^{2}}\sqrt{\frac{\bar{r}}{2\pi}} \mathbb{E}_{\boldsymbol{\xi}}(Ua+\mathbf{m}\cdot\boldsymbol{\xi})e^{-\frac{(Ua -\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2r}}-\mathbb{E}_{\boldsymbol{\xi}} \Big{[}\frac{\bar{r}+(Ua-\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2U^{2}}- \frac{a}{U}(Ua-\mathbf{m}\cdot\boldsymbol{\xi})\Big{]}\text{erf}\Big{(}\frac{U a-\mathbf{m}\cdot\boldsymbol{\xi}}{\sqrt{2\bar{r}}}\Big{)}\,. \tag{84}\]
In all the previous equations \(\bar{r}\) and \(U\) must be considered as the following functions:
\[\bar{r} =\frac{\alpha(1-t)v}{(1-C)^{2}}+\Delta v+\alpha tv\int_{0}^{t}d \tau\left[\frac{2(v_{\tau}-m_{\tau}^{2})}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{ 2})C^{2}}+\frac{[v_{\tau}-1-2C(v_{\tau}-m_{\tau}^{2})]^{2}}{[1+(v_{\tau}-1)C-( v_{\tau}-m_{\tau}^{2})C^{2}]^{2}}\right] \tag{85}\] \[U =-\Delta C+v+\lambda(v-1)-\frac{\alpha(1-t)}{(1-C)}-\alpha t\int_ {0}^{t}d\tau\frac{2C(v_{\tau}-m_{\tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{ \tau}-m_{\tau}^{2})C^{2}}\,. \tag{86}\]
Equations (83) and (84) must be solved simultaneously at each iteration step for \(\mathbf{m}\); this yields a convergent algorithm for the whole system of equations.
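The following is a minimal numerical sketch of such a scheme for the first decimation step (\(t=0\)), assuming condensation on a single pattern and the uniform prior on \([-\sqrt{3},\sqrt{3}]\). It is not the exact nested scheme described above but a jointly damped iteration of (82)-(86); the expectation over \(\xi\) is approximated by Monte Carlo, and the damping factor, initialization, sample size and tolerance are illustrative choices.

```
import numpy as np
from scipy.special import erf

# Damped fixed-point iteration of the zero-temperature RS equations (82)-(86)
# at the first decimation step (t = 0), uniform prior on [-sqrt(3), sqrt(3)],
# condensation on a single pattern. E_xi[...] is estimated by Monte Carlo.
rng = np.random.default_rng(0)
a = np.sqrt(3.0)
xi = rng.uniform(-a, a, size=200_000)

def fixed_point(alpha, Delta, lam, m0=0.9, damp=0.5, n_iter=2000, tol=1e-9):
    m, C, v = m0, 0.1, 1.0
    for _ in range(n_iter):
        # (85)-(86) at t = 0: the memory integrals over tau vanish
        r_bar = alpha * v / (1.0 - C) ** 2 + Delta * v
        U = -Delta * C + v + lam * (v - 1.0) - alpha / (1.0 - C)
        s = U * a - m * xi                      # recurring combination U a - m.xi
        g = np.exp(-s ** 2 / (2.0 * r_bar))
        e = erf(s / np.sqrt(2.0 * r_bar))
        C_new = np.mean(e) / U                  # Eq. (83)
        rhs = (np.sqrt(r_bar / (2.0 * np.pi)) * np.mean((U * a + m * xi) * g) / U ** 2
               - np.mean(((r_bar + s ** 2) / (2.0 * U ** 2) - a * s / U) * e))
        v_new = a ** 2 - 2.0 * rhs              # Eq. (84) solved for v
        m_new = np.mean(xi / U * (-np.sqrt(2.0 * r_bar / np.pi) * g - s * e))  # Eq. (82)
        delta = max(abs(m_new - m), abs(C_new - C), abs(v_new - v))
        m = (1.0 - damp) * m + damp * m_new
        C = (1.0 - damp) * C + damp * C_new
        v = (1.0 - damp) * v + damp * v_new
        if delta < tol:
            break
    return m, C, v

print(fixed_point(alpha=0.05, Delta=0.05, lam=0.0))
```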
To evaluate the free entropy at the solution of the previous system of saddle point equations we first enforce equation (84), obtaining:
\[\frac{\Phi}{\beta}\xrightarrow{\beta\to\infty}-\frac{\bar{r}C}{2 }+\frac{U(v-a^{2})}{2}-\frac{\mathbf{m}^{2}}{2}+\frac{\alpha(1-t)v}{2(1-C)}+ \frac{\Delta Cv}{2}-\frac{v^{2}+\lambda(v-1)^{2}}{4}+\frac{1}{U}\sqrt{\frac{ \bar{r}}{2\pi}}\mathbb{E}_{\boldsymbol{\xi}}(Ua-\mathbf{m}\cdot\boldsymbol{\xi} )e^{-\frac{(Ua-\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2r}}\\ +\mathbb{E}_{\boldsymbol{\xi}}\frac{\bar{r}+(Ua-\mathbf{m}\cdot \boldsymbol{\xi})^{2}}{2U}\text{erf}\Big{(}\frac{Ua-\mathbf{m}\cdot\boldsymbol{ \xi}}{\sqrt{2\bar{r}}}\Big{)}+\frac{\alpha tv}{2}\int_{0}^{t}d\tau\frac{2C(v_{ \tau}-m_{\tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^{2} }\,. \tag{87}\]
Using the equation for \(C\) (83) we see that the first term in the first line and the first term in the second line can be summed together. After some algebra, imposing also (82) we get
\[\frac{\Phi}{\beta}\xrightarrow{\beta\to\infty}\frac{\bar{r}C}{2}+ \frac{\mathbf{m}^{2}}{2}+\frac{\alpha(1-t)v}{2(1-C)}+\frac{\Delta Cv}{2}-\frac{ v^{2}+\lambda(v-1)^{2}}{4}+\frac{\alpha tv}{2}\int_{0}^{t}d\tau\frac{2C(v_{\tau}-m_{ \tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^{2}}\,. \tag{88}\]
Finally, inserting also (85) we get
\[\frac{\Phi}{\beta}=\frac{\alpha(1-t)v}{2(1-C)^{2}}+\Delta Cv-\frac{v^{2}+ \lambda(v-1)^{2}}{4}+\frac{\mathbf{m}^{2}}{2}+\frac{\alpha tv}{2}\int_{0}^{t}d \tau\frac{4C(v_{\tau}-m_{\tau}^{2})-(v_{\tau}-1)[1-(v_{\tau}-m_{\tau}^{2})C^{ 2}]}{[1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^{2}]^{2}}\,. \tag{89}\]
which surprisingly coincides with (70).
## 5 Phase diagrams for the first decimation step
The starting point of the decimation process is of crucial importance for its success. In fact, if we were to subtract an estimate \(\boldsymbol{\eta}\boldsymbol{\eta}^{\intercal}/\sqrt{N}\) from the observations \(\boldsymbol{Y}\) with an \(\boldsymbol{\eta}\) having negligible alignment with all the patterns, we would actually introduce further noise without decreasing the rank of the hidden matrix: decimation would be bound to fail.
At the first step (\(R=0\), i.e. \(t=0\)) the replica symmetric decimation free entropy is simply that of a Hopfield model with Gaussian noise:
\[\Phi(t=0):=\text{Extr}\Big{\{}\frac{rq+uv}{2}-\beta\sum_{\mu=1}^ {k}\frac{(m^{\mu})^{2}}{2}-\frac{\beta^{2}\Delta q^{2}}{4}-\frac{\alpha}{2} \left[\log\left(1-\beta(v-q)\right)-\frac{\beta q}{1-\beta(v-q)}\right] \tag{90}\] \[\quad+\beta\Big{(}\frac{\beta\Delta-1}{4}v^{2}-\frac{\lambda}{4}( 1-v)^{2}\Big{)}+\mathbb{E}_{Z,\boldsymbol{\xi}}\log\int dP_{\xi}(x)\exp\left( \left(Z\sqrt{r}+\beta\mathbf{m}\cdot\boldsymbol{\xi}\right)x-\frac{u+r}{2}x^{ 2}\right)\Big{\}}\,. \tag{91}\]
The set of fixed point equations then simplifies remarkably to
\[v=\mathbb{E}_{\boldsymbol{\xi}}\langle X^{2}\rangle_{t}\,,\quad m ^{\mu}=\mathbb{E}_{\xi}\xi\langle X\rangle_{t}\,,\quad q=\mathbb{E}_{ \boldsymbol{\xi}}\langle X\rangle_{t}^{2} \tag{92}\] \[r=\frac{\alpha\beta^{2}q}{(1-\beta(v-q))^{2}}+\beta^{2}\Delta q \,,\quad u=\beta\lambda(v-1)+\beta(1-\beta\Delta)v-\alpha\beta\frac{1-\beta(v -2q)}{(1-\beta(v-q))^{2}}\,. \tag{93}\]
where we have assumed condensation onto only one pattern.
Starting from these equations, one can specialize to the different zero temperature limits, which exhibit interesting features. For instance, in the left panel of Figure 1 we see how the zero temperature phase diagram changes as sparsity increases when \(\lambda\to\infty\) for the sparse Ising prior. It appears that sparsity enlarges the retrieval region and also increases the storage capacity. From the right panel we indeed see that the critical storage capacity in the noiseless limit \(\Delta=0\) diverges when \(\rho\to 0\). This observation can be turned into an analytical statement as follows. To begin with, we notice that
\[C=\frac{2(1-\rho)}{\sqrt{2\pi\bar{r}\rho}}e^{-\frac{V^{2}}{8r\rho}}+\frac{\rho }{\sqrt{2\pi\bar{r}\rho}}\left[e^{-\left(\frac{U/2+m}{\sqrt{2\rho}\rho}\right) ^{2}}+e^{-\left(\frac{U/2-m}{\sqrt{2\rho}\rho}\right)^{2}}\right]\xrightarrow{ \rho\to 0}0\,, \tag{94}\]
exponentially fast, and
\[\bar{r}\xrightarrow{\rho\to 0}v(\alpha+\Delta)\,. \tag{95}\]
As a consequence the equation (67) for \(U\) reduces to:
\[U=v+\lambda(v-1)-\alpha\quad\Rightarrow\quad v=\frac{U+\alpha+\lambda}{ \lambda+1}\,. \tag{96}\]
We argue that \(U\) is always positive, as it serves as a norm regulator on the estimator, and we verified this statement numerically. This implies that \(v\) is always strictly positive. Equation (68) can thus be rewritten as an equation for \(U\) that reads as:
\[\frac{U+\alpha+\lambda}{\lambda+1}=\frac{1}{\rho}-\frac{1-\rho}{\rho}\text{erf} \Big{(}\frac{U}{2\sqrt{2\rho\bar{r}}}\Big{)}-\frac{1}{2}\Big{[}\text{erf} \Big{(}\frac{U/2-m}{\sqrt{2\bar{r}\rho}}\Big{)}+\text{erf}\Big{(}\frac{U/2+m}{ \sqrt{2\bar{r}\rho}}\Big{)}\Big{]}\,. \tag{97}\]
The error function saturates exponentially fast to \(1\) when \(\rho\to 0\), and this entails
\[\frac{U+\alpha+\lambda}{\lambda+1}=1-\frac{1}{2}\Big{[}\text{erf}\Big{(}\frac{ U/2-m}{\sqrt{2\bar{r}\rho}}\Big{)}+\text{erf}\Big{(}\frac{U/2+m}{\sqrt{2\bar{ r}\rho}}\Big{)}\Big{]}+O\big{(}e^{-K/\rho}\big{)} \tag{98}\]
for some positive constant \(K\), and up to logarithmic corrections at the exponent in the remainder. The argument in the square brackets can go either to \(0\) or to \(2\) depending on the signs of the arguments in the error functions. However, the second possibility, that would correspond to \(U/2>|m|\), is not possible, since the l.h.s. cannot converge to \(0\) thanks to the positivity of \(U\). Hence, the only alternative we have is that \(U/2<|m|\), which is also verified numerically. This implies that the limiting equation for \(\rho\to 0\) appears as
\[\frac{U+\alpha+\lambda}{\lambda+1}=1\quad\Rightarrow\quad\lim_{\rho\to 0}U=1- \alpha\quad\Rightarrow\quad\lim_{\rho\to 0}v=1\,. \tag{99}\]
Finally, using the condition \(U/2<|m|\), the limit of the magnetization can be easily computed from (69):
\[m=\frac{1}{2}\Big{[}\text{erf}\Big{(}\frac{m-U/2}{\sqrt{2\bar{r}\rho}}\Big{)} +\text{erf}\Big{(}\frac{U/2+m}{\sqrt{2\bar{r}\rho}}\Big{)}\Big{]}\xrightarrow{ \rho\to 0}1\,. \tag{100}\]
The behaviour depicted so far of the variables \(m,C,v,\bar{r}\) and \(U\) has been verified numerically for various values of \(\lambda\), \(\alpha\) and \(\Delta\).
In Figure 2 we plot the phase diagram for a continuous uniform prior supported on \([-\sqrt{3},\sqrt{3}]\) with \(\lambda=0\). We verified that once a magnetization \(m\neq 0\) is a solution to the fixed point equations, it is also thermodynamically stable, namely its free entropy is automatically bigger than that of the \(m=0\) solution, contrary to what happens for the discrete priors discussed above. The dashed line here does not signal a proper
Figure 1: **Left panel**: Phase diagram for the first step of decimation in the case of sparse Ising prior. The lines show the zero temperature phase diagram for different values of the sparsity parameter \(\rho\) (using \(\lambda\to\infty\)). Dashed lines plot the storage capacity as a function of \(\Delta\). Solid lines signal the thermodynamic transition from the glassy phase to the retrieval phase, when configurations with non vanishing magnetizations with the patterns become thermodynamically stable. The blue and red lines are for \(\rho=1\); cyan and magenta for \(\rho=0.1\); green and yellow for \(\rho=0.05\). **Right panel**: zero temperature storage capacity \(\alpha_{c}\) and critical thermodynamic storage \(\alpha_{F}\), in dashed blue and solid red lines respectively, versus sparsity \(\rho\) in the case \(\Delta=0\) (using \(\lambda\to\infty\)). This plot tracks the behaviour of the intersection of the dashed and solid lines with the \(x\)-axis in the left panel as \(\rho\) varies in \((0,1]\).
phase transition; rather, it marks the region of the phase space where the reconstruction of a single pattern outperforms the null estimator \(\mathbf{\eta}_{null}=0\) in mean square error, namely where:
\[\text{MSE}(\mathbf{\eta};\mathbf{\xi})=\frac{1}{N}\|\mathbf{\xi}-\langle\mathbf{\eta}\rangle\|^{ 2}\simeq 1+v-2m<1\,, \tag{101}\]
where the approximate equality holds true in the \(N\to\infty\) and \(\beta\to\infty\) limit. Notice that the performance of a Bayes-optimal estimator is always upper bounded by \(1\) thanks to the Nishimori identities, hence it is always at least as good as the null estimator.
## 6 Numerical tests
### Testing the saddle point equations with AMP
In order to test our theoretical predictions, we need an algorithm that is able to sample from the Boltzmann-Gibbs measure, or at least to estimate its marginals, namely the local magnetizations. Approximate message passing is an algorithm that serves this purpose. Furthermore, one needs to integrate the decimation scheme into it. The resulting algorithm is called _decimated AMP_ (see Algorithm 1); it first appeared informally in [56], and was then refined in [57].
It is possible to derive a suitable AMP from the set of belief propagation equations for the Boltzmann-Gibbs measure:
\[\hat{m}^{t}_{(ij)\to i}(x_{i}) \propto\int dx_{j}\hat{m}^{t}_{j\to(ij)}(x_{j})\exp\Big{[}\frac{ \beta}{\sqrt{N}}Y_{ij}x_{i}x_{j}-\frac{\beta(1+\lambda)}{2N}x_{i}^{2}x_{j}^{2 }\Big{]} \tag{102}\] \[m^{t+1}_{i\to(ij)}(x_{i}) \propto dP_{\xi}(x_{i})\exp\Big{(}\frac{\beta\lambda x_{i}^{2}}{2 }\Big{)}\prod_{k\neq i,j}\hat{m}^{t}_{(ki)\to i}(x_{i})\,, \tag{103}\]
by expanding in \(N\) and keeping the leading order. The resulting algorithm, which takes as input an appropriate
Figure 2: Zero temperature phase diagram for uniform prior supported on \([-\sqrt{3},\sqrt{3}]\) and \(\lambda=0\). The solid line represents the thermodynamic phase transition. Below it, probability is dominated by those ’retrieval’ states that have a non vanishing Mattis magnetization with one pattern. The dashed blue line represents a performance transition: below it the mean configuration of the Boltzmann-Gibbs measure has a better performance in reconstructing the pattern than the null estimator \(\mathbf{\eta}_{null}=0\).
initialization and the data, reads:
\[\mathbf{x}^{t+1}=f(\mathbf{A}^{t},\mathbf{B}^{t})\,,\quad\mathbf{v}^ {t+1}=\partial_{a}f(\mathbf{A}^{t},\mathbf{B}^{t}) \tag{104}\] \[\mathbf{A}^{t}=\frac{\beta}{\sqrt{N}}\mathbf{Y}\mathbf{x}^{t}- \frac{\beta^{2}}{N}\mathbf{x}^{t-1}\circ(\mathbf{Y}^{\circ 2}\mathbf{v}^{t})\] (105) \[\mathbf{B}^{t}=\frac{\beta}{N}\big{(}(1-\mathbf{Y}^{\circ 2}) \mathbf{v}+\|\mathbf{x}^{t}\|^{2}\big{)}+\frac{\beta\lambda}{N}\sum_{i=1}^{N} \big{(}v_{i}^{t}+(x_{i}^{t})^{2}-1\big{)} \tag{106}\]
where constants are summed element/component-wise, \(\circ\) is the Hadamard entry-wise product (or power), and as denoisers we have chosen the local means
\[f(a,b)=\frac{\int dP_{\xi}(x)x\exp(ax-\frac{bx^{2}}{2})}{\int dP_{\xi}(y)\exp( ay-\frac{by^{2}}{2})} \tag{107}\]
that are also applied component-wise to vectors. We denote this algorithm in a compact way by \(\text{AMP}(\mathbf{Y},\mathbf{x}^{0},\mathbf{v}^{0})\), and it is run until the marginals stabilize with a certain tolerance. The above AMP is used to estimate the first and second moment marginals of the Boltzmann-Gibbs measure: \(x_{i}^{\infty}\simeq\langle x_{i}\rangle\), \(v_{i}^{\infty}\simeq\langle x_{i}^{2}\rangle-\langle x_{i}\rangle^{2}\). Of course the very same algorithm can be run on the set of modified observations \(\mathbf{Y}_{R}\) in (16), which is accessible to the statistician at every decimation step.
```
Require: \(N\), \(P\) or \(\alpha\), \(\mathbf{Y}\), \(\boldsymbol{\xi}\), \(\epsilon\)
  \(\mu\leftarrow 1\)
  while \(\mu\leq P\) do
    \(\mathbf{g}\leftarrow\mathcal{N}(0,1_{N})\)
    \(\mathbf{x}^{0}\leftarrow\sqrt{1-\epsilon^{2}}\,\mathbf{g}+\epsilon\boldsymbol{\xi}^{\mu}\)
    \(\mathbf{v}^{0}\leftarrow 1-0.9\,(\mathbf{x}^{0})^{\circ 2}\)
    \(\langle\boldsymbol{\eta}^{\mu}\rangle_{R=\mu-1},\ \langle(\boldsymbol{\eta}^{\mu})^{\circ 2}\rangle_{R=\mu-1}-\langle\boldsymbol{\eta}^{\mu}\rangle_{R=\mu-1}^{\circ 2}\leftarrow\text{AMP}(\mathbf{Y}_{R=\mu-1},\mathbf{x}^{0},\mathbf{v}^{0})\)
    \(\mathbf{Y}_{R=\mu}\leftarrow\mathbf{Y}_{R=\mu-1}-\frac{\langle\boldsymbol{\eta}^{\mu}\rangle_{R=\mu-1}\langle\boldsymbol{\eta}^{\mu}\rangle_{R=\mu-1}^{\intercal}}{\sqrt{N}}\)
    \(\mu\leftarrow\mu+1\)
  end while
  Return \((\langle\boldsymbol{\eta}^{\mu}\rangle_{R=\mu-1},\ \langle(\boldsymbol{\eta}^{\mu})^{\circ 2}\rangle_{R=\mu-1})_{1\leq\mu\leq P}\).
```
**Algorithm 1** Decimated AMP (DAMP)
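A minimal sketch of the inner AMP routine defined by (104)-(107) is given below, for a generic discrete prior specified by its atoms and weights (for instance a sparse Ising prior with atoms \(0,\pm 1\), or \(\pm 1/\sqrt{\rho}\) in the unit-variance convention, and weights \(1-\rho,\rho/2,\rho/2\)); the number of iterations and the convergence tolerance are illustrative. DAMP then calls this routine on \(\mathbf{Y}_{R}\) at each decimation step and subtracts the resulting rank-one estimate, as in Algorithm 1.

```
import numpy as np

def make_denoiser(atoms, weights):
    # Posterior-mean denoiser (107) for a discrete prior sum_k w_k delta_{a_k}:
    # returns f(A, B) = <x> and its A-derivative <x^2> - <x>^2, component-wise.
    atoms = np.asarray(atoms, float)
    weights = np.asarray(weights, float)
    def f_and_df(A, B):
        logw = (np.log(weights)[None, :] + np.outer(A, atoms)
                - 0.5 * np.outer(B, atoms ** 2))
        logw -= logw.max(axis=1, keepdims=True)     # numerical stabilization
        w = np.exp(logw)
        w /= w.sum(axis=1, keepdims=True)
        mean = w @ atoms
        return mean, w @ atoms ** 2 - mean ** 2
    return f_and_df

def amp(Y, x0, v0, beta, lam, f_and_df, n_iter=200, tol=1e-7):
    # AMP iteration (104)-(106), run until the marginals stabilize.
    N = Y.shape[0]
    Y2 = Y ** 2                                     # Hadamard square Y^{o2}
    x, v, x_old = x0.copy(), v0.copy(), np.zeros_like(x0)
    for _ in range(n_iter):
        A = beta / np.sqrt(N) * (Y @ x) - beta ** 2 / N * x_old * (Y2 @ v)   # (105)
        B = (beta / N * ((1.0 - Y2) @ v + np.sum(x ** 2))
             + beta * lam / N * np.sum(v + x ** 2 - 1.0))                     # (106)
        x_new, v_new = f_and_df(A, B)                                         # (104)
        if np.linalg.norm(x_new - x) / np.sqrt(N) < tol:
            x, v = x_new, v_new
            break
        x_old, x, v = x, x_new, v_new
    return x, v
```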
It is a known fact that in the Hopfield model AMP needs to be initialized sufficiently close to one of the patterns in order to converge, and here we observe the same behavior from the first step of decimation until the last.
Figure 3: Mean Square Error of decimation in the case of sparse Ising priors: theory versus Decimated AMP algorithm. The red solid curves are the expected pattern MSE predicted by theory as a function of the decimation time (i.e. the number of decoded patterns). The blue data points and error bars are obtained by running DAMP over \(n=300\) independent instances. \(N=1500\), \(\lambda=0\) in all plots. **Left panel**: \(\rho=1\), \(\alpha=0.03\) namely \(P=45\), \(\Delta=0.08\) and \(\beta=10\). **Middle panel**: \(\rho=0.2\), \(\alpha=0.04\) namely \(P=60\), \(\Delta=0.09\) and \(\beta=8\). **Right panel**: \(\rho=0.15\), \(\alpha=0.06\) namely \(P=90\), \(\Delta=0.1\) and \(\beta=8\).
Hence DAMP is not suitable as an inference algorithm, since it needs an informative initialization, whose correlation with the sought pattern is \(\epsilon\) in Algorithm 1. Nevertheless, DAMP can be used as a tool to verify that our replica computations are correct and that decimation is able to retrieve all the patterns, i.e. that the procedure does not corrupt the data too much along the way.
In Figure 3 we plot the predicted theoretical curves of the expected MSE on the reconstruction on the single pattern
\[\mathbb{E}\text{MSE}(\mathbf{\xi}^{\mu};\mathbf{\eta}^{\mu})=\frac{1}{N} \|\mathbf{\xi}^{\mu}-\langle\mathbf{\eta}^{\mu}\rangle_{t|tP=\mu-1}\|^{2}\simeq 1+q_{t}-2m _{t} \tag{108}\]
in red, where the subscript \(t\) indicates that we are at decimation time \(t\). The blue data points and error bars are obtained from an average over 300 instances of DAMP run on independently generated data. We considered different values of the sparsity, and the regularization parameter \(\lambda\) was always set to 0. In every case the theoretical curve reproduces accurately the behaviour of the pattern MSE, yielding a good confirmation of our RS theory.
### Expected decimation performance
In this section, we compare the expected denoising performance of decimation with the typical performance of a Rotation Invariant Estimator (RIE) introduced in [49]. A RIE is characterized by the fact that it provides an estimate of the original matrix \(\mathbf{\xi}\mathbf{\xi}^{T}\) that shares the eigenbasis of the data matrix \(\mathbf{Y}\). Once the eigenbasis is fixed, one only has to produce an estimate of the spectrum based on that of \(\mathbf{Y}\). As such, the RIE is a purely spectral estimator and it does not exploit the prior knowledge on the signal components. Among the possible RIEs, the one that acts optimally on the spectrum of \(\mathbf{Y}\) is
\[\hat{\mathbf{\lambda}}=\mathbf{\lambda}_{\mathbf{Y}}-2\Delta\mathcal{H}[ \rho_{\mathbf{Y}}](\mathbf{\lambda}_{\mathbf{Y}}) \tag{109}\]
where \(\hat{\mathbf{\lambda}}\) and \(\mathbf{\lambda}_{\mathbf{Y}}\) are the vectors of eigenvalues of the estimate and of \(\mathbf{Y}/\sqrt{N}\) respectively, and \(\mathcal{H}[\rho_{\mathbf{Y}}]\) is the Hilbert transform of the spectral density of \(\mathbf{Y}/\sqrt{N}\).
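A schematic implementation of (109) is sketched below: the data matrix is diagonalized, the Hilbert transform of the spectral density is estimated directly from the empirical spectrum with a small imaginary regularizer \(\eta\) (a standard numerical shortcut, not prescribed by [49]), and the matrix is rebuilt in the same eigenbasis.

```
import numpy as np

def rie_denoise(Y, Delta, eta=None):
    # Rotation Invariant Estimator (109): keep the eigenbasis of Y / sqrt(N) and
    # shrink each eigenvalue by 2*Delta times the Hilbert transform of the spectral
    # density, estimated from the empirical spectrum with a small regularizer eta.
    N = Y.shape[0]
    lam, V = np.linalg.eigh(Y / np.sqrt(N))
    if eta is None:
        eta = 5.0 / np.sqrt(N)                      # heuristic smoothing scale
    diff = lam[:, None] - lam[None, :]
    hilbert = np.mean(diff / (diff ** 2 + eta ** 2), axis=1)   # ~ H[rho_Y](lam)
    lam_hat = lam - 2.0 * Delta * hilbert
    return (V * lam_hat) @ V.T
```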
We shall measure the performance of an estimator \(\mathbf{S}\), whose eigenvalues are of order 1 by convention, with the matrix MSE:
\[\text{mMSE}(\mathbf{S};\mathbf{\xi})=\frac{1}{N}\mathbb{E}\Big{\|} \mathbf{S}-\frac{\mathbf{\xi}\mathbf{\xi}^{\intercal}}{\sqrt{NP}}\Big{\|}_{F}^{2}\,, \tag{110}\]
where the matrix norm is the Frobenius norm. The estimator produced by decimation is thus
\[\mathbf{S}_{\text{dec}}:=\sum_{\mu=1}^{P}\frac{\langle\mathbf{\eta}^ {\mu}\rangle_{R=\mu-1}\langle\mathbf{\eta}^{\mu}\rangle_{R=\mu-1}^{\intercal}}{ \sqrt{NP}} \tag{111}\]
In order to make the comparison we need to connect the mMSE predicted by the theory for the decimation estimator with the definition (110), namely to re-express the latter in terms of the order parameters of the decimation free entropies. This can be done as follows, leveraging the assumption (19). By expanding the square in the mMSE definition evaluated at \(\mathbf{S}_{\text{dec}}\) we recognize three main contributions:
\[\frac{1}{N^{2}P}\sum_{i,j=1}^{N}\sum_{\mu,\nu=1}^{P}\mathbb{E}[\xi_{i}^{\mu}\xi_{j}^{\mu}\xi_{i}^{\nu}\xi_{j}^{\nu}]=1+\alpha+o_{N}(1) \tag{112}\]
\[\frac{1}{N^{2}P}\sum_{i,j=1}^{N}\sum_{\mu,\nu=1}^{P}\mathbb{E}[\xi_{i}^{\mu}\langle\eta_{j}^{\mu}\rangle\xi_{i}^{\nu}\langle\eta_{j}^{\nu}\rangle] \tag{113}\]
\[\frac{1}{N^{2}P}\sum_{i,j=1}^{N}\sum_{\mu,\nu=1}^{P}\mathbb{E}[\langle\eta_{i}^{\mu}\rangle\langle\eta_{j}^{\mu}\rangle\langle\eta_{i}^{\nu}\rangle\langle\eta_{j}^{\nu}\rangle] \tag{114}\]
where we dropped the subscripts in the Gibbs brackets for convenience. While the first contribution can be computed right away using the properties of the prior, the other two require some extra effort. Concerning (113) we have:
\[\begin{split}\frac{1}{N^{2}P}\sum_{i,j=1}^{N}\sum_{\mu,\nu=1}^{P} \mathbb{E}[\xi_{i}^{\mu}\langle\eta_{j}^{\mu}\rangle\xi_{i}^{\nu}\langle\eta_{ j}^{\nu}\rangle]&=\frac{1}{N^{2}P}\sum_{i,j=1}^{N}\sum_{\mu,\nu=1}^{P} \big{[}\delta_{\mu\nu}\xi_{i}^{\mu}\langle\eta_{j}^{\mu}\rangle\xi_{i}^{\mu} \langle\eta_{j}^{\mu}\rangle+\delta_{ij}\mathbb{E}(\xi_{i}^{\mu})^{2}\langle \eta_{i}^{\nu}\rangle^{2}\big{]}=\\ &=\frac{1}{P}\sum_{\mu=1}^{P}(m^{\mu})^{2}+\frac{\alpha}{P}\sum_ {\mu=1}^{P}q^{\mu}+o_{N}(1)\end{split} \tag{115}\]
where we have enforced (19) and \(q^{\mu}\) and \(m^{\mu}\) are the overlap and Mattis magnetization respectively coming from the \(\mu\)-th decimation step. Let us now turn to (114). Using similar arguments one can argue that:
\[\frac{1}{N^{2}P}\sum_{i,j=1}^{N}\sum_{\mu,\nu=1}^{P}\mathbb{E}[\langle\eta_{i}^ {\mu}\rangle\langle\eta_{j}^{\mu}\rangle\langle\eta_{i}^{\nu}\rangle\langle \eta_{j}^{\nu}\rangle]=\frac{1}{P}\sum_{\mu=1}^{P}(q^{\mu})^{2}+\alpha\Big{(} \frac{1}{P}\sum_{\mu=1}^{P}q^{\mu}\Big{)}^{2}+o_{N}(1) \tag{116}\]
Therefore, collecting all the contributions one gets the asymptotic prediction:
\[\text{mMSE}(\mathbf{S}_{\text{dec}};\mathbf{\xi})\simeq\frac{1}{P}\sum_{\mu=1}^{P} \big{(}1+(q^{\mu})^{2}-2(m^{\mu})^{2}\big{)}+\alpha\Big{(}1-\frac{1}{P}\sum_{ \mu=1}^{P}q^{\mu}\Big{)}^{2}\,. \tag{117}\]
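The prediction (117) is straightforward to evaluate from the sequences of order parameters produced by the decimation theory; a small helper doing exactly that is sketched below.

```
import numpy as np

def decimation_mmse(q, m, alpha):
    # Asymptotic matrix MSE (117) from the per-step overlaps q^mu and
    # Mattis magnetizations m^mu.
    q, m = np.asarray(q, float), np.asarray(m, float)
    return np.mean(1.0 + q ** 2 - 2.0 * m ** 2) + alpha * (1.0 - np.mean(q)) ** 2
```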
In Figure 4 we compare the performance of the RIE, in green, against the theoretical performance predicted for decimation, in red; the blue data points are obtained using the estimator produced by decimation (DAMP). As we can see, there is good agreement between DAMP and the theory, and both outperform the RIE, as expected. The RIE appears more robust to both noises (a) and (b), tuned by \(\Delta\) and \(\alpha\) respectively. On the contrary, the performance of decimation deteriorates quickly as soon as we leave the retrieval region of the phase diagrams in Figures 1-2, and the amount of noise it can bear is strongly affected by the nature of the signal (sparse Ising or continuous). However, one must bear in mind that RIEs are suitable only for matrix denoising, and no information is reconstructed on the signal factor \(\mathbf{\xi}\). Moreover, we notice that the performance of the RIE does not change appreciably from the left to the right panel (\(\rho=1\) to \(\rho=0.15\)), which is consistent with its purely spectral nature. In fact, the empirical spectral distribution of \(\mathbf{\xi}\mathbf{\xi}^{\intercal}/\sqrt{NP}\) always converges to a Marchenko-Pastur law because of the completely factorized prior on the elements of \(\mathbf{\xi}\). Hence, the small changes from the left to the right panel are mostly due to the slight increase in the noise level \(\Delta\) and in the aspect ratio (or load) \(\alpha\).
Figure 4: Matrix MSE as a function of \(\Delta\) for sparse Ising priors with various sparsities. In green, the denoising performance of a RIE, obtained by averaging over 30 independent samples. Error bars, corresponding to one standard deviation, are too small to be seen. In red, the performance predicted for an algorithm implementing decimation. The blue data points are obtained by averaging over 30 DAMP outputs, run on independently generated data. Error bars correspond to one standard deviation. In all cases \(\lambda=0\), \(\beta=8\) and \(N=1500\). **Left panel**: \(\rho=1\), \(\alpha=0.03\) namely \(P=45\) and \(\Delta=0.08\). **Middle panel**: \(\rho=0.2\), \(\alpha=0.07\) namely \(P=105\) and \(\Delta=0.09\). **Right panel**: \(\rho=0.15\), \(\alpha=0.07\) namely \(P=105\) and \(\Delta=0.1\).
### A ground state oracle for sparse Ising priors
Our ground state oracle is based on an iterated simulated annealing (SA) routine that can be found in Algorithm 2, which is a refinement of the one in [48].
```
Require: \(N\), \(\mathbf{Y}\), threshold, \(\beta_{\max}\in\mathbb{R}\), niter \(\in\mathbb{N}\), maxr \(\in\mathbb{N}\), restarts \(\in\mathbb{N}\)
  ity \(\leftarrow 0\)
  found \(\leftarrow\) False
  while ity \(<300\) and found == False do
    stop \(\leftarrow 0\)
    \(k\leftarrow 0\)
    \(\beta\leftarrow 0\)
    \(\mathbf{s}\leftarrow\) random sample from \(\prod_{i=1}^{N}P_{\xi}\)
    ity \(\leftarrow\) ity \(+1\)
    if ity \(+\) restarts \(>\) maxr then
      return \(\mathbf{s}\), ity
    end if
    if ity \(\%20=0\) then
      threshold \(\leftarrow\) threshold \(\cdot\,0.9975\)
    end if
    while \(k<\) niter do
      \(k\gets k+1\)
      \(\beta\leftarrow 1+\frac{k}{\text{niter}}\cdot\beta_{\max}\)
      \(\mathbf{h}\leftarrow\frac{\mathbf{Y}}{\sqrt{N}}\mathbf{s}\)
      \(V\leftarrow\frac{\|\mathbf{s}\|^{2}}{N}+\frac{\lambda}{N}(\|\mathbf{s}\|^{2}-1)\)
      \(\mathbf{Z}_{\text{loc}}\leftarrow(1-\rho)\mathbf{1}+\rho\cosh(\beta\mathbf{h})e^{-\frac{\beta V}{2}}\)   (scalar functions are applied component-wise to vectors)
      sample ss from \(\exp\left(\beta\mathbf{h}\cdot(\cdot)-\frac{\beta V}{2}\right)/\mathbf{Z}_{\text{loc}}\)
      if \(\|\mathbf{s}-\text{ss}\|<10^{-3}\) then
        \(\mathbf{s}\leftarrow\) ss
        stop \(\leftarrow\) stop \(+1\)   (updates become negligible)
        if stop \(>5\) then
          if \(-E(\mathbf{s}\mid\mathbf{Y})>\) threshold then
            return \(\mathbf{s}\), ity
          else
            break   (wrong energy, try again)
          end if
        end if
      else
        stop \(\leftarrow 0\)
        \(\mathbf{s}\leftarrow\) ss
      end if
    end while
  end while
```
**Algorithm 2** Simulated annealing (SA)
The energy landscape at the various steps of decimation is very similar to that of the Hopfield model. Consequently, algorithms that search for minima get frequently stuck in metastable states, which have a low overlap with the patterns. SA is not immune to this phenomenon. Therefore, we equip our SA routine with an acceptance criterion of the configuration output by the algorithm, that is based on the computation of the energy:
\[-E(\mathbf{s}\mid\mathbf{Y}_{R})=\frac{1}{2\sqrt{N}}\mathbf{s}^{\intercal} \mathbf{Y}_{R}\mathbf{s}-\frac{\|\mathbf{s}\|^{4}}{4N}-\frac{\lambda}{4N}\big{(} \|\mathbf{s}\|^{2}-1\big{)}^{2} \tag{118}\]
which is nothing but the energy of our model at the \(R\)-th decimation step. Notice that this quantity is accessible to the Statistician, and it is thus legitimate to use it as an input for a candidate algorithm. In Algorithm 2, niter is the maximum number of temperature updates we allow, while maxr is the maximum number of restarts allowed, counting also the restarts coming from previous pattern searches. The reason why we introduced this additional control is that, typically, when a bad configuration is accepted as a pattern estimate by mistake,
the ensuing searches for other patterns require even more restarts. The above SA routine has to be combined with decimation: once a configuration is accepted as a pattern, the observations are modified as \(\mathbf{Y}\leftarrow\mathbf{Y}-\frac{\mathbf{s}\mathbf{s}^{\intercal}}{\sqrt{N}}\) and the routine is restarted. In order to make sure we really find patterns, we run the whole algorithm (SA plus decimation) multiple times, typically five, and then we accept the output that required the smallest number of restarts to be produced. This procedure is costly and, as already noticed in [48], it requires an exponential number of restarts.
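For concreteness, the two core ingredients of the routine, the energy (118) used in the acceptance criterion and one annealing sweep of the inner loop of Algorithm 2, are sketched below for a sparse Ising prior with atoms \(0,\pm 1\); the random seed is arbitrary, and the outer restart and threshold logic follows the pseudocode above.

```
import numpy as np

rng = np.random.default_rng(1)

def minus_energy(s, Y, lam):
    # -E(s | Y_R) of Eq. (118) at the current decimation step.
    N = s.size
    return (s @ (Y @ s) / (2.0 * np.sqrt(N))
            - np.sum(s ** 2) ** 2 / (4.0 * N)
            - lam / (4.0 * N) * (np.sum(s ** 2) - 1.0) ** 2)

def sa_sweep(s, Y, beta, lam, rho):
    # One annealing sweep of Algorithm 2: compute the local fields and resample
    # every component from its local measure over the atoms {-1, 0, +1}.
    N = s.size
    h = Y @ s / np.sqrt(N)
    V = np.sum(s ** 2) / N + lam / N * (np.sum(s ** 2) - 1.0)
    w_plus = 0.5 * rho * np.exp(beta * h - 0.5 * beta * V)
    w_minus = 0.5 * rho * np.exp(-beta * h - 0.5 * beta * V)
    w_zero = (1.0 - rho) * np.ones(N)
    Z_loc = w_plus + w_minus + w_zero
    u = rng.random(N)
    return np.where(u < w_plus / Z_loc, 1.0,
                    np.where(u < (w_plus + w_minus) / Z_loc, -1.0, 0.0))
```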
Algorithm 2 suffers from the same issues as the one in [48]. For instance, the overall decimation procedure still requires a number of restarts that is exponential in \(N\). However, the presence of sparsity introduces further non-trivial complications. In fact, the signal components are no longer constrained to the hypercube, which allows for fluctuations in the norm of the outputs that translate into fluctuations of the energies of the patterns. Specifically, the sparser the signal, the wider the gap between the highest and the lowest pattern energy. These fluctuations can challenge the energy-based restarting criterion in our SA routine, which can thus mistake a metastable state for a pattern.
Furthermore, one observes that when too few patterns are stored or remain in \(\mathbf{Y}\), it is harder for the SA routine to find them. If, for instance, we only have one pattern left, the Hebbian matrix \(\boldsymbol{\xi}\boldsymbol{\xi}^{\intercal}\), which is supposed to attract the \(\mathbf{x}\)-configurations towards the pattern, has only a fraction \(\rho^{2}\) of non-zero components. This gives rise to a large number of configurations with degenerate energy, close to \(0\). The energy landscape thus looks like a golf course, flat almost everywhere except for a pit corresponding to the remaining pattern. From our numerical experiments, this effect seems to persist also when more than one, but still few, patterns are stored. See Figure 5.
### Reversed decimation
In all the tests we have run, the performance of decimation in reconstructing the patterns improves along the procedure itself. The last patterns are always better estimated than the first ones, and this supports the idea that decimation effectively decreases the pattern interference. In particular, it is clear that the quality of reconstruction of one pattern depends on the previous "history" of the process.
Once the procedure has exhausted the patterns, one can run it again backwards, keeping the last half of the patterns, which were reconstructed with higher accuracy. As illustrated in Figure 6, this improves the reconstruction performance also for the first half of the patterns. One can then re-iterate the same procedure, keeping only the first \(1/2\) and the last \(1/4\) of the patterns, which are now the best reconstructed ones. This in
Figure 5: Energy landscape exploration of the Simulated Annealing applied to sparse Ising priors. On the vertical axis we have the energy value as a function of the number of iterations (temperature updates) of SA on the horizontal axis. For all the three plots \(N=1500\), \(\alpha=0.01\) (namely only \(15\) patterns to be found), \(\Delta=0.05\) and \(\lambda=-0.08\). From the left to the right: \(\rho=1,0.3,0.15\). The patterns were reconstructed exactly in all three cases. SA immediately finds the patterns for low sparsities \(\rho\sim 1\). As soon as sparsity increases, a lot of configurations start to exhibit an almost vanishing energy (recall that the noise shifts this value). The dashed blue lines mark the highest and the lowest pattern energy. As we can see, the band they identify is narrow with low sparsity, and it becomes wider for higher values of sparsity due to more intense fluctuations.
turn leads to a further improvement in the reconstruction also for the middle patterns. This reasoning can be iterated ad libitum.
In Figure 6 we see how performance improves in the various rounds of decimation, and we compare it to the performance predicted by the rank-one formula, i.e. what we should have for any sub-linear rank (\(\alpha=0\), see Section 7). We see that, little by little, the performance approaches that of the rank-one formula.
## 7 Related works
### Unlearning and dreaming
As evident from Figure 1, without strong sparsity the storage capacity of the model is not very large, and the network is far from being able to store an over-complete basis of \(\mathbb{R}^{N}\). In an attempt to overcome this issue, one can pre-process the observation matrix with Hebbian unlearning [58, 59], with which decimation itself bears some similarity. Unlearning consists in iterating a zero temperature dynamics until convergence, which is likely to occur at a spurious state \(\mathbf{\eta}\) that is then removed from the observations, \(\mathbf{Y}\leftarrow\mathbf{Y}-\varepsilon\mathbf{\eta}\mathbf{\eta}^{\intercal}/\sqrt{N}\), with a small \(\varepsilon\). If repeated an appropriate number of times, unlearning acts on the energy landscape by penalizing spurious metastable states. This procedure has two fundamental parameters to be tuned: \(\varepsilon\) and the number of times \(D\) it is iterated [60]. If \(\varepsilon\) or \(D\) are too large, one risks removing the wanted patterns as well.
Apart from numerical evidence, there is little theoretical understanding of the unlearning procedure as illustrated above. However, there are other convenient iterative ways of modifying the Hebbian matrix [61, 62, 63, 64] that converge to the so called pseudo-inverse learning rule (or modifications of it) [65, 66, 67], which in turn is able to increase the storage capacity to \(\alpha_{c}=1\).
Despite the apparent similarities, the goal of decimation is very different from that of unlearning. Its aim is to find a pattern, not a metastable state, and to remove it completely (or almost completely) from \(\mathbf{Y}\), which amounts to setting \(\varepsilon=1\) (or close to \(1\)) above. Furthermore, it is worth stressing that, unlike classical unlearning,
Figure 6: Improvement in performance obtained re-iterating decimation for Rademacher prior. In this example \(\Delta=0.08\), \(\alpha=0.03\), \(\rho=1\) and \(\beta=10\). The blue line is the first run, where the expected MSE on the reconstruction of the single patterns decreases along decimation. The magenta curve is instead obtained by fixing the last half of pattern MSEs, and running decimation backwards. Starting from the magenta line, we obtained the green solid line by fixing the first half and the last quarter of MSEs, and then running decimation for finding the third quarter of MSEs. Finally, the red dashed line was obtained from the green line running decimation again, with fixed first quarter and last half of MSEs. The blue dashed line is the expected MSE predicted by the rank one formula. Coherently, the last decimation steps approach the rank-one formula MSE from above, because the interference noise has been almost completely eliminated, except for noise of decimation itself, that is responsible for the final small gap.
we have a theoretical control on decimation, namely we can track its behaviour step by step.
### Sub-linear rank
In a recent work [57] the authors discuss the denoising of large matrices in the same setting as ours, with a main focus on the case \(P=N^{\delta}\), \(\delta\in(0,1)\), i.e. a sub-linear rank regime. In the mentioned paper, it is stated that, as long as the prior on the \(N\times P\) matrix \(\boldsymbol{\xi}\) is completely factorized over the matrix elements, the mutual information between \(\boldsymbol{\xi}\) and the data is given by the rank-one replica formula for _any_ sub-linear rank regime, in agreement with [68]. Though not explicitly stated in our previous work [48], our findings indeed suggest the same result, as can be deduced from Section 3.2. In fact our free entropy, which is closely related to the mutual information between observations and signal, takes the same form for any \(P\) such that \(P/N\to 0\). Furthermore, for \(\alpha=0\) and \(\beta=1/\Delta\), the fixed point equations admit a self-consistent solution that satisfies the Nishimori identities, which suggests that Bayes-optimality is recovered. From the form of the free entropy (41), it is also evident that the effect of decimation is visible only for truly extensive rank. The reason is that, if we penalize a finite number of directions in a space of dimension growing to infinity, the system can easily find other favoured directions to condense into. In other words, the \(p^{\mu}(\mathbf{x})\)'s in (17) give a sub-extensive contribution that can be neglected in any sub-linear rank regime.
Another delicate point is the definition of DAMP. We stress that in (105) and (106) the presence of a high-rank spike inside \(\mathbf{Y}\) can induce non-trivial modifications both in \(\mathbf{A}\) and \(\mathbf{B}\). More specifically, it is known that, for instance, the Onsager reaction in (105) containing \(\mathbf{Y}^{\circ 2}\) has different asymptotically equivalent formulations. In the case of a Gaussian channel with a low-rank spike \(\mathbf{Y}^{\circ 2}\) can be replaced by an all-ones matrix. This is due to the fact that the rank of the spike is not large enough to induce modifications in the spectrum of the noise matrix. In the high-rank regime, on the contrary, the extensive rank starts to play a role and gives rise to important contributions in the reaction term. Moreover, the reaction term changes also along the decimation procedure, in which one further perturbs the data matrix with the high rank matrix of the decimation estimates \(\sum_{\mu=P-R+1}^{P}\frac{\boldsymbol{\eta}^{\mu}\boldsymbol{\eta}^{\mu+}}{ \sqrt{N}}\). Hence, the formulation in (105)-(106) turns out to be convenient. The low-rank regime is insensitive to the aforementioned changes.
Although we were not able to prove it, Figure 6 suggests that re-iterating decimation in a proper way could lead to a performance similar to that predicted by the low rank replica symmetric formula. One may be led to think that reversed decimation yields Bayes-optimal performance. This is however not true. In fact, in the high rank case the spike induces a non-negligible perturbation of the spectrum of the noise matrix that can be used to perform inference (this deformation is captured by the RIE for instance), especially for large \(\alpha\)'s, where decimation fails.
### Channel universality properties
Low-rank spiked models are known to fulfill channel universality [69, 70, 71], namely for any well-behaved \(P_{\text{out}}(y\mid x)\) and data generated with the rule
\[Y_{ij}\sim P_{\text{out}}\Big{(}\cdot\mid\sum_{\mu=1}^{P}\frac{\xi_{i}^{\mu} \xi_{j}^{\mu}}{\sqrt{N}}\Big{)} \tag{119}\]
the mutual information between the data \(\mathbf{Y}\) and \(\boldsymbol{\xi}\) can be computed through an equivalent Gaussian channel as in (1) with a properly tuned noise intensity \(\Delta\). The proof of this equivalence requires two concomitant behaviours, _i)_ universality in the likelihood, and _ii)_ universality in the quenched disorder (i.e. the law of the data \(\mathbf{Y}\)), and holds as long as \(P^{3}/\sqrt{N}\to 0\)[70]. Informally, the main idea is to expand \(P_{\text{out}}\Big{(}\cdot\mid\sum_{\mu=1}^{P}\frac{\xi_{i}^{\mu}\xi_{j}^{\mu} }{\sqrt{N}}\Big{)}\) around \(0\) in its second entry up to second order, since for low-rank spikes \(\sum_{\mu=1}^{P}\frac{\xi_{i}^{\mu}\xi_{j}^{\mu}}{\sqrt{N}}\) is small for any fixed couple of indices \(i,j\). On the contrary, in the high-rank setting the higher moments of the spike start to matter, meaning that the previous expansion fails, and universality breaks down.
In our mismatched setting one can still count on the universality of the likelihood _for a single decimation step_. In fact, here the Statistician assumes to observe a low-rank spike, that is they consider
\[Y_{ij}\sim P_{\text{out}}\Big{(}\cdot\mid\frac{x_{i}x_{j}}{\sqrt{N}}\Big{)} \tag{120}\]
whereas the data are generated through (1). The free entropy of the related model reads as
\[\frac{1}{N}\mathbb{E}[\log\mathcal{Z}_{R}-\sum_{i,j}\log P_{\text{out}}(Y_{ij} \mid 0)]=\frac{1}{N}\mathbb{E}\log\int dP_{\xi}(\mathbf{x})\exp\Big{[}\sum_{i,j} \Big{(}\log P_{\text{out}}\Big{(}Y_{ij}\mid\frac{x_{i}x_{j}}{\sqrt{N}}\Big{)} -\log P_{\text{out}}(Y_{ij}\mid 0)\Big{)}\Big{]} \tag{121}\]
where \(\sum_{i,j}\log P_{\text{out}}(Y_{ij}\mid 0)\) has been subtracted to have a proper scaling. From the above equation one readily realizes that an expansion up to second order of \(P_{\text{out}}\) yields the desired equivalent quadratic model, for which our computations hold. However, we stress that exploiting this universality produces errors of \(O(N^{-1/2})\). These errors accumulate along the \(P=O(N)\) steps of decimation resulting in potentially non-negligible deviations from the original model towards the end of the procedure.
## 8 Conclusion and outlooks
Building on the results of [48], we have extended the analysis of the decimation procedure to a wide class of priors on the matrix elements of the factors \(\boldsymbol{\xi}\) for symmetric matrix factorization. We provided exhaustive numerical evidence in support of our replica theory, via the introduction of DAMP, whose performance in pattern retrieval and matrix denoising matches the one predicted by the theory. Our numerical experiments confirm that decimation is a viable strategy for matrix factorization. In particular, as long as the first step is feasible, i.e. the procedure is started at a point of the phase diagram where there is a non-vanishing Mattis magnetization with one of the patterns, decimation is able to find all of them, up to a permutation. We stress again that DAMP is not an appropriate algorithm for inference, since it needs a strongly informative initialization. Nevertheless, in the case of sparse Ising priors, we were able to devise a ground state oracle that finds all the patterns in suitable regions of the phase space of the decimation neural network models. The latter still suffers from an exponential complexity: it needs an exponential number of restarts (in \(N\)) in order to find all the patterns and to correctly discard the spurious states it may get stuck in.
The ideas of reversed decimation and unlearning offer insightful perspectives. In fact, in order to increase the storage capacity of the neural networks, or equivalently to widen the region of the phase space where we can perform matrix factorization, one could pre-process the Hebbian interaction matrix using a local updating rule, such as the ones described in [63, 72]. In these works, besides the usual "forgetting" mechanism, the authors also consider a consolidation of the memories, which avoids the risk of corrupting the Hebbian interaction too much. This pre-processing could be combined with reversed decimation in order to obtain a better performing procedure that is also more robust to pattern interference.
Finally, in an upcoming work, we shall tackle the asymmetric problem, which is closer to practical applications. Here, the Statistician has to reconstruct two independent matrices \(\mathbf{F}\in\mathbb{R}^{N\times P}\) and \(\mathbf{X}\in\mathbb{R}^{P\times M}\) from the observations
\[\mathbf{Y}=\frac{1}{\sqrt{N}}\mathbf{F}\mathbf{X}+\sqrt{\Delta}\mathbf{Z}\in \mathbb{R}^{N\times M} \tag{122}\]
in the scaling limit \(N,M,P\rightarrow\infty\) with \(P/N=\alpha>0\) and \(P/M=\gamma>0\).
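A data-generation snippet for this setting is sketched below; the Gaussian entries used for \(\mathbf{F}\) and \(\mathbf{X}\) are only a placeholder prior.

```
import numpy as np

def sample_asymmetric_instance(N, M, P, Delta, seed=0):
    # Observations Y = F X / sqrt(N) + sqrt(Delta) Z as in (122);
    # Gaussian factors are used here only as a placeholder prior.
    rng = np.random.default_rng(seed)
    F = rng.standard_normal((N, P))
    X = rng.standard_normal((P, M))
    Z = rng.standard_normal((N, M))
    return F @ X / np.sqrt(N) + np.sqrt(Delta) * Z, F, X
```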
## Acknowledgments
We would like to thank Enzo Marinari and Federico Ricci-Tersenghi for their suggestions on the reversed decimation, Enzo Marinari and Marco Benedetti for discussions on unlearning, as well as Florent Krzakala, Lenka Zdeborova and Jean Barbier for many fruitful discussions on matrix factorization. MM acknowledges financial support by the PNRR-PE-AI FAIR project funded by the NextGeneration EU program. |
2309.09856 | Polarized Hardy--Stein identity | We prove the Hardy--Stein identity for vector functions in $L^p(\mathbb
R^d;\mathbb R^n)$ with $1<p<\infty$ and for the canonical pairing of two real
functions in $L^p(\mathbb R^d)$ with $2\le p<\infty$. To this end we propose a
notion of Bregman co-divergence and study the corresponding integral forms. | Krzysztof Bogdan, Michał Gutowski, Katarzyna Pietruska-Pałuba | 2023-09-18T15:16:38Z | http://arxiv.org/abs/2309.09856v1 | # Polarized Hardy-Stein identity
###### Abstract.
We prove the Hardy-Stein identity for vector functions in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) with \(1<p<\infty\) and for the canonical pairing of two real functions in \(L^{p}(\mathbb{R}^{d})\) with \(2\leq p<\infty\). To this end we propose a notion of Bregman co-divergence and study the corresponding integral forms.
Key words and phrases: Bregman co-divergence, Markovian semigroup, calculus in \(L^{p}\). 2010 Mathematics Subject Classification: Primary 46E35; Secondary 31C05. The research was supported by the NCN grant 2018/31/B/ST1/03818.
## 1. Introduction

Throughout the paper, \(P_{t}\), \(t\geq 0\), denotes the Markovian convolution semigroup on \(\mathbb{R}^{d}\) determined by a symmetric Levy measure \(\nu\); the precise assumptions are collected in Section 2. For real-valued \(f\in L^{p}(\mathbb{R}^{d})\) with \(1<p<\infty\), the Hardy-Stein identity expresses the \(L^{p}\) norm of \(f\) in terms of the Bregman divergence \(F_{p}\) of the semigroup evolution of \(f\), integrated against the Levy measure:
\[\int_{\mathbb{R}^{d}}|f(x)|^{p}\,\mathrm{d}x=\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}F_{p}\big(P_{t}f(x),P_{t}f(y)\big)\,\nu(x,y)\,\mathrm{d}y\,\mathrm{d}x\,\mathrm{d}t. \tag{1.1}\]
In this paper we prove an analogue of (1.1) for vector functions in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) with \(1<p<\infty\), as well as a polarized version for the canonical pairing of two real functions in \(L^{p}(\mathbb{R}^{d})\) with \(2\leq p<\infty\). To this end we propose a notion of Bregman co-divergence and study the corresponding integral forms. The appendices collect auxiliary results; in
particular Lemma B.5 gives the derivative of \(\int_{\mathbb{R}^{d}}P_{t}fP_{t}g|P_{t}g|^{p-2}\,\mathrm{d}x\). In Appendix C we discuss convexity properties related to the Bregman co-divergence \(\mathcal{J}_{p}\). In Appendix D we give a simpler proof of Theorem 4.1, but only for \(p\geq 3\). In Appendix F, complementing Appendix E, we discuss the \(L^{p}\) generator of the Gaussian semigroup. For technical reasons, our main results are restricted to a class of convolution Markovian semigroups on \(\mathbb{R}^{d}\), but some arguments are presented in a more general setting and further extensions are forthcoming. For instance, Gutowski [20] and Gutowski and Kwasnicki [21] extend (1.1) to general symmetric Markovian semigroups with nonlocal Dirichlet forms; see also Bogdan, Kutek, and Pietruska-Paluba [9] for a recent probabilistic approach, based on stochastic integrals.
**Acknowledgements.** We thank Wlodzimierz Bak, Bartomiej Dyda, Mateusz Kwasnicki, Agnieszka Kalamajska, and Bartomiej Wrobel for helpful discussions.
## 2. Preliminaries
### Standing assumptions
All the considered sets, functions, and measures are assumed Borel. For nonnegative functions \(f\) and \(g\), we write \(f(x)\asymp g(x)\) to indicate that there exist _constants_, i.e., numbers \(0<c\leq C<\infty\) such that \(cf(x)\leq g(x)\leq Cf(x)\) for all the considered arguments \(x\). Without warning, the symbols \(c\) or \(C\) may denote different constants even within a single line of text. The symbol \(:=\) means definition, e.g., \(a\lor b:=\max\{a,b\}\), \(a\wedge b:=\min\{a,b\}\), \(a_{+}:=a\lor 0\), and \(a_{-}:=(-a)\lor 0\). We denote the Euclidean norm of a vector \(z\in\mathbb{R}^{n}\) as \(|z|\) and the standard scalar product of vectors \(w\) and \(z\) in \(\mathbb{R}^{n}\) as \((w,z)\) or \(w\cdot z\). The unit sphere in \(\mathbb{R}^{n}\) centered at the origin is denoted by \(\mathbb{S}^{n-1}\). As usual, \(\|f\|_{L^{q}(\mathbb{R}^{d})}\) denotes the \(L^{q}(\mathbb{R}^{d})\) norm of the (extended real-valued) function \(f\), \(1\leq q\leq\infty\). More specifically, \(\|f\|_{L^{q}(\mathbb{R}^{d})}:=\big{(}\int_{\mathbb{R}^{d}}|f(x)|^{q}\,\mathrm{ d}x\big{)}^{1/q}\) for \(1\leq q<\infty\), where \(\mathrm{d}x\) refers to the Lebesgue measure on \(\mathbb{R}^{d}\) and \(\|f\|_{L^{\infty}(\mathbb{R}^{d})}:=\operatorname{ess\,sup}|f|\).
Let \(d=1,2,\ldots\). Consider a symmetric, absolutely continuous Levy measure \(\nu\) on the Euclidean space \(\mathbb{R}^{d}\). Thus, \(\nu(\mathrm{d}z)=\nu(z)\,\mathrm{d}z\), where \(\nu\colon\mathbb{R}^{d}\setminus\{0\}\to(0,\infty)\), \(\nu(-z)=\nu(z)\), \(z\in\mathbb{R}^{d}\setminus\{0\}\), and
\[\int_{\mathbb{R}^{d}}\big{(}|z|^{2}\wedge 1\big{)}\,\nu(z)\,\mathrm{d}z<\infty.\]
The corresponding Levy-Khinchine exponent is
\[\psi(\xi):=\int_{\mathbb{R}^{d}}\left(1-\cos(\xi\cdot x)\right)\nu(x)\,\mathrm{ d}x,\quad\xi\in\mathbb{R}^{d}. \tag{2.1}\]
We further assume the following Hartman-Wintner condition on \(\psi\) (and \(\nu\)):
\[\lim_{|\xi|\to\infty}\frac{\psi(\xi)}{\log|\xi|}=\infty. \tag{2.2}\]
In particular, \(\int_{\mathbb{R}^{d}}\nu(z)\,\mathrm{d}z=\infty\). This gives rise to a convolution semigroup of probability densities \(p_{t}\) by the Levy-Khintchine formula, or Fourier inversion, as follows:
\[p_{t}(x):=(2\pi)^{-d}\int_{\mathbb{R}^{d}}e^{-i\xi\cdot x}e^{-t\psi(\xi)}\, \mathrm{d}\xi,\quad t>0,\ x\in\mathbb{R}^{d}. \tag{2.3}\]
The function \(p_{t}(x)\) is continuous and attains its maximum at \(x=0\), which is
\[p_{t}(0)=(2\pi)^{-d}\int_{\mathbb{R}^{d}}e^{-t\psi(\xi)}\,\mathrm{d}\xi,\quad t>0.\]
By (2.2), \(p_{t}(0)\) is finite for every \(t>0\) and, by the Dominated Convergence Theorem, \(p_{t}(0)\) converges to zero as \(t\to\infty\), so \(\left\|p_{t}\right\|_{L^{\infty}(\mathbb{R}^{d})}\to 0\).
We shall also write \(p_{t}(x,y):=p_{t}(y-x)\) and \(\nu(x,y):=\nu(y-x)\). Note that \(p_{t}(x,y)\) is a transition density of a pure-jump Levy stochastic process \(\{X_{t},t\geq 0\}\) in \(\mathbb{R}^{d}\) with Levy-Khintchine exponent \(\psi\) (see Sato [29]) and \(\nu(x,y)\) is the kernel of the corresponding Dirichlet form; see, e.g., Fukushima, Oshima, and Takeda [19] and (2.24). (In the following discussion, the process does not play a significant role.) Encouraged by [8], we also assume the following technical conditions:
( **P1)** \[p_{t}(x,y)/t\leq c\nu(x,y),\quad t>0,\ x,y\in\mathbb{R}^{d},\]
with some constant \(c\), and
( **P2)** \[p_{t}(x,y)/t\to\nu(x,y)\ \text{as}\ t\to 0^{+},\ x,y\in\mathbb{R}^{d}.\]
For instance, the transition density corresponding to the fractional Laplacian satisfies (**P1**) and (**P2**); see examples provided by Bogdan, Grzywny, and Ryznar [7, Corollary 23] and Cygan, Grzywny, and Trojan [16, Proof of Theorem 6]. The conditions are convenient in limiting procedures based on the Dominated Convergence Theorem, but they can certainly be relaxed, as evidenced by Appendix E, [20], [21], and [9].
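As a quick numerical illustration of (2.3) and of the limit in (**P2**), one can compute \(p_{t}\) by discretized Fourier inversion; the sketch below takes \(d=1\) and \(\psi(\xi)=|\xi|^{\alpha}\) (the fractional Laplacian case mentioned above), with grid parameters that are purely illustrative.

```
import numpy as np

# Discretized Fourier inversion of (2.3) in d = 1 for psi(xi) = |xi|^alpha.
alpha = 1.5
xi = np.linspace(-2000.0, 2000.0, 2 ** 17)
dxi = xi[1] - xi[0]
x = np.array([0.5, 1.0, 2.0])

def p_t(t):
    # p_t(x) = (2 pi)^{-1} int e^{-i xi x} e^{-t psi(xi)} dxi  (real by symmetry)
    integrand = np.exp(-t * np.abs(xi) ** alpha)
    return np.array([np.sum(np.cos(xi * xx) * integrand) * dxi / (2.0 * np.pi)
                     for xx in x])

for t in [0.05, 0.02, 0.01, 0.005]:
    print(t, p_t(t) / t)    # the ratios stabilize as t -> 0+, cf. (P2)
```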
### Elementary functions and inequalities
Throughout we use the notation
\[a^{\langle\kappa\rangle}:=\left|a\right|^{\kappa}\operatorname{sgn}a=a|a|^{ \kappa-2},\quad a,\kappa\in\mathbb{R},\]
where \(0^{\langle\kappa\rangle}:=0\) and, as usual, \(\operatorname{sgn}0=0\), \(0^{0}:=1\), \(0^{\kappa}:=\infty\) for \(\kappa<0\), and \(0\cdot\infty:=0\). Note that
\[(|x|^{\kappa})^{\prime}=\kappa x^{\langle\kappa-1\rangle}\quad\text{if $x\in \mathbb{R}$ and $\kappa>1$ or $x\in\mathbb{R}\setminus\{0\}$ and $\kappa\in\mathbb{R}$.}\]
Furthermore,
\[\left(x^{\langle\kappa\rangle}\right)^{\prime}=\kappa|x|^{\kappa-1}\quad \text{if $x\in\mathbb{R}$ and $\kappa>1$ or $x\in\mathbb{R}\setminus\{0\}$ and $\kappa\in\mathbb{R}$.}\]
This has a vector counterpart: for \(\kappa>0\), we let
\[z^{\langle\kappa\rangle}:=|z|^{\kappa-1}z,\quad z\in\mathbb{R}^{n}, \tag{2.4}\]
again with the convention \(0^{\langle\kappa\rangle}:=0\). Note that
\[\nabla|z|^{\kappa}=\kappa z^{\langle\kappa-1\rangle}\quad\text{if $z\in \mathbb{R}^{n}$ and $\kappa>1$ or $z\in\mathbb{R}^{n}\setminus\{0\}$ and $\kappa\in\mathbb{R}$.} \tag{2.5}\]
Furthermore, the Jacobi matrix \(J_{\langle\kappa\rangle}\) for the mapping \(z\mapsto z^{\langle\kappa\rangle}\) equals
\[J_{\langle\kappa\rangle}(z)=|z|^{\kappa-1}\left((\kappa-1)\left(\frac{z}{|z|} \otimes\frac{z}{|z|}\right)+\mathrm{Id}\right)\in\mathbb{R}^{n}\times\mathbb{ R}^{n}\quad\text{ if $z\in\mathbb{R}^{n}\setminus\{0\}$} \tag{2.6}\]
and we let \(J_{\langle\kappa\rangle}(0):=0\).
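The closed form (2.6) is easy to check against finite differences; a short numerical sanity check, with an arbitrary test point and exponent, is sketched below.

```
import numpy as np

def power_map(z, kappa):
    # z^{<kappa>} = |z|^{kappa - 1} z, cf. (2.4), with 0^{<kappa>} = 0
    r = np.linalg.norm(z)
    return np.zeros_like(z) if r == 0.0 else r ** (kappa - 1.0) * z

def jacobian_formula(z, kappa):
    # Closed form (2.6) for the Jacobian of z -> z^{<kappa>} at z != 0
    r = np.linalg.norm(z)
    u = z / r
    return r ** (kappa - 1.0) * ((kappa - 1.0) * np.outer(u, u) + np.eye(z.size))

rng = np.random.default_rng(3)
z, kappa, h = rng.standard_normal(4), 2.7, 1e-6
numeric = np.column_stack([
    (power_map(z + h * e, kappa) - power_map(z - h * e, kappa)) / (2.0 * h)
    for e in np.eye(4)])
print(np.max(np.abs(numeric - jacobian_formula(z, kappa))))  # small: (2.6) matches
```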
In the following, unless otherwise specified, we consider exponents \(p\in(1,\infty)\).
**Definition 2.1**.: The _Bregman divergence_\(\mathcal{F}_{p}\colon\mathbb{R}^{n}\times\mathbb{R}^{n}\to(0,\infty)\) is given by
\[\mathcal{F}_{p}(w,z):=|z|^{p}-|w|^{p}-pw^{\langle p-1\rangle}\cdot(z-w) \tag{2.7}\]
and the _symmetrized Bregman divergence_ is
\[\mathcal{H}_{p}(w,z):=\frac{1}{2}\left(\mathcal{F}_{p}(w,z)+\mathcal{F}_{p}(z, w)\right)=\frac{p}{2}(z-w)\cdot\left(z^{\langle p-1\rangle}-w^{\langle p-1 \rangle}\right). \tag{2.8}\]
For instance,
\[\mathcal{F}_{2}(w,z)=\mathcal{H}_{2}(w,z)=|z-w|^{2},\quad w,z\in\mathbb{R}^{n}. \tag{2.9}\]
Note that \(\mathcal{F}_{p}(w,z)=|z|^{p}\) if \(w=0\), but \(\mathcal{F}_{p}(w,z)=(p-1)|w|^{p}\) if \(z=0\). Of course, the mapping \(\mathbb{R}^{n}\ni z\mapsto|z|^{p}\) is convex, since \(p>1\). Its second-order Taylor remainder is \(\mathcal{F}_{p}\), so \(\mathcal{F}_{p}\geq 0\) and \(\mathcal{H}_{p}\geq 0\). Also, if \(Q\) is an \(n\times n\) orthogonal matrix, then
\[\mathcal{F}_{p}(Qw,Qz)=\mathcal{F}_{p}(w,z). \tag{2.10}\]
For notational convenience in what follows, we also let
\[\mathcal{G}_{p}(w,z):=|z-w|^{2}(|w|\vee|z|)^{p-2},\quad z,w\in\mathbb{R}^{n}. \tag{2.11}\]
We further introduce the second-order Taylor remainders of the vector functions \(\mathbb{R}^{n}\ni z\mapsto z^{\langle\kappa\rangle}\in\mathbb{R}^{n}\) for \(\kappa>1\). More precisely, we define \(\mathcal{F}_{\langle\kappa\rangle}:\mathbb{R}^{n}\times\mathbb{R}^{n}\to \mathbb{R}^{n}\) by
\[\mathcal{F}_{\langle\kappa\rangle}(w,z):=z^{\langle\kappa\rangle}-w^{\langle \kappa\rangle}-J_{\langle\kappa\rangle}(w)(z-w),\quad\ w,z\in\mathbb{R}^{n}.\]
Of course, the mapping \(\mathbb{R}\ni x\mapsto x^{\langle\kappa\rangle}\in\mathbb{R}\) is in general not convex and \(F_{\langle\kappa\rangle}\) changes sign. In fact, \(F_{\langle\kappa\rangle}(-a,-b)=-F_{\langle\kappa\rangle}(a,b)\).
The scalar versions of the above functions (for \(n=1\)) are denoted \(F_{\kappa}\) (see Introduction), \(H_{\kappa}\), \(G_{\kappa}\), and \(F_{\langle\kappa\rangle}\), respectively. In particular,
\[F_{\langle\kappa\rangle}(a,b)=b^{\langle\kappa\rangle}-a^{\langle\kappa \rangle}-\kappa|a|^{\kappa-1}(b-a),\quad a,b\in\mathbb{R}.\]
The following estimates are quite important for our analysis.
**Lemma 2.1**.: _Let \(p\in(1,\infty)\). We have_
\[\mathcal{F}_{p}(w,z)\asymp\mathcal{G}_{p}(w,z),\quad w,z\in\mathbb{R}^{n}, \tag{2.12}\]
_and_
\[\mathcal{H}_{p}(w,z)\asymp\mathcal{G}_{p}(w,z),\quad w,z\in\mathbb{R}^{n}. \tag{2.13}\]
Of course, (2.13) follows from (2.12). It seems that (2.12) was first proved in Pinchover, Tertikas, and Tintarev [28, (2.19)], but one of the one-sided bounds was given earlier in Shafrir [30, Lemma 7.4] for \(p\geq 2\) and the other in Barbatis, Filippas, and Tertikas [2, Lemma 3.1]. The one-dimensional case, \(F_{p}(a,b)\asymp(b-a)^{2}(|a|\vee|b|)^{p-2}\), \(a,b\in\mathbb{R}\), is crucial in [3, Lemma 6], with [3, (10), (12)] therein being a part of the comparison for \(n=2\) (the setting of (2.12) is essentially two-dimensional). Optimal constants are known in some cases: for the lower bound of \(F_{p}\) with \(p\in(1,2)\) and for the upper bound with \(p\in(2,\infty)\); see [8] and [30, Lemma 7.4]. The quadratic factor in (2.11) is the reason why the Bregman divergence \(\mathcal{F}_{p}\) is integrable against Levy measures in (1.3), which is crucial in the analysis of nonlocal equations of parabolic and elliptic type;
see the applications of [6, (2.14)] therein. See also [8] for a martingale setting. In passing we note another important estimate:
\[\mathcal{F}_{p}(w,z)\asymp\left|z^{\langle p/2\rangle}-w^{\langle p/2\rangle} \right|^{2},\quad w,z\in\mathbb{R}^{n}. \tag{2.14}\]
We refer to [8, Subsection 1.3] for a discussion of the estimate when \(n=1\); the case of arbitrary \(n=1,2,\ldots\) can be found in Huang [22]. Further estimates concerning functions \(\mathcal{F}_{p}\), \(\mathcal{F}_{\langle\kappa\rangle}\), and their cousins are collected and proved in Appendix A.
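The comparisons (2.12)–(2.14) are also easy to probe numerically: sampling random pairs \(w,z\) and recording the extreme values of the ratios \(\mathcal{F}_{p}/\mathcal{G}_{p}\) and \(\mathcal{F}_{p}/|z^{\langle p/2\rangle}-w^{\langle p/2\rangle}|^{2}\) shows them bounded away from \(0\) and \(\infty\). The sketch below is an illustration only (it does not recover the optimal constants mentioned above); the values \(p=3.5\) and \(n=3\) are arbitrary choices.

```python
import numpy as np

def spow(z, kappa):
    nz = np.linalg.norm(z)
    return np.zeros_like(z) if nz == 0.0 else nz**(kappa - 1.0) * z

def F_p(w, z, p):
    return np.linalg.norm(z)**p - np.linalg.norm(w)**p - p * spow(w, p - 1.0) @ (z - w)

def G_p(w, z, p):
    return np.linalg.norm(z - w)**2 * max(np.linalg.norm(w), np.linalg.norm(z))**(p - 2.0)

rng, p, n = np.random.default_rng(2), 3.5, 3
r1, r2 = [], []
for _ in range(10000):
    w, z = rng.normal(size=n), rng.normal(size=n)
    r1.append(F_p(w, z, p) / G_p(w, z, p))                                    # ratio in (2.12)
    r2.append(F_p(w, z, p) / np.linalg.norm(spow(z, p/2) - spow(w, p/2))**2)  # ratio in (2.14)
print(min(r1), max(r1))   # both ranges stay bounded away from 0 and infinity
print(min(r2), max(r2))
```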
### Semigroups, generators, and forms
The semigroup is defined by
\[P_{t}f(x):=\int_{\mathbb{R}^{d}}f(y)p_{t}(x,y)\,\mathrm{d}y,\quad t>0,\]
and by \(P_{0}f(x):=f(x)\), where \(x\in\mathbb{R}^{d}\) and \(f\colon\mathbb{R}^{d}\to\mathbb{R}\) is nonnegative or integrable. We briefly mention a well known probability connection: \(P_{t}f(x)=\mathbb{E}_{x}f(X_{t})\), where \((X_{t},\mathbb{P}_{x})_{t\geq 0,x\in\mathbb{R}^{d}}\) is our Levy process, considered as a Markov process with transition density on \(\mathbb{R}^{d}\) given by \(p_{t}(\cdot,\cdot)\), and \(\mathbb{E}_{x}\) is the expectation with respect to the distribution \(\mathbb{P}_{x}\) of the process starting from \(x\).
Since \(\int_{\mathbb{R}^{d}}p_{t}(x,y)\,\mathrm{d}y=1\) for \(t>0\), \(x\in\mathbb{R}^{d}\) (conservativeness of the semigroup \(P_{t}\)), and \(p_{t}\) is symmetric in \(x,y\), Fubini-Tonelli theorem yields
\[\int_{\mathbb{R}^{d}}P_{t}f(x)\,\mathrm{d}x=\int_{\mathbb{R}^{d}}f(x)\, \mathrm{d}x. \tag{2.15}\]
Recall that \(1<p<\infty\). It is well known that \((P_{t})_{t\geq 0}\) is a strongly continuous Markovian semigroup of symmetric operators on \(L^{p}(\mathbb{R}^{d})\); see for example [29, E 34.10]. For all \(x\in\mathbb{R}^{d}\) and \(f\in L^{p}(\mathbb{R}^{d})\), by (2.3) and Holder's inequality with exponents \(p\) and \(q=p/(p-1)\), we get
\[|P_{t}f(x)| =\left|\int_{\mathbb{R}^{d}}f(y)p_{t}(x,y)\,\mathrm{d}y\right| \leq\|f\|_{L^{p}(\mathbb{R}^{d})}\left(\int_{\mathbb{R}^{d}}p_{t}(x,y)^{q}\, \mathrm{d}y\right)^{1/q}\] \[\leq\|f\|_{L^{p}(\mathbb{R}^{d})}\left(\sup_{x,y\in\mathbb{R}^{d }}p_{t}(x,y)^{q-1}\right)^{1/q}=\|f\|_{L^{p}(\mathbb{R}^{d})}\left\|p_{t}\right\| _{L^{\infty}(\mathbb{R}^{d})}^{1/p}\xrightarrow[t\to\infty]{}0. \tag{2.16}\]
We also need the following maximal inequality of Stein for symmetric Markovian semigroups; see Stein [31, p. 73] and recall that \(1<p<\infty\).
**Lemma 2.2** (Stein inequality).: _If \(f\in L^{p}(\mathbb{R}^{d})\), \(f^{*}(x):=\sup_{t\geq 0}|P_{t}f(x)|\), \(x\in\mathbb{R}^{d}\), then,_
\[\|f^{*}\|_{L^{p}(\mathbb{R}^{d})}\leq\frac{p}{p-1}\|f\|_{L^{p}(\mathbb{R}^{d} )}. \tag{2.17}\]
By (2.17) and (2.16), the semigroup is _strongly stable_ in \(L^{p}(\mathbb{R}^{d})\): If \(f\in L^{p}(\mathbb{R}^{d})\), then
\[\|P_{t}f\|_{L^{p}(\mathbb{R}^{d})}\to 0\text{ as }t\to\infty. \tag{2.18}\]
Indeed, since for every \(x\in\mathbb{R}^{d}\) we have \(|P_{t}f(x)|\to 0\) and \(|P_{t}f(x)|\leq f^{*}(x)\) with \(f^{*}\in L^{p}(\mathbb{R}^{d})\), we get \(\|P_{t}f\|_{L^{p}(\mathbb{R}^{d})}\to 0\) by the Dominated Convergence Theorem.
Let \(L\) be the generator of the semigroup \((P_{t})_{t\geq 0}\), when acting on \(L^{p}(\mathbb{R}^{d})\). Its natural domain, denoted \(\mathcal{D}_{p}(L)\), consists of those \(f\in L^{p}(\mathbb{R}^{d})\) for which there is a \(g\in L^{p}(\mathbb{R}^{d})\) such that \((P_{h}f-f)/h\to g\) in \(L^{p}(\mathbb{R}^{d})\) as \(h\to 0^{+}\); we then write \(Lf=g\).
We next discuss issues related to the \(L^{p}\)-differentiability of semigroups. (To make the exposition self-contained, we include a primer on the \(L^{p}\) calculus in Appendix B.) Thus, for \(f\in L^{p}(\mathbb{R}^{d})\) and \(t\geq 0\), we write \(u(t):=P_{t}f\). Of course, \(u(t)\in L^{p}(\mathbb{R}^{d})\). Furthermore, if \(f\in\mathcal{D}_{p}(L)\) then \(u^{\prime}(t)=LP_{t}f=P_{t}Lf=Lu(t)\), \(t\geq 0\). By Lemma B.3 with \(n=1\), we obtain the following result.
**Corollary 2.3**.: _Let \(f\in\mathcal{D}_{p}(L)\). If \(1<\kappa\leq p\) then:_
* \(|u(t)|^{\kappa}\) _is continuously differentiable in_ \(L^{p/\kappa}(\mathbb{R}^{d})\) _and_ (2.19) \[(|u(t)|^{\kappa})^{\prime}=\kappa u(t)^{\langle\kappa-1\rangle}u^{\prime}(t)= \kappa u(t)^{\langle\kappa-1\rangle}P_{t}Lf,\quad t\geq 0,\]
* \(u^{\langle\kappa\rangle}\) _is continuously differentiable in_ \(L^{p/\kappa}(\mathbb{R}^{d})\) _and_ (2.20) \[(u(t)^{\langle\kappa\rangle})^{\prime}=\kappa|u(t)|^{\kappa-1}u^{\prime}(t)= \kappa|u(t)|^{\kappa-1}P_{t}Lf,\quad t\geq 0.\]
Moreover, since \((P_{t})_{t\geq 0}\) is symmetric, it is an analytic semigroup on \(L^{p}(\mathbb{R}^{d})\) for \(p\in(1,\infty)\); see Liskevich and Perel'muter [25, Corollary 3.2]. Therefore, for all \(t>0\) and \(f\in L^{p}(\mathbb{R}^{d})\), \(\frac{\,\mathrm{d}}{\,\mathrm{d}t}P_{t}f=u^{\prime}(t)\) exists in \(L^{p}(\mathbb{R}^{d})\), so \(P_{t}f\in\mathcal{D}_{p}(L)\) and \(u^{\prime}(t)=LP_{t}f=Lu(t)\).
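The chain rules (2.19)–(2.20) can be visualized in an elementary finite-state analogue, where the semigroup is \(P_{t}=e^{tQ}\) for a symmetric rate matrix \(Q\) and the time derivative can be compared with a finite difference. The Python sketch below is only a toy illustration (the state space, rates, and \(\kappa\) are arbitrary choices, and this discrete model is not the Lévy setting of the paper).

```python
import numpy as np

rng = np.random.default_rng(9)
N, kappa, t, h = 5, 2.5, 0.7, 1e-6
nu = rng.uniform(0.1, 1.0, size=(N, N)); nu = (nu + nu.T) / 2; np.fill_diagonal(nu, 0.0)
Q = nu - np.diag(nu.sum(axis=1))                  # symmetric, conservative generator
lam, V = np.linalg.eigh(Q)
P = lambda s: V @ np.diag(np.exp(s * lam)) @ V.T  # P_s = exp(sQ)
spow = lambda a, k: np.sign(a) * np.abs(a)**k     # a^{<k>} = |a|^{k-1} a

f = rng.normal(size=N)
u = P(t) @ f                                      # u(t) = P_t f; here L is multiplication by Q

# (2.19): d/dt |P_t f|^kappa = kappa (P_t f)^{<kappa-1>} L P_t f, via central differences in t
lhs = (np.abs(P(t + h) @ f)**kappa - np.abs(P(t - h) @ f)**kappa) / (2 * h)
print(np.allclose(lhs, kappa * spow(u, kappa - 1.0) * (Q @ u), atol=1e-6))

# (2.20): d/dt (P_t f)^{<kappa>} = kappa |P_t f|^{kappa-1} L P_t f
lhs = (spow(P(t + h) @ f, kappa) - spow(P(t - h) @ f, kappa)) / (2 * h)
print(np.allclose(lhs, kappa * np.abs(u)**(kappa - 1.0) * (Q @ u), atol=1e-6))
```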
As a special case of (1.3), in what follows we consider the integral form
\[\mathcal{E}_{p}[u]=\frac{1}{p}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}F_{p}( u(x),u(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y. \tag{2.21}\]
Of course, the form is well-defined (possibly infinite) for every \(u:\mathbb{R}^{d}\to\mathbb{R}\) because \(F_{p}\geq 0\). By the symmetry of \(\nu\),
\[\mathcal{E}_{p}[u] = \frac{1}{p}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}H_{p}(u(x),u (y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\] \[= \frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(u(y)-u(x)) \left(u(y)^{\langle p-1\rangle}-u(x)^{\langle p-1\rangle}\right)\nu(x,y)\, \mathrm{d}x\mathrm{d}y. \tag{2.22}\]
The natural domain of \(\mathcal{E}_{p}\) is
\[\mathcal{D}(\mathcal{E}_{p}):=\{u\in L^{p}(\mathbb{R}^{d}):\ \mathcal{E}_{p}[u]<\infty\}. \tag{2.23}\]
When \(p=2\), we get the usual Dirichlet form of the semigroup,
\[\mathcal{E}_{2}[u]=\frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(u(y)- u(x))^{2}\nu(x,y)\,\mathrm{d}x\mathrm{d}y, \tag{2.24}\]
with domain \(\mathcal{D}(\mathcal{E}_{2})\). We write \(\mathcal{E}:=\mathcal{E}_{2}\).
For \(t>0\), \(u\in L^{p}(\mathbb{R}^{d})\), and \(v\in L^{q}(\mathbb{R}^{d})\), we define, as usual,
\[\mathcal{E}^{(t)}(u,v):=\frac{1}{t}\langle u-P_{t}u,v\rangle=\frac{1}{t}\int_ {\mathbb{R}^{d}}(u(x)-P_{t}u(x))v(x)\,\mathrm{d}x. \tag{2.25}\]
Here and below we use the following notation for the canonical paring:
\[\langle u,v\rangle:=\int_{\mathbb{R}^{d}}u(x)v(x)\,\mathrm{d}x.\]
The next result was established in [8, Lemma 7] for the fractional Laplacian, but since its proof requires only symmetry and the conditions (**P1**) and (**P2**), it applies verbatim in the present setting.
**Proposition 2.4**.: _Let \(p>1\). For every \(u\in L^{p}(\mathbb{R}^{d})\), we have_
\[\mathcal{E}_{p}[u]=\lim_{t\to 0}\mathcal{E}^{(t)}(u,u^{\langle p-1\rangle}). \tag{2.26}\]
_Furthermore,_
\[\mathcal{D}(\mathcal{E}_{p}) = \{u\in L^{p}(\mathbb{R}^{d}):\sup_{t>0}\mathcal{E}^{(t)}(u,u^{\langle p-1\rangle})<\infty\} \tag{2.27}\] \[= \{u\in L^{p}(\mathbb{R}^{d}):\text{ finite }\lim_{t\to 0}\mathcal{E}^{(t)}(u,u^{\langle p-1\rangle})\text{ exists}\}. \tag{2.28}\]
_For arbitrary \(u\colon\mathbb{R}^{d}\to\mathbb{R}\), we have_
\[\frac{4(p-1)}{p^{2}}\mathcal{E}[u^{\langle p/2\rangle}]\leq\mathcal{E}_{p}[u ]\leq 2\mathcal{E}[u^{\langle p/2\rangle}] \tag{2.29}\]
_and \(\mathcal{D}(\mathcal{E}_{p})=\mathcal{D}(\mathcal{E})^{\langle 2/p\rangle}:=\{v^{ \langle 2/p\rangle}:v\in\mathcal{D}(\mathcal{E})\}\). Finally, \(\mathcal{D}_{p}(L)\subset\mathcal{D}(\mathcal{E}_{p})\) and_
\[\mathcal{E}_{p}[u]=-\langle Lu,u^{\langle p-1\rangle}\rangle,\quad u\in \mathcal{D}_{p}(L). \tag{2.30}\]
The discussion extends to functions with values in \(\mathbb{R}^{n}\), \(n=1,2,\ldots\). Namely, let \(f_{1},\ldots,f_{n}\in L^{p}(\mathbb{R}^{d})\), so \(F:=(f_{1},\ldots,f_{n})\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\). We denote
\[P_{t}F:=(P_{t}f_{1},\ldots,P_{t}f_{n}),\quad t\geq 0, \tag{2.31}\]
thus \(P_{t}F\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), \(t\geq 0\). If, furthermore, \(f_{1},\ldots,f_{n}\in\mathcal{D}_{p}(L)\), then we define
\[LF:=(Lf_{1},\ldots,Lf_{n}). \tag{2.32}\]
Then, letting \(U(t):=P_{t}F\), \(t\geq 0\), we get \(U^{\prime}(t)=LU(t)\) and the following multidimensional extension of Corollary 2.3, an easy consequence of Lemma B.3.
**Corollary 2.5**.: _Let \(n=1,2,\ldots\), \(f_{1},\ldots,f_{n}\in\mathcal{D}_{p}(L)\), \(F:=(f_{1},\ldots,f_{n})\), and \(U(t)=P_{t}F\), \(t\geq 0\). If \(1<\kappa\leq p\), then for \(t\geq 0\),_
* \(|U(t)|^{\kappa}\) _is continuously differentiable in_ \(L^{p/\kappa}(\mathbb{R}^{d})\) _with_ \[\left(|U(t)|^{\kappa}\right)^{\prime}=\kappa U(t)^{\langle\kappa-1\rangle} \cdot LU(t),\]
* \(U(t)^{\langle\kappa\rangle}\) _is continuously differentiable in_ \(L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})\) _with_ \[\left(U(t)^{\langle\kappa\rangle}\right)^{\prime}=\left(J_{\langle\kappa \rangle}\circ U(t)\right)LU(t).\]
The following result will be useful in limiting procedures later on.
**Lemma 2.6**.: _[_8_, Lemma 6]_ _If nonnegative functions \(f,f_{k}\colon\mathbb{R}^{d}\to\mathbb{R}\), \(k=1,2,\ldots\), satisfy \(f_{k}\leq cf\) and \(f=\lim_{k\to\infty}f_{k}\), then \(\lim_{k\to\infty}\int f_{k}\,\mathrm{d}\mu=\int f\,\mathrm{d}\mu\) for each measure \(\mu\)._
## 3. Hardy-Stein identity
Below we work under the assumptions on \(\nu\) formulated in Subsection 2.1. We will extend (1.1) to arbitrary dimension \(n=1,2,\ldots\). We recall that the proof given in [1, Theorem 3.2] for \(n=1\) relies on approximations and pointwise calculus in \(\mathbb{R}^{d}\). Here, instead, we use a more synthetic differential calculus in \(L^{p}\).
**Theorem 3.1**.: _Let \(p>1\), \(n=1,2,\ldots\), and \(F=(f_{1},\ldots,f_{n})\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\). Then,_
\[\int_{\mathbb{R}^{d}}|F(x)|^{p}\,\mathrm{d}x=\int_{0}^{\infty}\int_{\mathbb{R }^{d}}\int_{\mathbb{R}^{d}}\mathcal{F}_{p}(P_{t}F(x),P_{t}F(y))\nu(x,y)\, \mathrm{d}x\mathrm{d}y\mathrm{d}t. \tag{3.1}\]
Proof.: Let first \(F=(f_{1},\ldots,f_{n})\in(\mathcal{D}_{p}(L))^{n}\) and \(0\leq t\leq T<\infty\). Then \(U(t):=P_{t}F\in(\mathcal{D}_{p}(L))^{n}\) and \(LP_{t}F=LU(t)=(LP_{t}f_{1},\ldots,LP_{t}f_{n})\). From Corollary 2.5, \(|U(t)|^{p}\) is continuously differentiable in \(L^{1}(\mathbb{R}^{d})\) and \((|U(t)|^{p})^{\prime}=pU(t)^{\langle p-1\rangle}\cdot LU(t)\). As \(f\mapsto\int_{\mathbb{R}^{d}}f\,\mathrm{d}x\) is a continuous linear functional on \(L^{1}(\mathbb{R}^{d})\),
\[\frac{\,\mathrm{d}}{\,\mathrm{d}t}\int_{\mathbb{R}^{d}}|U(t)|^{p} \,\mathrm{d}x = \int_{\mathbb{R}^{d}}\frac{\,\mathrm{d}}{\,\mathrm{d}t}|U(t)|^{p} \,\mathrm{d}x=\int_{\mathbb{R}^{d}}pU(t)^{\langle p-1\rangle}\cdot LU(t)\, \mathrm{d}x\] \[= \langle LU(t),pU(t)^{\langle p-1\rangle}\rangle. \tag{3.2}\]
Since \(LU(t)=\lim_{h\to 0^{+}}(P_{h}U(t)-U(t))/h\) strongly in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), \(U(t)^{\langle p-1\rangle}\) belongs to the (dual) space \(L^{\frac{p}{p-1}}(\mathbb{R}^{d};\mathbb{R}^{n})\), and the semigroup \((P_{t})_{t\geq 0}\) is conservative, we get
\[\langle LU(t),pU(t)^{\langle p-1\rangle}\rangle\] \[= \lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}pU(t)(x)^{ \langle p-1\rangle}\cdot(U(t)(y)-U(t)(x))\frac{p_{h}(x,y)}{h}\,\mathrm{d}x \mathrm{d}y\] \[= \lim_{h\to 0^{+}}\left[\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}pP_ {t}F(x)^{\langle p-1\rangle}\cdot(P_{t}F(y)-P_{t}F(x))\frac{p_{h}(x,y)}{h}\, \mathrm{d}x\mathrm{d}y\right.\] \[+\left.\frac{1}{h}\int_{\mathbb{R}^{d}}|P_{t}F(x)|^{p}\,\mathrm{d }x-\frac{1}{h}\int_{\mathbb{R}^{d}}P_{h}(|P_{t}F|^{p})(x)\,\mathrm{d}x\right]\] \[= -\lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} \mathcal{F}_{p}(P_{t}F(x),P_{t}F(y))\frac{p_{h}(x,y)}{h}\,\mathrm{d}x \mathrm{d}y\] \[= -\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{F}_{p}(P_{t}F (x),P_{t}F(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y. \tag{3.3}\]
The last equality (3.3) is justified by Lemma 2.6, the nonnegativity of \(\mathcal{F}_{p}\), and assumptions (**P1**), (**P2**). Summarizing,
\[\frac{\,\mathrm{d}}{\,\mathrm{d}t}\int_{\mathbb{R}^{d}}|U(t)|^{p}\mathrm{d}x= -\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{F}_{p}(P_{t}F(x),P_{t}F(y) )\nu(x,y)\,\mathrm{d}x\mathrm{d}y.\]
Since \(|U(t)|^{p}\) is continuously differentiable in \(L^{1}(\mathbb{R}^{d})\) on \([0,\infty)\) and the integration is a continuous linear functional on \(L^{1}(\mathbb{R}^{d})\), \(\int_{\mathbb{R}^{d}}|U(t)|^{p}\,\mathrm{d}x\) is continuously differentiable.
Integrating from \(0\) to \(T\) we obtain
\[\int_{\mathbb{R}^{d}}|F|^{p}\,\mathrm{d}x-\int_{\mathbb{R}^{d}}|U(T) |^{p}\,\mathrm{d}x = -\int_{0}^{T}\left(\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^{ d}}|U(t)|^{p}\,\mathrm{d}x\right)\,\mathrm{d}t\] \[= \int_{0}^{T}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{F} _{p}(P_{t}F(x),P_{t}F(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t.\]
We let \(T\to\infty\) and obtain \(\int_{\mathbb{R}^{d}}|U(T)|^{p}\,\mathrm{d}x\to 0\) from the strong stability (2.18).
We now relax the assumption \(f_{j}\in\mathcal{D}_{p}(L)\). Let \(F=(f_{1},\ldots,f_{n})\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) be arbitrary and let \(s>0\). Since \((P_{t})_{t\geq 0}\) is an analytic semigroup on \(L^{p}(\mathbb{R}^{d})\), \(P_{s}f_{j}\in\mathcal{D}_{p}(L)\) for all \(j=1,\ldots,n\), so \(U(s)\in(\mathcal{D}_{p}(L))^{n}\). By (3.1) and a change of variables,
\[\int_{\mathbb{R}^{d}}|U(s)|^{p}\,\mathrm{d}x=\int_{s}^{\infty}\int_{\mathbb{R }^{d}}\int_{\mathbb{R}^{d}}\mathcal{F}_{p}(P_{t}F(x),P_{t}F(y))\nu(x,y)\, \mathrm{d}x\mathrm{d}y\mathrm{d}t. \tag{3.4}\]
Let \(s\) decrease to \(0\). Since \(\mathcal{F}_{p}\geq 0\), the right-hand side of (3.4) increases to \(\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{F}_{p}(P_ {t}F(x),P_{t}F(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\). By the strong continuity of \((P_{t})_{t\geq 0}\) in \(L^{p}(\mathbb{R}^{d})\), \(P_{s}f_{j}\to f_{j}\), \(j=1,\ldots,n\), in \(L^{p}(\mathbb{R}^{d})\), so \(U(s)=P_{s}F\to F\) in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), in particular \(\left\|U(s)\right\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}^{p}\to\left\|F\right\| _{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}^{p}\). The proof is complete.
_Remark 3.2_.: Since \(\nu\) is symmetric, by (2.8) we get a symmetrized version of the Hardy-Stein identity for every \(F\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\):
\[\int\limits_{\mathbb{R}^{d}}|F|^{p}\,\mathrm{d}x=\frac{p}{2}\int\limits_{0}^{ \infty}\!\!\int\limits_{\mathbb{R}^{d}}\!\!\int\limits_{\mathbb{R}^{d}}(P_{t}F (y)-P_{t}F(x))\!\cdot\!\Big{(}P_{t}F(y)^{\langle p-1\rangle}-P_{t}F(x)^{\langle p -1\rangle}\Big{)}\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t.\]
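The proof of Theorem 3.1 uses only the symmetry, conservativeness, and strong stability of the semigroup, so its mechanism can be checked in a finite-state analogue where \(P_{t}=e^{tQ}\) for a symmetric rate matrix \(Q\), \(\nu(x,y)\) is the jump rate, and the integrals over \(\mathbb{R}^{d}\) become sums. The sketch below is a toy illustration only, not the setting of the theorem; it verifies numerically the finite-\(T\) step of the proof, \(\sum_{x}|F(x)|^{p}-\sum_{x}|P_{T}F(x)|^{p}=\int_{0}^{T}\sum_{x,y}\mathcal{F}_{p}(P_{t}F(x),P_{t}F(y))\nu(x,y)\,\mathrm{d}t\). For mean-zero \(F\) the left-hand side tends to \(\sum_{x}|F(x)|^{p}\) as \(T\to\infty\), in analogy with (3.1).

```python
import numpy as np

rng = np.random.default_rng(3)
N, n, p, T = 6, 2, 3.0, 40.0
nu = rng.uniform(0.1, 1.0, size=(N, N)); nu = (nu + nu.T) / 2; np.fill_diagonal(nu, 0.0)
Q = nu - np.diag(nu.sum(axis=1))                 # symmetric, conservative: Q @ 1 = 0
lam, V = np.linalg.eigh(Q)
P = lambda t: V @ np.diag(np.exp(t * lam)) @ V.T # P_t = exp(tQ)

def F_p(w, z):                                   # Bregman divergence (2.7) for vectors in R^n
    nw = np.linalg.norm(w)
    sw = nw**(p - 2.0) if nw > 0 else 0.0
    return np.linalg.norm(z)**p - nw**p - p * sw * (w @ (z - w))

F = rng.normal(size=(N, n))
F -= F.mean(axis=0)                              # mean zero, so P_t F -> 0 as t -> infinity

def dissipation(t):                              # sum_{x,y} F_p(P_tF(x), P_tF(y)) nu(x,y)
    U = P(t) @ F
    return sum(F_p(U[x], U[y]) * nu[x, y] for x in range(N) for y in range(N))

ts = np.linspace(0.0, T, 4001)                   # trapezoidal quadrature in t
vals = np.array([dissipation(t) for t in ts])
rhs = float(np.sum((vals[1:] + vals[:-1]) * np.diff(ts) / 2))
lhs = np.sum(np.linalg.norm(F, axis=1)**p) - np.sum(np.linalg.norm(P(T) @ F, axis=1)**p)
print(lhs, rhs)                                  # the two values agree up to the quadrature error
```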
## 4. Polarized Hardy-Stein identity
Having proved the Hardy-Stein identity for a vector of \(L^{p}(\mathbb{R}^{d})\) functions, we can establish a disintegration of \(\int_{\mathbb{R}^{d}}f(x)g(x)^{\langle p-1\rangle}\,\mathrm{d}x\) for \(f,g\in L^{p}(\mathbb{R}^{d})\) with \(p\in[2,\infty)\).
To this end we introduce the function \(\mathcal{J}_{p}\colon\mathbb{R}^{2}\times\mathbb{R}^{2}\to\mathbb{R}\), defined as follows:
\[\mathcal{J}_{p}(w,z)= \mathcal{J}_{p}(w_{1},w_{2};z_{1},z_{2}):=z_{1}z_{2}^{\langle p-1 \rangle}-w_{1}w_{2}^{\langle p-1\rangle}\] \[-w_{2}^{\langle p-1\rangle}(z_{1}-w_{1})-(p-1)w_{1}|w_{2}|^{p-2}( z_{2}-w_{2}), \tag{4.1}\]
where \(w=(w_{1},w_{2})\), \(z=(z_{1},z_{2})\), and \(w_{1},w_{2},z_{1},z_{2}\in\mathbb{R}\). For instance,
\[\mathcal{J}_{2}(w,z)=z_{1}z_{2}-w_{1}w_{2}-w_{2}(z_{1}-w_{1})-w_{1}(z_{2}-w_{2 })=(z_{1}-w_{1})(z_{2}-w_{2}). \tag{4.2}\]
As complicated as it looks, \(\mathcal{J}_{p}\) is just the second-order Taylor remainder of the mapping \(\mathbb{R}^{2}\ni(z_{1},z_{2})\mapsto z_{1}z_{2}^{\langle p-1\rangle}\), when the argument changes from \(w\) to \(z\). Below we mostly apply \(\mathcal{J}_{p}\) to \(w_{1}=P_{t}f(x)\), \(w_{2}=P_{t}g(x)\), \(z_{1}=P_{t}f(y)\), and \(z_{2}=P_{t}g(y)\), so \(w\) corresponds to the argument \(x\) of the vector function \(\Phi=(f,g)\), \(z\) corresponds to \(y\), the subscript \(1\) indicates the first function, \(f\), and \(2\) indicates the second function, \(g\).
Here is the main result of the paper, which we prove below in this section.
**Theorem 4.1** (Polarized Hardy-Stein identity).: _Let \(p\geq 2\). For \(f,g\in L^{p}(\mathbb{R}^{d})\), denote_
\[\Phi(x):=(f(x),g(x))\quad\text{and}\quad P_{t}\Phi(x):=(P_{t}f(x),P_{t}g(x)), \quad t\geq 0,\;x\in\mathbb{R}^{d}. \tag{4.3}\]
_Then,_
\[\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}|\mathcal{J}_{p}(P_{t} \Phi(x),P_{t}\Phi(y))|\,\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t<\infty \tag{4.4}\]
_and_
\[\int_{\mathbb{R}^{d}}fg^{\langle p-1\rangle}\,\mathrm{d}x=\int_{0}^{\infty} \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(P_{t}\Phi(x),P_{t} \Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t. \tag{4.5}\]
Note that if \(w_{1}=w_{2}=:a\) and \(z_{1}=z_{2}=:b\), then \(\mathcal{J}_{p}(w,z)=F_{p}(a,b)\), so (4.5) with \(f=g\) agrees with (1.1), at least for \(p\geq 2\).
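A literal transcription of (4.1) may help in parsing the indices. The Python sketch below is an illustration only (the exponent and sample points are arbitrary choices); it checks the diagonal reduction \(\mathcal{J}_{p}((a,a),(b,b))=F_{p}(a,b)\) just mentioned, the product formula (4.2) for \(p=2\), and the homogeneity \(\mathcal{J}_{p}((\lambda w_{1},\mu w_{2}),(\lambda z_{1},\mu z_{2}))=\lambda\mu^{p-1}\mathcal{J}_{p}(w,z)\) for \(\lambda,\mu>0\), which is recorded later in the proof of Proposition 4.4.

```python
import numpy as np

def spow(a, kappa):   # scalar signed power a^{<kappa>}
    return 0.0 if a == 0.0 else abs(a)**(kappa - 1.0) * a

def J_p(w1, w2, z1, z2, p):   # second-order Taylor remainder (4.1)
    return (z1 * spow(z2, p - 1.0) - w1 * spow(w2, p - 1.0)
            - spow(w2, p - 1.0) * (z1 - w1)
            - (p - 1.0) * w1 * abs(w2)**(p - 2.0) * (z2 - w2))

def F_p(a, b, p):     # scalar Bregman divergence
    return abs(b)**p - abs(a)**p - p * spow(a, p - 1.0) * (b - a)

rng = np.random.default_rng(4)
a, b, w1, w2, z1, z2 = rng.normal(size=6)
p, lam, mu = 3.3, 0.7, 1.9
print(np.isclose(J_p(a, a, b, b, p), F_p(a, b, p)))                  # diagonal case
print(np.isclose(J_p(w1, w2, z1, z2, 2.0), (z1 - w1) * (z2 - w2)))   # (4.2)
print(np.isclose(J_p(lam*w1, mu*w2, lam*z1, mu*z2, p),               # homogeneity, lam, mu > 0
                 lam * mu**(p - 1.0) * J_p(w1, w2, z1, z2, p)))
```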
_Remark 4.2_.: If \(p=2\) then (4.5) reads
\[\int_{\mathbb{R}^{d}}fg\,\mathrm{d}x=\int_{0}^{\infty}\int_{\mathbb{R}^{d}} \int_{\mathbb{R}^{d}}[P_{t}f(x)-P_{t}f(y)][P_{t}g(x)-P_{t}g(y)]\nu(x,y)\, \mathrm{d}x\mathrm{d}y\mathrm{d}t. \tag{4.6}\]
In this case (4.4) and (4.5) are obtained by polarization from the one-dimensional Hardy-Stein identity (1.1) and Cauchy-Schwarz inequality, by considering \(f+g\) and \(f-g\). Therefore below we let \(p>2\).
Had \(\mathcal{J}_{p}\) been nonnegative, the proof of (4.5) would follow as that of (3.1). Unfortunately, this is not the case, so the proof is more complicated. Indeed, the function \((z_{1},z_{2})\mapsto z_{1}z_{2}^{\langle p-1\rangle}\) is not convex, even when restricted to \(z_{2}>0\). To see this, we compute its gradient and Hessian matrix for \(z_{2}>0\):
\[\nabla\left(z_{1}z_{2}^{p-1}\right)=\begin{bmatrix}z_{2}^{p-1}\\ (p-1)z_{1}z_{2}^{p-2}\end{bmatrix},\]
\[\nabla^{2}\left(z_{1}z_{2}^{p-1}\right)=\begin{bmatrix}0&(p-1)z_{2}^{p-2}\\ (p-1)z_{2}^{p-2}&(p-1)(p-2)z_{1}z_{2}^{p-3}\end{bmatrix}. \tag{4.7}\]
Thus, \(\det\nabla^{2}\left(z_{1}z_{2}^{p-1}\right)=-(p-1)^{2}z_{2}^{2p-4}<0\), so the Hessian matrix \(\nabla^{2}\left(z_{1}z_{2}^{p-1}\right)\) is not positive semi-definite and \(z_{1}z_{2}^{p-1}\) is not convex. We will rectify this situation by decomposing the mapping
\[[0,\infty)\times\mathbb{R}\ni z=(z_{1},z_{2})\mapsto z_{1}z_{2}^{\langle p-1 \rangle}\]
into a difference of two convex mappings. Then (the Taylor remainder) \(\mathcal{J}_{p}\) will be a difference of two nonnegative functions. To this end, we recall that \(a_{+}:=a\lor 0\), \(a_{-}:=(-a)\lor 0\) and introduce the functions:
\[Y^{(+)}(z) := z_{1}\left((z_{2})_{+}\right)^{p-1}+|z|^{p},\] \[Y^{(-)}(z) := z_{1}\left((z_{2})_{-}\right)^{p-1}+|z|^{p},\quad z=(z_{1},z_{2} )\in\mathbb{R}^{2}.\]
Lemma C.3 in Appendix C verifies that these functions are indeed convex on \([0,\infty)\times\mathbb{R}\). Since \(p>2\), they are differentiable everywhere and their Taylor remainders are nonnegative on \([0,\infty)\times\mathbb{R}\).
Let \(\mathcal{J}_{p}^{(+)}\) and \(\mathcal{J}_{p}^{(-)}\) be the second-order Taylor remainders of the differentiable mappings \(\mathbb{R}^{2}\ni z\mapsto z_{1}\left((z_{2})_{+}\right)^{p-1}\) and \(\mathbb{R}^{2}\ni z\mapsto z_{1}\left((z_{2})_{-}\right)^{p-1}\). Thus, for \(z_{1},z_{2},w_{1},w_{2}\in\mathbb{R}\),
\[\mathcal{J}_{p}^{(+)}(w,z) = z_{1}\left((z_{2})_{+}\right)^{p-1}-w_{1}\left((w_{2})_{+}\right) ^{p-1}-\left((w_{2})_{+}\right)^{p-1}\left(z_{1}-w_{1}\right)\] \[-(p-1)w_{1}\left((w_{2})_{+}\right)^{p-2}\left(z_{2}-w_{2}\right)\]
and
\[\mathcal{J}_{p}^{(-)}(w,z) = z_{1}\left((z_{2})_{-}\right)^{p-1}-w_{1}\left((w_{2})_{-}\right) ^{p-1}-\left((w_{2})_{-}\right)^{p-1}\left(z_{1}-w_{1}\right)\] \[+(p-1)w_{1}\left((w_{2})_{-}\right)^{p-2}\left(z_{2}-w_{2}\right).\]
Since \(z_{1}((z_{2})_{+})^{p-1}-z_{1}((z_{2})_{-})^{p-1}=z_{1}z_{2}^{\langle p-1\rangle}\), it follows that
\[\mathcal{J}_{p}=\mathcal{J}_{p}^{(+)}-\mathcal{J}_{p}^{(-)}=\left(\mathcal{J }_{p}^{(+)}+\mathcal{F}_{p}\right)-\left(\mathcal{J}_{p}^{(-)}+\mathcal{F}_{p }\right), \tag{4.8}\]
where we consider \(\mathcal{F}_{p}\) given by (2.7) with \(n=2\) and we have \(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\geq 0\) and \(\mathcal{J}_{p}^{(-)}+\mathcal{F}_{p}\geq 0\) on \(\left([0,\infty)\times\mathbb{R}\right)^{2}\). Note also that, if we denote \(\bar{z}:=(z_{1},-z_{2})\), then
\[\mathcal{J}_{p}^{(+)}(\bar{w},\bar{z})=\mathcal{J}_{p}^{(-)}(w,z). \tag{4.9}\]
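Before proceeding, one can sanity-check the decomposition (4.8) and the sign claims numerically. The sketch below is an illustration only: it samples random points with first coordinates in \([0,\infty)\) (here \(p=2.6\) and \(\mathcal{F}_{p}\) is taken on \(\mathbb{R}^{2}\) as in (2.7)) and confirms that \(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\) and \(\mathcal{J}_{p}^{(-)}+\mathcal{F}_{p}\) stay nonnegative there, while \(\mathcal{J}_{p}^{(+)}-\mathcal{J}_{p}^{(-)}=\mathcal{J}_{p}\).

```python
import numpy as np

p = 2.6

def spow(a, k):
    return 0.0 if a == 0.0 else abs(a)**(k - 1.0) * a

def F_p(w, z):       # Bregman divergence (2.7) on R^2
    nw, nz = np.linalg.norm(w), np.linalg.norm(z)
    return nz**p - nw**p - p * (nw**(p - 2.0) if nw > 0 else 0.0) * (w @ (z - w))

def J_p(w, z):       # (4.1)
    (w1, w2), (z1, z2) = w, z
    return (z1 * spow(z2, p-1) - w1 * spow(w2, p-1)
            - spow(w2, p-1) * (z1 - w1) - (p-1) * w1 * abs(w2)**(p-2) * (z2 - w2))

def J_plus(w, z):    # Taylor remainder of z -> z1 ((z2)_+)^{p-1}
    (w1, w2), (z1, z2) = w, z
    return (z1 * max(z2, 0)**(p-1) - w1 * max(w2, 0)**(p-1)
            - max(w2, 0)**(p-1) * (z1 - w1) - (p-1) * w1 * max(w2, 0)**(p-2) * (z2 - w2))

def J_minus(w, z):   # Taylor remainder of z -> z1 ((z2)_-)^{p-1}
    (w1, w2), (z1, z2) = w, z
    return (z1 * max(-z2, 0)**(p-1) - w1 * max(-w2, 0)**(p-1)
            - max(-w2, 0)**(p-1) * (z1 - w1) + (p-1) * w1 * max(-w2, 0)**(p-2) * (z2 - w2))

rng = np.random.default_rng(5)
ok_sign, ok_dec = True, True
for _ in range(10000):
    w, z = rng.normal(size=2), rng.normal(size=2)
    w[0], z[0] = abs(w[0]), abs(z[0])            # restrict to [0, infinity) x R
    ok_sign &= (J_plus(w, z) + F_p(w, z) >= -1e-12) and (J_minus(w, z) + F_p(w, z) >= -1e-12)
    ok_dec &= np.isclose(J_p(w, z), J_plus(w, z) - J_minus(w, z))
print(ok_sign, ok_dec)
```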
Here is a preliminary version of Theorem 4.1.
**Proposition 4.3**.: _For \(p>2\), \(f,g\in L^{p}(\mathbb{R}^{d})\), \(f\geq 0\), and \(\Phi(x)\), \(P_{t}\Phi(x)\) as in (4.3),_
\[\int_{\mathbb{R}^{d}}\left(f(g_{+})^{p-1}+|\Phi|^{p}\right)\, \mathrm{d}x\] \[\qquad=\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d} }\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)\left(P_{t}\Phi(x),P_{t} \Phi(y)\right)\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t \tag{4.10}\]
_and_
\[\int_{\mathbb{R}^{d}}\left(f(g_{-})^{p-1}+|\Phi|^{p}\right)\, \mathrm{d}x\] \[\qquad=\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d} }\left(\mathcal{J}_{p}^{(-)}+\mathcal{F}_{p}\right)\left(P_{t}\Phi(x),P_{t} \Phi(y)\right)\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t. \tag{4.11}\]
Proof.: We only prove (4.10), since (4.11) follows by substituting \(-g\) in place of \(g\); see (2.10) and (4.9). The proof of (4.10) is much like that of Theorem 3.1. We use the convexity of \(Y^{(+)}\), which yields the nonnegativity of its second-order Taylor remainder, the function \(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\). As before, we first consider \(f,g\in\mathcal{D}_{p}(L)\). Fix some \(0\leq t\leq T<\infty\). Let \(u(t):=P_{t}f\), \(v(t):=P_{t}g\), and \(U(t):=P_{t}\Phi=(u(t),v(t))\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{2})\). Actually, \(U(t)\in(\mathcal{D}_{p}(L))^{2}\). As seen in the proof of Theorem 3.1, the function \(t\mapsto|U(t)|^{p}\) is continuously differentiable in \(L^{1}(\mathbb{R}^{d})\) and \(\left(|U(t)|^{p}\right)^{\prime}=pU(t)^{\langle p-1\rangle}\cdot LU(t)\). Since \((v(t)_{+})^{p-1}=(|v(t)|^{p-1}+v(t)^{\langle p-1\rangle})/2\), from Corollary 2.3 with \(\kappa=p-1>1\), we obtain that \((v(t)_{+})^{p-1}\) is continuously differentiable in \(L^{\frac{p}{p-1}}(\mathbb{R}^{d})\) and
\[\left((v(t)_{+})^{p-1}\right)^{\prime} = \left(\frac{|v(t)|^{p-1}+v(t)^{\langle p-1\rangle}}{2}\right)^{ \prime}=\frac{p-1}{2}\left(v(t)^{\langle p-2\rangle}Lv(t)+|v(t)|^{p-2}Lv(t)\right)\] \[= (p-1)(v(t)_{+})^{p-2}Lv(t).\]
By Lemma B.4, \(u(t)(v(t)_{+})^{p-1}\) is continuously differentiable in \(L^{1}(\mathbb{R}^{d})\) and
\[\left(u(t)(v(t)_{+})^{p-1}\right)^{\prime}=(v(t)_{+})^{p-1}Lu(t)+(p-1)u(t)(v(t)_{ +})^{p-2}Lv(t). \tag{4.12}\]
In particular, \(\left(u(t)(v(t)_{+})^{p-1}\right)^{\prime}\) is well-defined and continuous in \(L^{1}(\mathbb{R}^{d})\). As in (3.2),
\[W(t) :=\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^{d}}\left(u(t)(v(t)_{+})^{p-1}+|U(t)|^{p}\right)\,\mathrm{d}x=\int\limits_{\mathbb{R}^{d}}\frac{\mathrm{d}}{\mathrm{d}t}\left[u(t)(v(t)_{+})^{p-1}+|U(t)|^{p}\right]\,\mathrm{d}x \tag{4.13}\] \[=\langle Lu(t),(v(t)_{+})^{p-1}\rangle+\langle Lv(t),(p-1)u(t)(v(t)_{+})^{p-2}\rangle+\langle LU(t),pU(t)^{\langle p-1\rangle}\rangle.\]
Since the limits defining \(Lu\), \(Lv\) (respectively, \(LU\)) exist strongly in \(L^{p}(\mathbb{R}^{d})\) (respectively, in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{2})\)) and \((v(t)_{+})^{p-1}\), \(u(t)(v(t)_{+})^{p-2}\) (respectively, \(U(t)^{\langle p-1\rangle}\)) belong to \(L^{q}(\mathbb{R}^{d})\) (respectively, to \(L^{q}(\mathbb{R}^{d};\mathbb{R}^{2})\)), we get
\[W(t)=\lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\Big{(}(u(t)(y)-u(t)(x))(v(t)(x)_{+})^{p-1}\] \[\qquad\qquad+(p-1)(v(t)(y)-v(t)(x))u(t)(x)(v(t)(x)_{+})^{p-2}\] \[\qquad\qquad+p(U(t)(y)-U(t)(x))\cdot U(t)(x)^{\langle p-1\rangle}\Big{)}\frac{p_{h}(x,y)}{h}\,\mathrm{d}x\mathrm{d}y.\]
As \((P_{t})_{t\geq 0}\) is conservative, for every \(h>0\), we have
\[\int_{\mathbb{R}^{d}}|U(t)|^{p}\,\mathrm{d}x=\int_{\mathbb{R}^{d}}P_{h}\left( |U(t)|^{p}\right)\,\mathrm{d}x\]
and
\[\int_{\mathbb{R}^{d}}u(t)(v(t)_{+})^{p-1}\,\mathrm{d}x=\int_{ \mathbb{R}^{d}}P_{h}\left(u(t)(v(t)_{+})^{p-1}\right)\,\mathrm{d}x.\]
Taking this into account and rearranging, we get
\[W(t)=\lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left( \mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)\left(U(t)(x),U(t)(y)\right)\frac {p_{h}(x,y)}{h}\,\mathrm{d}x\mathrm{d}y.\]
Because of the assumption \(f\geq 0\), we have \(U(t)(x)\in[0,\infty)\times\mathbb{R}\) for all \(x\in\mathbb{R}^{d}\) and \(t\geq 0\), so that \(\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)\left(U(t)(x),U(t)(y)\right)\) is nonnegative (see the discussion preceding the proposition). Therefore, from (**P1**), (**P2**), and Lemma 2.6, we conclude that
\[W(t)=\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left(\mathcal{J}_{p}^{(+)}+ \mathcal{F}_{p}\right)\left(U(t)(x),U(t)(y)\right)\nu(x,y)\,\mathrm{d}x\mathrm{ d}y. \tag{4.14}\]
Since \(u(t)(v(t)_{+})^{p-1}+|U(t)|^{p}\) is continuously differentiable in \(L^{1}(\mathbb{R}^{d})\) for \(t\in[0,\infty)\), \(W(t)\) is a continuous (real) function on \((0,\infty)\). Thus,
\[\int_{\mathbb{R}^{d}}\left(u(0)(v(0)_{+})^{p-1}+|U(0)|^{p}\right) \mathrm{d}x-\int_{\mathbb{R}^{d}}\left(u(T)((v(T))_{+})^{p-1}+|U(T)|^{p}\right) \,\mathrm{d}x\] \[= -\int_{0}^{T}W(t)\,\mathrm{d}t=\int_{0}^{T}\int_{\mathbb{R}^{d}} \int_{\mathbb{R}^{d}}\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)\left(U( t)(x),U(t)(y)\right)\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\] \[= \int_{0}^{T}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left( \mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)\left(P_{t}\Phi(x),P_{t}\Phi(y) \right)\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t.\]
We now let \(T\to\infty\). As the integrand in the right-hand side is nonnegative, \(u(0)=f\), \(v(0)=g\), and \(U(0)=\Phi\), to prove (4.10) it is enough to show that
\[\int_{\mathbb{R}^{d}}\left(u(T)((v(T))_{+})^{p-1}+|U(T)|^{p}\right)\,\mathrm{d}x =\int_{\mathbb{R}^{d}}\left(P_{T}f((P_{T}g)_{+})^{p-1}+|P_{T}\Phi|^{p}\right)\, \mathrm{d}x\to 0.\]
While proving Theorem 3.1 we have already shown that \(\int_{\mathbb{R}^{d}}|U(T)|^{p}\,\mathrm{d}x\to 0\). Further, since \(|P_{T}f(x)|\leq f^{*}(x)\) and \(|P_{T}g(x)|\leq g^{*}(x)\) for every \(x\in\mathbb{R}^{d}\) and \(T>0\) and \(f^{*},g^{*}\in L^{p}(\mathbb{R}^{d})\) by (2.17), we get \(\int_{\mathbb{R}^{d}}P_{T}f((P_{T}g)_{+})^{p-1}\,\mathrm{d}x\to 0\) by the Dominated Convergence Theorem. This yields (4.10) for \(f,g\in\mathcal{D}_{p}(L)\).
It remains to get rid of the assumption \(f,g\in\mathcal{D}_{p}(L)\). We proceed as in the proof of Theorem 3.1. Take \(f,g\in L^{p}(\mathbb{R}^{d})\) arbitrary and let \(s>0\). Since \((P_{t})_{t\geq 0}\) is an analytic semigroup on \(L^{p}(\mathbb{R}^{d})\), \(P_{s}f,P_{s}g\in\mathcal{D}_{p}(L)\) as well. Consequently, by (4.10),
\[\int_{\mathbb{R}^{d}}\left(P_{s}f((P_{s}g)_{+})^{p-1}+|P_{s}\Phi |^{p}\right)\,\mathrm{d}x\] \[\qquad=\int_{s}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d }}\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)(P_{t}\Phi(x),P_{t}\Phi(y) )\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t.\]
Let \(s\to 0^{+}\). As the integrand of the right-hand side is nonnegative, the integrals tend to the right-hand side of (4.10).
To get the convergence of the left-hand side, we use the strong continuity of \((P_{t})_{t\geq 0}\) in \(L^{p}(\mathbb{R}^{d})\). The convergence \(|P_{s}\Phi|^{p}\to|\Phi|^{p}\) in \(L^{1}(\mathbb{R}^{d})\) was shown in the proof of Theorem 3.1. Since \(P_{s}f\to f\) and \((P_{s}g)_{+}\to g_{+}\) in \(L^{p}(\mathbb{R}^{d})\), by Lemma B.1, \(((P_{s}g)_{+})^{p-1}\to(g_{+})^{p-1}\) in \(L^{\frac{p}{p-1}}(\mathbb{R}^{d})\). Moreover, by Lemma B.2, \(P_{s}f((P_{s}g)_{+})^{p-1}\to f(g_{+})^{p-1}\) in \(L^{1}(\mathbb{R}^{d})\). Thus, \(\int_{\mathbb{R}^{d}}\left(P_{s}f((P_{s}g)_{+})^{p-1}+|P_{s}\Phi|^{p}\right)\,\mathrm{d}x\to\int_{\mathbb{R}^{d}}\left(f(g_{+})^{p-1}+|\Phi|^{p}\right)\,\mathrm{d}x\). The proof of (4.10) is complete.
Proof of Theorem 4.1.: Thanks to Remark 4.2, we only need to consider \(p>2\). Let first \(f\geq 0\). By Proposition 4.3 and (4.8),
\[\int_{\mathbb{R}^{d}}fg^{\langle p-1\rangle}\,\mathrm{d}x=\int_{\mathbb{R}^{d}}\left(f(g_{+})^{p-1}+|\Phi|^{p}\right)\,\mathrm{d}x-\int_{\mathbb{R}^{d}}\left(f(g_{-})^{p-1}+|\Phi|^{p}\right)\,\mathrm{d}x\] \[\qquad=\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)(P_{t}\Phi(x),P_{t}\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\] \[\qquad\qquad-\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left(\mathcal{J}_{p}^{(-)}+\mathcal{F}_{p}\right)(P_{t}\Phi(x),P_{t}\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\] \[\qquad=\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t,\]
where all the integrals are absolutely convergent. Therefore (4.4) holds in this case.
To get rid of the assumption \(f\geq 0\), we consider an arbitrary \(f\in L^{p}(\mathbb{R}^{d})\) and write \(f=f_{+}-f_{-}\). The result holds for the pairs \(\Phi^{(+)}:=(f_{+},g)\) and \(\Phi^{(-)}:=(f_{-},g)\). Of course, \(\Phi=\Phi^{(+)}-\Phi^{(-)}\). The operators \(P_{t}\) are linear and the function \(\mathcal{J}_{p}(w,z)\) is linear in \(w_{1}\) and \(z_{1}\), so
\[\int_{\mathbb{R}^{d}}fg^{\langle p-1\rangle}\,\mathrm{d}x = \int_{\mathbb{R}^{d}}\left(f_{+}\right)g^{\langle p-1\rangle}\,\mathrm{d}x-\int_{\mathbb{R}^{d}}\left(f_{-}\right)g^{\langle p-1\rangle}\,\mathrm{d}x\] \[= \int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\] \[-\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(P_{t}\Phi^{(-)}(x),P_{t}\Phi^{(-)}(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\] \[= \int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t.\]
The absolute convergence of the integrals is clear from our previous arguments.
We next present a quantitative version of (4.4).
**Proposition 4.4**.: _Under the assumptions of Theorem 4.1,_
\[\int\limits_{0}^{\infty}\int\limits_{\mathbb{R}^{d}}\int\limits_{\mathbb{R}^{d }}\left|\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))\right|\nu(x,y)\,\mathrm{d}x \mathrm{d}y\mathrm{d}t\leq(1+2^{p/2})\|f\|_{L^{p}(\mathbb{R}^{d})}\|g\|_{L^{p} (\mathbb{R}^{d})}^{p-1}. \tag{4.15}\]
Proof.: As in the proof of Theorem 4.1, we let \(\Phi^{(+)}=(f_{+},g)\) and \(\Phi^{(-)}=(f_{-},g)\). Then,
\[\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))=\mathcal{J}_{p}(P_{t}\Phi^{(+)}(x), P_{t}\Phi^{(+)}(y))-\mathcal{J}_{p}(P_{t}\Phi^{(-)}(x),P_{t}\Phi^{(-)}(y)),\]
so
\[\left|\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))\right|\leq\left|\mathcal{J}_{ p}(P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y))\right|+\left|\mathcal{J}_{p}(P_{t} \Phi^{(-)}(x),P_{t}\Phi^{(-)}(y))\right|.\]
Because of (4.8),
\[\left|\mathcal{J}_{p}\right|\leq\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p} \right)+\left(\mathcal{J}_{p}^{(-)}+\mathcal{F}_{p}\right),\]
both terms being nonnegative on \(\left([0,\infty)\times\mathbb{R}\right)^{2}\). As \(P_{t}\Phi^{(+)},P_{t}\Phi^{(-)}\in\left([0,\infty)\times\mathbb{R}\right)^{2}\),
\[\left|\mathcal{J}_{p}(P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y))\right| \leq \left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)(P_{t}\Phi^{(+)} (x),P_{t}\Phi^{(+)}(y))\] \[+\left(\mathcal{J}_{p}^{(-)}+\mathcal{F}_{p}\right)(P_{t}\Phi^{( +)}(x),P_{t}\Phi^{(+)}(y)),\]
and a similar inequality holds for \(P_{t}\Phi^{(-)}\). From Proposition 4.3,
\[\int\limits_{0}^{\infty}\int\limits_{\mathbb{R}^{d}}\int\limits_{\mathbb{R}^{d }}(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p})(P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y ))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t=\int\limits_{\mathbb{R}^{d}} \left(f_{+}(g_{+})^{p-1}+|\Phi^{(+)}|^{p}\right)\,\mathrm{d}x.\]
A similar identity holds for \(\mathcal{J}_{p}^{(-)}\). Summing up, we get
\[\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left|\mathcal{J}_ {p}(P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y))\right|\nu(x,y)\,\mathrm{d}x\mathrm{d }y\mathrm{d}t\leq\int_{\mathbb{R}^{d}}\left(f_{+}|g|^{p-1}+2|\Phi^{(+)}|^{p} \right)\,\mathrm{d}x\]
and
\[\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left|\mathcal{J}_ {p}(P_{t}\Phi(x),P_{t}\Phi(y))\right|\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d }t\leq\int_{\mathbb{R}^{d}}\left(f|g|^{p-1}+2|\Phi|^{p}\right)\,\mathrm{d}x.\]
By Holder's inequality,
\[\int_{\mathbb{R}^{d}}f|g|^{p-1}\,\mathrm{d}x\leq\|f\|_{L^{p}(\mathbb{R}^{d})}\|g\|_{L^{p}(\mathbb{R}^{d})}^{p-1}.\]
On the other hand,
\[|\Phi|^{p}=(f^{2}+g^{2})^{p/2}\leq 2^{p/2-1}(|f|^{p}+|g|^{p}).\]
Therefore, if \(\|f\|_{L^{p}(\mathbb{R}^{d})}=\|g\|_{L^{p}(\mathbb{R}^{d})}=1\), then (4.15) is true if we replace its right-hand side by \(1+2^{p/2}\). If \(\|f\|_{L^{p}(\mathbb{R}^{d})}=0\) or \(\|g\|_{L^{p}(\mathbb{R}^{d})}=0\), then (4.15) is obvious. Otherwise, we observe that \(\mathcal{J}_{p}\) is homogeneous in the first coordinates, and \((p-1)\)-homogeneous in the second, to wit,
\[\mathcal{J}_{p}((\lambda w_{1},\mu w_{2}),(\lambda z_{1},\mu z_{2}))=\lambda \mu^{\langle p-1\rangle}\mathcal{J}_{p}((w_{1},w_{2}),(z_{1},z_{2})),\qquad \lambda,\mu>0.\]
Then, by considering \(f/\|f\|_{L^{p}(\mathbb{R}^{d})}\) and \(g/\|g\|_{L^{p}(\mathbb{R}^{d})}\), we get the result.
## 5. Polarized Sobolev-Bregman form \(\mathcal{E}_{p}(u,v)\)
The integral expression appearing in (1.6) and Theorem 4.1, namely
\[\mathcal{E}_{p}(u,v):=\frac{1}{p}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} \mathcal{J}_{p}(\Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y,\]
where \(\Phi(x)=(u(x),v(x))\), \(u,v\colon\mathbb{R}^{d}\to\mathbb{R}\), \(p\in[2,\infty)\), and \(\mathcal{J}_{p}\) is given by (4.1), deserves further attention. If \(u=v\), then \(\mathcal{E}_{p}(u,v)=\mathcal{E}_{p}(u,u)=\mathcal{E}_{p}[u]\). For \(p=2\), we get \(\mathcal{E}_{2}(u,v)\), the usual (bilinear) Dirichlet form [19]; in particular, it is symmetric. For \(p>2\), in general \(\mathcal{E}_{p}(v,u)\neq\mathcal{E}_{p}(u,v)\), and it is not even clear whether the integral in (1.6) is well-defined for sufficiently general functions \(u,v\), for instance for \(u,v\in\mathcal{D}(\mathcal{E}_{p})\). The next theorem asserts that for \(p\geq 2\) and \(u,v\in\mathcal{D}_{p}(L)\), (1.6) is well-defined; we also get an extension of the single-function formula (2.30) from Proposition 2.4.
**Theorem 5.1**.: _Let \(p\geq 2\). If \(u,v\in\mathcal{D}_{p}(L)\), then \(\mathcal{E}_{p}(u,v)\) is well-defined and_
\[\mathcal{E}_{p}(u,v)=-\frac{1}{p}\langle Lu,v^{\langle p-1\rangle}\rangle- \frac{1}{p}\langle Lv,(p-1)u|v|^{p-2}\rangle. \tag{5.1}\]
Note that this agrees with (2.30) if \(u=v\). Before we prove (5.1), we need to further decompose \(\mathcal{J}_{p}^{(+)}\) (and \(\mathcal{J}_{p}\)) into a difference of nonnegative functions.
Let \(\mathbf{1}(a):=(1+\mathrm{sgn}(a))/2\) be the Heaviside step function. We define
\[\mathcal{J}_{p}^{(++)}(w,z) := (z_{1})_{+}\left((z_{2})_{+}\right)^{p-1}-(w_{1})_{+}\left((w_{2} )_{+}\right)^{p-1}-\mathbf{1}(w_{1})\left((w_{2})_{+}\right)^{p-1}\left(z_{1}- w_{1}\right)\] \[-(p-1)(w_{1})_{+}\left((w_{2})_{+}\right)^{p-2}\left(z_{2}-w_{2}\right),\] \[\mathcal{J}_{p}^{(-+)}(w,z) := (z_{1})_{-}\left((z_{2})_{+}\right)^{p-1}-(w_{1})_{-}\left((w_{2} )_{+}\right)^{p-1}+\mathbf{1}(-w_{1})\left((w_{2})_{+}\right)^{p-1}\left(z_{1}- w_{1}\right) \tag{5.3}\] \[-(p-1)(w_{1})_{-}\left((w_{2})_{+}\right)^{p-2}\left(z_{2}-w_{2}\right), \tag{5.2}\]
where \(w:=(w_{1},w_{2}),z:=(z_{1},z_{2})\in\mathbb{R}^{2}\). We may view these functions as the second-order Taylor remainders of the mappings \(\mathbb{R}^{2}\ni z\mapsto\left(z_{1}\right)_{+}\left((z_{2})_{+}\right)^{p-1}\) and \(\mathbb{R}^{2}\ni z\mapsto\left(z_{1}\right)_{-}\left((z_{2})_{+}\right)^{p-1}\), respectively, except for nondifferentiability of the mappings on the vertical positive semi-axis (for more details, see the proof of Lemma C.4 in Appendix C).
Similarly to (4.8) and (4.9), we get a decomposition of \(\mathcal{J}_{p}^{(+)}\):
\[\mathcal{J}_{p}^{(+)}=\mathcal{J}_{p}^{(++)}-\mathcal{J}_{p}^{(-+)} \tag{5.4}\]
and the identity
\[\mathcal{J}_{p}^{(++)}(-\bar{w},-\bar{z})=\mathcal{J}_{p}^{(-+)}(w,z). \tag{5.5}\]
In Lemma C.4 in Appendix C we prove that
\[\mathcal{J}_{p}^{(++)}(w,z)+\mathcal{F}_{p}(w,z)\geq 0,\quad\mathcal{J}_{p}^{(-+)} (w,z)+\mathcal{F}_{p}(w,z)\geq 0\]
for all \(z,w\in\mathbb{R}^{2}\). Therefore, by adding and subtracting \(\mathcal{F}_{p}\) in (5.4), we get the desired decomposition of \(\mathcal{J}_{p}^{(+)}\) and we can proceed from there. Let us mention that it is crucial to define the Heaviside function so that \(\mathbf{1}(0)=1/2\). This is because we use the identity \(\mathbf{1}(a)+\mathbf{1}(-a)=1\) for all \(a\in\mathbb{R}\) to derive (5.4).
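Again, a quick numerical experiment can support Lemma C.4 and the decomposition (5.4) before we use them. The sketch below is an illustration only: it samples random points of \(\mathbb{R}^{2}\) (so the value \(\mathbf{1}(0)=1/2\) plays no role on such samples) and confirms that \(\mathcal{J}_{p}^{(++)}+\mathcal{F}_{p}\) and \(\mathcal{J}_{p}^{(-+)}+\mathcal{F}_{p}\) stay nonnegative while \(\mathcal{J}_{p}^{(+)}=\mathcal{J}_{p}^{(++)}-\mathcal{J}_{p}^{(-+)}\); the choice \(p=3\) is arbitrary.

```python
import numpy as np

p = 3.0
H = lambda a: (1.0 + np.sign(a)) / 2.0             # Heaviside function with H(0) = 1/2

def F_p(w, z):                                     # Bregman divergence (2.7) on R^2
    nw = np.linalg.norm(w)
    return np.linalg.norm(z)**p - nw**p - p * (nw**(p - 2.0) if nw > 0 else 0.0) * (w @ (z - w))

def J_pp(w, z):                                    # (5.2)
    (w1, w2), (z1, z2) = w, z
    return (max(z1, 0) * max(z2, 0)**(p-1) - max(w1, 0) * max(w2, 0)**(p-1)
            - H(w1) * max(w2, 0)**(p-1) * (z1 - w1)
            - (p-1) * max(w1, 0) * max(w2, 0)**(p-2) * (z2 - w2))

def J_mp(w, z):                                    # (5.3)
    (w1, w2), (z1, z2) = w, z
    return (max(-z1, 0) * max(z2, 0)**(p-1) - max(-w1, 0) * max(w2, 0)**(p-1)
            + H(-w1) * max(w2, 0)**(p-1) * (z1 - w1)
            - (p-1) * max(-w1, 0) * max(w2, 0)**(p-2) * (z2 - w2))

def J_plus(w, z):                                  # J_p^{(+)} from Section 4
    (w1, w2), (z1, z2) = w, z
    return (z1 * max(z2, 0)**(p-1) - w1 * max(w2, 0)**(p-1)
            - max(w2, 0)**(p-1) * (z1 - w1) - (p-1) * w1 * max(w2, 0)**(p-2) * (z2 - w2))

rng = np.random.default_rng(6)
ok = True
for _ in range(10000):
    w, z = rng.normal(size=2), rng.normal(size=2)  # arbitrary points of R^2
    ok &= ((J_pp(w, z) + F_p(w, z) >= -1e-12) and (J_mp(w, z) + F_p(w, z) >= -1e-12)
           and np.isclose(J_plus(w, z), J_pp(w, z) - J_mp(w, z)))
print(ok)
```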
Proof of Theorem 5.1.: Let \(u,v\in\mathcal{D}_{p}(L)\). If \(p=2\), then, again, the identity is evident from classical polarization, (2.24) and (2.30).
Thus, we let \(p>2\). Denote \(\Phi(x):=(u(x),v(x))\). First we prove the following:
\[l_{++} := -\langle Lu,\mathbf{1}(u)(v_{+})^{p-1}\rangle-\langle Lv,(p-1)u_{+}(v_{+})^{p-2}\rangle-\langle L\Phi,p\Phi^{\langle p-1\rangle}\rangle\] \[= \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\mathcal{J}_{p}^{(++)}+\mathcal{F}_{p})(\Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y \tag{5.6}\]
and
\[l_{-+} := \langle Lu,\mathbf{1}(-u)(v_{+})^{p-1}\rangle-\langle Lv,(p-1)u_{-}(v_{+})^{p-2}\rangle-\langle L\Phi,p\Phi^{\langle p-1\rangle}\rangle\] \[= \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\mathcal{J}_{p}^{(-+)}+\mathcal{F}_{p})(\Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y. \tag{5.7}\]
We start with the proof of (5.6). By the definition of \(L\),
\[\langle L\Phi,p\Phi^{\langle p-1\rangle}\rangle=\lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\Phi(y)-\Phi(x))\cdot p\Phi(x)^{\langle p-1\rangle}\frac{p_{h}(x,y)}{h}\,\mathrm{d}x\mathrm{d}y. \tag{5.8}\]
Since the limits defining \(Lu\), \(Lv\) exist in the strong sense in \(L^{p}(\mathbb{R}^{d})\), we have
\[l_{++}=\lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} \left[(u(x)-u(y))\mathbf{1}(u(x))(v(x)_{+})^{p-1}\right.\] \[\left.\qquad\qquad+(v(x)-v(y))(p-1)u(x)_{+}(v(x)_{+})^{p-2}\right.\] \[\left.\qquad\qquad+(\Phi(x)-\Phi(y))\cdot p\Phi(x)^{\langle p-1 \rangle}\right]\frac{p_{h}(x,y)}{h}\,\mathrm{d}x\mathrm{d}y.\]
Then, similarly as in the proofs of Theorems 3.1 and 4.1, we take advantage of the conservativeness of the semigroup \((P_{t})_{t\geq 0}\):
\[\int_{\mathbb{R}^{d}}u_{+}(v_{+})^{p-1}\,\mathrm{d}x = \int_{\mathbb{R}^{d}}P_{h}\left(u_{+}(v_{+})^{p-1}\right)\, \mathrm{d}x,\] \[\int_{\mathbb{R}^{d}}|\Phi|^{p}\,\mathrm{d}x = \int_{\mathbb{R}^{d}}P_{h}\left(|\Phi|^{p}\right)\,\mathrm{d}x, \quad\text{ for }h>0.\]
Taking this into account and rearranging, we obtain
\[l_{++}=\lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\mathcal{J}_{p}^ {(++)}+\mathcal{F}_{p})(\Phi(x),\Phi(y))\,\frac{p_{h}(x,y)}{h}\,\mathrm{d}x \mathrm{d}y.\]
From Lemma C.4 in Appendix C, \(\mathcal{J}_{p}^{(++)}+\mathcal{F}_{p}\geq 0\), hence we can pass to the limit as \(h\to 0^{+}\) and by Lemma 2.6 we obtain (5.6). By substituting \(-u\) in place of \(u\), we obtain (5.7), too; see (2.10) and (5.5).
Further, we claim that for all \(u,v\in\mathcal{D}_{p}(L)\),
\[l_{+} := -\langle Lu,(v_{+})^{p-1}\rangle-\langle Lv,(p-1)u(v_{+})^{p-2}\rangle\] \[= \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}^{(+)}(\Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y \tag{5.9}\]
and
\[l_{-} := -\langle Lu,(v_{-})^{p-1}\rangle+\langle Lv,(p-1)u(v_{-})^{p-2}\rangle\] \[= \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}^{(-)}(\Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y. \tag{5.10}\]
Indeed, using (5.6), (5.7), and (5.4), we get
\[l_{+}=l_{++}-l_{-+} = \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\mathcal{J}_{p}^{(++ )}+\mathcal{F}_{p})(\Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\] \[-\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\mathcal{J}_{p}^{(-+) }+\mathcal{F}_{p})(\Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\] \[= \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}^{(+)}( \Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y.\]
Note that the integral on the right-hand side is well-defined as a difference of finite integrals with nonnegative integrands. This yields (5.9). Equality (5.10) follows from (5.9) by substituting \(-v\) in place of \(v\); see (2.10) and (4.9).
To conclude, using (5.9), (5.10), and (4.8), we obtain
\[-\langle Lu,v^{\langle p-1\rangle}\rangle-\langle Lv,(p-1)u|v|^{p -2}\rangle=l_{+}-l_{-}= \tag{5.11}\] \[\quad=\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}( \Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y=p\mathcal{E}_{p}(u,v).\]
Again, the integral defining \(\mathcal{E}_{p}(u,v)\) is absolutely convergent as a difference of two absolutely convergent integrals. The proof is complete.
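In the finite-state analogue used in the earlier sketches (symmetric rate matrix \(Q\), jump rates \(\nu(x,y)\), counting measure), the identity (5.1) becomes an exact algebraic fact, since there \(L\) is multiplication by \(Q\) and the double integral is a finite sum. The following sketch is a toy illustration only, not the setting of the theorem; it compares \(\sum_{x,y}\mathcal{J}_{p}(\Phi(x),\Phi(y))\nu(x,y)\) with \(-\langle Qu,v^{\langle p-1\rangle}\rangle-\langle Qv,(p-1)u|v|^{p-2}\rangle\), i.e., both sides of (5.1) multiplied by \(p\).

```python
import numpy as np

rng = np.random.default_rng(7)
N, p = 7, 2.8
nu = rng.uniform(0.1, 1.0, size=(N, N)); nu = (nu + nu.T) / 2; np.fill_diagonal(nu, 0.0)
Q = nu - np.diag(nu.sum(axis=1))                 # discrete generator: (Qf)(x) = sum_y (f(y)-f(x)) nu(x,y)
u, v = rng.normal(size=N), rng.normal(size=N)

spow = lambda a, k: np.sign(a) * np.abs(a)**k    # a^{<k>} = |a|^{k-1} a

def J_p(w1, w2, z1, z2):                         # (4.1)
    return (z1 * spow(z2, p-1) - w1 * spow(w2, p-1)
            - spow(w2, p-1) * (z1 - w1) - (p-1) * w1 * abs(w2)**(p-2) * (z2 - w2))

lhs = sum(J_p(u[x], v[x], u[y], v[y]) * nu[x, y] for x in range(N) for y in range(N))
rhs = -(Q @ u) @ spow(v, p-1) - (Q @ v) @ ((p-1) * u * np.abs(v)**(p-2))
print(np.isclose(lhs, rhs))                      # discrete analogue of (5.1); both equal p*E_p(u,v)
```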
_Remark 5.2_.: By the above and Lemma B.5,
\[p\mathcal{E}_{p}(f,g) := \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(f(x),g (x);f(y),g(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\] \[= \langle-Lf,g^{\langle p-1\rangle}\rangle+\langle-Lg,(p-1)f|g|^{p-2}\rangle\] \[= -\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^{d}}P_{t}f(x)(P_{ t}g(x))^{\langle p-1\rangle}\,\mathrm{d}x\Big{|}_{t=0},\]
at least for \(f,g\in\mathcal{D}_{p}(L)\) and \(p\geq 2\). At this moment, Lemma B.5 offers a simplifying perspective on (1.5) and Theorem 4.1, but we should emphasize the importance of
absolute integrability asserted in Theorem 4.1 for arbitrary \(f,g\in L^{p}(\mathbb{R}^{d})\) when \(p\geq 2\); see also Proposition 4.4.
## Appendix A Estimates for Bregman divergence
The following lemma extends Lemma 5 of [8], where scalar versions of (A.1), (A.3), (A.4) were given. The inequality (A.2) seems new.
**Lemma A.1**.: _There are constants \(C_{\kappa},C^{\prime}_{\kappa},C^{\prime\prime}_{\kappa},C^{\prime\prime \prime}_{\kappa}\in(0,\infty)\) such that for all \(w,z\in\mathbb{R}^{n}\),_
(A.1) \[0\leq\mathcal{F}_{\kappa}(w,z) \leq C_{\kappa}|z-w|^{\lambda}(|w|\vee|z|)^{\kappa-\lambda}, \quad\lambda\in[0,2],\kappa>1,\] (A.2) \[|\mathcal{F}_{\langle\kappa\rangle}(w,z)| \leq C^{\prime}_{\kappa}|z-w|^{\lambda}(|w|\vee|z|)^{\kappa-\lambda}, \quad\lambda\in[0,2],\kappa>1,\] (A.3) \[||z|^{\kappa}-|w|^{\kappa}| \leq C^{\prime\prime}_{\kappa}|z-w|^{\lambda}(|w|\vee|z|)^{\kappa-\lambda}, \quad\lambda\in[0,1],\kappa>0,\] (A.4) \[|z^{\langle\kappa\rangle}-w^{\langle\kappa\rangle}| \leq C^{\prime\prime\prime}_{\kappa}|z-w|^{\lambda}(|w|\vee|z|)^{\kappa-\lambda}, \quad\lambda\in[0,1],\kappa>0.\]
Proof.: It suffices to prove the inequalities for the maximal value of \(\lambda\) (equal to \(2\) in (A.1), (A.2), and equal to \(1\) in (A.3), (A.4)). Indeed, for other values of \(\lambda\), it is enough to use the inequality \(|z-w|\leq|z-w|^{\mu}(|z|+|w|)^{1-\mu}\), \(\mu\in(0,1)\), \(w,z\in\mathbb{R}^{n}\).
Inequality (A.1) follows from (2.12). In particular, for \(a,b\in\mathbb{R}\), \(\lambda=2\), we have
(A.5) \[0\leq F_{\kappa}(a,b)=|b|^{\kappa}-|a|^{\kappa}-\kappa a^{\langle\kappa-1 \rangle}(b-a)\leq C_{\kappa}|b-a|^{2}(|b|\vee|a|)^{\kappa-2}.\]
To get the other inequalities, observe that they are obvious for \(w=0\). For \(w\neq 0\), we divide by \(|w|^{\kappa}\) and, denoting \(t:=|z|/|w|\in[0,\infty)\), \(v_{1}:=w/|w|\in\mathbb{S}^{n-1}\), \(v_{2}:=z/|z|\in\mathbb{S}^{n-1}\), we arrive at the following equivalent statements of (A.2), (A.3), (A.4):
(A.6) \[|t^{\kappa}v_{2}-v_{1}-\left((\kappa-1)v_{1}\otimes v_{1}+\text{ Id}\right)(tv_{2}-v_{1})| \leq C^{\prime}_{\kappa}|tv_{2}-v_{1}|^{2}(1\lor t)^{\kappa-2},\] (A.7) \[|t^{\kappa}-1| \leq C^{\prime\prime}_{\kappa}|tv_{2}-v_{1}|(1\lor t)^{\kappa-1},\] (A.8) \[|t^{\kappa}v_{2}-v_{1}| \leq C^{\prime\prime\prime}_{\kappa}|tv_{2}-v_{1}|(1\lor t)^{ \kappa-1}.\]
We have
(A.9) \[|tv_{2}-v_{1}|^{2}(1\lor t)^{\kappa-2}=\left((1-t)^{2}+2t(1-(v_{1},v_{2})) \right)(1\lor t)^{\kappa-2}.\]
If we square the right-hand sides of (A.7) and (A.8) then, up to a constant, we get
(A.10) \[|tv_{2}-v_{1}|^{2}(1\lor t)^{2\kappa-2}=\left((1-t)^{2}+2t(1-(v_{1},v_{2})) \right)(1\lor t)^{2\kappa-2}.\]
Denote \(\beta=1-(v_{1},v_{2})\in[0,2]\), so that (A.6) becomes
(A.11) \[|(t^{\kappa}-t)v_{2}+\left(\kappa-1\right)\left((1-t)+\beta t\right)v_{1}|\leq C ^{\prime}_{\kappa}\left((1-t)^{2}+2\beta t\right)(1\lor t)^{\kappa-2}.\]
This inequality is evident when \(t\) is away from \(1\), say, \(t\in[0,\frac{1}{2}]\) or \(t\in[2,\infty)\). Indeed, for \(t\leq 1/2\), we estimate the left-hand side by \(2\kappa\), while the function on the right-hand side is not smaller than \(\left(\frac{1}{2}\right)^{2}\), and (A.6) follows. When \(t\geq 2\), then the left-hand side is not greater than \((2\kappa-1)t^{\kappa}\), and for the right-hand side, we get
\[\left((1-t)^{2}+2\beta t\right)(1\lor t)^{\kappa-2}\geq\left(\frac{t}{2} \right)^{2}t^{\kappa-2}.\]
To deal with the remaining range \(t\in(1/2,2)\), we square both sides of (A.11). The left-hand side yields
(A.12) \[|(t^{\kappa}-t)v_{2}+(\kappa-1)((1-t)+t(1-(v_{1},v_{2})))v_{1}|^{2}\] \[= |F_{\kappa}(1,t)+(\kappa-1)\beta t|^{2}-2(t^{\kappa}-t)(\kappa-1)((1-t)+t\beta)\beta.\]
In view of (A.5), the first term on the right-hand side of (A.12) is bounded above by \((C_{\kappa}(1-t)^{2}+(\kappa-1)t\beta)^{2}\). Since the right-hand side of (A.6) is then not smaller than a constant multiple of \(((1-t)^{2}+2\beta t)\), we get the estimate of this part.
For the other term, we use the estimate \(\left|1-t^{\kappa-1}\right|\leq C(\kappa)\left|1-t\right|\), \(t\in(1/2,2]\), so
\[2(t-t^{\kappa})(\kappa-1)((1-t)+t\beta)\beta\leq C[(t-1)^{2}t\beta+4t^{2} \beta^{2}]\leq C((1-t)^{2}+2\beta t)^{2}.\]
The estimate (A.6) follows.
After squaring its sides, the proof of (A.8) is reduced to verifying
\[(t^{\kappa}-1)^{2}+2\beta t^{\kappa}\leq C\left((1-t)^{2}+2\beta t\right)(1 \lor t)^{2\kappa-2},\]
with a constant \(C\), uniformly in \(\beta\in[0,2]\). This is done like before. For \(t\geq 1\),
\[(t^{\kappa}-1)^{2}\leq C(1-t)^{2}t^{2\kappa-2}.\]
For \(0\leq t\leq 1/2\), the left-hand side is bounded and the right-hand side is bounded away from zero (uniformly in \(\beta\in[0,2]\)), while for \(t\in(1/2,1]\) we use the inequality \(t^{\kappa}\leq C(\kappa)t\), \(t\in(1/2,1)\). The square of the left-hand side of (A.7) is smaller than \((1-t)^{2}+2\beta t^{\kappa}\), i.e., the square of the left-hand side of (A.8). The proof is complete.
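As a complement to the proof, the inequalities (A.1)–(A.4) are easy to probe numerically by recording the empirical maxima of the corresponding ratios over random samples. The sketch below is an illustration only; the values of \(\kappa\), \(\lambda\), and the dimension are arbitrary choices.

```python
import numpy as np

kappa, lam2, lam1, n = 2.4, 1.5, 0.8, 3   # lam2 in [0,2] for (A.1)-(A.2), lam1 in [0,1] for (A.3)-(A.4)

def spow(z, k):
    nz = np.linalg.norm(z)
    return np.zeros_like(z) if nz == 0.0 else nz**(k - 1.0) * z

def jac(z, k):
    nz = np.linalg.norm(z); u = z / nz
    return nz**(k - 1.0) * ((k - 1.0) * np.outer(u, u) + np.eye(z.size))

rng = np.random.default_rng(8)
ratios = np.zeros(4)
for _ in range(10000):
    w, z = rng.normal(size=n), rng.normal(size=n)
    nw, nz, m = np.linalg.norm(w), np.linalg.norm(z), max(np.linalg.norm(w), np.linalg.norm(z))
    F = nz**kappa - nw**kappa - kappa * spow(w, kappa - 1.0) @ (z - w)    # F_kappa(w, z)
    Fv = spow(z, kappa) - spow(w, kappa) - jac(w, kappa) @ (z - w)        # F_<kappa>(w, z)
    den2 = np.linalg.norm(z - w)**lam2 * m**(kappa - lam2)
    den1 = np.linalg.norm(z - w)**lam1 * m**(kappa - lam1)
    ratios = np.maximum(ratios, [F / den2, np.linalg.norm(Fv) / den2,
                                 abs(nz**kappa - nw**kappa) / den1,
                                 np.linalg.norm(spow(z, kappa) - spow(w, kappa)) / den1])
print(ratios)   # empirical maxima of the ratios, which Lemma A.1 bounds by finite constants
```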
## Appendix B Calculus in \(L^{p}\)
Let \(p\in[1,\infty)\) be fixed. In the discussion of the multivariate Hardy-Stein identity above we use the differential calculus in the Banach space
\[L^{p}(\mathbb{R}^{d};\mathbb{R}^{n}):=\left\{\Upsilon\colon\mathbb{R}^{d} \to\mathbb{R}^{n}\text{ measurable, }\int_{\mathbb{R}^{d}}|\Upsilon(x)|^{p}\,\mathrm{d}x<\infty \right\},\quad n=1,2,\ldots,\]
with the norm \(\|\Upsilon\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}:=\left(\int_{\mathbb{R}^{ d}}|\Upsilon(x)|^{p}\,\mathrm{d}x\right)^{1/p}\), or
\[\|\Upsilon\|_{\ell^{n}_{2}(L^{p}(\mathbb{R}^{d}))}:=\left(\sum_{i=1}^{n}\|v_{ i}\|_{L^{p}(\mathbb{R}^{d})}^{2}\right)^{1/2},\]
where \(\Upsilon=(v_{1},\ldots,v_{n})\), \(v_{1},\ldots,v_{n}\in L^{p}(\mathbb{R}^{d})\). The norms are comparable:
\[\left(\int_{\mathbb{R}^{d}}|\Upsilon(x)|^{p}\,\mathrm{d}x\right)^ {\frac{1}{p}} = \left(\int_{\mathbb{R}^{d}}\left(\sum_{i=1}^{n}|v_{i}(x)|^{2} \right)^{\frac{p}{2}}\,\mathrm{d}x\right)^{\frac{1}{p}}\leq\left(\int_{ \mathbb{R}^{d}}\left(\sum_{i=1}^{n}|v_{i}(x)|\right)^{p}\,\mathrm{d}x\right)^ {\frac{1}{p}}\] \[= \||v_{1}|+\ldots+|v_{n}|\|_{L^{p}(\mathbb{R}^{d})}\leq\sum_{i=1}^ {n}\|v_{i}\|_{L^{p}(\mathbb{R}^{d})}\leq\sqrt{n}\|\Upsilon\|_{\ell^{n}_{2}(L^{p }(\mathbb{R}^{d}))}.\]
Let \(\Upsilon\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) and \(\Psi\in L^{q}(\mathbb{R}^{d};\mathbb{R}^{n})\), where \(p,q\in(1,\infty)\) with \(\frac{1}{p}+\frac{1}{q}=1\). We consider the canonical pairing
(B.1) \[\langle\Upsilon,\Psi\rangle:=\int_{\mathbb{R}^{d}}\Upsilon(x)\cdot\Psi(x)\, \mathrm{d}x=\sum_{j=1}^{n}\int_{\mathbb{R}^{d}}\upsilon_{j}(x)\psi_{j}(x)\, \mathrm{d}x.\]
For a mapping \([0,\infty)\ni t\mapsto\Upsilon(t)\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), we denote
\[\Delta_{h}\Upsilon(t)=\Upsilon(t+h)-\Upsilon(t)\quad\text{provided }t,t+h\geq 0.\]
As usual, \(\Upsilon\) is called _continuous_ at \(t_{0}\geq 0\) if \(\Delta_{h}\Upsilon(t_{0})\to 0\) in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) as \(h\to 0\). Furthermore, \(\Upsilon\) is called _differentiable_ at \(t_{0}\geq 0\) if the limit
(B.2) \[\lim_{h\to 0}\frac{1}{h}\Delta_{h}\Upsilon(t_{0})=:\Upsilon^{\prime}(t_{0})\]
exists in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\). If \(\Upsilon^{\prime}(t)\) defined by (B.2) is continuous at \(t=t_{0}\), then we say that \(\Upsilon\) is _continuously differentiable_ at \(t_{0}\). In other words, \(\Upsilon^{\prime}(t_{0})\) is the Frechet derivative of the mapping \([0,\infty)\ni t\mapsto\Upsilon(t)\) at \(t_{0}\); \(\Upsilon^{\prime}(0)\) denotes the right-hand side derivative at \(0\). Clearly, if \(\Upsilon\) is continuously differentiable on \([0,\infty)\), then \(\Upsilon\) is continuous on \([0,\infty)\).
Of course, \(\Upsilon=(\upsilon_{1},\ldots,\upsilon_{n})\) is continuous (respectively, differentiable, continuously differentiable) in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) if and only if all the functions \(\upsilon_{i}\), \(i=1,\ldots,n\), are continuous (respectively, differentiable, continuously differentiable) in \(L^{p}(\mathbb{R}^{d})\).
We next present a series of auxiliary lemmas.
**Lemma B.1**.: _Let \(\kappa\in(0,p]\). Then the following mappings are continuous:_
(B.3) \[L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\ni\Upsilon \mapsto \Upsilon^{\langle\kappa\rangle}\in L^{p/\kappa}(\mathbb{R}^{d}; \mathbb{R}^{n}),\] (B.4) \[L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\ni\Upsilon \mapsto |\Upsilon|^{\kappa}\in L^{p/\kappa}(\mathbb{R}^{d}).\]
Proof.: First, observe that \(|\Upsilon|^{\kappa}\) and \(\Upsilon^{\langle\kappa\rangle}\) are in \(L^{\frac{p}{\kappa}}(\mathbb{R}^{d})\) and \(L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})\), respectively, if \(\Upsilon\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\).
To prove (B.3), choose \(\lambda\in(0,1)\) such that \(\kappa-\lambda>0\) and suppose \(\Upsilon_{k}\to\Upsilon\) in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) as \(k\to\infty\). From (A.4) we get, for every \(x\in\mathbb{R}^{d}\),
\[|\Upsilon_{k}(x)^{\langle\kappa\rangle}-\Upsilon(x)^{\langle \kappa\rangle}| \leq C_{\kappa}^{\prime\prime\prime}|\Upsilon_{k}(x)-\Upsilon(x)|^{ \lambda}(|\Upsilon_{k}(x)|\vee|\Upsilon(x)|)^{\kappa-\lambda}.\]
Using Holder's inequality with exponents \(\kappa/\lambda\) and \(\kappa/(\kappa-\lambda)\), we get
\[\|\Upsilon_{k}^{\langle\kappa\rangle}-\Upsilon^{\langle\kappa\rangle}\|_{L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})}^{p/\kappa} = \int_{\mathbb{R}^{d}}|\Upsilon_{k}(x)^{\langle\kappa\rangle}-\Upsilon(x)^{\langle\kappa\rangle}|^{\frac{p}{\kappa}}\,\mathrm{d}x\] \[\leq \int_{\mathbb{R}^{d}}\left(C_{\kappa}^{\prime\prime\prime}\right)^{p/\kappa}|\Upsilon_{k}(x)-\Upsilon(x)|^{\frac{\lambda p}{\kappa}}(|\Upsilon_{k}(x)|\vee|\Upsilon(x)|)^{\frac{(\kappa-\lambda)p}{\kappa}}\,\mathrm{d}x\] \[\leq C\|\Upsilon_{k}-\Upsilon\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}^{p\lambda/\kappa}\cdot\left(\|\Upsilon_{k}\|_{L^{p}(\mathbb{R}^{d})}^{p(\kappa-\lambda)/\kappa}+\|\Upsilon\|_{L^{p}(\mathbb{R}^{d})}^{p(\kappa-\lambda)/\kappa}\right).\]
The result follows. The proof of (B.4) is similar.
The following generalization of [8, Lemma 13] follows from Holder's inequality.
**Lemma B.2**.: _Let \(q\in(1,\infty)\), \(r\in\left[\frac{q}{q-1},\infty\right)\), \(\Upsilon\in L^{q}(\mathbb{R}^{d};\mathbb{R}^{n})\), \(\Psi\in L^{r}(\mathbb{R}^{d};\mathbb{R}^{n})\). Then_
\[\|\Upsilon\cdot\Psi\|_{L^{\frac{qr}{q+r}}(\mathbb{R}^{d};\mathbb{R}^{n})}\leq\|\Upsilon\|_{L^{q}(\mathbb{R}^{d};\mathbb{R}^{n})}\|\Psi\|_{L^{r}(\mathbb{R}^{d};\mathbb{R}^{n})}.\]
_Moreover, if \(\Upsilon_{n}\to\Upsilon\) in \(L^{q}(\mathbb{R}^{d};\mathbb{R}^{n})\) and \(\Psi_{n}\to\Psi\) in \(L^{r}(\mathbb{R}^{d};\mathbb{R}^{n})\), then \(\Upsilon_{n}\cdot\Psi_{n}\to\Upsilon\cdot\Psi\) in \(L^{\frac{qr}{q+r}}(\mathbb{R}^{d};\mathbb{R}^{n})\), as \(n\to\infty\)._
The next lemma is an extension of [8, Lemma 15], where the result for \(|\Upsilon|^{\kappa}\) was proved for \(n=1\), \(\kappa=p\).
**Lemma B.3**.: _Let \(1<\kappa\leq p<\infty\) be given. If \([0,\infty)\ni t\mapsto\Upsilon(t)\) is continuously differentiable in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), then:_
* \(|\Upsilon|^{\kappa}\) _is continuously differentiable in_ \(L^{p/\kappa}(\mathbb{R}^{d})\) _and_ (B.5) \[(|\Upsilon|^{\kappa})^{\prime}=\kappa\Upsilon^{\langle\kappa-1\rangle}\cdot \Upsilon^{\prime},\]
* \(\Upsilon^{\langle\kappa\rangle}\) _is continuously differentiable in_ \(L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})\) _and_ (B.6) \[\left(\Upsilon^{\langle\kappa\rangle}\right)^{\prime}=\left(J_{\langle\kappa\rangle}\circ\Upsilon\right)\Upsilon^{\prime},\] _with_ \(J_{\langle\kappa\rangle}\) _defined in (2.6)._
Proof.: Both statements are proved similarly, therefore we only prove (B.6), as it is the more complicated of the two.
Observe that for every \(a\in\mathbb{S}^{n-1}\subset\mathbb{R}^{n}\) and \(A:=a\otimes a\), the linear mapping \(F\mapsto AF\) is a contraction on \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\). Indeed,
\[\|AF\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})} = \left(\int|(a,F)\cdot a|^{p}\;\mathrm{d}x\right)^{\frac{1}{p}}\] \[= \left(\int_{\mathbb{R}^{d}}|(a,F)|^{p}\,\mathrm{d}x\right)^{\frac {1}{p}}\leq\left(\int_{\mathbb{R}^{d}}\left(|a|\cdot|F|\right)^{p}\,\mathrm{d} x\right)^{\frac{1}{p}}=\left(\int_{\mathbb{R}^{d}}|F|^{p}\,\mathrm{d}x\right)^{ \frac{1}{p}}.\]
So, for every \(F\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), by Holder's inequality with exponents \(\kappa\) and \(\kappa/(\kappa-1)\),
\[\|(J_{\langle\kappa\rangle}\circ\Upsilon)F\|_{L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})} = \left\||\Upsilon|^{\kappa-1}\left((\kappa-1)(\frac{\Upsilon}{|\Upsilon|}\otimes\frac{\Upsilon}{|\Upsilon|})+\mathrm{Id}\right)F\right\|_{L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})}\] \[\leq \|\Upsilon\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}^{\kappa-1}\cdot\left\|\left((\kappa-1)(\frac{\Upsilon}{|\Upsilon|}\otimes\frac{\Upsilon}{|\Upsilon|})+\mathrm{Id}\right)F\right\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}\] \[\leq \kappa\|\Upsilon\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}^{\kappa-1}\cdot\|F\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}. \tag{B.7}\]
For \(t\geq 0\), we have the convergence \(\frac{1}{h}\Delta_{h}\Upsilon(t)\to\Upsilon^{\prime}(t)\) as \(h\to 0\), in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), with \(\Upsilon^{\prime}\) being continuous. This and (B.7) yield \(\frac{1}{h}(J_{\langle\kappa\rangle}\circ\Upsilon(t))\Delta_{h}(\Upsilon(t)) \to(J_{\langle\kappa\rangle}\circ\Upsilon(t))\Upsilon^{\prime}(t)\) in \(L^{p/\kappa}\) (with the limit continuous). Therefore we only need to verify that for \(h\to 0\),
\[W_{h}(t):=\frac{1}{h}\Delta_{h}\Upsilon^{\langle\kappa\rangle}(t)-(J_{\langle\kappa\rangle}\circ\Upsilon(t))\frac{1}{h}\Delta_{h}\Upsilon(t)\to 0\qquad\text{in }L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n}).\]
Since \(W_{h}(t)=\frac{1}{h}\mathcal{F}_{\langle\kappa\rangle}(\Upsilon(t),\Upsilon(t+h))\), we choose \(\lambda\in(1,2]\) such that \(\kappa-\lambda>0\), then use the inequality (A.2) to get:
\[|W_{h}(t)| \leq \frac{1}{|h|}C^{\prime}_{\kappa}|\Upsilon(t+h)-\Upsilon(t)|^{ \lambda}(|\Upsilon(t+h)|\vee|\Upsilon(t)|)^{\kappa-\lambda}\] \[= |h|^{\lambda-1}C^{\prime}_{\kappa}\left|\frac{1}{h}\Delta_{h} \Upsilon(t)\right|^{\lambda}(|\Upsilon(t+h)|\vee|\Upsilon(t)|)^{\kappa-\lambda}.\]
Furthermore, by Holder's inequality with parameters \(\kappa/\lambda\) and \(\kappa/(\kappa-\lambda)\),
\[\|W_{h}(t)\|_{L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})} \leq C^{\prime}_{\kappa}|h|^{\lambda-1}\left\|\left|\frac{1}{h}\Delta_{h}\Upsilon(t)\right|^{\lambda}(|\Upsilon(t+h)|\vee|\Upsilon(t)|)^{\kappa-\lambda}\right\|_{L^{p/\kappa}(\mathbb{R}^{d})}\] \[\leq C|h|^{\lambda-1}\left\|\frac{1}{h}\Delta_{h}\Upsilon(t)\right\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}^{\lambda}\cdot\||\Upsilon(t+h)|+|\Upsilon(t)|\|_{L^{p}(\mathbb{R}^{d})}^{\kappa-\lambda}.\]
We then conclude as in the proof of Lemma B.1.
Finally we invoke, without proof, an analogue of the Leibniz rule.
**Lemma B.4** (Product rule).: _Let \(p>1\) and \(r\in\left[\frac{p}{p-1},\infty\right)\) be given. If the mappings \([0,\infty)\ni t\mapsto\Upsilon(t)\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) and \([0,\infty)\ni t\mapsto\Psi(t)\in L^{r}(\mathbb{R}^{d};\mathbb{R}^{n})\) are continuously differentiable in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) and \(L^{r}(\mathbb{R}^{d};\mathbb{R}^{n})\), respectively, then \(\Upsilon\cdot\Psi\) is continuously differentiable in \(L^{\frac{pr}{p+r}}(\mathbb{R}^{d})\) and \((\Upsilon\cdot\Psi)^{\prime}=\Upsilon^{\prime}\cdot\Psi+\Upsilon\cdot\Psi^{\prime}\)._
**Lemma B.5**.: _Let \(p>1\). If \(f,g\in\mathcal{D}_{p}(L)\), then for \(t\in[0,\infty)\),_
\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^{d}}P_{t}f(P_{t}g)^{\langle p- 1\rangle}\mathrm{d}x=\int_{\mathbb{R}^{d}}\left((P_{t}g)^{\langle p-1\rangle} LP_{t}f+(p-1)P_{t}f|P_{t}g|^{p-2}LP_{t}g\right)\mathrm{d}x.\]
_If \(f,g\in L^{p}(\mathbb{R}^{d})\), then the formula holds for \(t\in(0,\infty)\)._
Proof.: Of course, \(P_{t}f\) and \(P_{t}g\) are continuously differentiable at \(t\geq 0\) in \(L^{p}(\mathbb{R}^{d})\) and \(\frac{\mathrm{d}}{\mathrm{d}t}P_{t}f=LP_{t}f\), \(\frac{\mathrm{d}}{\mathrm{d}t}P_{t}g=LP_{t}g\). Hence, by Lemma B.3 with \(n=1\), \((P_{t}g)^{\langle p-1\rangle}\) is continuously differentiable at \(t\geq 0\) in \(L^{\frac{p}{p-1}}(\mathbb{R}^{d})\) and \(\frac{\mathrm{d}}{\mathrm{d}t}(P_{t}g)^{\langle p-1\rangle}=(p-1)|P_{t}g|^{p- 2}LP_{t}g\). By Lemma B.4 with \(r=p/(p-1)\), \(P_{t}f(P_{t}g)^{\langle p-1\rangle}\) is continuously differentiable at \(t\geq 0\) in \(L^{1}(\mathbb{R}^{d})\) and
(B.9) \[\frac{\mathrm{d}}{\mathrm{d}t}\left(P_{t}f(P_{t}g)^{\langle p-1\rangle} \right)=(P_{t}g)^{\langle p-1\rangle}LP_{t}f+(p-1)P_{t}f|P_{t}g|^{p-2}LP_{t}g.\]
Since \(u\mapsto\int_{\mathbb{R}^{d}}u(x)\,\mathrm{d}x\) is a continuous linear functional on \(L^{1}(\mathbb{R}^{d})\), we get the result (the case of arbitrary \(f,g\in L^{p}(\mathbb{R}^{d})\) follows since the semigroup \(P_{t}\) is analytic).
## Appendix C Convexity properties
We provide here precise statements and proofs of convexity properties needed in Sections 4 and 5. First, we recall some facts from the theory of convex functions.
Let \(T:A\to\mathbb{R}\), where the set \(A\subset\mathbb{R}^{n}\) is convex. By definition, \(d(w)\in\mathbb{R}^{n}\) is a _subgradient_ of \(T\) at \(w\in A\) if
(C.1) \[T(z)\geq T(w)+d(w)\cdot(z-w)\quad\text{for all }z\in A.\]
The function \(T\) is convex in \(A\) if and only if for every \(w\in A\), a subgradient \(d(w)\) exists. If \(T\) is convex and the first-order partial derivatives of \(T\) exist at some \(w\in A\), then \(T\) has exactly one subgradient at the point \(w\), which is equal to its gradient \(\nabla T(w)\). Moreover, for convex \(T\), denoting by \(\frac{\partial T}{\partial v}\) the (one-sided) directional derivative of \(T\) along a given vector \(v\in\mathbb{R}^{n}\), we have that \(d(w)\) is a subgradient of the function \(T\) at the point \(w\in A\) if and only if
\[\frac{\partial T}{\partial v}(w)\geq d(w)\cdot v,\quad v\in\mathbb{R}^{n}.\]
For more details see Borwein and Lewis [13, Chapter 3].
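As a simple illustration of (C.1) (our own example, not used in the sequel), take \(n=1\), \(A=\mathbb{R}\) and \(T(w)=|w|\). For \(w\neq 0\) the only subgradient is \(d(w)=\operatorname{sign}(w)\), while at \(w=0\) every \(d\in[-1,1]\) is a subgradient, because
\[|z|\geq T(0)+d\,(z-0)=dz\quad\text{for all }z\in\mathbb{R}\]
holds precisely when \(|d|\leq 1\).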
We need the following lemma.
**Lemma C.1**.: _Let \(p\geq 2\). The function_
\[Y(z):=z_{1}\left(z_{2}\right)^{p-1}+|z|^{p},\quad z=(z_{1},z_{2})\in[0,\infty )^{2},\]
_is convex on \([0,\infty)^{2}\)._
Proof.: As \(Y\) is continuous on \([0,\infty)^{2}\), it is enough to prove the convexity on \((0,\infty)^{2}\). Recall (2.5) and
\[\nabla^{2}|z|^{p}=p(p-2)|z|^{p-4}\begin{bmatrix}z_{1}^{2}&z_{1}z_{2}\\ z_{1}z_{2}&z_{2}^{2}\end{bmatrix}+p|z|^{p-2}\mathrm{Id},\quad z\in\mathbb{R}^{2 }\setminus\{0\}.\]
The Hessian \(\nabla^{2}\left(z_{1}z_{2}^{p-1}\right)\) is calculated in (4.7). The Hessian \(\nabla^{2}Y(z)\) of \(Y\) is:
\[\begin{bmatrix}p|z|^{p-2}+p(p-2)z_{1}^{2}|z|^{p-4}&(p-1)z_{2}^{p-2}+p(p-2)z_{1 }z_{2}|z|^{p-4}\\ (p-1)z_{2}^{p-2}+p(p-2)z_{1}z_{2}|z|^{p-4}&(p-1)(p-2)z_{1}z_{2}^{p-3}+p|z|^{p- 2}+p(p-2)z_{2}^{2}|z|^{p-4}\end{bmatrix}\]
We will verify that for \(z\in(0,\infty)^{2}\), the matrix is positive semi-definite. Clearly,
\[\left[p|z|^{p-2}+p(p-2)z_{1}^{2}|z|^{p-4}\right]>0.\]
Moreover, after long, but elementary, calculations we get:
\[\det\nabla^{2}Y(z) = \left[p|z|^{p-2}+p(p-2)z_{1}^{2}|z|^{p-4}\right](p-1)(p-2)z_{1}z_ {2}^{p-3}\] \[+p^{2}|z|^{2p-4}+p^{2}(p-2)|z|^{2p-4}\] \[-(p-1)^{2}z_{2}^{2p-4}-p(p-1)(p-2)z_{1}z_{2}^{p-1}|z|^{p-4}\] \[= p^{2}(p-1)|z|^{2p-4}-(p-1)^{2}z_{2}^{2p-4}\] \[+p(p-1)(p-2)|z|^{p-4}\left((p-1)z_{1}^{3}z_{2}^{p-3}-z_{1}z_{2}^{ p-1}\right).\]
We have \(z_{2}\leq|z|\), so applying Young's inequality with exponents \(p\) and \(q=p/(p-1)\) to the product \(z_{1}z_{2}^{p-1}\) we obtain
\[z_{1}z_{2}^{p-1}\leq\frac{z_{1}^{p}}{p}+\frac{(p-1)z_{2}^{p}}{p}=\frac{1}{p}(z_ {1}^{2})^{\frac{p}{2}}+\frac{p-1}{p}(z_{2}^{2})^{\frac{p}{2}}\leq|z|^{p}.\]
Summarizing,
\[\det\nabla^{2}Y(z) \geq p(p-1)^{2}(p-2)|z|^{p-4}z_{1}^{3}z_{2}^{p-3}\] \[+|z|^{2p-4}(p-1)\left(p^{2}-p(p-2)-(p-1)\right)\] \[= p(p-1)^{2}(p-2)|z|^{p-4}z_{1}^{3}z_{2}^{p-3}+|z|^{2p-4}(p-1)(p+1)>0.\]
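To make the computation concrete, consider the simplest admissible case \(p=2\) (a sanity check of ours, not part of the original argument). Then all terms carrying the factor \(p-2\) vanish, \(Y(z)=z_{1}z_{2}+|z|^{2}\), and the Hessian above reduces to
\[\nabla^{2}Y(z)=\begin{bmatrix}2&1\\ 1&2\end{bmatrix},\]
whose leading principal minors are \(2\) and \(3\); this matrix is positive definite, in agreement with Lemma C.1.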
If \(w_{1}\leq z_{1},w_{2}\leq z_{2},\ldots,w_{k}\leq z_{k}\) implies \(T(w_{1},\ldots,w_{k})\leq T(z_{1},\ldots,z_{k})\) in the domain of a real-valued function \(T\), then we say \(T\) is _coordinate-wise nondecreasing_. The following fact is self-explanatory, see also Boyd and Vandenberghe [14, Section 3.2.4].
**Lemma C.2**.: _Let \(S\colon A\to\mathbb{R}^{k}\), \(S(A)\subset B\), and \(T\colon B\to\mathbb{R}\), where \(A\subset\mathbb{R}^{n}\) and \(B\subset\mathbb{R}^{k}\) are convex. If each coordinate of \(S\) is convex and \(T\) is coordinate-wise nondecreasing and convex, then the composition \(T\circ S\colon A\to\mathbb{R}\) is convex._
The following two lemmas are critical for our development.
**Lemma C.3**.: _Let \(p\geq 2\) and define, for \(z=(z_{1},z_{2})\in\mathbb{R}^{2}\),_
\[Y^{(+)}(z):=z_{1}\left((z_{2})_{+}\right)^{p-1}+|z|^{p},\qquad Y^{(-)}(z):=z_{1 }\left((z_{2})_{-}\right)^{p-1}+|z|^{p}.\]
_The functions are convex on \([0,\infty)\times\mathbb{R}\)._
Proof.: Define \(T\colon[0,\infty)\times\mathbb{R}\to[0,\infty)^{2}\) as
\[T(z):=(z_{1},(z_{2})_{+}),\quad z=(z_{1},z_{2})\in[0,\infty)\times\mathbb{R},\]
and let \(Y\colon[0,\infty)^{2}\to\mathbb{R}\) be as in Lemma C.1. Since each coordinate of \(T\) is convex and the function \(Y\) is convex and coordinate-wise nondecreasing, the composition
\[(Y\circ T)(z)=z_{1}((z_{2})_{+})^{p-1}+\left((z_{1})^{2}+((z_{2})_{+})^{2} \right)^{p/2}\]
is convex on \([0,\infty)\times\mathbb{R}\) (from Lemma C.2). Therefore,
\[Y^{(+)}(z)=\max\{(Y\circ T)(z),|z|^{p}\}\]
is convex on \([0,\infty)\times\mathbb{R}\) as the maximum of convex functions [14, Section 3.2.3].
To prove the convexity of \(Y^{(-)}\) we just notice that \(Y^{(-)}(z_{1},z_{2})=Y^{(+)}(z_{1},-z_{2})\).
**Lemma C.4**.: _If \(p>2\) then for all \(z,w\in\mathbb{R}^{2}\),_
\[\mathcal{J}_{p}^{(++)}(w,z)+\mathcal{F}_{p}(w,z)\geq 0,\quad\mathcal{J}_{p}^{(-+ )}(w,z)+\mathcal{F}_{p}(w,z)\geq 0,\]
_where \(\mathcal{J}_{p}^{(++)}(w,z)\), \(\mathcal{J}_{p}^{(-+)}(w,z)\) are given by (5.2) and (5.3)._
Proof.: Because of (2.10) and (5.5), we only need to show that \(\mathcal{J}_{p}^{(++)}(w,z)+\mathcal{F}_{p}(w,z)\geq 0\). We rewrite this inequality as
(C.2) \[Y^{(++)}(z)\geq Y^{(++)}(w)+d(w)\cdot(z-w),\]
where \(Y^{(++)}(z):=(z_{1})_{+}\left((z_{2})_{+}\right)^{p-1}+|z|^{p}\) and
(C.3) \[d(w):=\left(\mathbf{1}(w_{1})\left((w_{2})_{+}\right)^{p-1},(p-1)(w_{1})_{+} \left((w_{2})_{+}\right)^{p-2}\right)+pw^{\langle p-1\rangle}.\]
Therefore the proof of (C.2) amounts to checking that \(d(w)\) is a subgradient of the function \(Y^{(++)}\) at the point \(w\in\mathbb{R}^{2}\).
To show (C.2), we first establish the convexity of \(Y^{(++)}\). Define \(T\colon\mathbb{R}^{2}\to[0,\infty)^{2}\) as
\[T(z):=((z_{1})_{+},(z_{2})_{+}),\quad z=(z_{1},z_{2})\in\mathbb{R}^{2}.\]
Let \(Y\colon[0,\infty)^{2}\to\mathbb{R}\) as in Lemma C.1. Since each coordinate of \(T\) is convex and the function \(Y\) is convex and coordinate-wise nondecreasing, the convexity on \(\mathbb{R}^{2}\) of the composition
\[(Y\circ T)(z)=(z_{1})_{+}((z_{2})_{+})^{p-1}+\left(((z_{1})_{+})^{2}+((z_{2})_ {+})^{2}\right)^{p/2}\]
follows from Lemma C.2. Since
\[Y^{(++)}(z)=\max\{(Y\circ T)(z),|z|^{p}\},\]
it is convex on \(\mathbb{R}^{2}\) as a maximum of two convex functions.
If \(w=0\) then \(Y^{(++)}(w)=0\) and \(d(w)=0\), hence (C.2) is true for every \(z\).
If \(w\neq 0\), to show that \(d(w)\) is a subgradient of \(Y^{(++)}\) at \(w\), we need to prove that
\[\frac{\partial Y^{(++)}}{\partial v}(w)\geq d(w)\cdot v,\quad w\in\mathbb{R}^{ 2}\setminus\{0\},\quad\text{for every $v=(v_{1},v_{2})\in\mathbb{R}^{2}$.}\]
Let \(B:=\{(w_{1},w_{2})\in\mathbb{R}^{2}\colon w_{1}=0,w_{2}>0\}\) denote the positive vertical semi-axis. The function \(Y^{(++)}\) is differentiable everywhere except on \(B\). Thus, when \(w\notin B\), the gradient of \(Y^{(++)}\) exists, is given by (C.3), and
\[\frac{\partial Y^{(++)}}{\partial v}(w)=\nabla Y^{(++)}(w)\cdot v=d(w)\cdot v.\]
In the remaining case \(w\in B\), we have two possibilities. If \(v_{1}\geq 0\), then
\[\frac{\partial Y^{(++)}}{\partial v}(w) = \left((w_{2})_{+}\right)^{p-1}v_{1}+pw^{\langle p-1\rangle}\cdot v\] \[\geq \frac{1}{2}\left((w_{2})_{+}\right)^{p-1}v_{1}+pw^{\langle p-1\rangle}\cdot v= d(w)\cdot v.\]
Otherwise, when \(v_{1}<0\), then
\[\frac{\partial Y^{(++)}}{\partial v}(w)=pw^{\langle p-1\rangle}\cdot v\geq\frac{1}{2}\left((w_{2})_{+}\right)^{p-1}v_{1}+pw^{\langle p-1\rangle}\cdot v=d(w)\cdot v.\]
The proof is complete.
## Appendix D Alternative proof of polarization for \(p\geq 3\)
The main difficulty in the proof of Theorem 4.1 above is to justify the limiting procedure in the absence of nonnegativity in the integrands. For \(p\geq 3\), we can proceed differently: the absolute value of the function \(\mathcal{J}_{p}\) is dominated by the function \(\mathcal{G}_{p}\), which helps with the integrability issues in the proof of the polarized Hardy-Stein formula.
**Lemma D.1**.: _For every \(p\geq 3\), there is a constant \(c_{p}>0\) such that_
(D.1) \[|\mathcal{J}_{p}(w,z)|\leq c_{p}\mathcal{G}_{p}(w,z)\asymp\mathcal{H}_{p}(w,z ),\quad w,z\in\mathbb{R}^{2}.\]
Proof.: The formula (4.1), defining \(\mathcal{J}_{p}\), can be rewritten as
\[\mathcal{J}_{p}(w,z) = z_{1}z_{2}^{\langle p-1\rangle}-w_{1}w_{2}^{\langle p-1\rangle}-w_{ 2}^{\langle p-1\rangle}(z_{1}-w_{1})-(p-1)w_{1}|w_{2}|^{p-2}(z_{2}-w_{2})\] \[= (z_{1}-w_{1})(z_{2}^{\langle p-1\rangle}-w_{2}^{\langle p-1\rangle })+w_{1}[z_{2}^{\langle p-1\rangle}-w_{2}^{\langle p-1\rangle}-(p-1)|w_{2}|^{p- 2}(z_{2}-w_{2})].\]
Using (A.3) we can estimate the first summand above in the following manner
\[|z_{1}-w_{1}||z_{2}^{\langle p-1\rangle}-w_{2}^{\langle p-1\rangle}| \leq C_{p}^{\prime}|w_{1}-z_{1}|\cdot|w_{2}-z_{2}|\cdot(|w_{2}|\vee|z _{2}|)^{p-2}\] \[\leq \frac{C_{p}^{\prime}}{2}\left(|w_{1}-z_{1}|^{2}+|w_{2}-z_{2}|^{2 }\right)\left(|w_{2}|\vee|z_{2}|\right)^{p-2}\] \[\leq \frac{C_{p}^{\prime}}{2}|w-z|^{2}\left(|w|\vee|z|\right)^{p-2}= \frac{C_{p}^{\prime}}{2}\mathcal{G}_{p}(w,z).\]
For the second summand we use (A.2),
\[|w_{1}[z_{2}^{\langle p-1\rangle}-w_{2}^{\langle p-1\rangle}- (p-1)|w_{2}|^{p-2}(z_{2}-w_{2})]|\leq C_{p-1}^{\prime\prime}|w_{1} ||z_{2}-w_{2}|^{2}(|w_{2}|\vee|z_{2}|)^{p-3}\] \[\leq C_{p-1}^{\prime\prime}|w-z|^{2}(|w_{1}|\vee|z_{1}|)(|w_{2}| \vee|z_{2}|)^{p-3}\] \[\leq C_{p-1}^{\prime\prime}|w-z|^{2}(|w|\vee|z|)^{p-2}=C_{p-1}^{ \prime\prime}\mathcal{G}_{p}(w,z).\]
Thus,
\[|\mathcal{J}_{p}(w,z)|\leq c_{p}\mathcal{G}_{p}(w,z)\]
and \(\mathcal{G}_{p}(w,z)\asymp\mathcal{H}_{p}(w,z)\) by (2.13).
_Remark D.2_.: The statement (D.1) also holds for \(p=2\). Indeed, by (4.2),
\[|\mathcal{J}_{2}(w,z)|=|(z_{1}-w_{1})(z_{2}-w_{2})|\leq|z-w|^{2}=\mathcal{G}_{ 2}(w,z).\]
On the other hand, it fails in general for \(p\in(1,3)\setminus\{2\}\). Indeed, for \(k=1,2,\ldots\), let \(w^{(k)}:=\left(1,\frac{1}{k}\right)\), \(z^{(k)}:=\left(1,\frac{2}{k}\right)\). Then, by (4.1),
\[|\mathcal{J}_{p}(w^{(k)},z^{(k)})| = \left|\frac{2^{p-1}}{k^{p-1}}-\frac{1}{k^{p-1}}-\frac{p-1}{k^{p-1 }}\right|=\left|2^{p-1}-p\right|\frac{1}{k^{p-1}},\] \[\mathcal{G}_{p}(w^{(k)},z^{(k)}) = \frac{1}{k^{2}}\left(1+\frac{4}{k^{2}}\right)^{\frac{p-2}{2}}.\]
Our claim is verified by noting that
\[\frac{|\mathcal{J}_{p}(w^{(k)},z^{(k)})|}{\mathcal{G}_{p}(w^{(k)},z^{(k)})}= \frac{k^{3-p}\left|2^{p-1}-p\right|}{\left(1+\frac{4}{k^{2}}\right)^{\frac{p-2 }{2}}}\to\infty\quad\text{as }k\to\infty.\]
Estimate (D.1) allows us to substantially simplify the proof of the polarized Hardy-Stein identity (4.5). Indeed, for \(f,g\in L^{p}(\mathbb{R}^{d})\), \(u(t)=P_{t}f\), \(v(t)=P_{t}g\), \(\Phi=(u,v)\), and \(p\geq 3\), from Theorem 3.1 we have that
(D.2) \[\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{H}_{p}(P_{t }\Phi(x),P_{t}\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t<\infty,\]
so in view of Lemma D.1, an analogous integral of \(|\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))|\) is convergent as well. We next review the proof of Theorem 4.1: for \(f,g\in\mathcal{D}_{p}(L)\), we differentiate \(u(t)v(t)^{\langle p-1\rangle}\) in \(L^{1}(\mathbb{R}^{d})\) as in (4.12) and we have:
\[\frac{\,\mathrm{d}}{\,\mathrm{d}t}\int_{\mathbb{R}^{d}}u(t)v(t)^{\langle p-1 \rangle}\,\mathrm{d}x=-\lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} \mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))\frac{p_{h}(x,y)}{h}\,\mathrm{d}x \mathrm{d}y.\]
Since \(|\mathcal{J}_{p}|\leq\mathcal{H}_{p}\) and the integral in (D.2) is convergent, we can pass to the limit when \(h\to 0^{+}\) (we use the Dominated Convergence Theorem, (**P1**), and (**P2**)) to obtain
(D.3) \[\frac{\,\mathrm{d}}{\,\mathrm{d}t}\int_{\mathbb{R}^{d}}u(t)v(t)^{\langle p-1 \rangle}\,\mathrm{d}x=-\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_ {p}(P_{t}\Phi(x),P_{t}\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y.\]
The rest of the proof remains unchanged: we integrate from \(0\) to \(T\) with \(T>0\) fixed, then we pass to the limit \(T\to\infty\). Then, we relax the assumption that \(f,g\in\mathcal{D}_{p}(L)\) by using the analyticity of the semigroup.
## Appendix E Proof of (1.4)
Let \(\{B_{t},t\geq 0\}\) be the Brownian motion on the Euclidean space \(\mathbb{R}^{d}\) running at twice the usual speed, and let \((P_{t})_{t\geq 0}\) be its semigroup:
\[P_{t}f(x):=\mathbb{E}_{x}f(B_{t})=\int_{\mathbb{R}^{d}}f(y)p_{t}(x,y)\, \mathrm{d}y=(p_{t}*f)(x),\quad t>0,x\in\mathbb{R}^{d},\]
where
\[p_{t}(x)=(4\pi t)^{-d/2}e^{-\frac{|x|^{2}}{4t}},\quad t>0,x\in\mathbb{R}^{d}\]
and \(p_{t}(x,y):=p_{t}(x-y)\), as before. Let \(1<p<\infty\). It is well known that \((P_{t})_{t\geq 0}\) is a strongly continuous, analytic, Markovian semigroup of symmetric operators in \(L^{p}(\mathbb{R}^{d})\). In particular, for every \(t>0\) and \(f\in L^{p}(\mathbb{R}^{d})\), \(P_{t}f\) belongs to the domain of the generator of this semigroup. Estimates (2.16) and (2.17) hold true as well, therefore the key ingredients needed to prove Hardy-Stein identity remain satisfied for the Brownian motion. Thus, for every \(u\in L^{p}(\mathbb{R}^{d})\), we define, as before,
\[\mathcal{E}_{p}[u]:=\lim_{t\to 0}\mathcal{E}^{(t)}(u,u^{\langle p-1\rangle})\]
and
\[\mathcal{D}(\mathcal{E}_{p})=\{u\in L^{p}(\mathbb{R}^{d}):\ \lim_{t\to 0}\mathcal{E}^{(t)}(u,u^{\langle p-1\rangle})\text{ exists and is finite}\}.\]
Similarly as in the proof of Theorem 3.1, we obtain
(E.1) \[\int_{\mathbb{R}^{d}}|f|^{p}\,\mathrm{d}x=p\int_{0}^{\infty} \mathcal{E}_{p}[P_{t}f]\,\mathrm{d}t,\quad f\in L^{p}(\mathbb{R}^{d}).\]
The generator of the Gaussian semigroup \((P_{t})_{t\geq 0}\) acting on \(u\in L^{p}(\mathbb{R}^{d})\) is
\[Lu:=\lim_{h\to 0^{+}}\frac{1}{h}(P_{h}u-u),\quad\text{if the limit exists in }L^{p}(\mathbb{R}^{d}).\]
We can also write
\[Lu=\sum_{j=1}^{d}\frac{\partial^{2}u}{\partial x_{j}^{2}},\quad u\in L^{p}(\mathbb{ R}^{d}),\]
where the partial derivatives of \(u\) are understood in the distributional sense. We keep the letter \(L\) here to be consistent with the earlier development. The domain of the generator is
\[\mathcal{D}_{p}(L) := \{u\in L^{p}(\mathbb{R}^{d}):\lim_{h\to 0^{+}}(P_{h}u-u)/h\text{ exists in }L^{p}(\mathbb{R}^{d})\}\] \[= \left\{u\in L^{p}(\mathbb{R}^{d}):\sum_{j=1}^{d}\frac{\partial^{2 }u}{\partial x_{j}^{2}}\in L^{p}(\mathbb{R}^{d})\right\}.\]
In Appendix F we explain and justify the above statements.
As earlier, for \(u\in\mathcal{D}_{p}(L)\subset\mathcal{D}(\mathcal{E}_{p})\),
(E.2) \[\mathcal{E}_{p}[u]=-\langle Lu,u^{\langle p-1\rangle}\rangle.\]
To express the Hardy-Stein identity in a more explicit form, we need the following identity, which was proved by Metafune and Spina [27].
**Lemma E.1**.: _Let \(1<p<\infty\). For \(u\in W^{2,p}(\mathbb{R}^{d})\),_
(E.3) \[\int_{\mathbb{R}^{d}}u^{\langle p-1\rangle}Lu\,\mathrm{d}x=-(p-1)\int_{ \mathbb{R}^{d}}|u|^{p-2}|\nabla u|^{2}\,\mathrm{d}x,\]
_where \(W^{k,p}(\mathbb{R}^{d})\) is the Sobolev space of order \(k\)._
It is not hard to see that for \(t>0\) and \(f\in L^{p}(\mathbb{R}^{d})\), we have \(P_{t}f\in W^{2,p}(\mathbb{R}^{d})\). Indeed, for every multi-index \(\alpha=(\alpha_{1},\ldots,\alpha_{d})\), we denote, as usual, \(|\alpha|:=\alpha_{1}+\ldots+\alpha_{d}\) and \(\partial^{\alpha}:=\frac{\partial^{|\alpha|}}{\partial x_{1}^{\alpha_{1}}\ldots\partial x_{d}^{\alpha_{d}}}\). Then,
\[\|\partial^{\alpha}P_{t}f\|_{L^{p}(\mathbb{R}^{d})}=\|(\partial^{\alpha}p_{t} )\ast f\|_{L^{p}(\mathbb{R}^{d})}\leq\|\partial^{\alpha}p_{t}\|_{L^{1}( \mathbb{R}^{d})}\cdot\|f\|_{L^{p}(\mathbb{R}^{d})}<\infty.\]
By (E.2) and (E.3), for \(f\in L^{p}(\mathbb{R}^{d})\) and \(t>0\),
(E.4) \[\mathcal{E}_{p}[P_{t}f]=-\langle\Delta P_{t}f,(P_{t}f)^{\langle p-1\rangle} \rangle=(p-1)\int_{\mathbb{R}^{d}}|P_{t}f|^{p-2}|\nabla P_{t}f|^{2}\,\mathrm{d }x.\]
Since \(P_{t}f\in C^{\infty}(\mathbb{R}^{d})\), the above derivatives are taken in the classical sense. Here \(\Delta\) is the classical Laplacian. Using (E.4) we can express the Hardy-Stein identity (E.1) for the Gaussian semigroup in the desired form. This finishes the proof of (1.4).
## Appendix F The generator of the Gaussian semigroup in \(L^{p}\)
For completeness we prove the equivalence of two definitions of the Laplacian on \(L^{p}(\mathbb{R}^{d})\) used in Appendix E. Let \(1<p<\infty\). Let \((P_{t})_{t\geq 0}\) be the Gaussian semigroup.
It is well known that for \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{d})\), the space of infinitely differentiable functions on \(\mathbb{R}^{d}\) with compact support, we have
\[\lim_{h\to 0^{+}}\frac{1}{h}(P_{h}\varphi-\varphi)=\sum_{j=1}^{d}\frac{ \partial^{2}\varphi}{\partial x_{j}^{2}}=:\Delta\varphi,\quad\text{ (the limit taken in $L^{p}(\mathbb{R}^{d})$)}.\]
We show that the equality also holds for those functions in \(L^{p}(\mathbb{R}^{d})\) for which the right-hand side exists in \(L^{p}(\mathbb{R}^{d})\) in the distributional sense, without further regularity assumptions.
The _semigroup Laplacian_ is defined as
(F.1) \[Lf:=\lim_{h\to 0^{+}}\frac{1}{h}(P_{h}f-f),\quad f\in\mathcal{D}_{p}(L)\subset L^{ p}(\mathbb{R}^{d}),\]
where the limit above is taken in \(L^{p}(\mathbb{R}^{d})\), for \(f\) in the natural domain:
\[\mathcal{D}_{p}(L):=\{u\in L^{p}(\mathbb{R}^{d}):\lim_{h\to 0^{+}}(P_{h}u-u)/h \text{ exists in $L^{p}(\mathbb{R}^{d})$}\}.\]
Since \(L\) is the generator of a strongly continuous semigroup in \(L^{p}(\mathbb{R}^{d})\), the operator \(\lambda I-L\colon\mathcal{D}_{p}(L)\to L^{p}(\mathbb{R}^{d})\) is a bijection for every \(\lambda>0\).
We then recall the notion of the _distributional Laplacian_\(\tilde{L}\) of \(f\in L^{p}(\mathbb{R}^{d})\). If there exists \(g\in L^{p}(\mathbb{R}^{d})\) such that
\[\langle g,\varphi\rangle=\langle f,L\varphi\rangle=\left\langle f,\sum_{j=1}^ {d}\frac{\partial^{2}\varphi}{\partial x_{j}^{2}}\right\rangle=\langle f, \Delta\varphi\rangle\]
for all test functions \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{d})\), then we let \(\tilde{L}f:=g\). The class of all such functions \(f\) is denoted \(\mathcal{D}(\tilde{L})\). In other words,
\[\tilde{L}f=\sum_{j=1}^{d}\frac{\partial^{2}f}{\partial x_{j}^{2}},\]
where the partial derivatives are taken in distributional sense. The operators \(L\) and \(\tilde{L}\) coincide, which we prove below.
**Lemma F.1**.: _The operator \(\tilde{L}\) is an extension of \(L\)._
Proof.: Let \(f\in\mathcal{D}_{p}(L)\). For every \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{d})\), by symmetry of the operators \(P_{h}\),
\[\langle\tilde{L}f,\varphi\rangle=\langle f,L\varphi\rangle=\lim_{h\to 0^{+}} \frac{1}{h}\left(\langle f,P_{h}\varphi\rangle-\langle f,\varphi\rangle \right)=\lim_{h\to 0^{+}}\frac{1}{h}\left(\langle P_{h}f,\varphi\rangle- \langle f,\varphi\rangle\right)=\langle Lf,\varphi\rangle.\]
Thus, \(f\in\mathcal{D}(\tilde{L})\) and \(\tilde{L}f=Lf\).
**Lemma F.2**.: _For every \(\lambda>0\), the operator \(\lambda I-\tilde{L}\) defined on \(\mathcal{D}(\tilde{L})\) is one-to-one._
Proof.: Assume that \(f\in\mathcal{D}(\tilde{L})\) and \(\lambda f-\tilde{L}f=0\). We prove that \(f=0.\) Take \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{d})\), then using properties of convolutions and distributional derivatives, we can write
(F.2) \[\lambda f*\varphi-\tilde{L}(f*\varphi)=\lambda f*\varphi-(\tilde{L}f)*\varphi= (\lambda f-\tilde{L}f)*\varphi=0*\varphi=0.\]
This yields \(f*\varphi=0\). Indeed, assuming the contrary, since \(f*\varphi\in C_{0}^{\infty}(\mathbb{R}^{d})\), there is a point \(x_{0}\in\mathbb{R}^{d}\) at which \(f*\varphi\) attains a positive maximum or a negative minimum. If \(f*\varphi\) attains a positive maximum at \(x_{0}\), then \(\tilde{L}(f*\varphi)(x_{0})=\Delta(f*\varphi)(x_{0})\leq 0\) and
\[0=(\lambda f*\varphi)(x_{0})-\tilde{L}(f*\varphi)(x_{0})\geq(\lambda f* \varphi)(x_{0})>0,\]
a contradiction. The case where \(x_{0}\) is a negative minimum is handled similarly. Therefore \(f*\varphi=0\) for every \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{d})\), meaning \(f=0\).
**Proposition F.3**.: _We have \(\mathcal{D}_{p}(L)=\mathcal{D}(\tilde{L})\) and \(L=\tilde{L}\)._
Proof.: Take any \(\lambda>0\). In view of Lemmas F.1 and F.2, \(\lambda I-\tilde{L}:\mathcal{D}(\tilde{L})\to L^{p}(\mathbb{R}^{d})\) is an injective extension of the bijection \(\lambda I-L:\mathcal{D}_{p}(L)\to L^{p}(\mathbb{R}^{d})\). Since a bijection onto \(L^{p}(\mathbb{R}^{d})\) admits no proper injective extension, the two operators are equal: \(\mathcal{D}_{p}(L)=\mathcal{D}(\tilde{L})\) and \(\lambda I-L=\lambda I-\tilde{L}\). Thus, \(L=\tilde{L}\).
|
2305.19748 | UKP-SQuARE: An Interactive Tool for Teaching Question Answering | The exponential growth of question answering (QA) has made it an
indispensable topic in any Natural Language Processing (NLP) course.
Additionally, the breadth of QA derived from this exponential growth makes it
an ideal scenario for teaching related NLP topics such as information
retrieval, explainability, and adversarial attacks among others. In this paper,
we introduce UKP-SQuARE as a platform for QA education. This platform provides
an interactive environment where students can run, compare, and analyze various
QA models from different perspectives, such as general behavior,
explainability, and robustness. Therefore, students can get a first-hand
experience in different QA techniques during the class. Thanks to this, we
propose a learner-centered approach for QA education in which students
proactively learn theoretical concepts and acquire problem-solving skills
through interactive exploration, experimentation, and practical assignments,
rather than solely relying on traditional lectures. To evaluate the
effectiveness of UKP-SQuARE in teaching scenarios, we adopted it in a
postgraduate NLP course and surveyed the students after the course. Their
positive feedback shows the platform's effectiveness in their course and
invites a wider adoption. | Haishuo Fang, Haritz Puerto, Iryna Gurevych | 2023-05-31T11:29:04Z | http://arxiv.org/abs/2305.19748v2 | # UKP-SQuARE: An Interactive Tool for Teaching Question Answering
###### Abstract
The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course. Additionally, the breadth of QA derived from this exponential growth makes it an ideal scenario for teaching related NLP topics such as information retrieval, explainability, and adversarial attacks among others. In this paper, we introduce UKP-SQuARE as a platform for QA education. This platform provides an interactive environment where students can run, compare, and analyze various QA models from different perspectives, such as general behavior, explainability, and robustness. Therefore, students can get a first-hand experience in different QA techniques during the class. Thanks to this, we propose a learner-centered approach for QA education in which students proactively learn theoretical concepts and acquire problem-solving skills through interactive exploration, experimentation, and practical assignments, rather than solely relying on traditional lectures. To evaluate the effectiveness of UKP-SQuARE in teaching scenarios, we adopted it in a postgraduate NLP course and surveyed the students after the course. Their positive feedback shows the platform's effectiveness in their course and invites a wider adoption.
## 1 Introduction
Question Answering (QA) is one of the overarching research topics in Natural Language Processing (NLP). QA pipelines have been developed to address different types of questions, knowledge sources, and answer formats, including extractive, abstractive, knowledge base, multiple-choice, generative, and open-domain QA. Such a massive number of QA systems and relevant NLP techniques are making QA lectures more important in NLP courses. However, despite QA being an application-oriented topic (e.g., chatbots, virtual assistants, etc.), classes are usually theoretically driven. Thus, in this paper, we propose the use of the UKP-SQuARE platform as a tool for QA education. This platform integrates most QA formats, popular models, datasets, and analysis tools, such as explainability, adversarial attacks, and graph visualizations.
Compared with conventional teacher-led classes, we propose a learner-centered class following the flipped classroom [1] with UKP-SQuARE as the driving tool of the lecture. This tool provides an interface for users to interact with different QA models and analysis tools. Therefore, students can actively learn about QA systems and get hands-on experience by interacting with models on the platform. Concretely, students can flexibly compare multiple architectures that model different QA formats, analyze their outputs with explainability tools, and even analyze their robustness against adversarial attacks. Prior studies have shown that flipped classroom lectures improve the learning process of students in programming courses [1]. Thus, we believe that teaching and learning QA through a live demo with this platform can also make NLP lectures more engaging, drawing students' attention, and interest in the topics.
To investigate the effectiveness of UKP-SQuARE in QA education, we adopted it for the first time in a postgraduate NLP course1 and conducted a survey afterward. The positive feedback from the students encourages us to continue adopting this platform and education method in more NLP courses. The contributions of this paper are: i) a novel interactive learner-centered methodology to teach QA and relevant NLP topics, ii) extending the UKP-SQuARE platform for teaching QA, and iii) the design of a syllabus for interactive QA lectures.
Footnote 1: Master’s level course
## 2 Ukp-SQuARE
UKP-SQuARE Baumgartner et al. (2022); Sachdeva et al. (2022); Puerto et al. (2023) is an extendable and interactive QA platform that integrates numerous popular QA models such as deepset's roberta-base-squad2, SpanBERT Joshi et al. (2020) for HotpotQA, and QAGNN Yasunaga et al. (2021). It provides an ecosystem for QA research, including comparing different models, explaining model outputs, adversarial attacks, graph visualizations, behavioral tests, and multi-agent models. In addition, this platform provides a user-friendly interface3 that enables users to interact with it. Users can run available models, deploy new ones, compare their behaviors, and explain outputs.
Footnote 2: [https://huggingface.co/deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2)
Footnote 3: [https://square.ukp-lab.de/](https://square.ukp-lab.de/)
## 3 Learning Question Answering with Ukp-SQuARE
In this section, we present the syllabus of a lecture focused on QA and relevant NLP topics that use the platform UKP-SQuARE following the flipped classroom methodology Bishop and Verleger (2013). The flipped classroom is an effective learner-centered educational methodology in which students study pre-recorded lectures and materials in advance to engage in more interactive and collaborative learning activities in class. UKP-SQuARE can be the driving tool for the flipped classroom in QA education. With our platform, lecturers can introduce the topics by interacting with the students and then proceed to an in-depth explanation of the technical details behind the methods of each topic. We propose dividing the lecture into three topics in the QA field: basic QA concepts, trustworthy QA, and multi-agent QA systems. With these topics, students can learn about QA and related NLP topics such as information extraction, explainability, adversarial attacks, and multi-agent systems.
### Learning Basic QA Components
QA systems include two main components, i.e., Readers and Retrievers. Readers are QA models responsible for obtaining answers from the context retrieved by retrievers. In UKP-SQuARE, students can easily learn about various readers (QA models) across different QA formats, as well as information retrieval techniques, by interacting with the interface.
#### 3.1.1 Contrasting Different QA Formats
With UKP-SQuARE, students can get first-hand experience by interacting with multiple models on our platform. The home readings would include descriptions of the main QA datasets and their baselines. In class, the lecturer can show the different QA formats with real demonstrations of the models and explain on the fly the architectural differences needed to model each QA format. An example is shown in Figure 1, where a span-extraction QA model, i.e., SpanBERT, and a multiple-choice QA model, i.e., a CommonsenseQA model, are presented to show the difference between these two QA formats. Such interactions can make theoretical explanations of the architectures easier to digest and, therefore, the class more engaging.
#### 3.1.2 Learning Information Retrieval
For learning Information Retrieval (IR) methods, the user interface of UKP-SQuARE offers a compelling way to help students differentiate between IR methods, e.g., lexical retrieval and semantic retrieval, and understand how they affect the final performance of QA models. The home readings would include book chapters or slides describing the main IR methods, such as TF-IDF Sparck Jones (1988), BM25 Robertson et al. (1995), Sentence-BERT Reimers and Gurevych (2019), and Dense Passage Retrieval (DPR) Karpukhin et al. (2020). As in the previous section, the lecturer can guide students to discover the difference between lexical retrieval (e.g., BM25) and semantic retrieval (e.g., DPR) by exploring UKP-SQuARE themselves. As shown in Figure 2, for the question _When was Barack Obama's inauguration?_, the BM25 retriever returns a passage that covers all keywords but is irrelevant to the question, while the DPR retriever returns the correct document, which contains the answer to the question. By providing this example in class, students can easily understand that DPR retrieves semantically similar passages while BM25 only retrieves passages that contain the query tokens and, thus, may retrieve unrelated passages. This could be further explored by comparing two open-domain QA models that implement these retrieval methods with the same reader model, to demonstrate the error propagation due to irrelevant passages. This active learning method can prevent the issue of students losing attention that commonly occurs in traditional lectures Felder and Brent (2003).
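To let students reproduce this comparison outside the platform, the lecturer can hand out a short script along the following lines. This is only an illustrative sketch and not part of UKP-SQuARE: the toy corpus and query are invented, and the use of the rank_bm25 package together with the all-MiniLM-L6-v2 sentence-embedding model is our own choice for the example.

```python
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

corpus = [
    "Barack Obama was inaugurated as president on January 20, 2009.",
    "Barack Obama attended the inauguration of a new museum wing.",
    "The weather during the ceremony in Washington was unusually cold.",
]
query = "When was Barack Obama's inauguration?"

# Lexical retrieval: BM25 scores passages by weighted keyword overlap.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
lexical_scores = bm25.get_scores(query.lower().split())

# Semantic retrieval: rank passages by cosine similarity of sentence embeddings.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = encoder.encode(corpus, convert_to_tensor=True)
query_emb = encoder.encode(query, convert_to_tensor=True)
semantic_scores = util.cos_sim(query_emb, doc_emb)[0]

for doc, lex, sem in zip(corpus, lexical_scores, semantic_scores.tolist()):
    print(f"BM25={lex:5.2f}  cosine={sem:5.2f}  {doc}")
```

Students can then discuss why a passage with high keyword overlap may still receive a low semantic score, mirroring the BM25 vs. DPR comparison above.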
### Learning Trustworthy QA Systems
In addition to learning basic QA components, it is important to understand how to identify and evaluate trustworthy QA systems. This involves several related NLP topics, such as explainability, transparency, and robustness. UKP-SQuARE provides such analysis tools to facilitate students' learning process of trustworthy QA systems.
#### 3.2.1 Explainability Methods
The rapid adoption of AI is pushing regulators to introduce policies governing its use. One of the key points they aim to address is the explainability of these methods, to make AI safer4. Thus, it is of utmost importance to include explainability methods in AI courses at universities. Regarding the explainability of QA models, UKP-SQuARE includes BertViz (Vig, 2019) and a suite of saliency map methods to facilitate the understanding of a model's decision-making process. Saliency maps employ attribution-weighting techniques such as gradient-based (Simonyan et al., 2014; Sundararajan et al., 2017) and attention-based (Jain et al., 2020; Serrano and Smith, 2019) methods to determine the relative importance of each token for the model prediction. The descriptions of these methods would form part of the home readings; to make the classes more active, the lecture would be driven by real examples of saliency maps on our platform and their interpretation. In this way, students can learn how to explain the output of a QA model based on saliency maps.
Footnote 4: [https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence](https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence)
An example of a saliency map is shown in Figure 3. The color level of the highlighted text reflects its importance for the answer. As we can see, _of what celestial body?_ is the most important part of
Figure 1: Different QA formats in UKP-SQuARE
Figure 2: Example of difference between using BM25 retriever and DPR retriever. The red boxes represent keywords in the retrieved passages
the question, while _sun_ gets the most attention in the context, which is the final answer. This means the model successfully understands the main point of the question and can link them to the context. Making this type of interpretation can help students identify potential problems or biases in the models.
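For instructors who want students to see what happens behind such a visualization, a bare-bones gradient-based saliency map can be computed with the Hugging Face Transformers library. The sketch below is only an approximation of what saliency tools do and is not the UKP-SQuARE implementation; the checkpoint and the example inputs are chosen purely for illustration.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "deepset/roberta-base-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
model.eval()

question = "The Earth orbits what celestial body?"
context = "The Earth revolves around the sun once every year."
enc = tokenizer(question, context, return_tensors="pt")

# Embed the tokens manually so that gradients w.r.t. the embeddings are available.
embeddings = model.get_input_embeddings()(enc["input_ids"]).detach().requires_grad_(True)
outputs = model(inputs_embeds=embeddings, attention_mask=enc["attention_mask"])

# Score of the most likely answer span: best start logit plus best end logit.
score = outputs.start_logits.max() + outputs.end_logits.max()
score.backward()

# Gradient-based saliency: L2 norm of the gradient for each input token.
saliency = embeddings.grad.norm(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
for token, value in sorted(zip(tokens, saliency.tolist()), key=lambda x: -x[1])[:10]:
    print(f"{token:>15}  {value:.4f}")
```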
#### 3.2.2 Behavioral Tests in QA models
The next important component in trustworthy QA is behavioral tests of models. Machine learning models do not throw errors the way regular software programs do. Instead, an error in machine learning is usually an unwanted behavior, such as a misclassification that may go unnoticed by a person Ribeiro et al. (2020). This makes testing machine learning models challenging. To simplify the behavioral analysis of machine learning models, Ribeiro et al. (2020) propose _CheckList_, a list of inputs and expected outputs that aims to analyze the general linguistic capabilities of NLP models, mimicking unit tests in software engineering. The integration of _CheckList_ into UKP-SQuARE offers a simple method to analyze the performance of QA models beyond traditional benchmarks, such as the MRQA tasks Fisch et al. (2019).
As illustrated in Figure 4, we test the SQuAD 2.0 RoBERTa Adapter and the SQuAD 2.0 BERT Adapter with CheckList, which probes multiple NLP capabilities such as coreference, negation, and robustness. As we can see, the SQuAD 2.0 BERT Adapter performs worse than the RoBERTa Adapter along these dimensions. Such an example can be used by the lecturer in class to introduce the idea of behavioral tests on the fly. In addition, the behavioral tests of UKP-SQuARE can be used to foster the students' analytical skills. A potential assignment could be to train a QA model and deploy it on our platform to analyze it with the provided ecosystem of QA tools. In particular, thanks to the behavioral tests in UKP-SQuARE, students can provide a deeper analysis of their model based on the quantitative results on their test set and a qualitative analysis based on the behavioral test results.
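A lightweight way to convey the idea before introducing the full tooling is to let students write a single behavioral "unit test" by hand: perturb the input in a controlled way and check whether the prediction changes as expected. The sketch below is our own minimal example in the spirit of CheckList, using the Hugging Face question-answering pipeline and invented test sentences rather than the CheckList API.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

# Invariance test: swapping a name in question and context should not change the answer.
question = "When did Anna move to Berlin?"
context = "Anna moved to Berlin in 2015 and has lived there ever since."
for name in ["Anna", "Maria"]:
    answer = qa(question=question.replace("Anna", name),
                context=context.replace("Anna", name))["answer"]
    print(("PASS" if "2015" in answer else "FAIL"), f"[invariance, name={name}] -> {answer!r}")

# Negation test: once the fact is negated, the old answer should no longer be returned.
negated = "Anna did not move to Berlin in 2015; she moved there in 2018."
answer = qa(question=question, context=negated)["answer"]
print(("PASS" if "2018" in answer else "FAIL"), f"[negation] -> {answer!r}")
```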
#### 3.2.3 Adversarial Attacks
Policymakers are also designing a regulatory framework that guarantees users that their AI models are resilient to adversarial attacks5. Therefore, AI curriculums should also include adversarial attacks to prepare students for these new regulations.
Footnote 5: See footnote 3
UKP-SQuARE provides tools to conduct adversarial attacks, such as HotFlip Ebrahimi et al. (2018), input reduction Feng et al. (2018), and sub-span Jain et al. (2020). Thus, the home readings should include a theoretical introduction to these methods. Then, the lecture would use the platform to exploit the interactive nature of adversarial attacks. In particular, the need to analyze examples to understand different types of attacks makes this part of the topic especially practical. Therefore, the lecturer can introduce the topic through UKP-SQuARE and delve deeper into the technical details afterward.
For example, students can attack real models by tuning different parameters, such as the number of flips in HotFlip, to see how the output changes when they subtly change the input data. In Figure 5, only flipping _._ (the full stop) to _wore_ can directly change the answer. In class, a small experiment can be set up by lecturers in which students need to manually manipulate the input to see if they can trick the model into making
Figure 4: The result of running CheckList for SQuAD 2.0 RoBERTa Adapter and BERT Adapter. The number of failed and succeeded test cases are highlighted in green and red.
Figure 3: An attention-based saliency map of a question in UKP-SQuARE.
incorrect answers and compare it with adversarial attack tools to deepen their understanding of those adversarial attacks and the importance of building up trustworthy QA systems.
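Input reduction, one of the attacks mentioned above, is simple enough that students can re-implement a greedy variant themselves and compare it with the platform's output. The following sketch is our own minimal version, not the UKP-SQuARE implementation: it repeatedly removes the question word whose deletion least affects the prediction, as long as the predicted answer stays the same.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "The Earth revolves around the sun once every year."
question = "The Earth orbits what celestial body?"
original_answer = qa(question=question, context=context)["answer"]

words = question.split()
while len(words) > 1:
    candidates = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        prediction = qa(question=reduced, context=context)
        if prediction["answer"] == original_answer:
            # Removing this word keeps the prediction; remember the remaining confidence.
            candidates.append((prediction["score"], i))
    if not candidates:
        break  # every further removal would change the answer
    _, best_index = max(candidates)
    del words[best_index]

print("Original question:", question)
print("Reduced question :", " ".join(words), f"(answer still {original_answer!r})")
```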
#### 3.2.4 Graph-based QA Models
Knowledge Graph Question Answering (KGQA) systems can have strong explanatory power thanks to the reasoning paths that can be extracted from the graph. Such transparency can enhance the interpretability and trustworthiness of the system. UKP-SQuARE currently offers QA-GNN (Yasunaga et al., 2021), a KGQA model that makes use of ConceptNet (Speer et al., 2017), and provides a visualization interface to explore the subgraph used by the model.
Although a reasoning path in a graph may provide a clear explanation of a model's prediction, we believe that interpreting graph-based models is not straightforward because, usually, that path contains many irrelevant nodes and edges that may obscure the actual reasoning of the model. Thus, we propose to teach KGQA models with real examples of graphs. In this way, the lecturer, or even the students themselves, can walk through the process of cleaning the graph to obtain and interpret the reasoning path. This process would be much more valuable for the students' future endeavors than a set of slides with examples of preprocessed clean graphs, because they will be able to reproduce what they learn in real use cases in companies.
### Learning Multi-Agent Systems
Lastly, the current progress in QA is pushing toward creating robust models across multiple domains. To this end, there are two types of approaches: multi-dataset models and multi-agent models. While the former trains a single architecture on multiple datasets, the latter does the opposite: it trains multiple models (agents) on individual datasets and then combines the agents. UKP-SQuARE is compatible with both approaches; therefore, it is an ideal platform to teach them.
Thanks to UKP-SQuARE, we can also follow a flipped classroom methodology to teach multi-agent systems. After students have read class materials explaining these models at home, the class time can be used to explain the topic with a live demonstration of the models. In particular, we can easily show that multi-agent systems such as MetaQA (Puerto et al., 2021) select different agents depending on the input question. Figure 7 shows that the first answer selected by MetaQA, which is the correct one, comes from an out-of-domain agent, while the second answer, which is not correct, comes from the in-domain agent. This example illustrates the collaboration between agents achieved by multi-agent systems and can be an ideal way of starting the lecture on this topic before explaining the architectural details of MetaQA. Similarly, the platform can be used to introduce multi-dataset systems such as UnifiedQA (Khashabi et al., 2020), before delving into detailed explanations of the model. In particular, the lecturer can explain the QA formats accepted by UnifiedQA through real examples, and then continue with the training details of the model with the support of slides.
Figure 5: A HotFlip example where only flipping _._ (full stop) to _670_ changes the answer.
Figure 6: A visualized reasoning graph of the question _Where would you find a basement that can be accessed with an elevator?_
### Assignments with UKP-SQuARE
In addition to the above teaching scenarios in class, we also propose a homework assignment based on UKP-SQuARE6 that leverages the insights and knowledge students acquire from the class. The students need to train their own QA model using the popular Hugging Face Transformers library (Wolf et al., 2020), deploy the model on our platform, and then write a detailed report where they analyze their model from multiple perspectives. This report must include a quantitative analysis of the performance of their model on the test set and also a qualitative analysis that includes an explanation of the model's outputs for a series of input questions, adversarial attacks that expose errors of their model, and an analysis of possible behavioral errors obtained from _CheckList_. Furthermore, the students should also compare their model with other available models and identify the type of questions where their model fails. This would help them understand that models overfit the domain of their training data and, therefore, may fail in other domains. This assignment requires students to truly understand each component they learned during the class, which will help them consolidate their knowledge and develop a deeper understanding of the inner workings of different QA techniques. Additionally, the assignment can serve as a useful assessment tool, enabling teachers to gauge students' understanding of the material and provide targeted feedback and support as needed.
Footnote 6: [https://colab.research.google.com/drive/17qwldLMmU5EDxf9TLR29zIG9-EGKmNxP?usp=share_link](https://colab.research.google.com/drive/17qwldLMmU5EDxf9TLR29zIG9-EGKmNxP?usp=share_link)
### User Study
To quantitatively evaluate the effectiveness of UKP-SQuARE in teaching the above QA techniques, we designed a questionnaire to collect feedback from students. The questionnaire was administered to a group of students who had completed a graduate NLP course that used our platform both during class and for the assignment. All participants are 20-to-30-year-old graduate students in computer science. The questionnaire mainly focuses on two aspects: whether UKP-SQuARE deepens their understanding of techniques in QA systems and whether it makes it easier to get hands-on experience with UKP-SQuARE. The majority of questions require students to rate on a scale of 1 to 5. The complete questionnaire can be found in Appendix A.
Figure 8 shows the Likert scale chart with the responses of seven students who participated in the survey. As we can see, students have very positive attitudes towards all aspects of UKP-SQuARE for their QA learning. All participants think that the platform makes the class more engaging and interesting. In particular, most of them (91%) think UKP-SQuARE helps them better distinguish different QA formats. For information retrieval, the majority of the respondents do not think that the platform can help them better understand the difference between lexical retrieval and semantic retrieval. The main reason behind this is that the difference between lexical and semantic retrievers is challenging to distinguish through visualization alone, unless students actively compare the documents by themselves. Besides, it also requires students
Figure 7: Multi-Agent QA in UKP-SQuARE: different agents are selected to predict the answer based on the input
to have a good understanding of semantic similarity and lexical similarity. Therefore, we plan to improve it by showing the difference between vector similarity and keyword matching between questions and retrieved documents. Regarding explainability and adversarial attack tools, around two-thirds of students believe that the platform facilitates their learning process of these topics. When it comes to hands-on experience, the vast majority of students agree that UKP-SQuARE is easy to use. Our platform provides an infrastructure that dramatically lowers the bar for students to get hands-on experience. All students think that without UKP-SQuARE, they would spend more time finding suitable open-source software to compare different models, analyze the output, and conduct adversarial attacks. Moreover, the respondents estimated that without UKP-SQuARE, the average time spent on homework would increase from 2-5 hours to more than 8 hours. One student also commented that doing experiments with the platform was straightforward and allowed him to try different ideas without any overhead. Therefore, although the survey sample is small and limits the conclusions, this overall positive feedback invites us to continue investigating how to conduct our QA and NLP classes more interactively with UKP-SQuARE and suggests that our students would benefit from extending this interactive class to other NLP topics such as generative pre-trained large language models, prompting with reinforcement learning from human feedback, word embeddings, parsing trees, and machine translation among others.
## 4 Related Work
The most relevant tool is the AllenNLP demo7, which provides a user interface to the main components of the AllenNLP library Gardner et al. (2018). This website includes an interface where users can interact with five extractive QA models. However, their goal is to have a showcase of their library rather than an extensive platform for teaching QA. Thus, their functionalities are limited. Most of their deployed models are outdated, only cover extractive QA settings, and do not provide information retrieval methods. Moreover, their explainability and adversarial attacks are not compatible with their transformer-based model. Furthermore, they do not provide graph-based models, which can be useful to explain graph neural networks and explainability methods based on graphs. Additionally, it cannot be used for our homework assignment because users cannot deploy and analyze their own models with explainability and adversarial attack tools as in our platform. However, they do provide demos for other NLP topics, such as Open Information Extraction and named entity recognition, and parsing trees, among others.
Footnote 7: [https://demo.allennlp.org/reading-comprehension/](https://demo.allennlp.org/reading-comprehension/)
## 5 Conclusion
In this paper, we present a novel method to teach question-answering to postgraduate NLP students following the learner-centered method of flipped classrooms. We propose to provide reading materials to the students before the class and use the UKP-SQuARE platform as a driving tool to conduct the class. This platform integrates the most popular QA pipelines and an ecosystem of tools to analyze the available models. These tools include explainability methods, behavioral tests, adversarial attacks, and graph visualizations. We provide a series of use cases for teaching based on the provided models and methods by UKP-SQuARE, showing that classes can become much more interactive by using UKP-SQuARE than in conventional lectures. To evaluate the effectiveness of the platform and our methodology, we conducted a survey to collect feedback from students who took our class. The results show that most of the students think
Figure 8: Students' feedback on UKP-SQuARE used in QA education.
UKP-SQuARE accelerates their learning process and reduces the overhead to get hands-on experience. We plan to extend our platform to support prompting large language models, and therefore, we leave as future work creating a curriculum to teach prompting methods.
## Acknowledgements
We thank Max Eichler, Martin Tutek, Thomas Arnold, Tim Baumgartner, and the anonymous reviewers for their insightful comments on a previous draft of this paper. This work has been funded by the German Research Foundation (DFG) as part of the UKP-SQuARE project (grant GU 798/29-1), the QASciInf project (GU 798/18-3), and by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK) within their joint support of the National Research Center for Applied Cybersecurity ATHENE.
|
2309.08590 | Neural Machine Translation Models Can Learn to be Few-shot Learners | The emergent ability of Large Language Models to use a small number of
examples to learn to perform in novel domains and tasks, also called in-context
learning (ICL). In this work, we show that a much smaller model can be trained
to perform ICL by fine-tuning towards a specialized training objective,
exemplified on the task of domain adaptation for neural machine translation.
With this capacity for ICL, the model can take advantage of relevant few-shot
examples to adapt its output towards the domain. We compare the quality of this
domain adaptation to traditional supervised techniques and ICL with a
40B-parameter Large Language Model. Our approach allows efficient batch
inference on a mix of domains and outperforms state-of-the-art baselines in
terms of both translation quality and immediate adaptation rate, i.e. the
ability to reproduce a specific term after being shown a single example. | Raphael Reinauer, Patrick Simianer, Kaden Uhlig, Johannes E. M. Mosig, Joern Wuebker | 2023-09-15T17:44:21Z | http://arxiv.org/abs/2309.08590v1 | # Neural Machine Translation Models Can Learn to be Few-shot Learners
###### Abstract
Large Language Models have an emergent ability to use a small number of examples to learn to perform in novel domains and tasks, also called in-context learning (ICL). In this work, we show that a much smaller model can be trained to perform ICL by fine-tuning towards a specialized training objective, exemplified on the task of domain adaptation for neural machine translation. With this capacity for ICL, the model can take advantage of relevant few-shot examples to adapt its output towards the domain. We compare the quality of this domain adaptation to traditional supervised techniques and ICL with a 40B-parameter Large Language Model. Our approach allows efficient batch inference on a mix of domains and outperforms state-of-the-art baselines in terms of both translation quality and immediate adaptation rate, i.e. the ability to reproduce a specific term after being shown a single example.
## 1 Introduction
Large Language Models (LLMs) have demonstrated few-shot learning capabilities on various natural language processing tasks, as highlighted by Brown et al. (2020) or Garcia et al. (2023). When prompted with suitable example translations, they can compete with neural machine translation (NMT) models, built and trained specifically for translating between languages (Vilar et al., 2023). Interestingly, one can adapt LLMs to specific domains merely by adding example translations to their prompt at inference time (Moslem et al., 2023). This ability to adapt to specific tasks and domains is known as _in-context learning_ (ICL). In contrast to traditional fine-tuning methods, ICL does not require a separate set of customized parameters for each domain, which implies major efficiency gains through batched inference.
In this paper, we integrate ICL for domain adaptation into NMT systems in multiple steps. We compare our method for adapting NMT systems to traditional fine-tuning approaches, as well as to the domain adaptation abilities of an open-source LLM. Specifically, our main contributions are the following:
1. We evaluate an unmodified NMT system's ICL capacity for domain adaptation and demonstrate its limitations.
2. We propose a training scheme to improve an NMT model's ICL capability.
3. We show that ICL can be combined with more traditional adaptation methods to further improve domain adaptation performance.
4. We compare our method to the performance of the open-source LLM Falcon-40B (Penedo et al., 2023) on a machine translation task with ICL for domain adaptation.
## 2 Related Work
Bulte and Tezcan (2019) improve the translation performance of an NMT model by integrating fuzzy-matched translation pairs from a translation memory into the model input. This idea was further expanded by Pham et al. (2020) and Xu et al. (2020), who, for a given source segment, use sentence embeddings to retrieve similar examples and compare different schemes for integrating those as cues into the NMT network.
Our approach differs in that we only train on the tokens belonging to the translation and not on the tokens provided as context, which we show to work better. In addition, Pham et al. (2020)'s training procedure differs, as they train their model from scratch, using training data from multiple domains and evaluate on those same domains, while we train on general domain data and evaluate on a new domain that is not in the training data. Furthermore,
we focus on the multi-domain adaptation task using light-weight adapters. This approach not only allows us to extend to new domains without retraining the full model, but also offers a more practical and efficient strategy for real-world applications.
The authors of (Moslem et al., 2023) investigated the capabilities of a proprietary LLM, specifically GPT-3.5, for adaptive machine translation using ICL. Their extensive experiments showed that GPT-3.5 can adapt well to in-domain sentence pairs and/or terminology.
## 3 Experiments
We conduct a series of experiments to develop NMT systems that excel at few-shot ICL domain adaptation. Here we present the experiments in a logical order, where we start with the baseline models described in Section 3.1 and subsequently introduce several stages of development. In stages 0 and 1 we attempt ICL with the unmodified and domain-fine-tuned baseline models, respectively. Then, in Stage 2, we fine-tune the baseline model to the _task_ of domain ICL, instead of a particular domain. Finally, we combine ICL and domain adaptation through fine-tuning in Stage 3. Our experimental progression was guided by the metrics and datasets that we introduce in Sections 3.5 and 3.6, respectively.
### Models
Throughout this paper, we work with an NMT system and the Falcon-40B LLM, which we both describe here.
#### 3.1.1 Falcon LLM
To provide a direct comparison with LLMs and their capacity for ICL, we conduct experiments with the decoder-only Transformer language model Falcon-40B (Penedo et al., 2023), specifically the non-instruction-tuned variant1. Inference is done with greedy decoding. Following previous work (Bawden and Yvon, 2023; Garcia et al., 2023; Hendy et al., 2023) (_inter-alia_) the model is prompted to perform translation without specific fine-tuning towards the machine translation task.
Footnote 1: The model is available from the _huggingface_ platform: [https://huggingface.co/tiuae/falcon-40b](https://huggingface.co/tiuae/falcon-40b)
A simple prompt template is used for all \(k\)-shot experiments with Falcon-40B, see Figure 1.
In preliminary experiments we found that \(k=0\) does not work well with this specific model2 - the outputs tend to be entirely hallucinated.
Footnote 2: For \(k=0\) the prompt contains only the single source sentence as input and the target language followed by a colon.
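Since Figure 1 is not reproduced here, the sketch below is only a plausible reconstruction of how such a \(k\)-shot prompt can be assembled, not a verbatim copy of the template; the language tags and line layout are our assumptions, guided only by the description above (the prompt ends with the target language followed by a colon).

```python
def build_falcon_prompt(examples, source, src_lang="English", tgt_lang="German"):
    """Assemble a k-shot translation prompt from (source, target) example pairs."""
    lines = []
    for example_src, example_tgt in examples:
        lines.append(f"{src_lang}: {example_src}")
        lines.append(f"{tgt_lang}: {example_tgt}")
    # The prompt ends with the new source sentence and the target-language tag,
    # which the model is expected to complete with the translation.
    lines.append(f"{src_lang}: {source}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)


examples = [("How do I reset my password?", "Wie setze ich mein Passwort zurück?")]
print(build_falcon_prompt(examples, "Please reset my password."))
```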
#### 3.1.2 NMT Systems
The baseline model that we use as the starting point for all further experiments is a Transformer (Vaswani et al., 2017) model with 12 encoder layers and two decoder layers, implemented with the NVIDIA NeMo toolkit (Kuchaiev et al., 2019). The embedding size is 1,024 with a feed-forward network dimension of 4,096. The model has a joint vocabulary of 32,768 tokens, while embedding matrices are specific to the encoder, decoder, and output projection modules, i.e. parameters are not shared between them. The model was trained to support a maximum input size of 1,536 tokens by augmenting the training data with random concatenations of parallel sentences. We evaluate the model using greedy decoding.
For the experiments presented here, the baseline model is either fine-tuned in full (Stage 2a and Stage 2b), or light-weight adapters (Bapna and Firat, 2019) are added to the model (Stage 1 and Stage 3). We choose full-model fine-tuning on out-of-domain data to adapt the NMT model to a new task - translating with an increased context of related examples - and adapter layers for learning from in-domain data.
The adapters we use follow Bapna et al. (2019)'s formulation, but with layer normalization applied after the bottleneck rather than before it. We use a bottleneck width of 256 and insert adapters in every layer of the decoder and every other layer of the encoder.
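A minimal PyTorch sketch of an adapter block in this configuration (module and variable names are illustrative; only the bottleneck width of 256 and the placement of layer normalization after the bottleneck are taken from the description above):

```
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter; layer norm applied after the bottleneck."""

    def __init__(self, d_model: int = 1024, bottleneck: int = 256):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)  # down-projection
        self.up = nn.Linear(bottleneck, d_model)    # up-projection
        self.norm = nn.LayerNorm(d_model)           # after the bottleneck, as described above

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen base model's representation intact
        return x + self.norm(self.up(torch.relu(self.down(x))))
```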
We always fine-tune with the ADAM optimizer (Kingma and Ba, 2014) and early stopping based on validation loss.
### Stage 0 & Stage 1: ICL with a Standard NMT Model
Motivated by the few-shot learning capabilities of LLMs, we examine the ability of a standard English-to-German NMT model to adapt to a domain given only similar and relevant translation
Figure 1: Prompt template for LLM.
pairs as additional context, i.e., without changing the model's parameters.
To find similar source segments in the translation memory, we search for nearest neighbours in an embedding space. We use the multi-lingual sentence embedding model3 from the sentence transformer library (Reimers and Gurevych, 2020) to embed the source sides of all segment pairs. Then we employ hnswlib(Malkov and Yashunin, 2020) to find the approximate nearest neighbours: Each source sentence in the domain-specific datasets is first encoded with the sentence-embedding model and then added to an index. For the sake of simplicity in this paper, we will refer to the approximate nearest neighbors simply as nearest neighbors. To measure the similarity between a pair of segments \(\mathsf{s}\) and \(\mathsf{s}^{\prime}\), we use the cosine distance of the corresponding embedding vectors \(\mathsf{v}_{\mathsf{s}}\) and \(\mathsf{v}_{\mathsf{s}^{\prime}}\), i.e.,
Footnote 3: Model name on [https://www.sbert.net/](https://www.sbert.net/): all-MiniLM-L6-v2
\[\mathrm{d}(\mathsf{s},\mathsf{s}^{\prime}):=1-\frac{\mathsf{v}_{\mathsf{s}} \cdot\mathsf{v}_{\mathsf{s}^{\prime}}}{\|\mathsf{v}_{\mathsf{s}}\|_{2}\cdot\| \mathsf{v}_{\mathsf{s}^{\prime}}\|_{2}}.\]
For a given source \(\mathsf{s}\) and target segment \(\mathsf{t}\), we identify its nearest neighbours \(s_{1}\), \(s_{2}\),..., \(s_{k}\), using the cosine distance above. Each source sentence \(s_{i}\) is paired with a reference translation \(t_{i}\) for \(i=1,...,k\). We sort the pairs by their distance to \(\mathsf{s}\) in the embedding space, i.e.,
\[\mathrm{d}(\mathsf{s},\mathsf{s}_{1})\leq\mathrm{d}(\mathsf{s},\mathsf{s}_{2 })\leq...\leq\mathrm{d}(\mathsf{s},\mathsf{s}_{k})\;.\]
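A sketch of this retrieval step with sentence-transformers and hnswlib (index parameters such as ef_construction and M are illustrative defaults, not values from the paper):

```
import hnswlib
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # model named in footnote 3

def build_index(train_sources):
    emb = encoder.encode(train_sources, normalize_embeddings=True)
    index = hnswlib.Index(space="cosine", dim=emb.shape[1])  # cosine distance d(s, s')
    index.init_index(max_elements=len(train_sources), ef_construction=200, M=16)
    index.add_items(emb, list(range(len(train_sources))))
    return index

def retrieve(index, source, k=5):
    q = encoder.encode([source], normalize_embeddings=True)
    ids, dist = index.knn_query(q, k=k)  # approximate nearest neighbours, sorted by distance
    return ids[0].tolist(), dist[0].tolist()
```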
Our assumption is that similar segments should have similar translations. For Stage 0 of the experiments, we treat the context sentences and actual source text as one body of text, separated only by a single space, ordering the segments from least similar to most similar, with the current source segment \(\mathsf{s}\) at the end. As a result, the input of the encoder is
\[\text{<bos> }\mathsf{s}_{k}\ \mathsf{s}_{k-1}\...\ \mathsf{s}_{1}\ \mathsf{s}\ \text{<eos>}\]
while for the decoder, we use the prefix:
\[\text{<bos> }\mathsf{t}_{k}\ \mathsf{t}_{k-1}\...\ \ \mathsf{t}_{1}\]
where \(\text{<bos>}\) and \(\text{<eos>}\) represent the beginning-of-sentence and end-of-sentence tokens, respectively. The model's task is then to continue from the target prefix by generating a translation of the source segment \(\mathsf{s}\).
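The Stage 0 inputs can then be assembled from the retrieved pairs along these lines (function and variable names are illustrative; the <bos>/<eos> tokens are left to the tokenizer):

```
def build_stage0_inputs(source, neighbours):
    """neighbours: list of (s_i, t_i) pairs sorted by increasing distance to `source`."""
    ordered = list(reversed(neighbours))  # least similar first: s_k ... s_1
    encoder_input = " ".join([s for s, _ in ordered] + [source])
    decoder_prefix = " ".join(t for _, t in ordered)  # t_k ... t_1; the model continues from here
    return encoder_input, decoder_prefix
```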
In our experiments, we evaluated the translation performance using a varying number \(k\) of nearest neighbors, specifically \(k\in\{1,2,5\}\).
In Stage 1 we run additional experiments where we fine-tune the model for each domain, using the in-domain training data in the original format. This domain-specific fine-tuning is performed by injecting adapter layers (Bapna and Firat, 2019) into the network while freezing the rest of the model, and leveraging a standard negative log-likelihood (NLL) loss for training. For each domain, we then test the fine-tuned model directly (\(0\)-shot in Tables 3 and 4) as well as with ICL (\(k\)-shot with \(k\neq 0\)).
Adapters are trained towards convergence, i.e. until there is no further improvement in terms of validation loss.
### Stage 2a & Stage 2b: Fine-Tuning towards ICL
To improve the model's capability to use nearest neighbor examples in the context, we further fine-tune the full model on out-of-domain data, namely _News-Commentary4_(Kocmi et al., 2022), which contains roughly 450K parallel segments. For validation we use a sample of 2K parallel segments from _EuroPar5_(Koehn, 2005). For this full model fine-tuning we do not train until convergence, but apply aggressive early stopping: Training is stopped when the validation loss does not decrease by at least 0.1 twice in a row, validating for every 1% of an epoch. This is to encourage the model to only learn the new task and data format, but not adapt to a new data distribution.
Footnote 4: From the WMT’23 evaluation campaign: https://data. statmt.org/news-commentary/v18.1/
Instead of directly concatenating the nearest neighbors to the training examples, we add a special separation token - <sep> - to separate the source and target segments. We then construct the training instances for the encoder as:
\[\text{<bos> }\mathsf{s}_{k}\ \text{<sep> }\mathsf{s}_{k-1}\ \text{<sep> }\...\ \text{<sep> }\mathsf{s}_{1}\ \text{<sep> }\mathsf{s}\ \text{<eos>}\]
and for the decoder as:
\[\text{<bos> }\mathsf{t}_{k}\ \text{<sep> }\mathsf{t}_{k-1}\ \text{<sep> }\...\ \text{<sep> }\mathsf{t}_{1}\ \text{<sep> }\mathsf{t}\ \text{<eos>} \tag{1}\]
and compute the NLL loss on all tokens of (1). This training loss is identical to the one used in Pham et al. (2020). We denote this procedure as Stage 2a.
For Stage 2b the idea is that the model should learn to predict the target segment from the source
segment using the nearest neighbor translations but not learn to predict \(t_{k},...,t_{1}\) as in Pham et al. (2020). Hence we mask the NLL training loss such that it is computed only on the tokens that belong to the target segment t, excluding all context tokens, thus fully focusing the training signal on translating t in the context of its \(k\) nearest neighbors.
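A schematic of such a masked loss in plain PyTorch (the ignore_index convention and tensor shapes are illustrative; the actual NeMo-based implementation may differ):

```
import torch
import torch.nn.functional as F

def masked_nll_loss(logits, target_ids, num_context_tokens):
    """logits: (T, vocab); target_ids: (T,) ids of
    <bos> t_k <sep> ... <sep> t_1 <sep> t <eos>;
    num_context_tokens: length of the prefix up to and including the last <sep>."""
    labels = target_ids.clone()
    labels[:num_context_tokens] = -100          # exclude context tokens from the loss
    return F.cross_entropy(logits, labels, ignore_index=-100)
```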
We then use the same format as in Stage 2a for training, while at inference time we provide the decoder with a prefix containing the ICL examples:
\[\text{<bos> }\mathsf{t}_{k}\ \text{<sep> }\mathsf{t}_{k-1}\ \text{<sep> }\...\ \text{<sep> }\mathsf{t}_{1}\ \text{<sep>}\]
Finally, we measure quality of the predicted translation \(\hat{t}\) by computing BLEU and COMET scores with the target segment t as reference.
For both Stage 2a and Stage 2b, the \(k\)-nearest neighbors for each segment in the training data and validation data are extracted from the entire _News-Commentary_ dataset as described in Section 3.2.
### Stage 3: Combining ICL and Domain Adaptation
To combine Stage 2b's ICL capacity with adapter-based domain adaptation, we add adapters to the model from Stage 2b using the same configuration as for the Stage 1 experiments. Again, we train separate adapter layers for each domain.
Each example from the training set is annotated with its nearest neighbors from the same training set, excluding itself.
### Metrics
For evaluating translation quality, we used the SacreBLEU framework Post (2018) that implements the BLEU metric Papineni et al. (2002). We also evaluate with reference-based COMET Rei et al. (2022) to compare the model outputs to the reference translations in the test data.
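For reference, both metrics can be computed with the corresponding Python packages along these lines (the COMET checkpoint name is an assumption; the paper only specifies the reference-based COMET of Rei et al., 2022):

```
import sacrebleu
from comet import download_model, load_from_checkpoint

def evaluate(sources, hypotheses, references):
    bleu = sacrebleu.corpus_bleu(hypotheses, [references]).score
    comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))  # assumed checkpoint
    data = [{"src": s, "mt": h, "ref": r}
            for s, h, r in zip(sources, hypotheses, references)]
    comet = comet_model.predict(data, batch_size=8).system_score
    return bleu, comet
```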
### Datasets
We run our experiments with the English-German language pair on 8 domains from the ACED- and MDNS corpus collections, which we describe in this section. Statistics for all datasets are provided in Table 1.
#### 3.6.1 ACED corpus
The ACED corpus Lin et al. (2022) is comprised of three distinct datasets, namely Asics, Emerson, and Digitalocean, each consisting of English-German sentences extracted from various domains. ACED is a real-world benchmark containing data derived from translations performed by humans.
#### 3.6.2 MDNS corpus
The MDNS corpus Aharoni and Goldberg (2020) is a multi-domain corpus containing English-German parallel text from five diverse domains (IT, Koran, Law, Medical, Subtitles). It was specifically created for evaluating domain-adaptation.
## 4 Results
Here we discuss the experimental results, progressing from Stage 0 to Stage 3. All results are depicted separately for ACED- and MDNS corpora in Tables 3 and 4 respectively.
### Stage 0: ICL with Baseline NMT Model
When we add nearest neighbors to the inputs and target prefixes we first observe that the automated metrics are mostly improved across all datasets. Notably, the result with 1-shot nearest neighbors is the best in this group of experiments. Additionally we find that the 5-shot result often degrades below the baseline.
Specifically for the Medical and Subtitles corpora of MDNS, we find that the model fails to improve over the baseline for all \(k\).
The cosine distance of the nearest neighbors seems to be a viable indicator of performance in this set of experiments, e.g. when comparing the results for ACED Emerson & Digitalocean, where the average cosine distance (see Table 2) for \(k=1\) is much lower for Emerson at 0.13, versus 0.3 for Digitalocean. We find a moderate, statistically insignificant, negative Pearson correlation (\(r=-0.43\)) between the average cosine distances for \(k=1\) and the difference in BLEU scores between the Stage 0 1-shot experiment and the baseline.
| Domain | Training | Validation | Test |
|---|---|---|---|
| Asics | 1.4 | 0.5 | 0.6 |
| Digitalocean | 11.8 | 2.0 | 7.6 |
| Emerson | 4.3 | 1.3 | 1.7 |
| IT | 223 | 2.0 | 2.0 |
| Koran | 17.9 | 2.0 | 2.0 |
| Law | 467 | 2.0 | 2.0 |
| Medical | 248 | 2.0 | 2.0 |
| Subtitles | 500 | 2.0 | 2.0 |

Table 1: Segment counts for the domain-specific dataset splits used for experimentation, in thousands.
While BLEU indicates improvement (COMET reduces only for \(k>1\)), we find that the model's behavior is in fact degenerate. Specifically, the model often fails to produce any output after the given prefix and instead predicts <eos> immediately, which leads to empty translations. We find that the rates of empty translations are 8.5%, 8.1%, and 9.1% for \(k=1,2\), and 5 respectively. In contrast, the baseline system has a 0% rate of empty outputs. This is despite the model being specifically trained to support inputs covering the full context-width in pre-training.
### Stage 1: Combining ICL with Domain Fine-Tuning
For Stage 1 we first observe that the model can be effectively adapted to each domain by training adapters (see the Stage 1, 0-shot results in Tables 3 and 4). A notable exception is MDNS Subtitles, where adaptation only slightly improves over the baseline. This result is, however, consistent with other work (Aharoni and Goldberg, 2020).
When we combine the trained adapters with ICL, we find no improvements over Stage 1's 0-shot results, with the exception of ACED Asics.
Performance drops catastrophically for the MDNS Medical & Subtitles corpora. The rate
| | Asics | Digitalocean | Emerson | IT | Koran | Law | Medical | Subtitles |
|---|---|---|---|---|---|---|---|---|
| \(k=1\) | 0.19 | 0.30 | 0.13 | 0.15 | 0.18 | 0.13 | 0.12 | 0.24 |
| \(k=2\) | 0.21 | 0.31 | 0.14 | 0.17 | 0.20 | 0.15 | 0.14 | 0.25 |
| \(k=5\) | 0.23 | 0.34 | 0.16 | 0.21 | 0.24 | 0.17 | 0.17 | 0.27 |

Table 2: Average cosine distance in embedding space of test set sources to \(k\)-nearest neighbors from train, for \(k\in\{1,2,5\}\) (Asics, Digitalocean and Emerson belong to ACED; the remaining domains to MDNS).
| System | Asics BLEU | Asics COMET | Digitalocean BLEU | Digitalocean COMET | Emerson BLEU | Emerson COMET | Average BLEU | Average COMET |
|---|---|---|---|---|---|---|---|---|
| Baseline | 34.5 | 0.8624 | 53.3 | 0.9043 | 44.9 | 0.9108 | 44.2 | 0.8925 |
| Stage 0, 1-shot | 43.7 | 0.8578 | 54.4 | 0.8982 | 72.1 | 0.9213 | 56.7 | 0.8924 |
| Stage 0, 2-shot | 44.5 | 0.8525 | 54.5 | 0.8967 | 67.2 | 0.9137 | 55.4 | 0.8876 |
| Stage 0, 5-shot | 41.0 | 0.8420 | 53.9 | 0.8955 | 28.7 | 0.8705 | 41.2 | 0.8693 |
| Stage 1, 0-shot | 41.2 | 0.8780 | 60.1 | **0.9152** | 79.2 | 0.944 | 60.2 | 0.9124 |
| Stage 1, 1-shot | 46.4 | 0.8657 | 59.6 | 0.9099 | 78.1 | 0.9378 | 61.4 | 0.9045 |
| Stage 1, 2-shot | 46.2 | 0.8628 | 59.0 | 0.9090 | 66.3 | 0.9275 | 57.2 | 0.8998 |
| Stage 1, 5-shot | 44.2 | 0.8500 | 57.3 | 0.9038 | 32.2 | 0.893 | 44.6 | 0.8823 |
| Stage 2a, 1-shot | 43.0 | 0.8765 | 55.0 | 0.9073 | 73.1 | 0.9382 | 57.0 | 0.9073 |
| Stage 2a, 2-shot | 43.5 | 0.8785 | 54.4 | 0.9072 | 71.6 | 0.9392 | 56.5 | 0.9083 |
| Stage 2a, 5-shot | 42.3 | 0.8662 | 54.4 | 0.9066 | 73.4 | 0.9347 | 56.7 | 0.9025 |
| Stage 2b, 1-shot | 44.5 | 0.8766 | 54.9 | 0.9046 | 73.1 | 0.9391 | 57.5 | 0.9068 |
| Stage 2b, 2-shot | 44.5 | 0.8777 | 55.4 | 0.9080 | 74.3 | 0.939 | 58.1 | 0.9082 |
| Stage 2b, 5-shot | 44.7 | 0.8734 | 55.0 | 0.9072 | 70.0 | 0.9363 | 56.6 | 0.9056 |
| Stage 3, 1-shot | **48.8** | 0.8896 | **60.5** | 0.9141 | 78.9 | **0.9480** | 62.7 | **0.9172** |
| Stage 3, 2-shot | 48.5 | **0.8914** | 60.1 | 0.9132 | **80.7** | 0.9456 | **63.1** | 0.9167 |
| Stage 3, 5-shot | 47.6 | 0.8837 | 59.0 | 0.9095 | 80.2 | 0.9437 | 62.3 | 0.9123 |
| Falcon-40B, 1-shot | 31.8 | 0.8588 | 40.0 | 0.8677 | 71.6 | 0.9380 | 47.8 | 0.8882 |
| Falcon-40B, 2-shot | 34.5 | 0.8671 | 44.8 | 0.8876 | 76.9 | 0.9416 | 52.1 | 0.8988 |
| Falcon-40B, 5-shot | 40.8 | 0.8789 | X | X | 78.5 | 0.9434 | X | X |

Table 3: Results for the ACED corpus of the multi-stage evaluation for various numbers of \(k\)-nearest-neighbors, using BLEU and COMET metrics. The "Baseline" scores are for the English-to-German NMT system described in Section 3.1. We omit the Digitalocean dataset for the Falcon-40B 5-shot evaluation.
of empty translations also increases dramatically6, with a rate of up to 63.1% for the 1-shot result on MDNS Medical (up from 8.0% at Stage 0).
Footnote 6: Empty translation rates of Stage 1 for each \(k\) over all corpora: 1-shot: 20.0%, 2-shot: 20.6%, 5-shot: 13.6%.
### Stage 2a & Stage 2b: Fine-Tuning towards ICL
When we compare the Stage 2b (fine-tuning with the masked loss as described in Section 3.3) to the Stage 0 results, we find that adding the separator and fine-tuning the model leads to generally improved scores on the ACED corpora for all \(k\).
BLEU results on the MDNS corpora show slightly worse performance compared to the Stage 0 results in 3 out of 5 corpora for \(k=1\), but the averages are still improved. COMET scores are, however, consistently improved for this comparison. We also find that the scores for \(k=2\) and \(k=1\) are very close, with 2-shot being ahead of 1-shot by 0.6 BLEU points on average on ACED data, and 1-shot being ahead of 2-shot by 0.2 BLEU points on MDNS. This is in contrast to what we have observed in Stage 0. \(k=5\) still performs worse, but we observe high relative gains compared to the 5-shot Stage 0 result.
When comparing Stage 2a and Stage 2b, i.e. the standard NLL loss and the masked loss, the results are inconclusive.
We further observe that Stage 2b exhibits almost negligible rates of producing empty translations, at 0.3%, 0.8%, and 1.2% for \(k=1,2,5\) respectively.
### Stage 3: Combining ICL and Domain Adaptation
When combining ICL with adapters trained with nearest neighbor annotated data, we observe the globally best results for the NMT models. Compared to Stage 1, which is also fine-tuned towards each domain, we observe greatly improved results on all automatic metrics. Stage 3 2-shot delivers the best result on ACED, with an improvement of 2.5 BLEU points compared to the runner-up in terms of average BLEU Stage 1 1-shot. On MDNS, Stage 3 1-shot improves over the runner-up Stage 1 0-shot by 3.8 points.
Especially the scores for MDNS Koran improve
| System | IT BLEU | IT COMET | Koran BLEU | Koran COMET | Law BLEU | Law COMET | Medical BLEU | Medical COMET | Subtitles BLEU | Subtitles COMET | Average BLEU | Average COMET |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Baseline | 34.3 | 0.8153 | 14.7 | 0.7229 | 44.7 | 0.8696 | 43.5 | 0.8406 | 27.7 | **0.7891** | 33.0 | 0.8075 |
| Stage 0, 1-shot | 35.9 | 0.7698 | 17.2 | 0.6580 | 51.6 | 0.853 | 42.3 | 0.7964 | 17.5 | 0.6358 | 32.9 | 0.7426 |
| Stage 0, 2-shot | 35.9 | 0.7433 | 17.2 | 0.6346 | 49.9 | 0.8467 | 38.2 | 0.7810 | 22.4 | 0.7024 | 32.7 | 0.7416 |
| Stage 0, 5-shot | 31.9 | 0.7196 | 14.5 | 0.6000 | 42.3 | 0.8287 | 30.5 | 0.7505 | 24.4 | 0.7400 | 28.7 | 0.7278 |
| Stage 1, 0-shot | 39.6 | 0.8403 | 22.6 | 0.7274 | 50.7 | 0.8824 | 47.8 | 0.8429 | **28.1** | 0.7879 | 37.8 | 0.8162 |
| Stage 1, 1-shot | 36.7 | 0.7620 | 21.1 | 0.6434 | 51.1 | 0.8228 | 7.1 | 0.5078 | 0.0 | 0.4306 | 23.2 | 0.6333 |
| Stage 1, 2-shot | 35.6 | 0.7436 | 20.5 | 0.6152 | 48.9 | 0.8019 | 15.9 | 0.5441 | 0.0 | 0.4208 | 24.2 | 0.6251 |
| Stage 1, 5-shot | 32.8 | 0.7296 | 18.4 | 0.5980 | 44.9 | 0.7940 | 23.4 | 0.5854 | 16.8 | 0.6388 | 27.3 | 0.6692 |
| Stage 2a, 1-shot | 34.3 | 0.8277 | 15.5 | 0.7222 | 49.5 | 0.8739 | 43.6 | 0.8380 | 25.7 | 0.7838 | 33.7 | 0.8091 |
| Stage 2a, 2-shot | 35.8 | 0.8244 | 16.4 | 0.7154 | 49.6 | 0.8739 | 44.6 | 0.8362 | 24.1 | 0.7810 | 34.1 | 0.8062 |
| Stage 2a, 5-shot | 34.3 | 0.8203 | 15.9 | 0.7083 | 48.1 | 0.8659 | 40.7 | 0.8220 | 24.0 | 0.7712 | 32.6 | 0.7975 |
| Stage 2b, 1-shot | 34.6 | 0.8269 | 16.0 | 0.7217 | 50.4 | 0.8752 | 44.2 | 0.8405 | 25.9 | 0.7830 | 34.2 | 0.8095 |
| Stage 2b, 2-shot | 35.5 | 0.8182 | 16.5 | 0.7150 | 49.9 | 0.8747 | 43.4 | 0.8349 | 24.5 | 0.7774 | 34.0 | 0.8040 |
| Stage 2b, 5-shot | 33.5 | 0.8103 | 16.6 | 0.7070 | 48.2 | 0.8696 | 37.5 | 0.8274 | 25.2 | 0.7782 | 32.2 | 0.7985 |
| Stage 3, 1-shot | 41.4 | **0.8423** | 28.8 | 0.7235 | **58.1** | **0.8862** | **52.9** | **0.8488** | 27.0 | 0.7846 | **41.6** | **0.8171** |
| Stage 3, 2-shot | **41.7** | 0.8401 | **29.6** | 0.7225 | 57.3 | 0.8850 | 51.2 | 0.8480 | 27.6 | 0.7850 | 41.5 | 0.8161 |
| Stage 3, 5-shot | 40.9 | 0.8296 | 29.2 | 0.7249 | 55.8 | 0.8804 | 48.7 | 0.8413 | 27.5 | 0.7876 | 40.4 | 0.8128 |
| Falcon-40B, 1-shot | 31.5 | 0.7985 | 17.9 | 0.7081 | 45.4 | 0.8538 | 42.4 | 0.8035 | 21.7 | 0.7586 | 31.8 | 0.7845 |
| Falcon-40B, 2-shot | 35.5 | 0.8202 | 22.4 | 0.7263 | 49.5 | 0.8680 | 47.5 | 0.8288 | 21.4 | 0.7605 | 35.3 | 0.8008 |
| Falcon-40B, 5-shot | 40.1 | 0.8377 | 24.5 | **0.7358** | 50.5 | 0.8749 | 50.1 | 0.8401 | 22.6 | 0.7776 | 37.6 | 0.8132 |

Table 4: Results for the MDNS corpus of the multi-stage evaluation for various numbers of \(k\)-nearest-neighbors using BLEU and COMET metrics. The "Baseline" scores are for the English-to-German NMT system described in Section 3.1.
well above all previous models, with a relative improvement of 101% compared to the baseline. The models seem to be able to make better use of close nearest neighbors in this dataset, which are often substrings of one another. See Section 4.6 for a detailed analysis of the copying behavior on the ACED Asics dataset.
The rate of empty translations is reduced to 0.0% for all \(k\).
We further notice that the results for 1- and 2-shot ICL are very similar, and that the scores for 5-shot are also improved.
### Falcon: Adapting Both to a Task and a Domain at the Same Time
The Falcon-40B LLM proves to excel at ICL, learning a task and adapting to a domain at the same time. Notably, scores improve with higher values of \(k\), which is the opposite behavior to what we have observed with NMT models. When nearest neighbors are close to the test data, as they are for the ACED Emerson and MDNS IT datasets, we find results that are close to the best Stage 3 results.
Falcon-40B's generation speed is however very slow at an average of 2.6 tokens per second in the 1-shot setting.
Also note that we have no means at this time to check whether parts of the test data are contained in Falcon's training data.
### Qualitative Analysis
Maintaining consistency in translations is an important quality criterion in the localization industry, and is a major motivator in the use of translation memories, which help ensure that marketing materials, for example, are uniform in the promised features and functions of the products being advertised (Emery et al., 2011). In NMT models, this consistency is traditionally increased by fine-tuning a translation model for a specific domain, which we denote by "Stage 1 with 0-shot". In this section, we compare the fine-tuning approach with our ICL, specifically "Stage 3 with 1-shot". We evaluate translation consistency on the Asics dataset. For that purpose we select segments s in the test data for which the source nearest neighbor s\({}^{\prime}\) in the Asics train data differs by exactly one word. These segments s are denoted as word-substitution segments. For each pair (s, s\({}^{\prime}\)), we then use two sources and one target t\({}^{\prime}\) in the ICL prompt and the other target t as reference to compare the generated translation to. We define the fraction of pairs for which the generated translation exactly matches the reference as the word substitution accuracy (WSA). The results are in Table 6.
The translation for Stage 3 1-shot achieves a WSA score of 74.6%, compared to 57.14% for the fine-tuning approach (Stage 1 0-shot), whereas the non-adapted model only produces the exact reference translation in 1.7% of cases.
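A minimal sketch of the WSA computation (the translate callable abstracts the Stage 3 1-shot prompt construction; names are illustrative):

```
def word_substitution_accuracy(pairs, translate):
    """pairs: (source, neighbour_source, neighbour_target, reference) tuples
    for the word-substitution segments; translate performs 1-shot ICL."""
    matches = sum(
        translate(src, context=(nn_src, nn_tgt)).strip() == ref.strip()
        for src, nn_src, nn_tgt, ref in pairs
    )
    return 100.0 * matches / len(pairs)
```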
## 5 Conclusions
We have shown that a standard NMT system can be trained to be an effective in-context learner in domain adaptation tasks. We find that this is most effective when we combine generic fine-tuning towards the ICL task and training adapter layers for a specific domain with nearest neighbor annotated data.
When the model is not fine-tuned towards the task, we find that ICL works to some extent, but shows degenerate behavior.
While LLMs like Falcon-40B can adapt to the MT task with ICL, this comes at the cost of increased compute. Generally, the results with the LLM still underperform our dedicated MT models.
|
2309.16976 | Benchmarking and In-depth Performance Study of Large Language Models on
Habana Gaudi Processors | Transformer models have achieved remarkable success in various machine
learning tasks but suffer from high computational complexity and resource
requirements. The quadratic complexity of the self-attention mechanism further
exacerbates these challenges when dealing with long sequences and large
datasets. Specialized AI hardware accelerators, such as the Habana GAUDI
architecture, offer a promising solution to tackle these issues. GAUDI features
a Matrix Multiplication Engine (MME) and a cluster of fully programmable Tensor
Processing Cores (TPC). This paper explores the untapped potential of using
GAUDI processors to accelerate Transformer-based models, addressing key
challenges in the process. Firstly, we provide a comprehensive performance
comparison between the MME and TPC components, illuminating their relative
strengths and weaknesses. Secondly, we explore strategies to optimize MME and
TPC utilization, offering practical insights to enhance computational
efficiency. Thirdly, we evaluate the performance of Transformers on GAUDI,
particularly in handling long sequences and uncovering performance bottlenecks.
Lastly, we evaluate the end-to-end performance of two Transformer-based large
language models (LLM) on GAUDI. The contributions of this work encompass
practical insights for practitioners and researchers alike. We delve into
GAUDI's capabilities for Transformers through systematic profiling, analysis,
and optimization exploration. Our study bridges a research gap and offers a
roadmap for optimizing Transformer-based model training on the GAUDI
architecture. | Chengming Zhang, Baixi Sun, Xiaodong Yu, Zhen Xie, Weijian Zheng, Kamil Iskra, Pete Beckman, Dingwen Tao | 2023-09-29T04:49:35Z | http://arxiv.org/abs/2309.16976v1 | # Benchmarking and In-depth Performance Study of Large Language Models on Habana Gaudi Processors
###### Abstract.
Transformer models have achieved remarkable success in various machine learning tasks but suffer from high computational complexity and resource requirements. The quadratic complexity of the self-attention mechanism further exacerbates these challenges when dealing with long sequences and large datasets. Specialized AI hardware accelerators, such as the Habana GAUDI architecture, offer a promising solution to tackle these issues. GAUDI features a Matrix Multiplication Engine (MME) and a cluster of fully programmable Tensor Processing Cores (TPC). This paper explores the untapped potential of using GAUDI processors to accelerate Transformer-based models, addressing key challenges in the process. Firstly, we provide a comprehensive performance comparison between the MME and TPC components, illuminating their relative strengths and weaknesses. Secondly, we explore strategies to optimize MME and TPC utilization, offering practical insights to enhance computational efficiency. Thirdly, we evaluate the performance of Transformers on GAUDI, particularly in handling long sequences and uncovering performance bottlenecks. Lastly, we evaluate the end-to-end performance of two Transformer-based large language models (LLM) on GAUDI. The contributions of this work encompass practical insights for practitioners and researchers alike. We delve into GAUDI's capabilities for Transformers through systematic profiling, analysis, and optimization exploration. Our study bridges a research gap and offers a roadmap for optimizing Transformer-based model training on the GAUDI architecture.
Our research delves into this territory, exploring strategies to intricately balance the tasks assigned to MME and TPC. (3) Unexplored Transformer performance in long sequences. The third challenge pertains to the performance of Transformers on Habana's GAUDI, particularly in scenarios involving long input sequences. This uncharted territory lacks exploration, hindering our ability to grasp GAUDI's prowess in handling extended sequences. (4) Lack of end-to-end large language model (LLM) performance results on GAUDI. There is a dearth of existing research offering a holistic evaluation of end-to-end LLM performance on Habana's GAUDI, coupled with an exploration of potential performance bottlenecks.
To address these challenges, we benchmark and deeply analyze the performance of Transformers and Transformer-based models on Habana's GAUDI.
The main contributions of this paper are summarized as follows:
* We conduct an in-depth performance comparison between the Matrix Multiplication Engine (MME) and Tensor Processing Cores (TPC) within GAUDI. Our analysis offers insights into the relative strengths and weaknesses of these components, empowering practitioners to make informed decisions when tailoring Transformers to the GAUDI platform.
* We explore strategies to balance the workload effectively between MME and TPC, we provide practical guidance to achieve enhanced performance and efficiency for Transformers on GAUDI.
* We tackle the dearth of research in evaluating the performance of Transformers on GAUDI, especially when dealing with long sequences. Through systematic benchmarking and analysis, we uncover the performance bottlenecks that arise in this scenario, shedding light on the unique challenges posed by long input sequences.
* We assess the overall performance of Transformer-based models on Habana's GAUDI and identify performance bottlenecks, we offer a holistic perspective on GAUDI's capabilities for accelerating complex language models.
In summary, through this comprehensive study, our work demonstrates the potential of specialized hardware accelerators like GAUDI processors. We contribute a deeper understanding of Habana's GAUDI for Transformers and Transformer-based models. Our findings not only address existing research gaps but also provide practitioners and researchers with valuable insights to optimize the performance of Transformers and Transformer-based models on GAUDI, further unlocking the potential of these models for real-world applications.
## 2. Background and Motivation
In this section, we present background information on the Habana GAUDI processor architecture, the TPC programming model, Transformers, and our motivation.
### Habana GAUDI Processor Architecture
Habana GAUDI processor is a specialized hardware accelerator designed for deep learning training workloads (Habana et al., 2018). As shown in Figure 1, it features a heterogeneous compute architecture with a Matrix Multiplication Engine (MME), eight fully programmable Tensor Processing Cores (TPC), and fast memory and network units. Specifically, GAUDI efficiently handles various deep learning operations by lowering them into matrix multiplication operations (e.g., convolution) and nonlinear operations (e.g., activation) that can be executed on MME and TPC, respectively. The fast memory and network units enhance intra-/inter- processor data transfers, respectively.
MME is specifically tuned for computation tasks in deep neural network (DNN) training such as fully connected layers, convolution layers, and batched-GEMM, providing significant acceleration compared to traditional CPU and GPU solutions (Han et al., 2017). The TPC is a very long instruction word (VLIW) single instruction multiple data (SIMD) processor crafted for deep learning nonlinear operations. It is designed to accelerate non-matrix-based operations that cannot be efficiently handled by the MME. The programming approach of TPC offers users a high degree of flexibility and innovation, supported by features tailored to various workloads. These include acceleration for non-GEMM operations, tensor-based addressing, capabilities to hide latency, random number production, and advanced implementation of special functions.
GAUDI incorporates a DMA engine, streamlining the data exchange between MME and TPC using shared memory. For communications between different processors, GAUDI includes on-chip RoCE v2 engines, facilitating efficient inter-processor dialogue during training sessions. Consequently, GAUDI ensures seamless collaboration between MME and TPC and delivers exceptional scalability in both expanding and multiplying setups.
### TPC programming model
_TPC architecture_. The TPC boasts a very long instruction word (VLIW) design. Its wide single instruction multiple data (SIMD) vector mechanism can handle 2048-bit SIMD tasks and is compatible with several data types like float, bfloat16, INT16, INT32, and INT8. The instruction set for the TPC processor is segmented into four functional slots:
* **Load slot**: responsible for memory loading, value movements, and value settings.
* **Scalar slot (SPU)**: handles scalar computations.
* **Vector slot (VPU)**: manages vector computations.
Figure 1. A high-level view of GAUDI architecture, which consists of Matrix Multiplication Engine (MME), Tensor Processing Cores (TPC), Memory Units (Local Memory, Shared Memory, DMA, HBM, RDMA), and Connection Units (Ethernet, PCIe).
* **Store slot**: oversees memory storage, value movements, and value settings.
Four distinct memory domains are embedded within the TPC processor: scalar local memory, vector local memory, global memory, and configuration space. The global memory can be interfaced through specialized access points termed as tensors. On average, every four cycles can accommodate the loading or writing of a 2,048-bit vector to the global memory. It's also worth noting that individual TPC maintain distinct local memory instances, and each TPC can exclusively access its dedicated local cache. The local memory is bifurcated into two storage banks, scalar local memory (1 KB) and vector local memory (80 KB). There's an unrestricted bandwidth when reading from or writing to the local memory in each cycle.
_TPC programming._ The TPC is a fully programmable VLIW SIMD processor, programmed via TPC-C, a C language derivative. TPC-C incorporates vector data types for seamless use of processor-specific SIMD capabilities. A TPC program is composed of host glue code and a TPC kernel. Host glue code, executed on the host machine, controls program execution. TPC kernels, executed on TPC processors, handle computation. Users leverage the SynapseAI TPC SDK, featuring an LLVM-based TPC-C compiler, simulator, and debugger, for TPC kernel development. The TPC processor on the GAUDI ASIC accepts tensor inputs/outputs with dimensions ranging from 1 to 5. Index spacing, similar to threads in CUDA programming, efficiently divides workloads among TPC processors. Each index space member corresponds to an independent unit of work executed on a single TPC. Users utilize Habana's intrinsics, encompassing arithmetic, bitwise, and load operations, to create TPC kernels, while ensuring effective workload distribution.
### Transformers
The Transformer architecture was first introduced by Vaswani et al. (2016) as a novel approach to sequence-to-sequence learning tasks, particularly in natural language processing. Transformers have since become a popular choice for various machine-learning applications, including language modeling, machine translation, and computer vision. The key innovation of the Transformer architecture is the self-attention mechanism, which allows the model to weigh different parts of the input sequence differently when making predictions. This mechanism enables Transformers to capture long-range dependencies and contextual information more effectively compared to traditional recurrent neural networks (RNNs) and convolutional neural networks (CNNs). Figure 2 presents the architecture of a Transformer, which typically consists of encoder blocks, decoder blocks, and other operations such as position embedding and layer normalization. Specifically, each encoder/decoder block consists of multi-head self-attention mechanisms followed by a position-wise feed-forward network. Many widely adopted DNN models are based on Transformers, for example, the Bidirectional Encoder Representations from Transformers (BERT) (Beng et al., 2017) and the Generative Pre-trained Transformer (GPT) (Beng et al., 2017). BERT is primarily an encoder from the Transformer architecture, while GPT uses only the decoder portion of the Transformer architecture. BERT is bidirectional, trying to understand the context on both sides of a word. GPT is unidirectional, predicting words based on the preceding context.
| **Operation** | **Explanation** | **Mapping** |
|---|---|---|
| torch.mul | element-wise mul | TPC |
| torch.matmul | matrix product | MME |
| torch.square | tensor square | TPC |
| ** | tensor square | TPC |
| tensor +/- tensor | tensor +/- tensor | TPC |
| scalar * tensor | scalar * tensor | TPC |
| scalar +/- tensor | scalar +/- tensor | TPC |
| torch.sqrt | square root | TPC |
| torch.log | natural logarithm | TPC |

Table 1. Operation-Hardware Mapping via SynapseAI
Figure 3. Matrix Computation workflow of each self-attention. \(Q\), \(K\) and \(V\) are query, key, value matrices of dimension size \(N\) by \(D_{Q}\),\(D_{K}\), \(D_{V}\), respectively.
Figure 2. Transformer model architecture overview, which mainly consists of multi-head attention.
### Motivation
The impressive ability of Transformer-based models comes from complex computational operations and the huge number of parameters (340 million in BERT, 1.5 billion in GPT-2) (Beng et al., 2017; Chen et al., 2017), which results in intensive computations during training. Consequently, training Transformer-based models is both time-consuming and resource-intensive. Although today's AI accelerators, such as Habana GAUDI, outperform GPUs in some training tasks (Kang et al., 2019), the architecture-specific optimizations on these accelerators are not well studied. For example, Figure 3 shows the workflow of matrix computations in self-attention. Specifically, the input sequence \(x\in\mathbb{R}^{N\times D_{x}}\) is projected by three weight matrices \(W_{Q},W_{K},W_{V}\) to corresponding representations \(Q\), \(K\) and \(V\). Following common terminology, \(Q\), \(K\), and \(V\) are referred to as the "queries", "keys", and "values", respectively. Then softmax is used to normalize the attention matrix \(QK^{T}\) into a probability distribution. The softmax computation can only be executed on TPC, which degrades the overall training performance of Habana GAUDI (to be detailed in Section 3). Thus, we perform comprehensive profiling on Habana GAUDI and derive insights that drive our optimizations for improving training performance.
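For reference, the single-head attention computation sketched in Figure 3 can be written in a few lines of PyTorch; the comments indicate which engine each step maps to according to Table 1 and the discussion above:

```
import math
import torch

def softmax_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v                        # projections -> MME
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])  # N x N attention matrix -> MME
    attn = torch.softmax(scores, dim=-1)                       # exp + reduction -> TPC
    return attn @ v                                            # weighted average -> MME
```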
## 3. Experimental Results
In this section, we present our experimental setup, profiling results, and discussion.
### Experimental Setup
_Platforms._ We perform our experiments on one Habana Labs System 1 (HLS-1) (Hansen et al., 2019) AI training system. The HLS-1 incorporates eight GAUDI processors and two Gen 4.0 PCIe switches. An external host CPU is used to manage HLS-1 via the PCIe switches. Each GAUDI processor is equipped with 32 GB of on-chip memory. All experiments are run on a single GAUDI processor.
_Implementation details._ Habana's SynapseAI (Hansen et al., 2019) software suite enables efficient mapping of neural network topologies onto GAUDI hardware. All experiments are performed with PyTorch-based SynapseAI. The PyTorch version is 1.13.1.
### Basic Profiling
_Operation mapping._ PyTorch provides a variety of operations. GAUDI's compute architecture is heterogeneous and includes two independent compute engines - an MME and a fully programmable TPC cluster. It is therefore necessary to know which compute engine each operation is ultimately mapped to. We perform detailed profiling to obtain the operation-compute engine mapping, as shown in Table 1. From this table, we draw the following conclusion: only matrix multiplication operations are mapped to MME, and all other operations are mapped to TPC. Even linear operations on tensors, such as multiplying a tensor by a scalar, are mapped to TPC.
_Performance comparison between MME and TPC._ A detailed performance comparison between MME and TPC is necessary because it helps us analyze the performance bottleneck of the GAUDI. Different operations in the neural network will be mapped either to MME or to TPC, and the slowest operation on the two compute engines will become the performance bottleneck.
To profile computation performance, we let MME and TPC perform batch matrix multiplication operations on dense matrices of different sizes and measure the run time and tera floating point operations per second (TFLOPS). We directly use torch.bmm on MME to perform a batched matrix-matrix product, where the batch size is set to 64. We implement TPC batch matrix-matrix product kernels using example code from the Habana_Custom_Kernel repository (Habana et al., 2019). The SynapseAI profiler is used, as suggested by Habana, to generate hardware trace events and accurately measure the execution time of each operation. Table 2 compares the execution time of MME and TPC for matrix multiplications of different sizes. We can conclude that the computational performance of TPC is up to 7\(\times\) lower than that of MME. Given such an obvious performance gap, the most suitable scenario for GAUDI is when a compute-heavy operation can be mapped to MME while the following operation is lightweight and can be mapped to TPC; in that case, TPC will not become a computing performance bottleneck. But if the following operation has a similar amount of computation, MME has to sit idle and wait for the TPC computation to complete.
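A sketch of the MME-side measurement (the hpu device string and mark_step call follow Habana's public PyTorch bridge and should be treated as illustrative; the reported numbers were obtained with the SynapseAI profiler rather than host-side timers):

```
import time
import torch
import habana_frameworks.torch.core as htcore  # registers the "hpu" device

def time_bmm(n, batch=64, dtype=torch.bfloat16, iters=10):
    a = torch.randn(batch, n, n, dtype=dtype, device="hpu")
    b = torch.randn(batch, n, n, dtype=dtype, device="hpu")
    torch.bmm(a, b).cpu()                 # warm-up, triggers graph compilation
    start = time.time()
    for _ in range(iters):
        out = torch.bmm(a, b)
        htcore.mark_step()                # flush the accumulated ops to the device
    out.cpu()                             # copy back forces completion before timing stops
    return (time.time() - start) / iters
```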
### Transformer Layer Profiling
_Softmax attention._ Self-attention computes, for every position, a weighted average of the feature representations of all other positions with a weight proportional to a similarity score between the representations. Transformers usually follow the original design of Vaswani et al. (Hansen et al., 2019) and adopt softmax attention. Softmax attention is a specific form of self-attention where the similarity score is the exponential of the dot product between a query and a key. The similarity function is \(\mathrm{sim}(q,k)=\exp(\frac{q^{T}k}{\sqrt{D}})\). The \(Q\), \(K\), and \(V\) are referred to as the "queries", "keys", and "values", respectively.
Long sequence training in Transformer-based natural language processing (NLP) models, such as BERT and GPT, offers several significant benefits: (1). Capturing long-range dependencies: Long sequence training allows Transformer models to capture these complex dependencies, enabling a better understanding of the context and improving the quality of language representations. (2). Improved contextual understanding: Longer sequences provide more context to the model, allowing it to comprehend the nuances and subtleties in language. (3). enhanced text generation: Longer context windows help the model maintain better coherence and consistency in longer text generation tasks. (4). Better handling of large documents: In real-world applications, NLP models often encounter long
| **Size** | **T_MME** | **F_MME** | **T_TPC** | **F_TPC** | **Speedup** |
|---|---|---|---|---|---|
| 128 | 7.31 | 2.35 | 9.21 | 1.86 | 1.3 |
| 256 | 11.78 | 11.67 | 67.04 | 2.05 | 5.7 |
| 512 | 76.51 | 14.37 | 516.60 | 2.13 | 6.7 |
| 1024 | 151.03 | 14.56 | 1006.30 | 2.18 | 6.7 |
| 2048 | 338.27 | 14.59 | 2247.80 | 2.19 | 6.6 |

Table 2. Comparison of execution time between MME and TPC for matrix multiplication of different sizes. T_MME, F_MME, T_TPC, F_TPC are short for run time of MME, TFLOPS of MME, run time of TPC, TFLOPS of TPC, respectively. Speedup = T_TPC / T_MME. Time unit is millisecond (ms).
documents or lengthy pieces of text. Because of the advantages of long sequence training, in experiments, we set the input sequence length, batch size, the number of heads, and the hidden size per head as 2048, 128, 6, and 64 respectively.
Figure 4 shows a profiling result of a single Transformer layer. From this result, we have two observations. (1) There are many blank areas in the MME operating region; these blank areas indicate that MME is idle, waiting for tasks. (2) In the running region of TPC, it is clearly visible that the running time of softmax exceeds 80% of the total running time.
The reasons for this phenomenon are: (1) The TPC is less computationally powerful than the MME, as discussed in Section 3.2, while the computational complexity of the softmax operation in a Transformer is \(\mathcal{O}(N^{2})\). As a result, softmax becomes a performance bottleneck when it is mapped onto TPC. (2) Softmax requires reduction operations, which are not well-suited for single instruction, multiple data (SIMD) architectures like TPC. Long sequences further exacerbate this problem, especially when the sequence length exceeds 1024. Overall, the limited computational capability of TPC combined with the complexity of softmax operations on this architecture hinders GAUDI's overall performance and efficiency.
_Linearized attention._ Linearized attention, also known as "linear attention", is an alternative approach to the traditional softmax-based attention mechanism used in Transformers. It aims to reduce the computational complexity associated with the softmax operation while maintaining the core principles of self-attention. Linear attention is particularly useful when dealing with very long sequences, where standard softmax-based self-attention becomes impractical due to its quadratic complexity.
The softmax-based self-attention is \(\text{softmax}(\frac{QK^{T}}{\sqrt{D}})V\), where \(Q,K\) and \(V\in\mathbb{R}^{N\times D}\). The computational complexity of self-attention is quadratic in the sequence length \(N\). Assuming \(\phi\) is a feature map applied in a row-wise manner, linear attention is \((\phi(Q)\phi(K)^{T})V=\phi(Q)(\phi(K)^{T}V)\) after applying the associative property of matrix multiplication. Linear attention thus leads to a computational complexity of \(\mathcal{O}(N)\).
There are two reasons why we want to use linear attention on Habana: (1). The calculation of the softmax operation itself is relatively complicated, and it involves exponential operations and reduction operations. (2). The essence of linear attention is that matrix multiplication can ensure that almost all self-attention calculations are mapped to MME with stronger computation performance.
```
def FAVOR(self, q, k, v):
    # Project keys and queries onto the feature-map space
    # (projection_matrix / offset: random-feature parameters; names
    #  reconstructed from the truncated original listing)
    q_scaled = self.pre_scale(q)
    q_scaled = q_scaled @ self.projection_matrix
    q_prime = torch.exp(q_scaled + self.offset)
    k_scaled = self.pre_scale(k)
    k_scaled = k_scaled @ self.projection_matrix
    k_prime = torch.exp(k_scaled + self.offset)
    # Normalization term q'(k'^T 1) and raw attention q'(k'^T v),
    # both expressed purely as matrix products
    att_norm = q_prime @ (
        k_prime.transpose(-2, -1) @ torch.ones_like(v)
    )
    att_raw = q_prime @ (k_prime.transpose(-2, -1) @ v)
    x = att_raw / att_norm
    return x
```
Listing 1: Pseudocode for FAVOR Algorithm.
We adopt feature maps from Linear Transformers [11] and Performer [3] to construct linear attention on Habana. Linear Transformer proposes to directly set the feature map as \(\phi(x)=elu(x)+1\). The Performer uses a novel Fast Attention Via a positive Orthogonal Random features approach (FAVOR). Its feature map is \(\phi(x)=\frac{h(x)}{\sqrt{m}}(f_{1}(\omega_{1}^{T}x),\cdots,f_{1}(\omega_{m}^{T }x),\cdots,f_{l}(\omega_{1}^{T}x),\cdots,f_{l}(\omega_{m}^{T}x))\), where \(f_{i},\cdots,f_{l}:\mathbb{R}\rightarrow\mathbb{R}\). \(\omega_{1},\cdots,\omega_{m}\) are drawn from some distribution.
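A sketch of the resulting linear attention with the elu(x)+1 feature map of the Linear Transformer; the only work left outside the MME is the element-wise feature map and the final normalization:

```
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    q_prime = F.elu(q) + 1.0                           # feature map phi, element-wise -> TPC
    k_prime = F.elu(k) + 1.0
    kv = k_prime.transpose(-2, -1) @ v                 # D x D_v summary, linear in N -> MME
    z = q_prime @ k_prime.sum(dim=-2, keepdim=True).transpose(-2, -1)  # per-row normalizer
    return (q_prime @ kv) / (z + eps)                  # normalized output
```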
Figure 5 depicts profiling results of the linear Transformer and the Performer. The total run time of the linear Transformer and the Performer is 30 ms and 80 ms, respectively. Compared to the original softmax-based attention, the linear Transformer and the Performer achieve 6\(\times\) and 2\(\times\) speedups. Besides, there are not many blank areas in the MME operating region, which indicates full utilization of MME. Therefore,
Figure 4. Profiler Trace of the transformer with softmax attention. DMA is direct memory access engine that manages data transfer/copy between MME and TPC. We observe that executing softmax operations on TPC results in MME idle time (i.e., gaps between MME operations).
we can conclude that linearized attention is a good alternative to softmax attention from the perspective of performance.
However, there is a blank area in the MME operating area when using Performer. The blank area is because the TPC is busy with exponential operations. As shown in the algorithm of FAVOR, we can find that the calculation of "q_prime" and "k_prime" is independent. But Graph Compiler does not detect this independence, so it does not schedule MME and TPC tasks well so that they can overlap.
_Activation functions_. Linear Transformer (Krizhevsky et al., 2017) does not consider the impact of different activation functions on TPC performance, it directly sets the activation function to exponential linear unit (ELU). And there is no previous work discussing the performance of different activation functions on TPC. Thus we conduct a rigorous evaluation to assess the impact of various activation functions on the overall efficiency and computational capabilities of the TPC. The experiments incorporate popular activation functions explored in NLP tasks, including rectified linear unit (ReLU), LeakyReLU, Gaussian Error Linear Units (GELU), and gated linear unit function (GLU).
In experiments, we set the input sequence length, batch size, the number of heads, and the hidden size per head to 2048, 128, 6, and 64, respectively. Figure 7 depicts hardware traces of different activation functions. From the profiling results, we have two observations: (1) The total run time of a Transformer with ReLU, LeakyReLU, GELU, and GLU is 30.1 ms, 30.2 ms, 29.7 ms, and 32.6 ms, respectively. Transformers with ReLU, LeakyReLU, and GELU have similar performance, and the execution of MME and TPC overlaps well. (2) The Transformer with GLU has the worst performance, and its execution causes a blank area in MME. We think the reasons for these phenomena are that (i) ReLU, LeakyReLU, and GELU are applied element-wise to the tensor, which is extremely suitable for a SIMD architecture like TPC, and (ii) SynapseAI does not have good support for GLU, which causes extra compilation during execution.
### End-To-End Language Models Profiling
In order to analyze the end-to-end performance of a full language model on GAUDI, we profile the execution of BERT and GPT models on GAUDI. For the GPT model, we utilize the GPT2LMHeadModel module from Hugging Face (Hugging Face, 2018); GPT2LMHeadModel is the GPT-2 model Transformer with a language modeling head on top. For the BERT model, we use the BertForMaskedLM module from Hugging Face; BertForMaskedLM is the BERT model with a language modeling head on top. The input dataset is BookCorpus (Hugging et al., 2019). Due to limited GAUDI memory, we set the input sequence length, batch size, the number of layers, the number of heads, and the hidden size per head to 2048, 8, 2, 8, and 64, respectively.
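The two models can be instantiated from their Hugging Face configurations along these lines (configuration field names follow the transformers API; moving the models to the hpu device is assumed to use the same Habana PyTorch bridge as above):

```
from transformers import BertConfig, BertForMaskedLM, GPT2Config, GPT2LMHeadModel

seq_len, n_layers, n_heads, head_dim = 2048, 2, 8, 64
hidden = n_heads * head_dim  # 512

gpt2 = GPT2LMHeadModel(GPT2Config(
    n_positions=seq_len, n_layer=n_layers, n_head=n_heads, n_embd=hidden))
bert = BertForMaskedLM(BertConfig(
    max_position_embeddings=seq_len, num_hidden_layers=n_layers,
    num_attention_heads=n_heads, hidden_size=hidden))

gpt2.to("hpu")  # device string from the Habana PyTorch bridge (assumed)
bert.to("hpu")
```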
Figures 8 and 9 show hardware traces of the GPT and BERT models. From the traces, we make similar observations as in the single Transformer layer profiling. There are many blank areas in the MME operating region, which indicates that MME is idle, while TPC is obviously busy. Potential performance issues of Transformer-based language models on GAUDI are: (1) the workload between MME and TPC is unbalanced; (2) there is no good overlap between MME and TPC. As a result, either MME or TPC is idle, which wastes computing resources.
## 4. Insights and Takeaways
(1) We need to try to provide all source code so the Graph Compiler can analyze it thoroughly and generate a good mapping and schedule for MME and TPC. (2) The code should use the basic operations provided by Torch and avoid high-level abstractions like torch.einsum(), so that the Graph Compiler can map and schedule MME and TPC well. (3) When designing a neural network model, the user should consider whether most calculations in the model can be transformed into matrix multiplications. In this way, the model can fully utilize MME's powerful computation capability.
## 5. Conclusion and Future Work
In this work, we embarked on a comprehensive exploration of the performance capabilities of Habana's GAUDI processor when accelerating Transformers and Transformer-based models. Our findings not only address existing research gaps but also provide practitioners and researchers with valuable insights to optimize the performance of Transformers and Transformer-based models on GAUDI, further unlocking the potential of these models for real-world applications. In the future, we plan to investigate novel attention mechanisms tailored to GAUDI's architecture, which could further optimize performance for long sequences.
###### Acknowledgements.
The material was supported by the U.S. DOE Office of Science (SC), Office of Advanced Scientific Computing Research (ASCR), under contracts DE-AC02-06CH11357. This work was also supported by NSF awards 2303820, 2303064, 2247080, 2311876, and 2312673. We gratefully acknowledge the computing resources provided and operated by the Joint Laboratory for System Evaluation (JLSE) at Argonne National Laboratory.
Figure 5. Profiling of linear Transformers. Colored blocks are computation periods and gaps between colored blocks are idle periods.
Figure 6. Profiling of Performer. Colored blocks are computation periods, and gaps between colored blocks are idle periods.
Figure 8: Hardware trace of GPT model.
Figure 7: Activation functions in NLP.
Figure 9: Hardware trace of BERT model. |
2304.00130 | Top-down integration of a hBN quantum emitter in a monolithic photonic
waveguide | Integrated quantum photonics, with potential applications in quantum
information processing, relies on the integration of quantum emitters into
on-chip photonic circuits. Hexagonal boron nitride (hBN) is recognized as a
material that is compatible with such implementations, owing to its relatively
high refractive index and low losses in the visible range, together with
advantageous fabrication techniques. Here, we combine hBN waveguide
nanofabrication with the recently demonstrated local generation of quantum
emitters using electron irradiation to realize a fully top-down elementary
quantum photonic circuit in this material, operating at room temperature. This
proof of principle constitutes a first step towards deterministic quantum
photonic circuits in hBN. | Domitille Gérard, Michael Rosticher, Kenji Watanabe, Takashi Taniguchi, Julien Barjon, Stéphanie Buil, Jean-Pierre Hermier, Aymeric Delteil | 2023-03-31T21:09:04Z | http://arxiv.org/abs/2304.00130v2 | # Top-down integration of a hBN quantum emitter in a monolithic photonic waveguide
###### Abstract
Integrated quantum photonics, with potential applications in quantum information processing, relies on the integration of quantum emitters into on-chip photonic circuits. Hexagonal boron nitride (hBN) is recognized as a material that is compatible with such implementations, owing to its relatively high refractive index and low losses in the visible range, together with advantageous fabrication techniques. Here, we combine hBN waveguide nanofabrication with the recently demonstrated local generation of quantum emitters using electron irradiation to realize a fully top-down elementary quantum photonic circuit in this material, operating at room temperature. This proof of principle constitutes a first step towards deterministic quantum photonic circuits in hBN.
Hexagonal boron nitride (hBN) has recently emerged as a very attractive platform for integrated quantum photonics [1; 2]. This van der Waals (vdW) material offers a wide range of fabrication techniques that allow to associate it with other materials -including other vdW crystals- in highly miniaturized complex devices. In particular, it presents favorable properties for photonics, with atomically flat surfaces and a very wide bandgap (\(\sim 6\) eV), opening the possibility to use it as a light confining medium. In this spirit, fabrication of complex hBN photonic structures, such as waveguides [3; 4], phase plates and microlenses [5], bullseye antennas [6] and photonic crystal structures [7; 8], have been recently demonstrated.
Last but not least, hBN also hosts optically active point defects that act as excellent single-photon emitters (SPEs) in various wavelength ranges [9; 10; 11]. Most of these color centers occur randomly in the flake, thereby hindering scalable integration in photonic devices. Nonetheless, these emitters have been at the core of highly promising implementations of both monolithic and hybrid photonic devices, including waveguides [3; 12; 13], cavities [7; 14; 15] and fibers [16; 17; 18]. Those realizations are relying on either _a posteriori_ integration of the quantum emitter, or on the random presence of an emitter in the structure, which limits both control and scalability of those devices.
The recent demonstration of local generation of blue-emitting color centers (B-centers) using a focused electron beam has offered an attractive workaround [19; 20; 21]. These emitters can be generated in a commercial scanning electron microscope (SEM) with a high control of their position and average number, and consistently exhibit a reproducible emission wavelength, a predominant in-plane polarization, a short lifetime and a high optical coherence [20; 21; 22; 23; 24].
Here, we take advantage of this e-beam technique by including it in a completely top-down approach for the fabrication of an elementary quantum photonic device, where the emitter generation is included as an additional step in the fabrication process. We first fabricate short waveguides (10 \(\mu\)m) with semicircular grating couplers [25; 26] and subsequently embed quantum emitters in the waveguide by local irradiation. Photoluminescence (PL) characterization demonstrates the coupling of both the excitation laser and the SPE emission into the waveguide. Although the design we implemented is not intended to be optimal, it illustrates the potential of electron-beam generated SPEs for quantum photonics and integrated optical quantum information.
The geometry that we have opted for is a ridge waveguide, chosen for the simplicity of its realization. The light is confined by refractive index contrast between hBN (\(n_{o}\sim 2.2\)) and the environment. The SiO\({}_{2}\)/Si substrate has a refractive index that is low enough to obtain low-losses propagating modes in flakes as thin as 60 nm. Fig. 1(a) shows a sketch of the waveguide with semicircular grating couplers at its two output ports. Fig. 1(b) shows the waveguide cross section and the corresponding FDTD simulation of the fundamental TE mode profile. Fig. 1(c) shows the longitudinal profile of the same mode. For a point dipole emitting at 440 nm with an in-plane polarization orthogonal to the waveguide main axis and located at the mode antinode, we calculate that 23 % of the light is coupled to the waveguide in each direction, of which 18 % is extracted towards the top direction to be collected by a NA = 0.8 lens. Additionally, 5 % is directly coupled to the upper free space, allowing to characterize the sample without using the guided modes.
Figure 2 depicts the fabrication steps. The waveguide fabrication starts with the exfoliation of high-pressure, high-temperature grown hBN [27] on a SiO\({}_{2}\)(300 nm)/Si substrate. Single crystals of 60 to 220 nm thickness are selected using atomic force microscopy and cathodoluminescence, to infer the quality of the crystal as well as the presence of carbon complexes, identified as precursors of the B-centers [21]. The
waveguides are then processed from the hBN crystals based on the following steps [28]. The waveguide shape is patterned by electron beam lithography with a Raith eLine system working at 20 kV (PMMA A3, dose 250 \(\mu\)C/cm\({}^{2}\)). We then deposit 30 nm of aluminum that, after lift-off, serves as a mask in the following step. The etching of the waveguide is performed with a fluoride reactive ion etching (RIE) for 3 min 30 s with the following parameters: plasma power of 50 W, etching pressure of 40 mTorr, 40 sccm of CHF\({}_{3}\) and 4 sccm of O\({}_{2}\) (etching speed 33 nm/minute). The aluminum is then dissolved in a KOH solution. To generate the SPEs in the fabricated waveguide, the sample is finally inserted in a SEM. The waveguide is then irradiated at precise positions located in the center of the ridge, using a static focused beam of 0.4 nA under an acceleration voltage of 15 kV during 15 s. These parameters were found to provide an average SPE yield of order one per irradiated site in this sample, based on in-situ cathodoluminescence [29]. The SPE generation still has a partially probabilistic character, associated with fluctuations in the SPE number, in-plane polarization direction and depth. The two latter attributes impact their coupling with the guided mode. We therefore performed four irradiations on a 60 nm thick waveguide (termed WG1) and, in the following, we focus on a SPE that presents favorable characteristics. In addition, another waveguide, denoted WG2 (thickness 220 nm), was irradiated with a higher dose to yield a localized ensemble of SPEs.
A SEM image of the final structure is shown figure 3(a). We characterize the waveguide in a confocal microscope operating at room temperature, equipped with a high-quantum-efficiency cooled CCD camera and avalanche photodiodes (APDs). We first verify that light can be coupled in, transmitted through and coupled out from the waveguide. Fig 3(b) shows a CCD image of the waveguide under laser illumination. The presence of sizable light intensity coming from the other port demonstrates coupling from free space to the guided mode and again to free space. The waveguide transmission spectrum can be inferred from the ratio between the transmitted and the reflected spectra of a broadband laser (fig 3c). It exhibits etalonning due to Fabry-Perot oscillations in the waveguide. The B-center zero-phonon line (ZPL) at 440 nm coincides with a maximum of transmission.
We then perform PL measurements. The emitters are excited with a 405 nm laser diode operating in pulsed regime
Figure 2: Fabrication of the hBN waveguide embedding quantum emitters. (a) A hBN crystal is exfoliated on a SiO\({}_{2}\)/Si substrate. (b) and (c) E-beam lithography is realized on PMMA. (d) Aluminum is deposited on the sample. (e) After lift-off, the remaining Al serves as a mask. (f) The hBN flake is etched away outside of the Al mask. (g) The Al mask is removed with KOH. (h) The waveguide is irradiated to generate localized quantum emitters.
Figure 1: Design of the hBN waveguide embedding quantum emitters. (a) Scheme of the hBN waveguide on SiO\({}_{2}\)/Si embedding a SPE. (b) TE\({}_{00}\) mode profile as calculated with FDTD. (c) Longitudinal cut of the dipole emission propagation in the structure as calculated with FDTD.
(80 MHz), at a power of \(\sim\)400 \(\mu\)W, which is in the linear regime of the emitter [20]. The PL signal is filtered out from the backreflected laser using a filter centered around the emitter ZPL, and collected using either the CCD camera or the APDs. We start with WG2, where an ensemble is generated in the waveguide, to perform spectroscopy measurements. We compare two different configurations of the detection path, while exciting from the top. The configuration 1 consists in exciting and detecting via the same free-space mode, directly above the emitter (fig. 4(a), upper panel). This configuration does not use the guided mode. In this configuration, we observe the ensemble spectrum. Its spectral shape is well known [20; 29], and features a 440 nm ZPL and phonon sidebands. We then verify that the PL light is coupled to the guided mode by switching to configuration 2, where we keep the same excitation path but we detect from one of the grating couplers, as depicted on the upper panel of figure 4(b). This configuration is obtained by fixing the collection path to the chosen grating coupler, and translating the excitation beam such that it excites the emitters, as monitored by PL measured on the CCD camera. As can be seen on the lower panel of figure 4(b), the spectrum is essentially unchanged by being collected through the waveguide.
In the next step, we proceed to the characterization of an individual emitter. We compare three different configurations of the excitation and detection paths, which are depicted Fig. 5(a). The configurations 1 and 2 consist again in exciting directly above the emitter. Fig. 5(b) shows the corresponding CCD image, with the waveguide outline superimposed for clarity. The SPE PL emission is visible at the excitation spot (violet arrow) as well as at the two output ports (blue arrows), showing that it couples to the guided mode then to free-space via the grating couplers. This coupling is enabled by the large angle between the waveguide axis and the SPE polarization axis. The latter was determined by the dependence of the count rate on the angle of a polarizer inserted in the detection port (fig 5(c)). The emitter lifetime is 1.83 ns, as measured by its fluorescence decay. This value is consistent with prior measurements of B-centers in non-processed flakes [20]. Using a Hanbury Brown and Twiss setup, we measure the autocorrelation function \(g^{(2)}\) of the SPE in configuration 1, where the light is directly collected from the top of the emitter, at the location depicted by the violet circle on fig. 5(b). Fig 5(f) shows a histogram of the photon delay times integrated over multiples of the laser repetition period. The decreased coincidence number of the center period (zero delay) with respect to the others provide \(g^{(2)}(0)=0.35\pm 0.04\), indicating that light predominantly originates from a single B-center. This value is limited by background signal and can be largely improved by decreasing the temperature and using narrower filtering [24]. Switching to configuration 2 is done by keeping the same excitation path but detecting from one of the grating couplers (plain blue circle on fig. 5(b)), as depicted on the scheme fig. 5(a). In this configuration, the count rate is about a factor 4 lower, indicating that the emitter-waveguide coupling is 45 % lower than the ideal case considered in the simulations, where the emitter is located at the mode antinode.
Figure 3: (a) SEM image of a waveguide. (b) CCD image of the waveguide under laser illumination focused on one of the grating couplers. The circle denotes the laser spot. (c) Transmission spectrum of a broadband source.
Figure 4: (a) Upper panel: Scheme of the configuration of excitation and collection path (configuration 1). Lower panel: Ensemble spectrum in configuration 1. (b) Upper panel: Scheme of configuration 2. Lower panel: Ensemble spectrum in configuration 2.
This lower count rate could also originate from deviations of the grating coupler dimensions from the nominal values. Fig. 5(e) shows the \(g^{(2)}\) measured in configuration 2, which exhibits similar antibunching (\(g^{(2)}(0)=0.33\pm 0.06\)). Crucially, this demonstrates that the \(g^{(2)}\) is not degraded through propagation in the structure. Finally, we show that the excitation laser can also be coupled to the guided mode (configuration 3) to excite the SPE. In this configuration, the laser excites the whole structure, such that other emitters luminesce in the waveguide and the grating couplers. Fig. 5(d) shows the corresponding CCD image. To ensure that we only detect light from the same SPE, we then collect the PL signal from the top of the waveguide, at the spot indicated by the blue arrow on fig. 5(d). Fig. 5(g) shows the corresponding coincidence histogram, yielding \(g^{(2)}(0)=0.26\pm 0.04\).
Altogether, these results demonstrate that hBN fabrication and B-center generation can be combined in a complete process starting from hBN exfoliation all the way to deterministic emitter positioning. The obtained device yields guided single photons and operates at room temperature. Future improvements will require optimized photonic structures and emitter-to-photonic mode coupling and a more controlled SPE generation process.
###### Acknowledgements.
The authors acknowledge Christophe Arnold for his help with cathodoluminescence measurements. This work is supported by the French Agence Nationale de la Recherche (ANR) under reference ANR-21-CE47-0004-01 (E\(-\)SCAPE project). This work also received funding from the European Union's Horizon 2020 research and innovation program under Grant No. 881603 (Graphene Flagship Core 3). K.W. and T.T. acknowledge support from JSPS KAKENHI (Grant Numbers 19H05790 and 20H00354).
|
2309.13826 | Building a quantum superposition of conscious states with integrated
information theory | Could there be a quantum superposition of consciousness, as in the Wigner's
friend thought experiment? The integrated information theory (IIT) of
consciousness has turned this into a well-defined question. According to IIT,
consciousness is a measurable physical quantity given by integrated information
($\Phi$), such that the amount of consciousness in a system corresponds to its
amount of $\Phi$. We use the most recent IIT formalism (IIT4.0) to analyze the
simplest non-zero $\Phi$ system known as a feedback dyad. We then propose a
circuit that puts the dyad into a superposition of states which, according to
IIT, would correspond to a superposition of conscious states. We refer to this
as "Schr\"odinger's dyad". We therefore show that either IIT is false or the
simple dyad is conscious and can easily be put into a superposition of
conscious states. We then identify the simplest possible consciousness-collapse
model, which predicts that this superposition is unstable and collapses at a
rate determined by a measure of difference between the superposed conscious
states. Our analysis will enable us to make a number of key observations about
the general structure of integrated information theory (IIT2.0, IIT3.0, IIT4.0,
and QIIT) and the general structure of consciousness-collapse models. | Kelvin J. McQueen, Ian T. Durham, Markus P. Mueller | 2023-09-25T02:15:24Z | http://arxiv.org/abs/2309.13826v1 | # Building a quantum superposition of conscious states with integrated information theory
###### Abstract
Could there be a quantum superposition of consciousness, as in the Wigner's friend thought experiment? The integrated information theory (IIT) of consciousness has turned this into a well-defined question. According to IIT, consciousness is a measurable physical quantity given by integrated information (\(\Phi\)), such that the amount of consciousness in a system corresponds to its amount of \(\Phi\). We use the most recent IIT formalism (IIT4.0) to analyze the simplest non-zero \(\Phi\) system known as a feedback dyad. We then propose a circuit that puts the dyad into a superposition of states which, according to IIT, would correspond to a superposition of conscious states. We refer to this as "Schrodinger's dyad". We therefore show that either IIT is false or the simple dyad is conscious and can easily be put into a superposition of conscious states. We then identify the simplest possible consciousness-collapse model, which predicts that this superposition is unstable and collapses at a rate determined by a measure of difference between the superposed conscious states. Our analysis will enable us to make a number of key observations about the general structure of integrated information theory (IIT2.0, IIT3.0, IIT4.0, and QIIT) and the general structure of consciousness-collapse models.
###### Contents
* 1 Introduction
* 2 The feedback dyad
* 3 Calculating the amount of consciousness (\(\Phi\)) in the feedback dyad
* 4 Calculating the state of consciousness (Q-shape) of the feedback dyad
* 5 The simplest consciousness-collapse model
* 6 Physically implementing the dyad
* 7 Conclusion
* A The general IIT4.0 formalism
* B The quantum feedback dyad in QIIT
* C Solution of the optimization problem of section 5
Introduction
Could there be a quantum superposition of consciousness? This question was raised by Eugene Wigner in the thought experiment that is now known as "Wigner's Friend". Wigner imagined his friend, in a nearby sealed lab, making a quantum measurement. Wigner, who is uncertain of his friend's result, wonders whether he should consider his friend to have entered a quantum superposition of experiencing different results. Wigner argued that this is "absurd because it implies that my friend was in a state of suspended animation". He then concluded that "consciousness must have a different role in quantum mechanics than the inanimate measuring device" ([51, p.180]).
There has since been much speculation by physicists and philosophers over whether states of consciousness could be superposed and what that would even mean. For example, there have been many attempts to extend the Wigner's friend scenario and the associated epistemological and metaphysical implications ([21], [14], [19], [13], [52]). There have also been many attempts to make sense of superpositions of conscious states in many worlds and many minds interpretations of quantum mechanics ([20], [44], [50], [33], [16], [5], [8], [31], [32]). However, without any well-defined criteria for determining which physical states are conscious (and to what degree), the question of whether there could be such a superposition, and what it would be like to be in one, is difficult to evaluate.
Recent neuroscience, on the other hand, has seen the rise of mathematical theories of consciousness, notably, the integrated information theory, or IIT for short ([46], [47], [40], [48], [4]). IIT associates systems with both quantitative amounts of consciousness (roughly, the amount of integrated information in the system, denoted by the symbol \(\Phi\)) and qualitative states of consciousness (roughly, the "shape" of the system's integrated information, or its "Q-shape"). More recently, IIT has been extended into the quantum domain in a framework known as QIIT ([53], [27], [3]). Inspired by these results, Wigner's suggestion that consciousness may be responsible for the collapse of the wave function has been resurrected in models that use integrated information as a criterion for collapse ([28], [17]). In comparison to standard collapse models [11], it has been claimed that IIT-based consciousness-collapse models may be much easier to experimentally test, since they can be tested by the right sorts of quantum computers, if only
we could design the right sort of circuit [17].
In this paper, we propose such a circuit which, if implemented, would put a simple quantum computer into a superposition of states of conscious experience according to the IIT definition of consciousness. Following [17], we consider the simplest non-zero \(\Phi\) system, a feedback dyad. Classically, the dyad has four possible states: (0,0), (1,1), (0,1), and (1,0). Each state is predicted to have a tiny amount of consciousness. This prediction is robust across successive IIT formalisms. Each dyad state has \(\Phi=2\) in IIT2.0 and \(\Phi=1\) in IIT3.0, as shown in [37]. Here, we show that each dyad state has \(\Phi=2\) in IIT4.0 (section 3) and in QIIT (appendix B). Although these states have the same _amount_ of consciousness, they yield different _states_ of consciousness, because they are associated with different Q-shapes, as we show in section 4. The dyad in a superposition of two of its four possible states is therefore the simplest consciousness superposition predicted by IIT. We refer to this as "Schrodinger's dyad" and we propose a simple quantum circuit that allows Schrodinger's dyad to be built.
We would like to stress that we are not endorsing IIT, and so we remain agnostic on whether the dyad is conscious in any meaningful sense. IIT has been shown to be consistent with a number of important experimental results in neuroscience ([34], [15], [22], [2], [30], [29], [38]). However, many criticisms of IIT have also been proposed, and we are sympathetic with some of them ([23], [1], [12], [7], [18], [41]). Either way, what we show is that unless one drastically revises IIT (e.g. [36], [35]), then either IIT is false or the dyad is conscious and can easily be put into a superposition of conscious states. We leave it to the reader to decide between these options.
In section 5 we identify the simplest possible consciousness-collapse model, which predicts that Schrodinger's dyad is unstable and collapses at a rate determined by a measure of difference between the superposed conscious states. We take the Q-shapes defined in section 4, and use them to define the simplest possible collapse operators. This toy model makes a number of important properties of such models transparent. We then compare our toy-model to the more general consciousness-collapse model proposed in [17].
Finally, in section 6 we propose a physical implementation of Schrodinger's dyad, in which two photons enter into a feedback loop inside an optical cable. On the one hand, the implementation may potentially falsify the simplest versions of the IIT-based consciousness collapse models. On
the other hand, the example raises a difficulty with IIT when it comes to physical implementation: IIT assumes that there is always an objective fact of the matter about what the basic causal units in a physical system are.
In addition to identifying this prediction of IIT, our analysis helps to reveal much about the structure of IIT. For example, we resolve a crucial ambiguity in IIT in which logic gates are treated as having binary states (section 2). We also identify a subtle inconsistency between the IIT4.0 description of the dyad and the axioms of IIT4.0 (appendix A).
The paper is organized as follows. Section 2 describes the classical feedback dyad. Section 3 shows how to calculate the classical dyad's \(\Phi\) using IIT4.0. Section 4 provides a simple way of describing the classical dyad's Q-shape. Section 5 explains the simple consciousness-collapse model. Section 6 proposes a physical implementation of the dyad, which may test the model, but which also raises questions about how to understand causality in IIT.
Finally, appendix A explains IIT4.0 more generally and identifies the steps in the IIT calculus that our simple dyad allowed us to skip; appendix B shows how our analysis is consistent with QIIT as presented in [3]; and appendix C proves a general result concerning our Q-shape collapse operators.
## 2 The feedback dyad
The classical dyad is a simple system consisting of two elements or channels, A and B, that simply swap their states from one time step to the next. That is, if at some time, \(t_{0}\), A is in state 1 and B is in state 0, then at the next time step, \(t_{+1}\), A is in state 0 and B is in state 1. The action on these channels is equivalent to a logical SWAP gate which is given a simple diagrammatic representation in Figure 1. The figure makes it clear that there are three distinct levels of description to the dyad: channels, channel values, and channel relationships. A and B
Figure 1: The logical SWAP gate simply exchanges the values \(a\) and \(b\) of channels A and B respectively such that if the input is (A=a,B=b), then the output is (A=b,B=a).
are the channels that are related via the logical SWAP gate in such a way as to exchange their values. In the language of quantum information, the channels are systems, the channel values are states, and the channel relationships are transformations. This is a crucial point. The SWAP gate is a _transformation_ of the states of systems A and B. The gate itself is never "in a state" on its own.
This is an important distinction because gates are frequently described as being in a state that possesses a value, especially in IIT3.0 [40]. In particular, the elements or nodes in the IIT3.0 diagrams have binary states but are also treated as being logic gates. If the nodes are understood as neurons, then they are considered as being in an "active" or an "inactive" state [9], much like a channel. Yet the neurons are also said to act like gates by only activating in response to the right combination of connections to other neurons that are themselves either active or inactive. But this is really a notational relic from the early days of Boolean networks [24, 25] that ignores what is happening at a more granular level. In the neuronal case, an "active" or "inactive" neuron really refers to whether it sends a signal via some channel, i.e. it represents an _action_. The difference is typically unimportant at the granularity in which it is usually considered. But when considering quantum models of these networks, this treatment breaks down. As such, it is the states of the channels that can be in superposition, not the gates themselves. In a neuronal sense, it is thus conscious states that are in superposition, not the physical neurons themselves.
Figure 1 also highlights a fundamental causal dependence in the dyad. The output of channel A causally depends on the input to channel B and vice-versa. In order to emphasize this point, we use capital letters to identify the channels or systems themselves and lowercase letters to identify the values the channels can attain, i.e. their states. One could think of the SWAP gate as a black box with the channels simply identifying the locations of the inputs and outputs of the box. Values are fed into the inputs and then produced by the outputs.
To develop a feedback system with this SWAP gate we simply feed the outputs directly back into the inputs. For simplicity we can represent this system over a series of time steps in the manner shown in Figure 2. The output at a given time step is determined by the input at the previous time step as in Figure 1. For example, write the system state at a given time step as \((a,b)\in\{0,1\}^{2}\), where the first element is the state of channel A and the second is the state of channel B. If the input is \((a=0,b=1)\equiv(0,1)\), the evolution of the system state over time is simply \((0,1)\rightarrow(1,0)\rightarrow(0,1)\).
Creating Schrodinger's dyad then requires that we treat the channels as quantum and represent their states as such. That is, a classical state \((a,b)\) is equivalent to a pure quantum state in the so-called computational basis \(|a,b\rangle\). A superposition of the \(|1,0\rangle\) and \(|0,0\rangle\) states can be achieved by feeding the superposition state
\[|+\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle+|1\rangle\right) \tag{1}\]
into channel \(B\) at \(t_{-1}\). The input state to the dyad as a whole at \(t_{-1}\) is
\[|0,+\rangle=|0\rangle\otimes|+\rangle=\frac{1}{\sqrt{2}}\left(|0,0\rangle+|0, 1\rangle\right), \tag{2}\]
which then evolves into the following state at \(t_{0}\):
\[|+,0\rangle=|+\rangle\otimes|0\rangle=\frac{1}{\sqrt{2}}\left(|0,0\rangle+|1,0 \rangle\right). \tag{3}\]
This is not a superposition of \(\Phi\) values, since all four possible classical states of the dyad have the same \(\Phi\) value. It _is_, however, a superposition of distinct Q-shapes according to IIT3.0 and IIT4.0, as we show in section 4. And so according to IIT, the \(t_{0}\) state of equation 3 represents a superposition of qualitatively distinct states of consciousness. We begin by calculating the dyad's \(\Phi\).
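As a minimal numerical check of Eqs. (2) and (3), the sketch below prepares \(|0,+\rangle\), applies the SWAP gate, and prints the resulting amplitudes; the computational-basis ordering (0,0), (0,1), (1,0), (1,1) is an assumption of the sketch.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

# Basis ordering |a,b> : (0,0), (0,1), (1,0), (1,1)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

psi_in = np.kron(ket0, plus)   # |0,+> at t_{-1}, Eq. (2)
psi_out = SWAP @ psi_in        # |+,0> at t_0,   Eq. (3)
print(psi_out)                 # [0.707, 0, 0.707, 0] = (|0,0> + |1,0>)/sqrt(2)
```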
Figure 2: The SWAP gate considered as a feedback system over a series of time steps \(t_{-1}\), \(t_{0}\), and \(t_{+1}\). The output at any given time step is determined by the input at the previous time step according to the mapping shown in Figure 1.
Calculating the amount of consciousness (\(\Phi\)) in the feedback dyad
The general procedure for calculating \(\Phi\) and Q-shape takes many steps. Fortunately, the simplicity of our dyad allows us to skip several steps and to emphasize the most important ones. We explain the more general case in appendix A.
The dyad consists of two parts, A and B. We begin by calculating the integrated cause information and the integrated effect information of each part. Integrated information concerns how much information is lost by _partitioning_ the system, which means replacing a causal relationship with noise where the noise is represented as an equiprobable distribution over all possible states.
To illustrate, let us calculate how much integrated effect information A has, given its present state, about the next state of each of the system's parts, A and B. The _maximum_ of these defines A's integrated effect information.
Given that our dyad is a SWAP gate, it is trivially true that A's present state has zero integrated effect information about A's next state since A's next state is entirely determined by B's present state. Put another way, A's possible next states are all equally probable given its current state. So introducing a partition that induces noise between A at \(t_{0}\) and A at \(t_{+1}\) makes no difference. This makes sense given that there is no causal connection between them in the first place: A affects B but not itself in the next time step. A's present state is not causally connected to A's future state and so there is no integrated effect information.
However, A's present state _does_ fully determine B's future state and so if, for example, our system's present state is (1,0), equation 39 in [4] tells us that the integrated effect information of A's state at time \(t_{0}\) given that it is in state 1 at that instant, is
\[\phi_{e}(a_{t_{0}}=1)=p(b_{t_{+1}}=1|a_{t_{0}}=1)\log_{2}\left[\frac{p(b_{t_{ +1}}=1|a_{t_{0}}=1)}{p^{\theta}(b_{t_{+1}}=1|a_{t_{0}}=\mbox{noise})}\right]. \tag{4}\]
Here, \(p(b_{t_{+1}}=1|a_{t_{0}}=1)\) is the probability that B will be in state \(b=1\) at time \(t_{+1}\) given that A is currently in state 1. It is trivially true that this equals 1. Likewise \(p^{\theta}(b_{t_{+1}}=1|a_{t_{0}}=\mbox{noise})\) represents the probability that B will be in state 1 at time \(t_{+1}\) given the partition \(\theta\) which sets the value of channel A to an equiprobable distribution of the two possible states. In other
words, the partition replaces the effect that A had on B with noise, which means that B's future state is randomly determined. Since there are only two possible states, that means that \(p^{\theta}(b_{t_{+1}}=1|a_{t_{0}}=\text{noise})=0.5\). As such, we have
\[\phi_{e}(a_{t_{0}}=1)=1\cdot\log_{2}\left[\frac{1}{0.5}\right]=1. \tag{5}\]
The same basic equation tells us that the integrated effect information of B's state at time \(t_{0}\), \(\phi_{e}(b_{t_{0}}=0)\), also equals 1.
The integrated cause information for A is calculated in a slightly different manner and illustrates a time asymmetry in the equations of IIT. As in the effect case, the past state of A contains no information about the present state of A, and likewise for B. We only consider the information B's past state has on A's current state and the information A's past state has on B's current state. Specifically, given a current state of (1,0), equation 42 in [4] gives
\[\phi_{c}(a_{t_{0}}=1)=p(b_{t_{-1}}=1|a_{t_{0}}=1)\log_{2}\left[\frac{p(a_{t_{0 }}=1|b_{t_{-1}}=1)}{p^{\theta}(a_{t_{0}}=1|b_{t_{-1}}=\text{noise})}\right] \tag{6}\]
where \(p(b_{t_{-1}}=1|a_{t_{0}}=1)\) is calculated according to Bayes' rule as follows:
\[p(b_{t_{-1}}=1|a_{t_{0}}=1)=\frac{p(a_{t_{0}}=1|b_{t_{-1}}=1)\cdot p(b_{t_{-1} }=1)}{p(a_{t_{0}}=1)} \tag{7}\]
where \(p(a_{t_{0}}=1)\) and \(p(b_{t_{-1}}=1)\) are unconstrained probabilities (see equations 6-8 in [4]) and are both equal to 0.5 since, at any given time step and with no knowledge of past or future states, the probability that we will find either channel in a given state is 0.5 because there are only two states. Here we also have that \(p(a_{t_{0}}=1|b_{t_{-1}}=1)\) is the probability that A's current state is 1 if B's past state is 1 and \(p(b_{t_{-1}}=1|a_{t_{0}}=1)\) is the probability that B's past state was 1 given that A's state is currently 1. As before, \(p^{\theta}(a_{t_{0}}=1|b_{t_{-1}}=\text{noise})\) noises the system and is equal to 0.5. Since \(p(a_{t_{0}}=1|b_{t_{-1}}=1)=1\), Bayes' rule given by equation (7) tells us that \(p(b_{t_{-1}}=1|a_{t_{0}}=1)=1\). As before, then, we find that \(\phi_{c}(a_{t_{0}}=1)=1\). Likewise, the same process tells us that \(\phi_{c}(b_{t_{0}}=0)\) also equals 1.
Equation 45 in [4] then tells us that the integrated information of a part is the minimum of
its integrated effect and integrated cause information, i.e.
\[\phi(a_{t_{0}}=1)=\min\left[\phi_{c}(a_{t_{0}}=1),\phi_{e}(a_{t_{0} }=1)\right] \tag{8}\] \[\phi(b_{t_{0}}=0)=\min\left[\phi_{c}(b_{t_{0}}=0),\phi_{e}(b_{t_{0 }}=0)\right] \tag{9}\]
respectively, which are both trivially 1.
The amount of consciousness (\(\Phi\)) in the state of the whole system is then simply a sum of the integrated information of the smaller subsystems as calculated above. The state of the dyad at the time \(t_{0}\) therefore has
\[\Phi(t_{0}) =\phi(a_{t_{0}}=1)+\phi(b_{t_{0}}=0) \tag{10}\] \[=1+1=2\]
units of consciousness.
No matter which of its four possible states the dyad is in, all of the above reasoning applies, and we find that it always has two units of consciousness. It is therefore not possible to put the dyad into a superposition of \(\Phi\)-values. What we can do, however, is put the system into a superposition of different states of consciousness.
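The calculation above is simple enough to verify with a few lines of code. The sketch below hard-codes the two facts that make the dyad trivial (the SWAP is deterministic, so the true conditional probabilities equal 1, while the noised partition assigns probability 1/2) and evaluates Eqs. (4)-(10) for every dyad state. The function names are ours and not part of any IIT software.

```python
import numpy as np

def intrinsic_difference(p_true, p_noised):
    """Informativeness term appearing in Eqs. (4) and (6)."""
    return p_true * np.log2(p_true / p_noised)

# Each channel of the SWAP dyad fixes the other channel's next (and previous)
# state deterministically, so p_true = 1; replacing the input with noise over
# two equiprobable states gives p_noised = 1/2.
phi_e = intrinsic_difference(1.0, 0.5)   # Eq. (5): integrated effect information
phi_c = intrinsic_difference(1.0, 0.5)   # Eq. (6): integrated cause information
phi_part = min(phi_e, phi_c)             # Eqs. (8)-(9): phi of one part

for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(state, "Phi =", 2 * phi_part)  # Eq. (10): 2.0 for every dyad state
```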
To understand this distinction intuitively, compare experiencing a green screen with experiencing a blue screen. It might be that these two experiences do not correspond to any difference in \(\Phi\) (why would changing only the color change the amount of consciousness?). Now imagine that we put a subject into a superposition of experiencing a blue screen and experiencing a green screen. By assumption this is not a \(\Phi\) superposition, but it is clearly a superposition of distinct conscious experiences. One might doubt that distinct _human_ states of consciousness could ever have identical \(\Phi\)[26], but IIT allows for this in AI, and IIT3.0 and IIT4.0 predict that this is indeed the case for our simple dyad, as we now explain.
Calculating the state of consciousness (Q-shape) of the feedback dyad
If two qualitatively distinct states of consciousness are quantitatively identical (i.e. they have identical \(\Phi\)), then their distinctness must come down to the different ways in which each state generates that \(\Phi\)-value. This difference is what is captured in a Q-shape.1 In this section we define Q-shapes for all four states of the dyad. We show that these Q-shapes are distinct. It follows that IIT (as presently formulated) must treat these states as corresponding to qualitatively distinct states of consciousness. Finally, we discuss some differences in how Q-shapes are understood in IIT3.0 versus IIT4.0, which will be relevant to the collapse model proposed in the next section.
Footnote 1: This has come under various labels in the literature. In [40] it is primarily referred to as a “maximally irreducible conceptual structure (MICS)”. But it is also referred to as a “shape in qualia space”, and so we adopt the simpler terminology, “Q-shape”. In [4] it is referred to as a “\(\Phi\)-structure”.
The dyad states \((1,0)\) and \((0,0)\) each have \(\Phi=2\), but for different reasons. This can be seen by partitioning the dyad, replacing some of the parts by noise, as defined above, and then noting that \((1,0)\) and \((0,0)\) induce different forward and backward probability distributions. These different distributions lead to different Q-shapes. In the general case of more complex systems, we also have to weigh the parts according to their individual values of \(\phi\). The simple structure of the dyad allows us to bypass this (since \(\phi(A)=\phi(B)=1\)), but we will return to the more general case in the next section.
We begin with part A, when the dyad is in state \((1,0)\). The prescription of _partitioning_ means that we replace the complement of A (that is, B) by noise, i.e. an equiprobable distribution of \(0\) and \(1\), while keeping \(A\) in state \(1\). Evolving this forward in time, we obtain a probability distribution \((0,\frac{1}{2},0,\frac{1}{2})\), where we have labelled the four states in lexicographical order: \((0,0),(0,1),(1,0),(1,1)\). Evolving it backwards in time, i.e. retrodicting the dyad's state at one time step earlier, we obtain exactly the same probability distribution. This gives us the first
two rows in the Q-shape matrix
\[Q(1,0)=\left(\begin{array}{cccc}0&\frac{1}{2}&0&\frac{1}{2}\\ 0&\frac{1}{2}&0&\frac{1}{2}\\ \frac{1}{2}&\frac{1}{2}&0&0\\ \frac{1}{2}&\frac{1}{2}&0&0\end{array}\right). \tag{11}\]
The third and fourth row are the forward (effect) and backward (cause) probability distributions that we obtain if we consider the subsystem B instead, keeping it in state 0 and replacing A by noise as above. Thus, the Q-shape of a given state (such as \((1,0)\)) is a collection of four probability distributions over the four dyad states, represented by the four rows in our representation matrix.
Performing the calculation for the other dyad states (which each have \(\Phi=2\) due to the two parts always having \(\phi\)=1), we obtain
\[Q(0,0)=\left(\begin{array}{cccc}\frac{1}{2}&0&\frac{1}{2}&0\\ \frac{1}{2}&0&\frac{1}{2}&0\\ \frac{1}{2}&\frac{1}{2}&0&0\\ \frac{1}{2}&\frac{1}{2}&0&0\end{array}\right),\;\;Q(0,1)=\left(\begin{array} []{cccc}\frac{1}{2}&0&\frac{1}{2}&0\\ \frac{1}{2}&0&\frac{1}{2}&0\\ 0&0&\frac{1}{2}&\frac{1}{2}\\ 0&0&\frac{1}{2}&\frac{1}{2}\end{array}\right),\;\;Q(1,1)=\left(\begin{array} []{cccc}0&\frac{1}{2}&0&\frac{1}{2}\\ 0&\frac{1}{2}&0&\frac{1}{2}\\ 0&0&\frac{1}{2}&\frac{1}{2}\\ 0&0&\frac{1}{2}&\frac{1}{2}\end{array}\right). \tag{12}\]
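The matrices in Eqs. (11) and (12) can be reproduced with the short sketch below, which implements the partition-and-noise prescription directly. The basis ordering and the function names are ours; the backward (cause) distribution equals the forward (effect) one because the SWAP is its own inverse.

```python
import numpy as np

STATES = [(0, 0), (0, 1), (1, 0), (1, 1)]          # lexicographic order
INDEX = {s: i for i, s in enumerate(STATES)}
swap = lambda s: (s[1], s[0])

def forward_dist(keep, value):
    """Distribution over the next dyad state with one channel fixed and the other noised."""
    p = np.zeros(4)
    for other in (0, 1):
        now = (value, other) if keep == 'A' else (other, value)
        p[INDEX[swap(now)]] += 0.5
    return p

def q_shape(a, b):
    rows = [forward_dist('A', a), forward_dist('A', a),    # effect and cause rows for A
            forward_dist('B', b), forward_dist('B', b)]    # effect and cause rows for B
    return np.vstack(rows)

print(q_shape(1, 0))   # reproduces Eq. (11); q_shape(0, 0) etc. reproduce Eq. (12)
```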
Our Q-shapes are not really "shapes"; they are just matrices of probability distributions. But we can turn them into shapes by following the IIT3.0 prescription described in [40] (see especially Figures 10-12). To obtain such visualizations for our dyad states, we simply interpret two probability distributions over the dyad's state space (which have four real entries each) as an element of the eight-dimensional vector space \(\mathbb{R}^{8}\). This is the phase space of the dyad. Let us use this to build the "shape" corresponding to \(Q(1,0)\) from equation 11 above. Consider the first two rows of that matrix. They determine the location of part A. The last two rows determine the location of part B. This gives us two points in the eight-dimensional space. Since \(\phi=1\) in all our cases, it does not help us to distinguish Q-shapes, so we have ignored it.
IIT3.0 therefore predicts that the dyad is (minimally) conscious, and can be in one of four qualitatively distinct conscious states. It is natural therefore to wonder what it is like to be the dyad, and what these qualitative differences actually consist of. This is a question that
IIT actually aims to answer. That is, IIT wants to be able to say something about what the experience of any given conscious system is like, especially when the system is incapable of verbal reports. The general idea is to extrapolate from features of our own Q-shapes.
For example, consider what it is like to be an echolocating bat. In [39] it was famously argued that this question is intractable. However, more recently in [49] it was argued that IIT makes it tractable. The idea is to consider the general properties of human visual experience Q-shapes and human auditory experience Q-shapes. Then, we compare them with bat experience Q-shapes. If bat experience Q-shapes are "more similar" to, say, human auditory experience Q-shapes, then we can say something about what it is like to be a bat (it is more like human auditory experience than human visual experience). Of course for both human and bat experience, deriving exact Q-shapes is far too complicated. Consequently, there is also no straightforward way to compare the dyad Q-shapes with (aspects of) our Q-shapes.
Nonetheless, there is a curious discussion about this in [40] (see Figure 19), that considers a system that is only slightly more complex than our dyad, which they call a "photodiode". It also involves two parts, labelled 'D' and 'P', that specify each other's states at each time step. (The main difference is that D receives two external inputs and has a threshold \(\geq\) 2. All connections have weight 1. Meanwhile P serves as a memory for the previous state of D and its feedback to D serves as a predictor of the next external input by effectively decreasing the threshold of D.) Despite these differences, its Q-shapes are very similar to our dyad's Q-shapes. They also involve two points in an 8D space. About its experience, they say the following:
"It is instructive to consider the quality of experience specified by such a minimally conscious photodiode. [...] D says something about P's past and future, and P about D's, and that is all. Accordingly, the shape in qualia space is a constellation having just two [points], and is thus minimally specific. [...] Moreover, the symmetry of the [Q-shape] implies that the quality of the experience would be the same regardless of the system's state: the photodiode in state DP=00, 01, or 10, receiving one external input, generates exactly the same [Q-shape] as DP=11. In all the above cases, the experience might be described roughly as "it is like this rather than not like this", with no further qualifications. The photodiode's experience is thus both quantitatively and
qualitatively minimal."
If all four states of the photodiode have the same Q-shape, then they must all correspond to the same probability distributions. For as they say (in the IIT3.0 jargon), the probability distributions (or "cause effect repertoires") for each part (or each "concept") specifies what each part "contributes to the quality of the experience". (Meanwhile, the \(\phi\) of each part is said to be "how much" the part is present in experience.) But as we have seen, our feedback dyad does not yield this result: the four possible states correspond to distinct Q-shapes. It is therefore not possible to simply describe each of the four possible conscious states of the dyad as "it is like this rather than not like this". What could the differences in our four dyad Q-shapes possibly translate to in experience? These are difficult questions for IIT.
We have mostly followed the IIT3.0 rather than the IIT4.0 prescription for building Q-shapes. In IIT4.0, they are somewhat simpler, in that they replace probability distributions with states (see equation (56) in [4]). In particular, the IIT4.0 Q-shape of any dyad state is given by the \(\phi\)-values of A and B as well as the states that these \(\phi\)-values were maximized over. So in the case of \(Q(1,0)\), A and B both have \(\phi=1\); for A this was maximized over B being in state 1 (i.e. \(b=1\)), while for B this was maximized over A being in state 0 (i.e. \(a=0\)). For \(Q(0,0)\), A and B both have \(\phi=1\); for A this was maximized over B being in state 0 (i.e. \(b=0\)), while for B this was maximized over A being in state 1 (i.e. \(a=1\)). The four states of the dyad therefore correspond to distinct Q-shapes in IIT4.0, consistently with IIT3.0.
The choice of how to represent Q-shapes here seems somewhat arbitrary, as both options satisfy the constraint of identifying differences in how the parts contributed to an overall \(\Phi\)-value for the system. However, as we explain in the next section, the IIT3.0 choice is much better suited for a certain application of IIT: defining a fully general consciousness-collapse model.
## 5 The simplest consciousness-collapse model
In [17] a dynamical collapse model is proposed in which Q-shape superpositions are unstable and tend to collapse. The following general form for continuous collapse models ([10, p.27]) is used:
\[d\psi_{t}=[-i\hat{H}_{0}dt+\sqrt{\lambda}(\hat{A}-\langle\hat{A}\rangle_{t}) dW_{t}-\frac{\lambda}{2}(\hat{A}-\langle\hat{A}\rangle_{t})^{2}dt]\psi_{t}. \tag{13}\]
The first term on the right-hand side of the equation represents Schrodinger evolution, while the remaining two terms represent the collapse evolution. Here, \(\hat{H}_{0}\) is the Hamiltonian of the system, \(\lambda\) is a real-valued parameter governing the collapse rate, \(\hat{A}\) is a collapse operator whose eigenstates the system collapses towards, \(\langle\hat{A}\rangle_{t}\) is its expected value at time \(t\), and \(W_{t}\) is a noise process which ensures that collapse happens stochastically at a rate determined by a measure of difference between the superposed \(\hat{A}\) eigenstates.
The pure state \(\rho_{t}^{W}:=|\psi_{t}\rangle\langle\psi_{t}|\) therefore evolves stochastically. All statistical predictions that we can extract from this state are linear in \(\rho_{t}^{W}\) due to the Born rule. Hence, given a single realization of the process (13), all statistical predictions (say, about outcomes of any measurement that we might decide to perform at some point while the process unfolds) can be computed from \(\rho_{t}:=\mathbb{E}[\rho_{t}^{W}]\)[10]. As a consequence of (13), this resulting state evolves according to the Lindblad equation
\[\frac{d}{dt}\rho_{t}=-i[\hat{H}_{0},\rho_{t}]-\frac{\lambda}{2}[\hat{A},[\hat {A},\rho_{t}]] \tag{14}\]
(for the derivation see e.g. [10]). Hence, the system can evolve via Schrodinger dynamics, via collapse, or via some combination of the two. To understand the collapse term we can ignore the Schrodinger dynamics term by setting its Hamiltonian to zero, \(\hat{H}_{0}=0\). The collapse term only has an effect when the system is in a superposition of eigenstates of \(\hat{A}\). In this situation, the double commutator will be non-zero and the state will evolve. The "speed" at which it evolves is a function of the eigenvalues \(a_{i}\) of \(\hat{A}\). This is because the \((i,k)\)th matrix entry of the double commutator in \(\hat{A}\)'s eigenbasis is
\[[\hat{A},[\hat{A},\rho]]_{ik}=\rho_{ik}(a_{i}-a_{k})^{2}. \tag{15}\]
The dampening of the off-diagonal elements of \(\rho\) occurs at a rate that grows with \((a_{i}-a_{k})^{2}\) where if \(a_{i}\neq a_{k}\) the system is in a superposition. We see that the eigenbasis of \(\hat{A}\) determines the collapse basis, i.e. the basis in which the state becomes "classical", while its eigenvalues tell us which superpositions of pairs of such states are removed more quickly (namely, those with large \((a_{i}-a_{k})^{2}\)).
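As a sanity check of Eq. (15), the sketch below integrates the Lindblad equation (14) with \(\hat{H}_{0}=0\) for an arbitrary diagonal collapse operator and compares one off-diagonal element of \(\rho_{t}\) with the predicted decay \(\rho_{ik}(0)\,e^{-\lambda(a_{i}-a_{k})^{2}t/2}\). The eigenvalues and time step are illustrative choices only.

```python
import numpy as np

lam = 1.0
a = np.array([0.0, 1.0, 2.0, 5.0])      # illustrative eigenvalues of the collapse operator
A = np.diag(a)
psi = np.ones(4) / 2.0                  # equal superposition of the four basis states
rho = np.outer(psi, psi)

dt, steps = 1e-3, 2000
for _ in range(steps):                  # Euler integration of Eq. (14) with H_0 = 0
    rho = rho - dt * (lam / 2) * (A @ A @ rho - 2 * A @ rho @ A + rho @ A @ A)

t = dt * steps
i, k = 1, 2
print(rho[i, k])                                         # ~0.092
print(0.25 * np.exp(-lam * (a[i] - a[k]) ** 2 * t / 2))  # predicted decay, ~0.092
```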
Let us now use this prescription to construct the simplest possible consciousness collapse model for the dyad. Subsequently, we will compare this with the more general, but more involved
approach in [17]. For the moment, let us only mention that our simple model contains only a _single_ collapse operator, whereas the one in [17] involves several such operators, generalizing Eq. (13). We will say more about the similarities and differences below.
The four states of the dyad are mutually distinct states of consciousness, spanning the total Hilbert space. Therefore, we expect a consciousness-collapse model to lead to a state for large times \(t\) that is diagonal in that basis. Therefore, our collapse operator \(\hat{Q}\) will have the form
\[\hat{Q}=\lambda_{00}|00\rangle\langle 00|+\lambda_{01}|01\rangle\langle 01|+ \lambda_{10}|10\rangle\langle 10|+\lambda_{11}|11\rangle\langle 11|, \tag{16}\]
with four eigenvalues \(\lambda_{ij}\). Any consciousness-collapse model should arguably imply the following principle for the choice of those eigenvalues:
_If two states of the dyad (say, \(ij\) and \(kl\)) are qualitatively very different states of consciousness, then superpositions of these states should vanish very quickly, i.e. \(|\lambda_{ij}-\lambda_{kl}|\) should be very large._
That is, it is natural to allow superpositions of "qualitatively similar" states to persist for longer, while qualitatively different states must decohere quickly.
For a quantitative application of this prescription, we need a way to compare states of consciousness, i.e. a distance measure on Q-shapes. Since Q-shapes are collections of probability distributions, it is natural to define their distance in terms of distance measures on probability distributions, which is a classical and well-studied topic in information theory. The preferred distance measures on probability distributions in IIT have changed in almost every successive version. IIT2.0 used the well-known Kullback-Leibler divergence. IIT3.0 used Earth Mover's distance [42]. IIT4.0 uses the intrinsic difference measure from section 3. IIT3.0's measure was explicitly turned into a generalized distance measure for Q-shapes. IIT4.0's measure is not so well suited for this task, a point we will return to at the end of the section.
A natural choice is to define the distance of two Q-shapes \(Q=(q_{1},q_{2},q_{3},q_{4})^{\top}\) (i.e. with rows \(q_{1},\ldots,q_{4}\)) and \(\tilde{Q}=(\tilde{q}_{1},\tilde{q}_{2},\tilde{q}_{3},\tilde{q}_{4})^{\top}\) as
\[{\cal D}(Q,\tilde{Q}):=\sum_{i=1}^{4}{\cal D}(q_{i},\tilde{q}_{i}), \tag{17}\]
where \({\cal D}\) is some choice of distance measure on the set of probability distributions. That is, the
distance of two Q-shapes is the sum of the distances of their probability distributions. (This is precisely the form of IIT3.0's extended Earth mover's distance measure.)
Now we have a large choice of possible distance measures \({\cal D}\) at our disposal. However, note that the four Q-shapes of the dyad (Eqs. (11) and (12)) consist of a small variety of very simple probability distributions only: all entries are 0 or \(\frac{1}{2}\), and any two corresponding rows (rows with the same index in two Q-shapes) are either equal, or they differ in all 4 entries. Two identical rows must have distance zero. Furthermore, it is natural to demand that every two probability distributions arising as rows in these Q-shapes that differ in _all four_ places all have the same distance, which we can set to unity by a choice of scaling factor. For example,
\[{\cal D}\left((0,\tfrac{1}{2},0,\tfrac{1}{2}),(\tfrac{1}{2},0,\tfrac{1}{2},0) \right)=1.\]
We can then determine the distances between all pairs of Q-shapes of the dyad and obtain the following values, writing \({\cal D}(Q,\tilde{Q})\) as the \(Q\tilde{Q}\)-entry of a table:
|        | Q(0,0) | Q(0,1) | Q(1,0) | Q(1,1) |
|--------|--------|--------|--------|--------|
| Q(0,0) | 0      | 2      | 2      | 2      |
| Q(0,1) | 2      | 0      | 4      | 2      |
| Q(1,0) | 2      | 4      | 0      | 2      |
| Q(1,1) | 2      | 2      | 2      | 0      |
Let us now return to our consciousness-collapse principle. Formulating it in terms of this distance measure, it reads: _If the distance \({\cal D}\) between two Q-shapes \(Q(i,j)\) and \(Q(k,l)\) is large, then the distance between the eigenvalues \(\lambda_{ij}\) and \(\lambda_{kl}\) of the collapse operator must also be large_.
This desideratum could always be satisfied by the arbitrary prescription to make all eigenvalues extremely large and distant from each other. However, this would typically induce almost-instantaneous collapse, a behavior that we do not expect for simple systems such as the dyad. Thus, we are searching for a choice of eigenvalues that is as tame as possible while still satisfying the above postulate.
This leads us to define the eigenvalues in terms of an optimization problem:
\[\begin{aligned}&\text{Minimize }\lambda_{00}+\lambda_{01}+\lambda_{10}+\lambda_{11}\\ &\text{subject to }\lambda_{ij}\geq 0,\quad|\lambda_{ij}-\lambda_{kl}|\geq{\cal D}(Q(ij),Q(kl)).\end{aligned}\]
This prescription keeps the collapse behavior "tame" by demanding that the eigenvalues are not arbitrarily large, but only as large as they need to be (in their total sum) to satisfy our principle for all pairs of Q-shapes. Note that the total time scale of the collapse is not determined by \(\hat{Q}\) and its eigenvalues, which do not have any physical units. Instead, it is determined by the noise term of (13), i.e. the parameter \(\lambda\) in (14). This will remain a parameter of the collapse model that needs to be determined experimentally. The above considerations tell us only the _relative_ speed at which superpositions between distinct Q-shapes are suppressed, whereas the _total_ speed would depend on \(\lambda\) and hence on further considerations as to which states of consciousness are implausible to remain in superposition for significant amounts of time because of, say, human experience.
As we show in appendix C, this optimization problem has twelve solutions: one of them is
\[\lambda_{00}=2,\ \lambda_{01}=0,\ \lambda_{10}=4,\ \lambda_{11}=6,\]
and the other solutions are permutations of this one (\((2,0,4,6)\)), such as \((6,4,0,2)\) -- indeed, all permutations of these four numbers such that \(|\lambda_{01}-\lambda_{10}|\geq 4\). This degeneracy can be understood as a consequence of the symmetry of the problem: for example, the table of pairwise distances does not change if we exchange \(Q(0,0)\) and \(Q(1,1)\). Indeed, these solutions do not only minimize the sum of the \(\lambda_{ij}\), but they also minimize the expression
\[\frac{1}{2}\sum_{i,j,k,l}|\lambda_{ij}-\lambda_{kl}|=|\lambda_{00}-\lambda_{01}|+|\lambda_{00}-\lambda_{10}|+|\lambda_{00}-\lambda_{11}|+|\lambda_{01}-\lambda_{10}|+|\lambda_{01}-\lambda_{11}|+|\lambda_{10}-\lambda_{11}|,\]
i.e. the total sum of the pairwise collapse rates, under the assumption (that we can always make) that one of the \(\lambda_{ij}\) is zero.
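Both properties (the minimal total sum and the twelve-fold degeneracy) are easy to verify directly. The short brute-force search below is our own illustrative sketch in Python (the state labels and variable names are ours), using only the distance table given above.

```python
import itertools

# Pairwise Q-shape distances, copied from the table above.
states = ["00", "01", "10", "11"]
D = {("00", "01"): 2, ("00", "10"): 2, ("00", "11"): 2,
     ("01", "10"): 4, ("01", "11"): 2, ("10", "11"): 2}

def feasible(lam):
    """Check |lambda_ij - lambda_kl| >= D for every pair (non-negativity holds by construction)."""
    return all(abs(lam[a] - lam[b]) >= d for (a, b), d in D.items())

# Integer grid search; it suffices here because an optimum has one eigenvalue
# at zero and consecutive gaps of at least two.
best, minimizers = None, []
for values in itertools.product(range(9), repeat=4):
    lam = dict(zip(states, values))
    if not feasible(lam):
        continue
    s = sum(values)
    if best is None or s < best:
        best, minimizers = s, [values]
    elif s == best:
        minimizers.append(values)

print(best)                         # 12: minimal total sum of eigenvalues
print(len(minimizers))              # 12 optimal assignments
print((2, 0, 4, 6) in minimizers)   # True: the solution quoted in the text
```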
We can simply pick one of the twelve solutions and use it to define our collapse operator. For the sake of the argument, let us pick the above, but the choice does not matter for the following discussion.
Let us interpret the result by looking at some example collapse rates. We have \({\cal D}(Q(00),Q(01))=2\) which is small, and \(|\lambda_{00}-\lambda_{01}|=2\) is also small (and, indeed, identical). Superpositions of the two dyad states \(00\) and \(01\) can thus remain stable for a relatively long time. On the other hand,
\({\cal D}(Q(01),Q(10))=4\) is large, and so is \(|\lambda_{01}-\lambda_{10}|=4\). Hence, superpositions between the dyad states 01 and 10 will be killed off more quickly.
However, consider the two dyad states 01 and 11. Their distance is small, \({\cal D}(Q(01),Q(11))=2\), and our principle demands that the corresponding difference of eigenvalues (i.e. the associated collapse rate) is at least as large as that. However, it is actually \(|\lambda_{01}-\lambda_{11}|=6\), which is much larger than required. Thus, any superposition of these two dyad states would fall off much faster than what would be expected by considering the difference between their Q-shapes alone.
We can understand this behavior by noting that the \(n=4\) dyad states lead to \(n(n-1)/2=6\) distance values (the table above), from which \(n=4\) eigenvalues of the collapse operator have to be determined. Thus, every value of \(|\lambda_{ij}-\lambda_{kl}|\) must depend on _more_ than just the number \({\cal D}(Q(ij),Q(kl))\). If our principle is satisfied, then a large value of the latter implies a large value of the former, but the converse is not in general true. The quantum limitation of only having \(n\) eigenvalues introduces additional constraints.
It seems that this must be a general phenomenon: if we have \(n\) distinct Q-shapes, but \(m\ll n/2\) collapse operators, then the \(m\cdot n\) eigenvalues are smaller in number than the \(n(n-1)/2\) distance values. There must hence be pairs of Q-shapes whose superpositions collapse more quickly than their mere qualitative distance as states of consciousness would suggest. Superposition resistance hence cannot simply mirror the structure of conscious experience; it is additionally constrained by the general structure of quantum mechanics.
We can now see how the elements of the construction above are realized in greater generality in [17]. Here, there is not a single Q-shape operator whose eigenstates are all the classical Q-shapes. Rather, a Q-shape is associated with an ensemble of orthogonal self-adjoint collapse operators. The eigenvalue of each operator does not pick out a Q-shape, but an element of a Q-shape, which is either an entry in a probability matrix or a \(\phi\)-value. This solves the above problem and allows for superposition resistance to resemble the structure of conscious experience more closely. However it does so at the cost of having a very complex model.
To illustrate this complexity, consider how many collapse operators we need to capture all the details of a Q-shape of a classical system. A system of \(n\) elements will have \(2^{n}-1\) subsystems. So if we have two elements, as with our dyad, we have three subsystems (A, B, and AB). We were able to ignore AB in our simple case but we cannot do that in general. Every element has _d_
possible states giving a total of \(d^{n}\) possible states for the system. Each subsystem is associated with two probability distributions, as we saw in the previous section, and one \(\phi\)-value. To capture all of this, the number of collapse operators we need is \((2^{n}-1)\times(2\times d^{n}+1)\). So for our _classical_ dyad, where \(n=d=2\), we need 27 collapse operators. It would be extraordinary if Nature were to operate at such a high level of complexity for such a simple system.
But this still is not sufficient when dealing with quantum systems, since qubit elements do not have \(d=2\) possible states, but have infinitely many possible pure states. It is for this reason that the model in [17] formulates everything in terms of the QIIT found in ([53],[27]). Here each subsystem is associated with two appropriate density matrices instead of two appropriate classical probability distributions. The density matrices for the quantum dyad have more entries than the classical probability distributions associated with the classical dyad, so we need more collapse operators. In particular, we now need \((2^{n}-1)\times(2\times d^{2n}+1)\) collapse operators in general, and so 99 collapse operators for our dyad.
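For reference, the two counting formulas are straightforward to evaluate; the few lines below are our own sketch (not code from [17]) and simply reproduce the dyad numbers quoted above.

```python
def classical_count(n, d):
    # (2**n - 1) subsystems, each with two distributions over d**n states plus one phi-value
    return (2**n - 1) * (2 * d**n + 1)

def quantum_count(n, d):
    # in the QIIT version, each distribution is a density matrix with d**(2n) entries
    return (2**n - 1) * (2 * d**(2 * n) + 1)

print(classical_count(2, 2))  # 27 collapse operators for the classical dyad
print(quantum_count(2, 2))    # 99 collapse operators for the quantum dyad
```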
The use of QIIT raises a further complication. In QIIT, every quantum system is assigned a well-defined Q-shape (which in many cases may be the null Q-shape) whether or not the system is in a superposition of classical Q-shapes. That is, for QIIT, _distinct states of consciousness do not always correspond to mutually orthogonal quantum states_, and this makes it in general impossible to have physical processes whose observable behavior depends on all the properties of those states of consciousness (because non-orthogonal states cannot be perfectly distinguished). In particular, this excludes collapse models where the rate of collapse is proportional to the "size" of the superposition, e.g. to the qualitative difference of the superposed conscious states. The ensemble of collapse operators defined in equation (2) of [17] therefore adds an additional constraint that restricts these operators to just those states that are associated with classical Q-shapes.
Thus, in an attempt to be completely general, the collapse model in [17] became very complex. But as has been demonstrated here, if one just wants a collapse model for some simple system whose physical properties are known, then many of those complexities can be bypassed, as we can instead define a single collapse operator as we have done here.
Finally, we note that specific predictions of one's collapse model may vary with the use of IIT formalism, which is constantly being updated. The choice of distance measure in particular
has undergone significant revision.
IIT2.0 used the Kullback-Leibler divergence to measure the distance between probability distributions. But that was rejected in part because it is not symmetric.
IIT3.0 adopted Earth Mover's distance (EMD). This is symmetric and, as shown in [40], yields different results than the IIT2.0 measure, even for the dyad. The EMD can easily be generalized to become a measure of distance between Q-shapes, as in equation 5. This Q-shape distance measure was essential to IIT3.0 because it was used to calculate \(\Phi\) (by measuring the distance between the Q-shape of a system's state and the Q-shape of that state partitioned). But the EMD has recently been abandoned, in part because of problems raised in [7]. Alternative measures can be found in [45].
IIT4.0 adopts the intrinsic difference measure, described in IIT4.0 and in section 3 above, as its preferred distance measure. The intrinsic difference is infinite if the denominator is zero. This is avoided when the denominator involves some partition and therefore white noise. But measuring the distance between two Q-shapes doesn't involve any partitioning. So, the intrinsic difference is not suitable for measuring Q-shape distance. Consequently, IIT4.0 does not supply such a distance measure. This led to a simpler calculation of \(\Phi\) in IIT4.0: the system \(\Phi\) is a sum of subsystem \(\phi\)-values. QIIT in its most recent version is consistent with IIT4.0 B.
## 6 Physically implementing the dyad
Consider the following simple implementation of the dyad as depicted in Figure 3: channels A and B are optical cables and the dyad does nothing more than cross those cables, without contact. The outputs are then fed back into the inputs, creating a kind of feedback cycle. We have two photons in the cables, and each of them can carry one of two perfectly distinguishable "classical" states, corresponding to horizontal \(|0\rangle\) or vertical polarization \(|1\rangle\). What horizontal or vertical means is determined by an external reference frame; for what follows, the exact choice of reference is unimportant, except that the physical situation must tell us what we mean by both photons carrying _identical_ or _orthogonal_ polarization directions (e.g. \(|00\rangle\) in the first case, and \(|01\rangle\) in the second). It is clear that we are not restricted to preparing the photons in the classical basis states, but we can prepare them in arbitrary superpositions, such as that of Equation 3.
This is a necessary condition to implement "Schrödinger's dyad" as introduced in the previous sections.
However, if we identify our basic units as the photons that traverse the cables, then it may not seem like IIT applies here, since IIT requires causal relationships. In particular, to be a basic unit of IIT, something should have the power to "take a difference" (be affected by something) and "make a difference" (produce effects on something) [4]. The concern with this implementation is that the photons do not take or make a difference, because they never change their polarization states. Indeed, under this interpretation of the physical setup, we would not even have implemented the dyad, but another system (two bits and an identity gate).
On the other hand, this depends on counting the photons as our basic causal units. If we instead identify our basic units as the polarization qubits at the physical locations \(A\) and \(B\) in space, we get a different result. In particular, we may say that the photon polarization state at \(A_{t}\) causes the state at \(B_{t+1}\) and was caused by the state at \(B_{t-1}\). Under this way of identifying our basic units, the system has non-zero \(\Phi\) and is the simplest conscious system according to IIT. This may even fit well with interpretations of modern physics in which the basic causal objects are spacetime points [43].
IIT does not want the \(\Phi\) or Q-shape of a system to depend on some arbitrary choice: these are meant to be objective properties. So at least one of the above two causal interpretations of the system must be ruled out. IIT does not give clear criteria for what to do in such a situation. However, one option is suggested by the IIT4.0 _principle of maximal existence_, which states that "what exists is what exists the most". This ontological principle is used to motivate the exclusion principle, which effectively states that if two overlapping sets of units have non-zero \(\Phi\), then only the system with _maximal_\(\Phi\) is conscious. Thus, if there are multiple interpretations of what the causal units are in the first place, we might similarly only consider the interpretation that yields the greater \(\Phi\).
Figure 3: A possible implementation of the dyad. The dashed line and labels indicate that the systems A and B are associated with regions of space.
In that case, we have found a very simple implementation of the dyad by identifying the qubits with locations in space.
## 7 Conclusion
In this paper we have described some simple predictions of IIT that have enabled us to make a number of observations.
First, we showed that either IIT is false, or a simple system like the feedback dyad is conscious and can easily be put into a quantum superposition of conscious states (i.e. a superposition of Q-shapes). This result was shown to be robust across successive IIT formalisms.
Second, we identified the simplest consciousness-collapse (or Q-shape-collapse) model. It involves a single Q-shape collapse operator, whose eigenstates are the four possible states of the dyad. For the model to do what is needed (make the collapse rate proportional to a measure of difference between the superposed Q-shapes), we found that the four eigenvalues must depend on six distance values.
In such models, the rate of collapse of a superposition of two states of consciousness must therefore depend on more than the relation between the two states. More complex models may avoid this by defining an ensemble of orthogonal Q-shape collapse operators. However, this can get very complicated, so for practical purposes (like testing Q-shape-collapse models), the prescription that we have provided here may be more useful.
Finally, we have made several observations about the general structure of IIT. For example, we argued that while treating gates as having states is permissible if gates are neurons, this does not work in general, and especially not for computers, where gates operate on systems that possess states. This is especially clear for quantum computers, where qubits, and not gates, are superposed. We have also noted that to apply IIT to a physical system, we need a specification of the basic causal units in the system. Insofar as physics does not specify such things, IIT is not fully applicable to physical systems.
In further research, it would be interesting to investigate whether any existing quantum computers (or other quantum systems) can maintain states like the \(t_{0}\) state of our quantum dyad, and for how long. Such systems may place bounds on the fundamental parameters of
IIT-based consciousness-collapse models.
## Acknowledgments
We are grateful to Thomas D. Galley for discussions, and to Larissa Albantakis for helpful feedback on an earlier draft. This research was supported by grant number FQXi-RFP-CPW-2015 from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation. Moreover, this research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science, and Economic Development, and by the Province of Ontario through the Ministry of Colleges and Universities.
|
2310.00087 | Optically-trapped microspheres are high-bandwidth acoustic transducers | We report on the use of an optically-trapped microsphere as an acoustic
transducer. A model for the hydrodynamic coupling between the microsphere and
the surrounding acoustic fluid flow is combined with thermo-mechanical
calibration of the microsphere's position detection to enable quantitative
acoustic measurements. We describe our technique in detail, including the
self-noise, sensitivity, and minimum detectable signals, using a model
appropriate for both liquid and gas environments. We then test our approach in
an air-based experiment and compare our measurements with two state-of-the-art
commercially-available acoustic sensors. Piezoelectrically-driven bursts of
pure tones and laser ablation provide two classes of test sounds. We find
accurate measurements with a bandwidth of 1 MHz are possible using our
technique, improving by several orders of magnitude the bandwidth of previous
flow measurements based on optically-trapped microspheres. | Logan E. Hillberry, Mark G. Raizen | 2023-09-29T18:56:02Z | http://arxiv.org/abs/2310.00087v1 | # Optically-trapped microspheres are high-bandwidth acoustic transducers
###### Abstract
We report on the use of an optically-trapped microsphere as an acoustic transducer. A model for the hydrodynamic coupling between the microsphere and the surrounding acoustic fluid flow is combined with thermo-mechanical calibration of the microsphere's position detection to enable quantitative acoustic measurements. We describe our technique in detail, including the self-noise, sensitivity, and minimum detectable signals, using a model appropriate for both liquid and gas environments. We then test our approach in an air-based experiment and compare our measurements with two state-of-the-art commercially-available acoustic sensors. Piezoelectrically-driven bursts of pure tones and laser ablation provide two classes of test sounds. We find accurate measurements with a bandwidth of 1 MHz are possible using our technique, improving by several orders of magnitude the bandwidth of previous flow measurements based on optically-trapped microspheres.
## I Introduction
Owing to their micro-manipulation and force transduction capabilities, optical tweezers have become an indispensable tool in a variety of scientific fields [1]. By tightly focusing a laser beam, optical forces can exceed gravitational forces and thermal fluctuations to stably trap micron-scale objects [2]. In vacuum [3], optical tweezers have enabled zeptonewton force sensing [4], state-of-the-art torque sensitivity [5], and searches for new physics [6], including proposals to measure high-frequency gravity waves [7]. Also in vacuum, optical tweezers can trap and cool microspheres [8] to the motional ground state [9; 10], and have been multiplexed to arrays of hundreds of single-atom traps in a promising platform for quantum computation and simulation [11; 12]. In aqueous solution, optical tweezers can measure mechanical properties of life at the nano-scale [13; 14], such as the stepping strength of molecular motors or the rigidity of biomolecules [15; 16; 17]. Also in liquid, optical tweezers enable ultra-fast viscosity measurements [18] and Casimir force measurements [19]. In gaseous media, optical tweezers have revolutionized single-particle aerosol science [20], including absolute pressure measurements and species identification [21], mass metrology [22; 23], and single-droplet growth and freezing studies [24; 25; 26].
There further exists a body of work using optically-trapped microspheres to measure flow in liquids [27; 28; 29; 30; 31; 32]. So far, these studies have characterized low frequency (\(<500\) Hz) flows by monitoring the motion of optically-trapped microspheres with a camera or position-sensitive detector. In this Letter, we propose and demonstrate a fluid velocity measurement scheme with a bandwidth approaching 1 MHz using optically-trapped microspheres in air. Flow at such high frequencies is generally associated with acoustic radiation. A schematic of our optically-trapped-microsphere acoustic transducer is shown in Fig. 1. Other non-traditional acoustic sensors have recently been studied, including optical micro-resonators [33; 34], and laser deflection or interference methods [35]. As we will see, our method uniquely combines self-calibration, high-bandwidth, and high-sensitivity to acoustic velocity waves (rather than pressure waves).
Our method builds on earlier work that first measured the instantaneous velocity of a thermally-fluctuating microsphere in air [36]. This same system is not only sensitive to thermal fluctuations, but also to acoustic perturbations.
Figure 1: Schematic depiction of the experimental setup. A 1064 nm laser is split by a polarizing beamsplitter. The \(p\)-polarized beam is sent through an acousto-optic modulator (AOM) to shift its frequency by 80 MHz, thereby eliminating interference effects in the trap. The \(p\)-polarized beam is then steered counter-propagating to the \(s\)-polarized beam and both are focused to the same point between twin aspheric lenses (numeric aperture 0.7), generating a stable optical trap for silica microspheres in air. After passing through the trap, the \(s\)-polarized beam is separated with a second polarizing beamsplitter and sent to the detection system. For detection, a sharp, D-shaped cut mirror splits the incoming transverse mode into two halves that are sent to a balanced photo-detector (75 MHz bandwidth). Various acoustic sources provide test sounds, and additional acoustic sensors, a microphone and Microflown, are positioned just behind the trap. The entire system is enclosed in a multi-chamber acrylic box to mitigate air currents.
Two ingredients, a hydrodynamic model of the acoustic force and thermo-mechanical self-calibration, enable quantitative acoustic measurements. Since the microsphere is uniquely sensitive to high frequency velocity flows, we use two commercially-available sensors to assess our platform's capabilities: We benchmark our method in terms of accuracy and bandwidth against 1) a high-bandwidth (\(200\,\mathrm{kHz}\)) pressure microphone, and 2) a micron-scale dual-hot-wire anemometer [37; 38] (calibrated bandwidth \(20\,\mathrm{kHz}\)) that is commercially known as the _Microflown_[39].
The remainder of this paper is organized as follows: In Section II we describe the microsphere's acoustic sensing modality, including calibration, self-noise, and minimum-detectable signals. Section III reports our sound detection results. We then discuss our results within the context of other microsphere-based flow measurements and speculate on future applications in Section IV. The paper is then concluded in Section V.
## II Noise, calibration, and acoustic response
In thermal equilibrium with a reservoir fluid at finite temperature, a microsphere's position fluctuates in random and perpetual _Brownian motion_[40]. Brownian motion velocity detectors [41; 18; 36] are sensitive to both thermally fluctuating and driven fluid flows. If the resulting driven motion is larger than the random thermal motion (and detector noise), an acoustic signal is detectable. In what follows we develop a model for the acoustic signal and thermal noise of our proposed acoustic detection system.
For the general setup, consider a microsphere of radius \(R\) and density \(\rho\) harmonically bound to the coordinate origin. The microsphere mass is \(m=4\pi\rho R^{3}/3\) and the harmonic trap strength is \(\kappa\). Let the trapping fluid at temperature \(T\) have density \(\rho_{\mathrm{f}}\), speed of sound \(c_{0}\), and dynamic viscosity \(\eta\). The \(x\)-component of the system's equation of motion is
\[m\ddot{x}(t)+\kappa x(t)-F_{\mathrm{d}}[v(t)]=F_{\mathrm{ext}}(t)+F_{\mathrm{ th}}(t) \tag{1}\]
where \(v(t)=\dot{x}(t)\) is the microsphere's velocity at time \(t\), \(F_{\mathrm{d}}(v)\) is the dissipative, velocity-dependent drag force, and \(F_{\mathrm{ext}}\) is an external driving force. \(F_{\mathrm{th}}\) is the fluctuating thermal force that is related to the dissipative force through the fluctuation-dissipation theorem.
When all bounding walls are far from the sphere [42] and the fluid flow at the sphere's surface does not slip [43], the hydrodynamic drag force in the incompressible limit is [44; 45; 46]
\[F_{\mathrm{d}}[v(t)]=-\gamma_{0}\left(v(t)+\sqrt{\frac{\tau_{\mathrm{f}}}{ \pi}}\int_{-\infty}^{t}\mathrm{d}t^{\prime}\,\frac{\dot{v}(t^{\prime})}{\sqrt {t-t^{\prime}}}\right)-\frac{\delta}{2}m\dot{v}(t)\,, \tag{2}\]
where \(\gamma_{0}=6\pi\eta R\) is the Stokes friction coefficient and \(\delta=\rho_{\mathrm{f}}/\rho\) is the fluid-to-microsphere density ratio. The _vorticity diffusion time_ \(\tau_{\mathrm{f}}=R^{2}\rho_{\mathrm{f}}/\eta=9\delta\tau_{\mathrm{p}}/2\) is the amount of time it takes for vorticity -- the curl of velocity -- to diffuse across the sphere and \(\tau_{\mathrm{p}}=m/\gamma_{0}\) is the momentum diffusion time. The first, second, and third terms of Eq. (2) describe, respectively, Stokes drag (independent of \(\delta\)), viscous damping due to the flow history (proportional to \(\delta^{1/2}\)), and inertia of the mass added by the fluid that follows the microsphere (proportional to \(\delta\)). For a silica microsphere in air, \(\delta\sim 10^{-3}\ll 1\), hence Eq. (2) reduces to \(F_{\mathrm{d}}[v(t)]\approx-\gamma_{0}v(t)\).
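To make these orders of magnitude concrete, the short sketch below (ours) evaluates \(\delta\) and \(\tau_{\mathrm{f}}\); the air density is an assumed typical value, while the sphere radius, density, and viscosity are taken from the calibration quoted later in this section.

```python
R = 1.51e-6        # microsphere radius (m), from the calibration below
rho = 1.7e3        # microsphere density (kg/m^3)
rho_f = 1.2        # assumed air density (kg/m^3)
eta = 18.23e-6     # dynamic viscosity of air (Pa s)

delta = rho_f / rho            # ~7e-4, i.e. of order 1e-3
tau_f = R**2 * rho_f / eta     # vorticity diffusion time, ~1.5e-7 s
print(delta, tau_f)
```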
In the frequency domain, we may write \(F_{\mathrm{d}}[v(\omega))]=-\gamma(\omega)v(\omega)\) where \(\omega=2\pi f\) is the circular frequency, the frequency-dependent damping is [47; 48]
\[\gamma(\omega)=\gamma_{0}\left(1+\sqrt{-i\tau_{\mathrm{f}}\omega}-i\frac{\tau _{\mathrm{f}}\omega}{9}\right)\,, \tag{3}\]
and \(\sqrt{-i}=(1-i)/\sqrt{2}\) defines the square-root's branch cut.
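For numerical work, Eq. (3) translates directly into a few lines of code. The sketch below is our own (the function and variable names are ours); NumPy's principal complex square root reproduces the branch \(\sqrt{-i}=(1-i)/\sqrt{2}\) for positive frequencies.

```python
import numpy as np

def gamma(omega, eta, R, rho_f):
    """Frequency-dependent drag coefficient of Eq. (3).

    omega : angular frequency (rad/s); eta : fluid viscosity (Pa s);
    R : sphere radius (m); rho_f : fluid density (kg/m^3).
    """
    gamma0 = 6 * np.pi * eta * R            # Stokes friction coefficient
    tau_f = R**2 * rho_f / eta              # vorticity diffusion time
    x = -1j * tau_f * np.asarray(omega, dtype=complex)
    return gamma0 * (1.0 + np.sqrt(x) - x / 9.0)
```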
Next, we consider two cases: _noise_ when \(F_{\mathrm{ext}}=0\) and _signal_ when \(F_{\mathrm{th}}=0\) and \(F_{\mathrm{ext}}\) is caused by an acoustic wave.
### Noise
The thermal force is \(F_{\mathrm{th}}(t)=\sqrt{2k_{\mathrm{B}}T\gamma_{0}}\xi(t)\) where \(\xi(t)\) is a zero-mean, possibly-time-correlated [51] random variable, and \(k_{\mathrm{B}}\) is Boltzmann's constant. When \(F_{\mathrm{ext}}=0\), the equation of motion (1) may be solved in the frequency domain for the _admittance_ \(v(\omega)/F_{\mathrm{th}}(\omega)=(\gamma(\omega)-i\omega m+i\kappa/\omega)^{-1}\). The corresponding (one-sided) velocity power spectral density is given by the Kubo-Green formula [52] as
\[S_{vv}(\omega)=4k_{\mathrm{B}}T\,\mathrm{Re}\left[(\gamma(\omega)-i\omega m+i \kappa/\omega)^{-1}\right]\,. \tag{4}\]
Equation (4) describes the microsphere's thermal fluctuations and hence the inherent noise which must be overcome to detect \(F_{\mathrm{ext}}\neq 0\). However, beyond noise limitations, thermal fluctuations enable an accurate detector calibration scheme.
The split-beam detection method, depicted in Fig. 1, generates a linear voltage signal \(V(t)=\beta x(t)\) where \(\beta\) is the displacement-to-voltage calibration factor. For silica microspheres in air, the radius \(R\), temperature \(T\), and viscosity \(\eta\) can be considered known to within a couple percent [53; 22; 52]. Since \(S_{xx}=S_{vv}/\omega^{2}\) we can predict the detector's (one-sided) Brownian-motion-driven voltage power spectral density
\[S_{VV}(\omega) =\frac{\beta^{2}}{\omega^{2}}S_{vv}(\omega) \tag{5}\] \[\approx\beta^{2}\frac{4k_{\mathrm{B}}T\gamma_{0}}{(m\omega^{2}- \kappa)^{2}+\gamma_{0}^{2}\omega^{2}}\,. \tag{6}\]
The second approximate equality (6) is accurate for thermal fluctuations in air and assumes \(\gamma(\omega)\approx\gamma_{0}\).
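In practice, Eq. (6) is the model that gets fit to the measured voltage spectrum. The minimal sketch below is our own illustration, not the analysis code used for the experiment: the data arrays are placeholders, the initial guesses are only order-of-magnitude starting values, and the fixed parameters follow the numbers of Fig. 2.

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 1.380649e-23            # J/K
T = 297.1                    # K, the room temperature quoted in Fig. 2
R, eta = 1.51e-6, 18.23e-6   # sphere radius (m) and air viscosity (Pa s), from Fig. 2

def S_VV(f, beta, kappa, rho):
    """One-sided voltage PSD of Eq. (6) versus frequency f in Hz."""
    omega = 2 * np.pi * f
    m = rho * 4 * np.pi * R**3 / 3
    gamma0 = 6 * np.pi * eta * R
    return beta**2 * 4 * kB * T * gamma0 / ((m * omega**2 - kappa)**2 + gamma0**2 * omega**2)

# f_band, psd_band: averaged periodogram restricted to the 1-30 kHz fit band
# (placeholder names; supply your own measured spectrum).
# popt, pcov = curve_fit(S_VV, f_band, psd_band, p0=(2e6, 2e-5, 1.7e3))
```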
As shown in Fig. 2 (a), by averaging experimental periodograms of thermally-driven voltage signals and maximum-likelihood fitting [50, 54] to Eq. (6), we can learn [22] \(\rho=1.7(1)\,\mathrm{g/cm^{3}}\), \(\kappa=21.3(7)\,\mathrm{fN/nm}\), and \(\beta=2.1(1)\,\mathrm{mV/nm}\). At high frequencies, the spectrum (6) decays as \(\sim\omega^{-4}\) until the detector's constant noise floor \(\chi=0.49(2)\,\mu\mathrm{V}^{2}/\mathrm{Hz}\) dominates the signal. Our detector's narrow-band position sensitivity is therefore \(\sqrt{\chi}/\beta=333(21)\,\mathrm{fm}/\sqrt{\mathrm{Hz}}\). The inset of Fig. 2 (a) shows that subtle hydrodynamic effects described by Eq. (5) are perceptible in thermally driven motion above \(\sim 50\,\mathrm{kHz}\), but may be ignored for calibration purposes by restricting the fit domain. In the next section, we will calculate the response of the microsphere to a harmonic acoustic wave.
### Signal
When impinging on the trapped microsphere along the direction \(x\) of position measurement, a sound wave of fluid velocity \(u\) and acoustic pressure \(p\) applies an external force [46]\(F_{\mathrm{ext}}=F_{\nabla}(p)+F_{\mathrm{d}}(-u)\). The pressure gradient force is \(F_{\nabla}(p)=-4\pi R^{3}\nabla p/3\). Using Euler's (linearized) equation \(\nabla p=-\rho_{\mathrm{f}}\dot{\mathbf{u}}\), the pressure gradient force is \(F_{\nabla}=\delta m\dot{u}=2\gamma_{0}\tau_{\mathrm{f}}\dot{u}/9\). Taking \(F_{\mathrm{th}}=0\), one can solve the equation of motion (1) in the frequency domain for the transfer function \(H(\omega)=v(\omega)/u(\omega)\), yielding
\[H(\omega)=\frac{\gamma(\omega)-i\omega\delta m}{\gamma(\omega)-i\omega m+i \kappa/\omega}. \tag{7}\]
The transfer function, shown in Fig. 2 (b), describes the microsphere's velocity amplitude and phase relative to that of the fluid. Though \(\gamma(\omega)\approx\gamma_{0}\) is appropriate for thermal fluctuations and system calibration in air, driven motion can occur at much higher frequencies, so we retain all three terms in Eq. (3). For example, at \(1\) MHz, taking \(\gamma(\omega)\approx\gamma_{0}\) underestimates the amplitude of \(H\) by a factor of \(\sim 2\) and overestimates the phase by \(\sim\pi/6\) radians. The primary correction to \(H\) beyond \(\gamma(\omega)\approx\gamma_{0}\) comes from the history term in Eq. (3); the added mass and pressure gradient effects are both proportional to the density ratio \(\delta\) and hence small in air. We retain all terms so that our model remains valid for liquid media for which \(\delta\sim 0.1-1\).
The detector's voltage signal is converted to an acoustic velocity signal using a frequency domain deconvolution \(u(t)=\mathcal{F}^{-1}[\mathcal{F}[V(t)]/\psi_{u}(\omega)]\) where \(\mathcal{F}\) is the Fourier transform, and the microsphere's frequency-dependent velocity sensitivity is
\[\psi_{u}(\omega)=\frac{-i\beta H(\omega)}{\omega}\,. \tag{8}\]
The sensitivity is proportional to the transfer function \(H\), the calibration factor \(\beta\), and the factor \(-i/\omega\) that effects the required position-to-velocity derivative. For experimental data sampled at a rate \(1/dt\), the derivative factor consistent with a central finite difference in the time-domain is \(-i/\omega\to-idt/\sin(\omega dt)\)[55]. Acoustic pressure and velocity are related through the impedance \(Z(\omega)=p(\omega)/u(\omega)\), hence the pressure sensitivity is \(\psi_{p}=\psi_{u}/Z\). For plane acoustic waves \(Z=\rho_{\mathrm{f}}c_{0}\) is a constant. We will assume planar acoustic waves throughout and use the factor \(Z\) to freely convert between pressure and velocity.
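To illustrate how Eqs. (7) and (8) are used, the sketch below converts a recorded voltage trace into a fluid-velocity trace. It is our own minimal implementation, not the code used for the measurements: the function names and the handling of the zero-frequency bin are ours, and the Fourier sign convention should be matched to the one adopted above.

```python
import numpy as np

def H(omega, eta, R, rho, rho_f, kappa):
    """Transfer function of Eq. (7): microsphere velocity per unit fluid velocity."""
    gamma0 = 6 * np.pi * eta * R
    tau_f = R**2 * rho_f / eta
    m = rho * 4 * np.pi * R**3 / 3
    g = gamma0 * (1 + np.sqrt(-1j * tau_f * omega) - 1j * tau_f * omega / 9)
    return (g - 1j * omega * (rho_f / rho) * m) / (g - 1j * omega * m + 1j * kappa / omega)

def voltage_to_velocity(V, dt, beta, **fluid_and_trap):
    """Deconvolve a voltage trace V(t) into fluid velocity u(t) via Eq. (8)."""
    n = len(V)
    omega = 2 * np.pi * np.fft.rfftfreq(n, dt)[1:]       # skip the DC bin
    deriv = -1j * dt / np.sin(omega * dt)                # discrete -i/omega factor
    psi_u = beta * H(omega, **fluid_and_trap) * deriv    # Eq. (8)
    Vf = np.fft.rfft(V)
    U = np.zeros_like(Vf)
    U[1:] = Vf[1:] / psi_u          # the DC component carries no velocity information
    # the Nyquist bin should already be suppressed by prior low-pass filtering
    return np.fft.irfft(U, n)
```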
Commercial acoustic detectors are typically calibrated by comparing the sensor's output voltage to a well-characterized input sound under anechoic conditions. By contrast, our thermo-mechanical position calibration and hydrodynamic transfer function enable self-calibration. The sensitivity amplitudes of our commercial microphone and Microflown are provided by the manufacturers and shown in Fig. 3 compared to the sensitivity of our microsphere system.
Figure 2: (a) Experimental position power spectral density (open circles) of a \(R=1.51(5)\,\mu\mathrm{m}\) silica microsphere thermally driven by air at \(T=23.97(1)\,^{\circ}\mathrm{C}\) with a relative humidity of \(57(1)\%\), which has a viscosity \(\eta=18.23(1)\,\mu\mathrm{Pa\,s}\)[49]. The experimental spectrum is an average periodogram of \(550\) signals of length \(3\) ms. For visualization, each point of the experimental spectrum is an average over logarithmically-spaced frequency bins. Calibration is performed by fitting the voltage spectrum in the \(1\) kHz to \(30\) kHz band to Eq. (6) (dashed line). The spectrum and fit are shown here in physical units using the calibration result. The solid line uses the fit results to include hydrodynamic effects that are imperceptible up to \(\sim 50\,\mathrm{kHz}\). However, the \(50\) kHz to \(100\) kHz band (gray shaded region) does exhibit subtle hydrodynamic effects, as suggested by the data-to-theory ratio’s probability density (inset), wherein the hydrodynamic theory (solid red line) follows much more closely the expected Erlang distribution of ratios (solid black line) [22, 50]. (b) Theoretical transfer function relating microsphere velocities to fluid velocities. The red lines show the amplitude on the left axis while the black lines show the phase on the right axis. The solid line corresponds to the hydrodynamic theory while the dashed lines make the approximation \(\gamma(\omega)\approx\gamma_{0}\). The microsphere, trap, and fluid parameters are chosen to be consistent with the calibration shown in (a).
### Detection limits
The above considerations for signal and noise allow us to estimate our microsphere's minimum detectable acoustic signal. A voltage signal derived from only thermal fluctuations (5) then transformed to a fluid velocity via the sensitivity (8) will exhibit a self-noise spectrum [Fig. 4 (a)]
\[S_{\mathrm{nn},u}(\omega)=\frac{S_{VV}(\omega)}{|\psi_{u}(\omega)|^{2}}=\frac{4 k_{\mathrm{B}}\mathrm{TRe}[\gamma(\omega)]}{|\gamma(\omega)-i\omega\delta m|^{2}}. \tag{9}\]
The self-noise is quite flat and near the DC value \(S_{\mathrm{nn},u}(\omega\to 0)=4k_{\mathrm{B}}T/\gamma_{0}\). From the self-noise spectrum, the minimum-detectable signal is given by the band-limited variance [Fig. 4 (b)] \(u_{\mathrm{min}}=\sqrt{\int_{0}^{f}\mathrm{d}f^{\prime}\,S_{\mathrm{nn},u}(2 \pi f^{\prime})}\,.\) One can include the effects of a constant detector noise floor by making the replacement \(S_{VV}(\omega)\to S_{VV}(\omega)+\chi\) in Eq. (9).
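Numerically, the minimum detectable signal is just the square root of the integrated self-noise spectrum. The sketch below is our own illustration of that integral for the thermal contribution only; the detector noise floor can be included through the substitution \(S_{VV}\to S_{VV}+\chi\) noted above.

```python
import numpy as np

kB = 1.380649e-23  # J/K

def u_min(f_max, T, eta, R, rho, rho_f, n_grid=200_000):
    """Band-limited rms of the thermal self-noise, Eq. (9), from 0 to f_max (Hz)."""
    f = np.linspace(0.0, f_max, n_grid)
    omega = 2 * np.pi * f
    gamma0 = 6 * np.pi * eta * R
    tau_f = R**2 * rho_f / eta
    m = rho * 4 * np.pi * R**3 / 3
    g = gamma0 * (1 + np.sqrt(-1j * tau_f * omega) - 1j * tau_f * omega / 9)
    S_nn = 4 * kB * T * g.real / np.abs(g - 1j * omega * (rho_f / rho) * m)**2
    df = f_max / (n_grid - 1)
    return np.sqrt(np.sum(0.5 * (S_nn[1:] + S_nn[:-1])) * df)   # trapezoidal integral
```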
## III Results
Having established the operating principle and expected performance of optically-trapped microspheres as acoustic sensors, we next describe experimental results. Using a two-channel high-speed digitizer, we record the microsphere signal and either the microphone or the Microflown signal when driven by various sound sources. Each channel is analog-low-pass filtered to 4 MHz then sampled at a rate of \(1/dt=25\,\mathrm{MHz}\) to minimize aliasing. In post processing, the recorded voltage signals are further low-pass filtered by averaging together adjacent points of non-overlapping segments, thereby adjusting the effective sampling rate and signal bandwidth. Once filtered, the voltage signals are converted to either pressure or velocity using the appropriate sensitivity curves.
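The block-average filter described in this paragraph is only a few lines of code; the sketch below is our own illustration (the function name is ours).

```python
import numpy as np

def block_average(x, n_avg):
    """Average adjacent samples of a 1-D array in non-overlapping blocks of length n_avg,
    reducing the effective sampling rate (and bandwidth) by that factor."""
    x = np.asarray(x)
    n = (len(x) // n_avg) * n_avg          # drop any trailing partial block
    return x[:n].reshape(-1, n_avg).mean(axis=1)

# With the 25 MHz raw sampling rate quoted above, block averaging by n_avg gives an
# effective rate of 25 MHz / n_avg before applying the sensitivity curves.
```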
### Tone-burst sound source
Tone bursts, consisting of a certain number of sinusoidal cycles at a single frequency, provide a simple and repeatable test signal for our various acoustic detectors. In our first set of experiments, we launch tone bursts using a function generator to drive piezoelectric buzzers held a distance \(\Delta x=44\,\mathrm{mm}\) from the optical trap. \(\Delta x\) is varied by mounting the piezo buzzers on a motorized platform. We drive one of two buzzers at their resonant frequencies \(4\,\mathrm{kHz}\) or \(40\,\mathrm{kHz}\). We observe excellent agreement between our commercially-calibrated reference sensors and our thermo-mechanically calibrated research system, as shown in Fig. 5. The agreement between sensors lessens as source distance \(\Delta x\) or time \(t\) increases (see Fig. 7 of the Appendix). The loss of agreement could be due to a number of effects including acoustic scattering and diffraction, and differences in sensor directivity, placement, and size.
### Laser ablation sound source
A pulsed laser focused to a small spot on a surface can deposit a vast amount of energy in a short amount of time [56]. This phenomenon has fueled diverse technologies including micro-machining [57], laser-induced-breakdown spectroscopy [58], thin film growth [59], and a platform for studies of light-plasma interactions [60]. The sharp acoustic impulse generated by laser ablation has spurred its own research thrusts on non-contact damage detection [61], medical imaging [62], and scale-modeling of sonic booms [63]. The impulse has an N-shaped acoustic signature, consisting of a sharp rise, followed by a decay through a zero-crossing into a slower-timescale trough.
Figure 3: Comparing acoustic detector velocity sensitivities. The microsphere parameters are consistent with the calibration shown in Fig. 2 (a). The microphone sensitivity is provided by the manufacturer and includes corrections for operation without the protective grid and in free-field conditions. The nominal pressure sensitivity is \(0.68\) mV/Pa and is converted to velocity via the plane-wave impedance of air for comparison with the velocity sensors. The microphone calibration known up to \(200\) kHz (dashed amber line).
Figure 4: (a) Thermally-driven self-noise spectrum for microsphere-based acoustic sensing. (b) The minimum-detectable acoustic disturbance estimated from the self-noise spectrum’s band-limited variance. In both panels: The solid lines include effects of a constant detection noise floor while the dashed line assumes perfect detection. All other parameters are consistent with the calibration shown in Fig. 2 (a). The left axis quantifies results in terms of acoustic velocity while the right axis converts to pressure via the plane-wave impedance of air.
In our second set of experiments, we use laser ablation to generate high-frequency-content impulsive sounds to test the high-frequency measurement capabilities of our microsphere-based acoustic sensor. The ablation laser operates at a wavelength of 532 nm with a pulse width of 5 ns and an energy of \(\sim 7\) mJ. The pulse has a flat-top mode shape that is focused with a 65 mm focal length lens to \(\sim 75\,\mu\mathrm{m}\) on an aluminum target. The ablation target, focusing lens, and laser steering mirror are all mounted on the motorized platform used to vary the source distance \(\Delta x\). The ablation target is further motorized to rotate and reveal a fresh target spot every ten shots. For this experiment, we do not measure the Microflown signal because of its limited high-frequency sensitivity.
Figure 6 shows the microphone and microsphere signals at \(\Delta x=100\,\mathrm{mm}\). It is well known that standard microphones are unable to resolve the rising edge of the acoustic impulse sourced by laser ablation [64], necessitating alternative methods such as laser deflection or interference [35]. Our results indicate optically-trapped microspheres offer another alternative that is capable of measuring impulsive signals with a \(\sim 1\,\mu\mathrm{s}\) rising edge, defined as the time for the signal to change from 10% to 90% of its peak value. By comparison, the microphone measures a rise-time of \(\sim 5\,\mu\mathrm{s}\). As \(\Delta x\) decreases, the microsphere signal becomes more intricate, featuring two or more initial peaks (see Fig. 8 of the Appendix). The details of these features are very sensitive to the orientation of the target and its lateral offset from the trap center.
## IV Discussion
We now turn to a discussion of the results presented in the previous section. We then contextualize the results by reviewing similar work using optically-trapped microspheres for flow measurements. Finally, we outline possible extensions and applications left for future work.
From the tone-burst experiments, we conclude that our microsphere-based acoustic sensor is capable of making calibrated acoustic measurements. All three sensors agree well when converted to the same units, suggesting the plane-wave impedance model is acceptable and that our microsphere calibration and sensing protocol are correct. The laser ablation sound source highlights the microsphere's superior bandwidth in the form of a steeper rising edge and higher peak pressure as compared to the microphone. In the trough portion of the ablation signal, the two sensors are in better quantitative agreement because acoustic variations are slower and therefore less susceptible to band-limited distortion.
Figure 5: Comparing measurements of tone-burst signals between three acoustic sensors. (a) Ten cycles of a 40 kHz tone (9 V peak-to-peak drive voltage). All sensors are post-processed to a bandwidth of 200 kHz (b) Three cycles of a 4 kHz tone (7 V peak-to-peak drive voltage). All sensors are post-processed to a bandwidth of 20 kHz. In both panels, 100 independent trials are averaged, and the origin in time is aligned for each sensor manually.
Figure 6: Microsphere and microphone response to an acoustic impulse generated by laser ablation, averaged over 10 shots. (a) A trace showing the initial noise level, leading edge arrival, and subsequent reverberations. The microphone is processed with its maximum bandwidth of 200 kHz, and the microsphere is processed with a bandwidth of 1 MHz. The time origin is set to the first zero crossing following the leading edge. (b) A trace of the same impulse over a 20\(\times\) shorter time window. The solid red line is the microsphere data shown in (a), the open squares are the microphone data, and the open circles are the microsphere data filtered to a bandwidth of 200 kHz.
When the analysis bandwidth of the microsphere is restricted to that of the microphone [open-circles in Fig. 6 (b)], the rise times and peak pressures are in much better agreement. Unlike the tone-burst sources, shorter source distances \(\Delta x\) result in worse agreement between the microsphere and microphone for laser ablation sources. We understand this as a near-field source impedance effect. Indeed, laser ablation acoustic waves are typically modeled as spherical or cylindrical waves for which the impedance is a complex-valued function that approaches the plane-wave value at large source distances \(\Delta x\). Taken together, our experiments show that optically-trapped microspheres enable calibrated and high-bandwidth sensing of an acoustic wave's velocity quadrature.
Let us next contrast our microsphere-based sensing protocol with other experiments in the recent literature. First, one other work has couched its experiments as acoustic sensing using optically-trapped microspheres [29], but in a dramatically different regime. In that work, a 60 nm gold sphere is trapped in water and imaged at 50 Hz with a camera. Sounds are generated by intensity-modulating a CW laser beam focused onto a nearby cluster of gold nanoparticles at 10 Hz to 50 Hz, or by a needle attached to a 300 Hz loudspeaker. Since the detection method is slow, the methodology hinges on measuring the particle's position variance in response to sound, hence no time-dependent waveforms may be constructed. The authors claim to be able to detect sound power down to a level of -60 \(\mathrm{dB_{re\,1pW}}\). Similar frequency-domain analysis of camera-captured microsphere trajectories is used in [27], where flow is generated by the rotating flagella bundle of an optically-trapped bacterium, and in [28] where flow is generated by periodically blocking and unblocking one of two transversely-separated traps, causing a drive particle to periodically jump. In [32], a microsphere is trapped in water contained within a 6.8 MHz, piezo-driven, standing-wave chamber. The time-averaged microsphere position is recorded using a camera at 150 Hz. The steady-state displacement of the microsphere from its equilibrium position maps the standing-wave profile. In a more-recent work termed _optical tweezer-based velocimetry_[30], a position-sensitive detector monitors a microsphere optically trapped in a water-filled sample chamber. The sample chamber is driven at frequencies of 1 Hz - 90 Hz. Velocity amplitudes of 1.5 \(\mu\mathrm{m}\)/s - 70 \(\mu\mathrm{m}\)/s are detected in real-time. Such low amplitudes beat the thermal limit by using a Kalman filter to deduce the flow velocity from microsphere position measurements in the presence of Brownian motion. In another recent work [31], a silica microsphere is optically trapped in water and driven transversely at 50 Hz to 400 Hz. An additional 30 smaller polystyrene tracer particles, initially optically trapped at fixed locations near the drive particle, are released upon starting the drive and observed to follow Lissajous trajectories. Compared to previous efforts, our work is unique because it is performed in air, it makes quantitative acoustic field measurements that are benchmarked against well-calibrated detectors, and it does so with enough time resolution to observe acoustic waveforms at 4 kHz and 40 kHz, as well as impulsive waveforms with frequency content in the megahertz range. Like some of the above methods, our method measures the flow velocity of the surrounding fluid. However, instead of inferring flow velocity through microsphere displacement, we rely on microsphere velocity measurements and a hydrodynamic model of the viscous coupling between fluid and microsphere, thereby dramatically increasing the detection bandwidth.
Our results set up numerous opportunities for follow-up work. First, incorporating a Kalman filter could increase the signal-to-noise ratio while preserving the ability to self-calibrate. Second, our demonstration was in air, but the theory is equally valid in liquid. Acoustic transduction in a liquid is more efficient than in a gas due to a greater similarity in acoustic impedance between the solid transducer and the medium in which the sound propagates. Therefore, it would be interesting to compare our method to state-of-the-art acoustic sensors for water, such as a needle hydrophone. Finally, since the microsphere measures acoustic velocity, it could be combined with novel opto-acoustic methods that are capable of high-bandwidth pressure measurement to elucidate the impedance of unique sources like blast-waves from laser ablation, surface acoustic waves, and surface vibrations in the near-field. Further, since velocity is a vector-quantity, the microsphere could be useful in sound-source localization, opening the door to several applications. Applications of high-bandwidth acoustic velocity sensing could include locating where a firearm has been discharged, real-time monitoring in proton-therapy for cancer treatment [65; 66], and event discrimination in bubble-chamber searches for dark matter [67; 68; 69].
## V Conclusions
By monitoring an optically-trapped microsphere's instantaneous velocity, we infer fluid flow of sonic, ultrasonic, and impulsive perturbations in air. We validate the accuracy of our technique by comparing tone-burst measurements made with two commercially-available devices, a high-bandwidth pressure microphone and a dual-hot-wire anemometer -- the Microflown -- which measures acoustic velocity. We then test the bandwidth of our sensor by exposing it to impulsive test sounds generated by laser ablation. Beyond the direct extensions mentioned in the previous section, we hope this work inspires other sensing protocols enabled by the resolution of a Brownian particle's instantaneous velocity.
###### Acknowledgements.
We thank Neal Hall for several useful discussions.
## Appendix: Sound detection results for various source distances |
2309.04382 | Emergent learning in physical systems as feedback-based aging in a
glassy landscape | By training linear physical networks to learn linear transformations, we
discern how their physical properties evolve due to weight update rules. Our
findings highlight a striking similarity between the learning behaviors of such
networks and the processes of aging and memory formation in disordered and
glassy systems. We show that the learning dynamics resembles an aging process,
where the system relaxes in response to repeated application of the feedback
boundary forces in presence of an input force, thus encoding a memory of the
input-output relationship. With this relaxation comes an increase in the
correlation length, which is indicated by the two-point correlation function
for the components of the network. We also observe that the square root of the
mean-squared error as a function of epoch takes on a non-exponential form,
which is a typical feature of glassy systems. This physical interpretation
suggests that by encoding more detailed information into input and feedback
boundary forces, the process of emergent learning can be rather ubiquitous and,
thus, serve as a very early physical mechanism, from an evolutionary
standpoint, for learning in biological systems. | Vidyesh Rao Anisetti, Ananth Kandala, J. M. Schwarz | 2023-09-08T15:24:55Z | http://arxiv.org/abs/2309.04382v2 | # Emergent learning in physical systems as feedback-based aging in a glassy landscape
###### Abstract
By training linear physical networks to learn linear transformations, we discern how their physical properties evolve due to weight update rules. Our findings highlight a striking similarity between the learning behaviors of such networks and the processes of aging and memory formation in disordered and glassy systems. We show that the learning dynamics resembles an aging process, where the system relaxes in response to repeated application of the feedback boundary forces in presence of an input force, thus encoding a memory of the input-output relationship. With this relaxation comes an increase in the correlation length, which is indicated by the two-point correlation function for the components of the network. We also observe that the square root of the mean-squared error as a function of epoch takes on a non-exponential form, which is a typical feature of glassy systems. This physical interpretation suggests that by encoding more detailed information into input and feedback boundary forces, the process of emergent learning can be rather ubiquitous and, thus, serve as a very early physical mechanism, from an evolutionary standpoint, for learning in biological systems.
## I Introduction
Given the prevalence of emergent behavior, physicists, computer scientists, and biologists have long asked whether or not some subset of emergent behavior results in the capacity of a system of many interacting components to learn, i.e., to have intelligence [1; 2]. While there has been much focus looking for emergent learning in brain-like systems, such as neuronal networks in biology or artificial neural networks in physics and computer science, recent research has demonstrated that simple physical systems, such as a spring network, have the potential to exhibit learning behavior similar to that of artificial neural networks [3; 4; 5; 6; 7; 8; 9]. In this context, learning refers to the ability to modify the properties of a physical system by adjusting its learning degrees of freedom in order to more efficiently achieve some task. For example, in a spring network, the spring stiffness and rest lengths represent the learning degrees of freedom, while the nodes of the springs correspond to the usual physical degrees of freedom.
In these physical learning systems, once input boundary nodes, output boundary nodes, and a cost function are all chosen, the learning process is composed of two steps:
1. _Signaling_ : System's response to a given input is compared with the desired output and an update signal is sent which provides information on the necessary adjustments to each learning degree of freedom, so that the system's response aligns more closely with the desired output.
2. _Weight update_ : Each learning degree of freedom, or weight, is updated in response to the update signal. This weight update should allow the system to perform gradient descent.
The two steps are repeatedly applied to train the system to learn.
The major challenge in applying this algorithm is to find physical processes that implement the above two steps. While methods such as Equilibrium Propagation (EP) [4], Multi-mechanism Learning (MmL) [3; 5], and Coupled Learning (CL) [6] have made strides in addressing this challenge, they are not entirely physical in nature. In particular, the learning stages involved, _Signaling_ and _Weight update_, require artificial modifications to the physical system. For instance, in EP and CL, to send the gradient information into the system, one needs to store the free state in some memory, which is not possible in typical systems such as spring networks or resistor networks. In our previous work unveiling MmL, we demonstrated that this issue of memory storage could be addressed by encoding the feedforward and feedback signals into two non-interfering physical quantities [3; 5].
Despite this demonstration, however, a significant problem remains: we do not know of any physical process that can update the weights in the system. To physically implement weight updates, recent experimental efforts have resorted to using complex components such as transistors in the training of electrical networks [10; 11], and actuators and sensors in mechanical networks [12]. Yet, the reliance on such intricate and varied tools introduces challenges in terms of scalability and robustness in these approaches. Here, we explore the central question: Do effects of the weight update procedure resemble any natural physical phenomena? The answer to such a question will point us in the direction of a fully physical learning system, weight update included. To begin to answer this question, we train linear physical networks and investigate how the physical properties of this system change, given the weight update rule.
Our manuscript consists of revisiting our MmL training procedure, as detailed in our prior work [3; 5], in a general manner that emphasizes its physical plausibility. We then review the specifics of multi-mechanism learning, followed by details of what we measure as well as data generation and network generation. Results are then presented. We conclude with a discussion of the impact of our results.
Figure 1: _Training linear networks to learn linear transformations._ [1a & 1b] : _Network undergoes trimming_. A network with 40 nodes and 390 edges is trained to learn a linear transformation of size \(10\times 10\). Weights of the network are uniformly sampled from \([10^{-5},0.2]\). Colorbar on right shows weight values of each edge. [1c] _Non-exponential relaxation_ : Training curve for the case shown in 1a and 1b but for 50 different initializations (shown in green). Y axis shows error defined as square root of mean square error, X axis shows epoch. In one epoch the network goes through 100 data points. All green curves are obtained after normalization with their respective initial errors. The blue curve shows the average over these 50 runs. The blue curve is fit to a non-exponential curve of the form \(a+be^{-\lambda\cdot t^{\beta}}\). Fit parameters are shown in the legend. \(\beta>1\) shows the relaxation shows a compressed exponential behaviour. The sum of squared residuals (SSR) is used to assess the goodness of fit, it is defined as: \(\text{SSR}=\sum_{i=1}^{n}(y_{i}^{fit}-y_{i}^{data})^{2}\) [1d] _Eigenvalues decrease while learning_: Eigenvalues of graph Laplacian before and after training for runs shown in 1c. These initial and final eigenvalues are averaged over those 50 runs. The eigenvalues are sorted in increasing order. The x-axis shows eigenvalue index. The network has 40 nodes so there are 40 eigenvalues. [2a, to d] These plots show the training performance for a network with less number of edges (78 edges), due to which it does not learn well. When compared with case 1, we see that trimming is less prominent and the eigenvalues do not decrease. The training curve shows a stretched exponential relaxation (\(\beta<1\)) and saturates well above zero error. [3a, to d] _Training on random data_: Networks initialized with same parameters as that of 1a are trained on randomly generated data. No trimming is observed, eigenvalues increase over training and the error curve does not decrease with the number of epochs.
The Learning Process
We now demonstrate the process of physical learning within our system. Initially, we impose an input boundary condition, denoted by \(I\). The system's response is then captured by the Laplace equation \(Lv=I\), where \(L\) is the Laplacian, which depends on the learning degrees of freedom \(w\), and \(v\) is the state of the system. To attain its intended functionality, the system needs to update \(w\) to minimize the cost function \(C(v(w))\). We encode the cost function as an interaction energy between the system and the environment. This energy causes a feedback boundary condition of the form \(-\eta\dfrac{\partial C(v)}{\partial v}\) to act on the system, due to which the state of the system evolves along a direction that decreases \(C(v)\):
\[L(v+\delta v)=I-\eta\dfrac{\partial C(v)}{\partial v}. \tag{1}\]
For a mechanical network, these input and feedback boundary conditions are applied as external stresses on the system. When the feedback stress is removed, the system tends to revert to its initial state \(v\). However, with continuous exposure to feedback boundary forces, there's a lasting change in the system's learning degrees of freedom. This change is akin to a plastic deformation in materials where repeated stress leads to permanent alterations.
Note that unlike the input boundary condition, the feedback boundary condition is a function of the state of the system. As a result, there exists an optimal state where the system experiences minimal feedback stress. Our hypothesis is that, through repeated application of these feedback stresses, the system's learning parameters \(w\) evolve such that this optimal state is reached. The objective of this evolution is to minimize the external stress \(-\eta\dfrac{\partial C(v)}{\partial v}\), by changes in state of the system \(v\), through changes in \(w\). This adaptation is represented as:
\[\Delta w_{ij}=-\alpha\eta\dfrac{\partial C(w)}{\partial w_{ij}}, \tag{2}\]
where \(C\) is a function of \(w\) via \(C(v(w))\). In our previous work [3], we showed that the above weight update rule can be written purely in terms of local physical quantities
\[\Delta w_{ij}=-\alpha v_{ij}\delta v_{ij}. \tag{3}\]
where \(w_{ij}\) is the weight connecting nodes \((i,j)\), \(v_{ij}\) is the potential drop \(v_{i}-v_{j}\), and \(\delta v_{ij}\) is the change in this potential drop due to feedback [2]. Intriguingly, this learning rule exhibits a Hebbian-like behavior.
The input and feedback boundary conditions encode a particular type of information, and given that they are applied repeatedly, a parallel with memory formation in driven disordered systems seems plausible [13]. For example, in granular systems, the particles rearrange in response to a particular sequence of driving amplitudes [14; 15]. Additionally, if the network topology is fixed, then the learning degrees of freedom are updated much like in the context of directed aging [16; 17], keeping in mind that the update rule in our system depends on both the feedforward signal (\(v_{ij}\)) and the feedback signal (\(\delta v_{ij}\)), rather than on a reduction of spring constants over time based on the stress experienced by a particular spring.
Due to the evolution of the learning degrees of freedom, once the system reaches steady state, its response is:
\[L^{\prime}(v+\delta v)=I, \tag{4}\]
where \(L^{\prime}\) is the updated Laplacian that encodes the memory of the feedback stress by adapting to it, i.e., \(C(v+\delta v)<C(v)\).
In summary, the learning process goes as follows. An input is introduced to the system as an external force. Subsequently, based on the system's reaction to this input, feedback forces are consistently applied. We postulate that such a process enables the system to adapt and become attuned to these feedback boundary forces. This continuous adaptation to feedback forces, in presence of the input, ingrains a memory of the input-output relationship within the system. This concept is elucidated further in the subsequent section.
## III A Brief Review of Multi-Mechanism Learning
We study a network comprised of nodes connected by weighted edges. Let us represent the weight of the edge between node \(x\) and node \(y\) as \(w_{xy}\), which could signify conductances in an electrical network, spring constants in a mechanical spring network, or pipe thicknesses in a flow network, etc.
**Input Nodes:** An "input" node pair is a pair of nodes \((b_{j}^{+},b_{j}^{-})\) such that an input current \(I_{j}\) enters the network via node \(b_{j}^{+}\) and exits through \(b_{j}^{-}\). (For mechanical networks, the input current can be thought of as external forces acting at the input nodes.) Let there be \(q\) such input node pairs in the network, denoted by \(\{(b_{1}^{+},b_{1}^{-}),(b_{2}^{+},b_{2}^{-}),\ldots,(b_{q}^{+},b_{q}^{-})\}\).
**Output Nodes:** In response to the input currents, the system develops an electric potential at each node. The network's output is defined to be the set of potential differences across certain "output" node pairs, obtained as \(v(o_{i}^{+},o_{i}^{-})=v(o_{i}^{+})-v(o_{i}^{-})\) for each output node pair \((o_{i}^{+},o_{i}^{-})\). Let there be \(p\) such output node pairs in the network, represented as \(\{(o_{1}^{+},o_{1}^{-}),(o_{2}^{+},o_{2}^{-}),\ldots,(o_{p}^{+},o_{p}^{-})\}\).
**Cost Function:** The goal of training is to adjust the weights \(\{w_{xy}\}\) so that for a given set of input currents, the desired potential drops \(\{v_{d}(o_{i}^{+},o_{i}^{-})\}\) are achieved across all the output nodes. We employ a Mean Squared Error (MSE) cost function:
\[C=\frac{1}{2}\sum_{i=1}^{p}(v(o_{i}^{+},o_{i}^{-})-v_{d}(o_{i}^{+},o_{i}^{-})) ^{2}. \tag{5}\]
**Feedback Mechanism:** To optimize this cost function, we introduce a feedback signal into the network at the output nodes. For each output node pair, the feedback current is calculated as:
\[\epsilon_{i}=-\eta(v(o_{i}^{+},o_{i}^{-})-v_{d}(o_{i}^{+},o_{i}^{-})) \tag{6}\]
This current enters the network through node \(o_{i}^{+}\) and exits via \(o_{i}^{-}\), with \(\eta\) being a positive "nudging" factor. The feedback currents change the potentials at each node; let the change in the potential at node \(j\) be denoted by \(u_{j}\).
**Weight Update Rule:** The weights are then updated as:
\[\Delta w_{xy}=-\alpha u(x,y)v(x,y), \tag{7}\]
where \(\alpha\) is the learning rate. This rule effectively performs gradient descent on the cost function:
\[\Delta w_{xy}=-\alpha\eta\frac{\partial C}{\partial w_{xy}} \tag{8}\]
**Considerations:** The weight update is local, and its sign depends on the potential drops due to input and feedback. We assume the system's relaxation time is much shorter than the weight update time, ensuring a steady state during weight adjustments. The two quantities in the weight update must be independent. This can be ensured by encoding them into distinct physical quantities [5]. (Further details on the learning procedure and its physical implementation are given in Ref. [3].) For larger networks, a higher learning rate is necessary to maintain the magnitude of weight changes. To address this, we conduct a trial run for one epoch, adjusting the learning rate to ensure \(||\Delta w||\approx 10^{-3}\). Additionally, we impose regularization by (1) limiting each weight update: \(|\Delta w_{xy}|<\epsilon\), and (2) constraining weight values: \(w_{min}\leq w_{xy}\leq w_{max}\). This ensures a smooth training process and prevents weights from becoming too large or too small. In our simulations, we set \(w_{min}=0.00001\), \(w_{max}=0.2\), and \(\epsilon=0.01\).
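To make the procedure concrete, the following is a minimal Python/NumPy sketch of one training iteration for a linear resistor network. The function and variable names are ours, the singular Laplacian system is solved with a pseudoinverse (grounding a reference node is an equivalent choice), and the fact that \(\delta v\) equals the response to the feedback currents alone relies on the linearity of the network; this is an illustration of Eqs. (5)-(8), not the authors' code.

```python
import numpy as np

def laplacian(W):
    """Weighted graph Laplacian L = D - W for a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

def solve_potentials(W, I):
    """Node potentials v with L v = I.  L has a constant null space, so we use the
    pseudoinverse (grounding one node is an equivalent choice); I must sum to zero."""
    return np.linalg.pinv(laplacian(W)) @ I

def train_step(W, edges, x, y_target, in_pairs, out_pairs,
               alpha=1e-2, eta=1e-3, eps=0.01, w_min=1e-5, w_max=0.2):
    """One weight update from a single data point (x, y_target)."""
    N = W.shape[0]

    # Free phase: encode the input vector x as external currents at the input pairs.
    I_in = np.zeros(N)
    for (bp, bm), xq in zip(in_pairs, x):
        I_in[bp] += xq
        I_in[bm] -= xq
    v = solve_potentials(W, I_in)

    # Feedback phase: nudging currents at the output pairs, Eq. (6).
    I_fb = np.zeros(N)
    for (op, om), yd in zip(out_pairs, y_target):
        e = -eta * ((v[op] - v[om]) - yd)
        I_fb[op] += e
        I_fb[om] -= e
    dv = solve_potentials(W, I_fb)   # change in potentials caused by the feedback

    # Local update of Eq. (7), with the clipping and bounds described above.
    for i, j in edges:
        dw = float(np.clip(-alpha * (dv[i] - dv[j]) * (v[i] - v[j]), -eps, eps))
        W[i, j] = W[j, i] = float(np.clip(W[i, j] + dw, w_min, w_max))
    return W
```

In an experiment the two factors of the update would be read out as two distinct physical signals; in this simulation they simply correspond to the two linear solves.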
## IV Methodology
**Network Generation**: We aim to create networks consisting of \(N\) nodes, with a varying number of edges \(M\). For this, we first create a Barabasi-Albert network with connection parameter 1. This graph generation algorithm connects each new node to 1 existing node in a manner such that nodes with higher degree have a stronger likelihood of selection. This creates a network with \(N\) nodes and \(N-1\) edges. To create a network with \(M\) edges, we then add \(M-(N-1)\) unique edges. This way, we can create networks with varying connectivity, ranging from minimally connected to maximally connected. Note that it is highly unlikely to create such minimally connected networks using the Erdos-Renyi model.
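A possible implementation of this construction, using NetworkX (the paper does not specify a library; names, seeding, and the weight assignment are our choices):

```python
import networkx as nx
import numpy as np

def make_network(N, M, w_min=1e-5, w_max=0.2, seed=0):
    """Barabasi-Albert tree with N nodes and N-1 edges, then random unique edges
    added until the graph has M edges; weights drawn uniformly from [w_min, w_max]."""
    rng = np.random.default_rng(seed)
    G = nx.barabasi_albert_graph(N, 1, seed=seed)     # connection parameter 1
    while G.number_of_edges() < M:
        i, j = rng.integers(0, N, size=2)
        if i != j and not G.has_edge(i, j):
            G.add_edge(int(i), int(j))
    for i, j in G.edges():
        G[i][j]["weight"] = rng.uniform(w_min, w_max)
    return G
```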
The generated networks are then trained on data generated using a linear transformation. Note that in spite of using linear networks to learn linear transformations, the optimization needs to take place in a cost landscape which is non-convex, high-dimensional, and disordered.
**Data Generation**: The input vector \(\mathbf{x}\) (e.g., \((x_{1},x_{2},x_{3})\)) is encoded as external currents across input nodes \(\{(b_{1}^{+},b_{1}^{-}),(b_{2}^{+},b_{2}^{-}),(b_{3}^{+},b_{3}^{-})\}\), with currents \(+x_{q}\) and \(-x_{q}\) applied across nodes \(b_{q}^{+}\) and \(b_{q}^{-}\) respectively. The output vector \(\mathbf{y}\) (e.g., \((y_{1},y_{2},y_{3})\)) is the potential drop across nodes \(\{(o_{1}^{+},o_{1}^{-}),(o_{2}^{+},o_{2}^{-}),(o_{3}^{+},o_{3}^{-})\}\). When the network is trained, we want the network's output to closely approximate the matrix \(R\), that is, we want \(\mathbf{y}\approx R\mathbf{x}\). To do so, we first
generate training data of the form \(\{(\mathbf{x},R\mathbf{x})\}\) by randomly sampling \(\mathbf{x}\) from the surface of a unit sphere, and train the network using the procedure described in the previous section. To shorten the training time, we want the magnitude of the output \(\mathbf{y}\) to be of the same order as that of the input; therefore, we make sure that the maximum eigenvalue of \(R\) is close to one. We do this by first generating an arbitrary matrix \(R^{\prime}\) with random entries between -1 and 1, and then normalizing it by dividing by its maximum eigenvalue: \(R=R^{\prime}/\max\{\mathrm{eig}(R^{\prime})\}\). Input and output data are generated using this matrix \(R\). The network is trained using this ideal data, meaning each training step sees an entirely new data point. In the computer science community, this type of task is known as linear regression.
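A minimal sketch of this data generation follows; we read "maximum eigenvalue" as the eigenvalue of largest magnitude, since a random real matrix generally has complex eigenvalues, and the names are ours:

```python
import numpy as np

def make_transform(d, rng):
    """Random d x d matrix with entries in [-1, 1], rescaled so that its
    largest-magnitude eigenvalue is one: R = R' / max|eig(R')|."""
    Rp = rng.uniform(-1.0, 1.0, size=(d, d))
    return Rp / np.max(np.abs(np.linalg.eigvals(Rp)))

def sample_pair(R, rng):
    """One training pair (x, R x), with x drawn uniformly from the unit sphere."""
    x = rng.normal(size=R.shape[0])
    x /= np.linalg.norm(x)
    return x, R @ x

rng = np.random.default_rng(0)
R = make_transform(10, rng)        # the 10 x 10 transformation of Fig. 1
x, y = sample_pair(R, rng)         # a fresh data point for every training step
```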
## V Results
Figures 1.1a and 1.1b show the network before and after training for a network with \(N=40\) and \(M=390\). Since the intensity of the color indicates the magnitude of the weight, note that many of the weights of the trained network reach the minimum value. In other words, there is a trimming effect, where only the important edges remain. To ascertain whether or not the network has learned the linear transformation, we plot the square root of the mean-squared error in Fig. 1.1c as a function of epoch. Given that the error nearly vanishes at longer epochs, this network has successfully learned the task. This shows the dynamics through which the system relaxes into the feedback boundary forces via the evolution of the learning degrees of freedom. We performed a phenomenological fit of this curve: it is well-approximated by a non-exponential relaxation of the form \(\sqrt{MSE}=a+b\exp(-\lambda\,t^{\beta})\), where \(a\), \(b\), \(\lambda\), \(\beta\) are the fit parameters and \(t\) denotes the epoch number. Interestingly, these dynamics are quantitatively similar to what is observed in molecular glassy systems [19]. This finding demonstrates the existence of a glassy landscape. Appendix A addresses the reasonableness of this non-exponential fit.
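The phenomenological fit can be reproduced, for instance, with SciPy's `curve_fit`; the initial guess and the iteration cap are our choices:

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, a, b, lam, beta):
    """Non-exponential relaxation a + b * exp(-lam * t**beta)."""
    return a + b * np.exp(-lam * t**beta)

def fit_relaxation(err):
    """err[k] = normalised sqrt(MSE) after epoch k+1, averaged over runs."""
    t = np.arange(1, len(err) + 1, dtype=float)
    p0 = [err[-1], err[0] - err[-1], 0.1, 1.0]        # rough initial guess
    popt, _ = curve_fit(relaxation, t, err, p0=p0, maxfev=20000)
    ssr = np.sum((relaxation(t, *popt) - err) ** 2)   # goodness of fit (SSR)
    return popt, ssr                                  # popt = (a, b, lam, beta)
```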
We seek to quantify further the relaxation of the system as it learns. We, therefore, compute the eigenvalues of the Laplacian matrix. Figure 1.1d shows how learning results in decreasing Laplacian eigenvalues. Note that the normal mode frequencies are given by the square roots of these eigenvalues. Decreasing eigenvalues are evidence that the network is getting "softer" as the normal mode excitations become longer in wavelength. This observation demonstrates that the network moves from a state of stress to one of less stress due to the repeated application of feedback boundary forces. The network is, thus, "adapting" to these forces, indicating a transition towards a state that encodes a memory of the input-output relationship. Additionally, it draws parallels between this behavior and the self-organization observed in periodically sheared suspensions, where the system adapts to the periodic driving in a similar manner [14]. Moreover,
Figure 2: _Learning performance with overparametrization._ The error curve is fit to \(a+be^{-\lambda\,t^{\beta}}\) for networks with varying numbers of edges, and the fit parameters are plotted (error bars are calculated using the diagonal terms of the covariance matrix). The Tuning Parameter (\(TP\)) serves as a metric to quantify the degree of connectivity in a network. Specifically, it is calculated by taking the ratio of the number of edges \(M\) present in the graph to the number of edges that would exist in a fully connected network with the same number of nodes. (a) We observe that after adding a certain number of edges, the saturation value of the error curve begins to asymptote to zero. (b) We also observe that the exponent \(\beta\) increases from less than one to greater than one, showing a shift from stretched exponential to compressed exponential relaxation. (c) The \(\lambda\) value also becomes very small after adding a certain number of edges. We have done a fit robustness analysis for these plots in Appendix A. (In Fig. 1, the 390- and 78-edge networks correspond to a \(TP\) of 0.5 and 0.1, respectively.)
when amorphous solids, modeled as purely repulsive particles in the jammed phase, are shear-stabilized by minimizing the energy with respect to the shear degrees of freedom, one finds longer wavelength excitations emerging [20]. Finally, recent work demonstrates that using a similar multiplicative learning rule as given in Eq. 7 to train physical networks to learn linear transformations also shows a decrease in the lowest eigenvalues of the Hessian [21]. Appendix B shows that the trends hold for larger system sizes.
Figures 1.2(a-d) show the same quantities as Figure 1.1, but for a network with \(N=40\) and \(M=78\). Given the smaller number of learning degrees of freedom, a network with this architecture does not successfully learn, as indicated by the square root of the mean-squared error not decreasing to zero as the number of epochs increases. Moreover, the eigenvalues of the Laplacian do not decrease, and so the system does not relax, or soften. For comparative purposes, we also train the network to learn, if you will, random data. Figs. 1.3(a-d) show the physical effects of learning random data. Here, the system, exposed to random input and feedback boundary conditions, does not relax, as indicated by the unchanged initial and final eigenvalues. With random input-output forces, the weight update signal in Eq. 3 averages to zero due to the absence of correlation between \(v_{ij}\) and \(\delta v_{ij}\). This null result suggests that the system's relaxation is driven by correlations between input and feedback boundary conditions and occurs only for certain network architectures.
Given the nontrivial dependence of learning on the network architecture, we further extend our analysis by incrementally increasing the network connectivity to examine the implications of overparametrization (see Fig. 2). We denote the ratio of the number of edges \(M\) to the number of edges in the fully connected equivalent network as \(TP\) for tuning parameter. The results indicate that as more edges are introduced, the cost landscape becomes steeper due to a reduced number of flat directions [22], leading to accelerated relaxation and enhanced learning performance. Notably, a parallel can be drawn with glasses; in these systems, increased connectivity also speeds up relaxation dynamics [23; 24]. Both these studies, as well as ours, show a shift in relaxation dynamics from a stretched to a compressed exponential upon increasing connectivity. This further underscores the intrinsic link between learning processes and relaxation in disordered systems.
Given the changes in the weights as the network learns, in Fig. 3, we examine the relationship between trimming, eigenvalue reduction, and network connectivity. As network connectivity increases with increasing \(TP\), the fractional eigenvalue decrease tends to plateau, reaching a saturation point around \(TP\approx 0.3\). A comparison of Fig. 3(a) and Fig. 2(a) reveals a notable correlation: the point of eigenvalue saturation aligns with the disappearance of the saturation error. This suggests a fundamental link between the processes of learning and eigenvalue reduction. Furthermore, Fig. 3(b) underscores the ubiquity of the trimming effect across networks of varying connectivity. Notably, the magnitude of the trimming effect intensifies as network size grows.
Figure 4 illustrates the evolution of the resistance distance distribution during the learning process. In an electrical network, the effective resistance between two nodes can be interpreted as a measure of distance (more details in Appendix C). By calculating the average distribution of resistance distances over all possible pairs of nodes, a two-point
Figure 3: _Eigenvalue decrease and trimming with overparametrization_. (a) Shows fractional decrease in the sum of eigenvalues due to learning, averaged over 50 runs. (b) Shows fractional decrease in number of effective weights due to learning, averaged over 50 runs. Here, the term ‘effective weights’ refers to those weights that fall within the top 99 percent of the permissible weight value range(\([10^{-5},0.2]\)).
correlation function \(p(r)\) can be derived, which can be extended to spring and flow networks as well. As learning progresses, we observe a broadening of the two-point correlation function, indicating that the average conductance between two arbitrary nodes decreases. This phenomenon is analogous to a reduction in "stiffness" in elastic networks, as the system becomes softer during learning.
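This two-point correlation can be computed from the Moore-Penrose pseudoinverse of the weighted Laplacian, via the standard identity \(r_{ij}=L^{+}_{ii}+L^{+}_{jj}-2L^{+}_{ij}\) for the effective resistance; a sketch (the binning is our choice):

```python
import numpy as np
import networkx as nx

def resistance_distribution(G, bins=50):
    """Histogram p(r) of effective resistances over all node pairs."""
    L = nx.laplacian_matrix(G, weight="weight").toarray()
    Lp = np.linalg.pinv(L)                      # Moore-Penrose pseudoinverse
    N = L.shape[0]
    r = [Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]
         for i in range(N) for j in range(i + 1, N)]
    return np.histogram(r, bins=bins, density=True)
```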
## VI Discussion
In summary, in studying the physical signatures of multi-mechanism learning, we find that:
1. The error curve for networks with low connectivity resembles a stretched exponential. However, as network connectivity increases, the error curve transitions to a compressed exponential form (Fig. 2).
2. Eigenvalues of the graph Laplacian decrease with epoch and long wavelength modes are generated (Fig. 1).
3. The network undergoes trimming, i.e., many of the weights decay to their minimum value (Fig. 1 & 3).
4. The two-point correlation function for the network broadens while learning (Fig. 4).
Figure 4: _Resistance Distance Distribution and Learning._ (a) The figure showcases the average resistance distance distribution, \(p(r)\), during learning, with the x-axis denoting resistance magnitude and the y-axis its normalized frequency. This is averaged over 50 network initializations. The inset illustrates the outcome when the network is trained on random data (note that the scale in the inset differs, making the initial distributions appear distinct, though they are identical). (b) Represents a network with suboptimal learning performance due to a limited number of edges.
The non-exponential relaxation indicates the presence of a glassy learning landscape. In such a landscape, many local minima exist, thereby allowing multiple memories to form. Interestingly, in prior work the optimization landscapes of Deep Neural Networks (DNNs) were compared with those of glasses [22; 25]. A fundamental distinction was observed: while glasses exhibited slow relaxation over extended timescales, DNNs did not manifest such slow relaxation at long times. This discrepancy was hypothesized to arise from the overparametrization inherent to DNNs. Our findings, as illustrated in Fig. 3, corroborate this hypothesis. We demonstrate that even in physically disordered systems, increasing network connectivity can eliminate slow relaxation. This further suggests a potential SAT-UNSAT transition [26] in these physical learning systems.
Experiments on directed aging [16] reveal that materials subjected to external stress undergo alterations in their physical properties and by meticulously controlling the application of this external stress, one can tailor a material to exhibit specific desired properties. A trivial example of this principle in action can be observed with a simple piece of paper. If one aims to create a material with a negative Poisson's ratio, the paper can be crumpled into a ball. When this crumpled paper ball is stretched horizontally, it also expands vertically for small strains, indicating a negative Poisson's ratio. To capture the essence of this behavior, previous studies have introduced a model where springs decrease their spring constants over time based on the local stress experienced by each spring [17; 27]. We posit that the model detailed in Section II offers a comprehensive explanation for this phenomenon. This is because directed aging, at its core, can be viewed as an adaptation to external stresses. Additionally, we believe that this approach can potentially explain adaptation to external driving that was observed in particulate systems, keeping in mind there are differences in memory formation between unjammed and jammed systems [14; 15; 18; 28].
Moreover, the softening of the system as it learns and the associated increase in the correlation length suggest that the system is indeed relaxing into the imposed boundary conditions that encode the linear transformation. However, these boundary conditions contain more information than a simple scalar quantity such as a strain amplitude [13]. If the environment is simple enough, then the physical system can learn. However, given too complex an environment, it may not be able to learn. Of course, we have restricted ourselves to linear networks. Nonlinear networks enhance the learning capability, as has been clearly shown in ANNs and even in mechanical networks [29]. Neuromorphic researchers have been actively seeking physical counterparts to facilitate autonomous weight updates. This pursuit has led to the development of physical learning systems utilizing memristors [30], nanoscale devices [31], and transistors [32]. However, the intricate design requirements for each component present challenges in terms of robustness and scalability. We propose that soft materials might offer a more streamlined solution. These materials inherently exhibit self-adjustment to external conditions, as evidenced by the self-organization of granular systems in response to external driving [14; 15; 27] and the adaptability of other disordered systems to external strain [16; 17]. Consequently, they emerge as promising candidates for crafting physical learning systems. Moreover, the model introduced in Section III provides insights into a potential training methodology for soft materials, be it particulate-based, such as a granular learner, where the topology of the system can change, or spring-based, such as a spring network learner, where the topology of the network is fixed. By iteratively applying input and feedback boundary forces, the learning parameters can autonomously adapt to these forces to optimize a cost function. This approach paves the way for the creation of innovative disordered materials with neural network-like learning potential. We aim to validate this concept in our forthcoming research.
Finally, by using multi-mechanism learning to train physical networks to learn linear transformations, we demonstrate a simple, brain-like task in a typically non-brain-like material. Since brains began to emerge several hundred million years ago in planarians [33], physical learning mechanisms are ripe candidates for how life learned to survive in its environment before planarians. We, therefore, seek to validate such mechanisms in pre-planarian organisms.
The authors thank Benjamin Scellier, Arvind Murugan, Eli Hawkins, Shabeeb Ameen and Samuel Ropert for helpful discussion. JMS acknowledges financial support from NSF-DMR-2204312.
|
2309.05922 | A Survey of Hallucination in Large Foundation Models | Hallucination in a foundation model (FM) refers to the generation of content
that strays from factual reality or includes fabricated information. This
survey paper provides an extensive overview of recent efforts that aim to
identify, elucidate, and tackle the problem of hallucination, with a particular
focus on ``Large'' Foundation Models (LFMs). The paper classifies various types
of hallucination phenomena that are specific to LFMs and establishes evaluation
criteria for assessing the extent of hallucination. It also examines existing
strategies for mitigating hallucination in LFMs and discusses potential
directions for future research in this area. Essentially, the paper offers a
comprehensive examination of the challenges and solutions related to
hallucination in LFMs. | Vipula Rawte, Amit Sheth, Amitava Das | 2023-09-12T02:34:06Z | http://arxiv.org/abs/2309.05922v1 | # A Survey of Hallucination in "Large" Foundation Models
###### Abstract
Hallucination in a foundation model (FM) refers to the generation of content that strays from factual reality or includes fabricated information. This survey paper provides an extensive overview of recent efforts that aim to identify, elucidate, and tackle the problem of hallucination, with a particular focus on "Large" Foundation Models (LFMs). The paper classifies various types of hallucination phenomena that are specific to LFMs and establishes evaluation criteria for assessing the extent of hallucination. It also examines existing strategies for mitigating hallucination in LFMs and discusses potential directions for future research in this area. Essentially, the paper offers a comprehensive examination of the challenges and solutions related to hallucination in LFMs.
## 1 Introduction
Foundation Models (FMs), exemplified by GPT-3 (Brown et al., 2020) and Stable Diffusion (Rombach et al., 2022), mark the commencement of a novel era in the realm of machine learning and generative artificial intelligence. Researchers introduced the term **"foundation model"** to describe machine learning models that are trained on extensive, diverse, and unlabeled data, enabling them to proficiently handle a wide array of general tasks. These tasks encompass language comprehension, text and image generation, and natural language conversation.
### What is a Foundation Model?
Foundation models refer to massive AI models trained on extensive volumes of unlabeled data, typically through self-supervised learning. This training approach yields versatile models capable of excelling in a diverse range of tasks, including image classification, natural language processing, and question-answering, achieving remarkable levels of accuracy.
These models excel in tasks involving generative abilities and human interaction, such as generating marketing content or producing intricate artwork based on minimal prompts. However, adapting and implementing these models for enterprise applications can present certain difficulties Bommasani et al. (2021).
### What is Hallucination in a Foundation Model?
Hallucination in the context of a foundation model refers to a situation where the model generates content that is not based on factual or accurate information. Hallucination can occur when the model produces text that includes details, facts, or claims that are fictional, misleading, or entirely fabricated, rather than providing reliable and truthful information.
This issue arises due to the model's ability to generate plausible-sounding text based on patterns it has learned from its training data, even if the generated content does not align with reality. Hallucination can be unintentional and may result from various factors, including biases in the training data, the model's lack of access to real-time or up-to-date information, or the inherent limitations of the model in comprehending and generating contextually accurate responses.
Addressing hallucination in foundation models and LLMs is crucial, especially in applications where factual accuracy is paramount, such as journalism, healthcare, and legal contexts. Researchers and developers are actively working on techniques to mitigate hallucinations and improve the reliability and trustworthiness of these models. With the recent rise of this problem (Fig. 2), it has become even more critical to address it.
### Why this survey?
In recent times, there has been a significant surge of interest in LFMs within both academic and industrial sectors, and one of their main challenges is _hallucination_. The survey in [14] describes hallucination in natural language generation, and in the era of **large** models, [15] provides another timely survey studying hallucination in LLMs. However, the problem of hallucination exists not only in LLMs but also in other foundation models, such as those for image, video, and audio. Thus, in this paper, we present the first comprehensive survey of hallucination across all major modalities of foundation models.
#### 1.3.1 Our contributions
The contributions of this survey paper are as follows:
1. We succinctly categorize the existing works in the area of hallucination in LFMs, as shown in Fig. 1.
2. We offer an extensive examination of large foundation models (LFMs) in Sections 2 to 5.
3. We cover all the important aspects such as i. detection, ii. mitigation, iii. tasks, iv. datasets, and v. evaluation metrics, given in Table 1.
4. We finally also provide our views and possible future direction in this area. We will regularly update the associated open-source resources, available for access at [https://github.com/vr25/hallucination-foundation-model-survey](https://github.com/vr25/hallucination-foundation-model-survey)
#### 1.3.2 Classification of Hallucination
As shown in Fig. 1, we broadly classify the LFMs into **four** types as follows: i. Text, ii. Image, iii. Video, and iv. Audio.
The paper is structured as follows. Based on the above classification, we describe hallucination and mitigation techniques for all four modalities in: i. text (Section 2), ii. image (Section 3), iii. video (Section 4), and iv. audio (Section 5). In Section 6, we briefly discuss how hallucinations are NOT always bad and, hence, can be well-suited to producing artwork in the creative domain. Finally, we give some possible future directions for addressing this issue, along with a conclusion, in Section 7.
## 2 Hallucination in Large Language Models
As shown in Fig. 3, hallucination occurs when the LLM produces fabricated responses.
### LLMs
SELFCHECKGPT [13] is a method for zero-resource black-box hallucination detection in generative LLMs. This technique focuses on identifying instances where these models generate inaccurate or unverified information without relying on additional resources or labeled data. It aims to enhance the trustworthiness and reliability of LLMs by providing a mechanism to detect and address hallucinations without external guidance or datasets. Self-contradictory hallucinations in LLMs are explored in [13], which addresses them through evaluation, detection, and mitigation techniques. Self-contradiction refers to situations where LLMs generate text that contradicts itself, leading to unreliable or nonsensical outputs. This work presents methods to evaluate the occurrence of such hallucinations, detect them in LLM-generated text, and mitigate their impact to improve the overall quality and trustworthiness of LLM-generated content.
PURR [12] is a method designed to efficiently edit and correct hallucinations in language models. PURR leverages denoising language model corruptions to identify and rectify these hallucinations effectively. This approach aims to enhance the quality and accuracy of language model outputs by reducing the prevalence of hallucinated content.
**Hallucination datasets:** Hallucinations are commonly linked to knowledge gaps in language models (LMs). However, [15] proposed the hypothesis that, in certain instances, when language models attempt to rationalize previously generated hallucinations, they may produce false statements that they can independently identify as inaccurate. Thus, they created three question-answering datasets where ChatGPT and GPT-4 frequently provide incorrect answers and accompany them with explanations that contain at least one false assertion.
HaluEval [11] is a comprehensive benchmark designed for evaluating hallucination in LLMs. It serves as a tool to systematically assess LLMs' performance in terms of hallucination
across various domains and languages, helping researchers and developers gauge and improve the reliability of these models.
**Hallucination mitigation using external knowledge:** Using interactive question-knowledge alignment, [14] presents a method for mitigating language model hallucination. Their proposed approach focuses on aligning generated text with relevant factual knowledge, enabling users to interactively guide the model's responses to produce more accurate and reliable information. This technique aims to improve the quality and factuality of language model outputs by involving users in the alignment process. LLM-AUGMENTER [15] improves LLMs using external knowledge and automated feedback. It highlights the need to address the limitations and potential factual errors in LLM-generated content. This method involves incorporating external knowledge sources and automated feedback mechanisms to enhance the accuracy and reliability of LLM outputs. By doing so, the paper aims to mitigate factual inaccuracies and improve the overall quality of LLM-generated text. Similarly, [11] introduces a framework called "Chain of Knowledge" for grounding LLMs with structured knowledge bases. Grounding refers to the process of connecting LLM-generated text with structured knowledge to improve factual accuracy and reliability. The framework utilizes a hierarchical approach, chaining multiple knowledge sources together to provide context and enhance the understanding of LLMs. This approach aims to improve the alignment of LLM-generated content with structured knowledge, reducing the risk of generating inaccurate or hallucinated information.
Smaller, open-source LLMs with fewer
Figure 1: Taxonomy for Hallucination in Large Foundation Models
Figure 3: An illustration of hallucination [12]. Incorrect information is highlighted in Red.
Figure 2: The evolution of “hallucination” papers for Large Foundation Models (LFMs) from March 2023 to September 2023.
parameters often experience significant hallucination issues compared to their larger counterparts (Elaraby et al., 2023). This work focuses on evaluating and mitigating hallucinations in BLOOM 7B, which represents weaker open-source LLMs used in research and commercial applications. They introduce HALOCHECK, a lightweight knowledge-free framework designed to assess the extent of hallucinations in LLMs. Additionally, it explores methods like knowledge injection and teacher-student approaches to reduce hallucination problems in low-parameter LLMs.
Moreover, the risks associated with LLMs can be mitigated by drawing parallels with web systems (Huang and Chang, 2023). That work highlights the absence of a critical element, "citation," in LLMs, which could improve content transparency and verifiability and address intellectual property and ethical concerns.
**Hallucination mitigation using prompting techniques:** "Dehallucinating" refers to reducing the generation of inaccurate or hallucinated information by LLMs. Dehallucinating LLMs using formal methods guided by iterative prompting is presented in (Jha et al., 2023). They employ formal methods to guide the generation process through iterative prompts, aiming to improve the accuracy and reliability of LLM outputs. This method is designed to mitigate the issues of hallucination and enhance the trustworthiness of LLM-generated content.
### Multilingual LLMs
Large-scale multilingual machine translation systems have shown impressive capabilities in directly translating between numerous languages, making them attractive for real-world applications. However, these models can generate hallucinated translations, which pose trust and safety issues when deployed. Existing research on hallucinations has mainly focused on small bilingual models for high-resource languages, leaving a gap in understanding hallucinations in massively multilingual models across diverse translation scenarios.
To address this gap, Pfeiffer et al. (2023) conducted a comprehensive analysis of both the M2M family of conventional neural machine translation models and ChatGPT, a versatile LLM that can be prompted for translation. The investigation covers a wide range of conditions, including over 100 translation directions, various resource levels, and languages beyond English-centric pairs.
### Domain-specific LLMs
Hallucinations in mission-critical areas such as medicine, banking, finance, law, and clinical settings refer to instances where false or inaccurate information is generated or perceived, potentially leading to serious consequences. In these sectors, reliability and accuracy are paramount, and any form of hallucination, whether in data, analysis, or decision-making, can have significant and detrimental effects on outcomes and operations. Consequently, robust measures and systems are essential to minimize and prevent hallucinations in these high-stakes domains.
**Medicine:** Hallucinations in LLMs are especially problematic in the medical field, where generating plausible yet inaccurate information can be detrimental. To tackle this problem, (Umapathi et al., 2023) introduces a new benchmark and dataset called Med-HALT (Medical Domain Hallucination Test). It is specifically designed to evaluate and mitigate hallucinations in LLMs. It comprises a diverse multinational dataset sourced from medical examinations across different countries and includes innovative testing methods. Med-HALT consists of two categories of tests, reasoning and memory-based hallucination tests, aimed at assessing LLMs' problem-solving and information retrieval capabilities in medical contexts.
**Law:** ChatLaw (Cui et al., 2023) is an open-source LLM specialized for the legal domain. To ensure high-quality data, the authors created a meticulously designed legal-domain fine-tuning dataset. To address the issue of model hallucinations during legal data screening, they propose a method that combines vector database retrieval with keyword retrieval. This approach effectively reduces inaccuracies that may arise when solely relying on vector database retrieval for reference data retrieval in legal contexts.
## 3 Hallucination in Large Image Models
Contrastive learning models, employing a Siamese structure (Wu et al., 2023), have displayed impressive performance in self-supervised learning. Their success hinges on two crucial conditions: the presence of a sufficient number of positive pairs and the existence of ample variations among them. Without meeting these conditions, these frameworks may lack meaningful semantic distinctions and become susceptible to overfitting. To tackle these challenges, the authors introduce the Hallucinator, which efficiently generates additional positive samples to enhance contrast. The Hallucinator is differentiable and operates in the feature space, making it amenable to direct optimization within the pre-training task and incurring minimal computational overhead.
Efforts to enhance LVLMs for complex multimodal tasks, inspired by LLMs, face a significant challenge: object hallucination, where LVLMs generate inconsistent objects in descriptions. This study [11] systematically investigates object hallucination in LVLMs and finds that it is a common issue. Objects that occur frequently, or that co-occur with other objects, in the visual instruction data are especially prone to being hallucinated. Existing evaluation methods are also affected by input instructions and LVLM generation styles. To address this, the study introduces an improved evaluation method called POPE, providing a more stable and flexible assessment of object hallucination in LVLMs.
Instruction-tuned Large Vision Language Models (LVLMs) have made significant progress in handling various multimodal tasks, including Visual Question Answering (VQA). However, generating detailed and visually accurate responses remains a challenge for these models. Even state-of-the-art LVLMs like InstructBLIP exhibit a high rate (30 percent) of hallucinatory text, consisting of non-existent objects, inaccurate descriptions, and erroneous relationships. To tackle this issue, the study [14] introduces MHalDetect, a Multimodal Hallucination Detection Dataset designed for training and evaluating models aimed at detecting and preventing hallucinations. MHalDetect contains 16,000 finely detailed annotations on VQA examples, making it the first comprehensive dataset for detecting hallucinations in detailed image descriptions.
## 4 Hallucination in Large Video Models
Hallucinations can occur when the model makes incorrect or imaginative assumptions about the video frames, leading to the creation of artificial or erroneous visual information (Fig. 5).
The challenge of understanding scene affordances is tackled by introducing a method for inserting people into scenes in a lifelike manner [13]. Using an image of a scene with a marked area and an image of a person, the model seamlessly integrates the person into the
Figure 4: Instances of object hallucination within LVLMs [11]. Ground-truth objects in annotations are indicated in **bold**, while red objects represent hallucinated objects by LVLMs. The left case occurs in the conventional instruction-based evaluation approach, while the right cases occur in three variations of POPE.
Figure 5: A video featuring three captions generated by various captioning models [11], with factual errors highlighted in red italics.
scene while considering the scene's characteristics. The model is capable of deducing realistic poses based on the scene context, adjusting the person's pose accordingly, and ensuring a visually pleasing composition. The self-supervised training enables the model to generate a variety of plausible poses while respecting the scene's context. Additionally, the model can also generate lifelike people and scenes on its own, allowing for interactive editing.
VideoChat [14] is a comprehensive system for understanding videos with a chat-oriented approach. VideoChat combines foundational video models with LLMs using an adaptable neural interface, showcasing exceptional abilities in understanding space, time, event localization, and inferring cause-and-effect relationships. To fine-tune this system effectively, they introduced a dataset specifically designed for video-based instruction, comprising thousands of videos paired with detailed descriptions and conversations. This dataset places emphasis on skills like spatiotemporal reasoning and causal relationships, making it a valuable resource for training chat-oriented video understanding systems.
Recent advances in video inpainting have been notable [21], particularly in cases where explicit guidance like optical flow can help propagate missing pixels across frames. However, challenges arise when cross-frame information is lacking. Instead of borrowing pixels from other frames, this work addresses the reverse problem and introduces a dual-modality-compatible inpainting framework called the Deficiency-aware Masked Transformer (DMT). Pretraining an image inpainting model to serve as a prior for training the video model improves the handling of situations where information is deficient.
Video captioning aims to describe video events using natural language, but it often introduces factual errors that degrade text quality. While factuality consistency has been studied extensively in text-to-text tasks, it received less attention in vision-based text generation. In this research [15], the authors conducted a thorough human evaluation of factuality in video captioning, revealing that 57.0% of model-generated sentences contain factual errors. Existing evaluation metrics, mainly based on n-gram matching, do not align well with human assessments. To address this issue, they introduced a model-based factuality metric called FactVC, which outperforms previous metrics in assessing factuality in video captioning.
## 5 Hallucination in Large Audio Models
Automatic music captioning, which generates text descriptions for music tracks, has the potential to enhance the organization of vast musical data. However, researchers encounter challenges due to the limited size and expensive collection process of existing music-language datasets. To address this scarcity, [16] used LLMs to generate descriptions from extensive tag datasets. They created a dataset known as LP-MusicCaps, comprising around 2.2 million captions paired with 0.5 million audio clips. They also conducted a comprehensive evaluation of this large-scale music captioning dataset using various quantitative natural language processing metrics and human assessment. They trained a transformer-based music captioning model on this dataset and evaluated its performance in zero-shot and transfer-learning scenarios.
Ideally, the video should enhance the audio. In [14], an advanced language model is used for data augmentation without human labeling, and an audio encoding model is utilized to efficiently adapt a pre-trained text-to-image generation model for text-to-audio generation.
## 6 Hallucination is _not_ always harmful: A different perspective
Suggesting an alternative viewpoint, [23] discusses how hallucinating models could serve as "collaborative creative partners," offering outputs that may not be entirely grounded in fact but still provide valuable threads to explore. Leveraging hallucination creatively can lead to results or novel combinations of ideas that might not readily occur to most individuals.
"Hallucinations" become problematic when the statements generated are factually inaccurate or contravene universal human, societal, or particular cultural norms. This is especially critical in situations where an individual relies on the LLM to provide expert knowledge. However, in the context of creative or artistic endeavors, the capacity to generate unforeseen outcomes can be quite advantageous. Unexpected responses to queries can surprise humans and stimulate the discovery of novel idea connections. |
2309.13198 | Associative memory by virtual oscillator network based on single
spin-torque oscillator | A coupled oscillator network may be able to perform an energy-efficient
associative memory operation. However, its realization has been difficult
because inhomogeneities unavoidably arise among the oscillators during
fabrication and lead to an unreliable operation. This issue could be resolved
if the oscillator network were able to be formed from a single oscillator.
Here, we performed numerical simulations and theoretical analyses on an
associative memory operation that uses a virtual oscillator network based on a
spin-torque oscillator. The virtual network combines the concept of coupled
oscillators with that of feedforward neural networks. Numerical experiments
demonstrate successful associations of $60$-pixel patterns with various
memorized patterns. Moreover, the origin of the associative memory is shown to
be forced synchronization driven by feedforward input, where phase differences
among oscillators are fixed and correspond to the colors of the pixels in the
pattern. | Yusuke Imai, Tomohiro Taniguchi | 2023-09-22T22:23:58Z | http://arxiv.org/abs/2309.13198v3 | # Associative memory by virtual oscillator network based on single spin-torque oscillator
###### Abstract
A coupled oscillator network may be able to perform an energy-efficient associative memory operation. However, its realization has been difficult because inhomogeneities unavoidably arise among the oscillators during fabrication and lead to an unreliable operation. This issue could be resolved if the oscillator network were able to be formed from a single oscillator. Here, we performed numerical simulations and theoretical analyses on an associative memory operation that uses a virtual oscillator network based on a spin-torque oscillator. The virtual network combines the concept of coupled oscillators with that of feedforward neural networks. Numerical experiments demonstrate successful associations of 60-pixel patterns with various memorized patterns. Moreover, the origin of the associative memory is shown to be forced synchronization driven by feedforward input, where phase differences among oscillators are fixed and correspond to the colors of the pixels in the pattern.
The human brain has a sophisticated function called associative memory [1], whereby it can remember a pattern when shown a portion of that pattern. This function has been modeled in various ways with the goal of achieving a better understanding of brain activity and realizing energy-efficient bio-inspired computing. Since the development of an autocorrelation model in the 1970s [2, 3, 4], several theoretical models, such as the Hopfield model [5], have been developed that draw their inspiration from the characteristics of neural activity [6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. These models have also been implemented in experimental devices. For example, the associative memory operation was recently performed in a spintronic memory consisting of a nanometer-scale ferromagnetic multilayer [16]. In addition to these efforts embodying neuronal dynamics, it has been proposed that synchronized phenomena in coupled oscillator networks can be used to perform the associative memory operation [17, 18, 19, 20, 21]. For example, a detailed analysis was conducted on an _LC_-circuit oscillator network performing the operation [21]. A network of spintronic oscillators, called spin-torque oscillators (STOs), has also been shown to perform an associative memory operation [22].
There are two major issues with using an oscillator network for the associative memory operation. One is unstable operation due to inhomogeneity in the oscillators' parameters. For example, variations in frequency among the oscillators are unavoidable in experimental realizations; they prevent synchronization between the oscillators and decrease the accuracy of the associative memory [21]. The other issue is that the required number of oscillators grows with the amount of input data. There are numerous challenges in fabricating a large number of oscillators and getting them to interact with each other. These issues might be resolved if we can construct an oscillator network virtually by using a single physical oscillator [23]. Such a network would have no inhomogeneities in its parameters, as only one oscillator would have to be fabricated. However, there are questions on how such a network could be realized and how it could show synchronization phenomena.
In this work, we demonstrate an associative memory operation by a virtual oscillator network through numerical simulations and theoretical analyses. First, we provide a detailed description of the virtual oscillator network consisting of a single physical oscillator. In particular, we discuss the principles involved, i.e., those of the coupled oscillator networks and feedforward neural networks. Next, we show that a virtual oscillator network consisting of a single STO can recognize several different 60-pixel patterns by numerically simulating the motion of the STO. We reveal that the feedforward input in the virtual network forces the virtual oscillators to synchronize and that this phenomenon results in the associative memory operation.
## Results
### Associative memory operation of this study
The associative memory operation studied here is to associate a pattern, called the pattern to be recognized, with a pattern in a stored set of patterns, called memorized patterns. For example, suppose that the three patterns, "0", "1", and "2", shown in Fig. 1(a) are memorized, and the one shown in Fig. 1(b) is the pattern to be recognized: we can see that the pattern to be recognized is similar to the memorized pattern "1". Throughout this paper, both the memorized patterns and the patterns to be recognized are \(10\,(\text{rows})\times 6\,(\text{columns})=60\)-pixel patterns.
In the following subsections, we describe the concept of our virtual oscillator network after briefly reviewing a conventional oscillator network for comparison. Then, we demonstrate through numerical simulations that the virtual oscillator network can perform the associative memory operation.
### Associative memory operation by conventional oscillator network
The associative memory operation by a conventional coupled oscillator network consists of two steps [21]. The first step is to give a correspondence between the phases of the oscillators and the colors of the pattern to be recognized. We prepare \(N\) oscillators corresponding to the pixels of the pattern to be recognized, where \(N\) is the number of oscillators (pixels). We introduce phases \(\psi_{i}\) (\(i=1,\cdots,N\)) and phase differences \(\Delta\psi_{i}=\psi_{i}-\psi_{1}\). The color of the \(i\)th pixel is determined by \(\cos\Delta\psi_{i}\), which is white (black) when \(\Delta\psi_{i}=0\) (\(\pi\)). According to this definition, the color of the first pixel is always white (see also the Methods for the definitions of color). Initially, there are no interactions between the oscillators. Thus, their phases are arbitrary, and the colors in the pattern are random, as schematically shown on the left of Fig. 2(a). When interactions between the oscillators are introduced and the interaction strengths are appropriately determined by the Hebbian rule, all the phase differences become 0 or \(\pi\) in correspondence with the white and black pixels of the pattern to be recognized, as shown in the middle of Fig. 2(a) (see also Methods for model of the conventional oscillator network). Here, the Hebbian rule means that the interaction strength between the \(j\)th and \(i\)th oscillator is proportional to the weight,
\[w_{ij}^{(1)}=\xi_{i}^{\rm R}\xi_{j}^{\rm R}, \tag{1}\]
where \(\xi_{i}^{\rm R}=+(-)1\) when the color of the pattern to be recognized at the \(i\)th pixel is white (black). Thus, \(w_{ij}^{(1)}=+(-)1\) when the colors of the \(i\)th and \(j\)th pixels are the same (opposite).
The second step is to replace the weights by the following ones, which can be regarded as an average of the weights among the memorized patterns,
\[w_{ij}^{(2)}=\frac{1}{N_{\rm m}}\sum_{m=1}^{N_{\rm m}}\xi_{i}^{m}\xi_{j}^{m}, \tag{2}\]
where \(N_{\rm m}\) is the number of memorized patterns. The symbol \(m=1,2,\cdots,N_{\rm m}\) is used to distinguish the memorized patterns. For example, the memorized patterns "0", "1", and "2" in Fig. 2(a) are labelled \(m=1\), \(2\), and \(3\). The parameter \(\xi_{i}^{m}\) is \(+(-)1\) when the color of the \(i\)th pixel in the \(m\)th memorized pattern is white (black). Then, the oscillator phases change to those of the memorized pattern most resembling the pattern to be recognized, and the association is achieved, as shown in the right in Fig. 2(a).
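The two weight matrices and the phase-to-color map can be written compactly; in the sketch below, `xi_R` is the length-\(N\) \(\pm 1\) vector of the pattern to be recognized and `xi_mem` is the \((N_{\rm m},N)\) array of memorized patterns (the array names are our choices):

```python
import numpy as np

def hebbian_weights(xi_R, xi_mem):
    """Weights of Eqs. (1) and (2)."""
    w1 = np.outer(xi_R, xi_R)                    # w_ij^(1) = xi_i^R xi_j^R
    w2 = xi_mem.T @ xi_mem / xi_mem.shape[0]     # average over memorized patterns
    return w1, w2

def phases_to_pattern(psi):
    """Pixel colors cos(psi_i - psi_1): +1 is white, -1 is black."""
    return np.cos(psi - psi[0])
```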
### Description of associative memory operation by virtual oscillator network
The associative memory operation by a virtual oscillator network consists of three steps.
First, we measure an oscillation of a single oscillator and divide it into \(N\) parts, as schematically shown on the first line of Fig. 2(b). The \(i\)th part of the measured data is regarded as the output from the \(i\)th oscillator in a virtual network. In this step, the
Figure 1: Examples of memorized patterns and a pattern to be recognized. (a) Three (\(N_{\rm m}=3\)) memorized patterns, “0”, “1”, and “2”. (b) The pattern to be recognized resembles memorized pattern “1”. The oscillator network tries to associate the pattern to be recognized with the pattern “1”. In an associative memory operation performed by a system consisting of \(N\) oscillators, the color of the \(i\)th (\(i=1,2,\cdots,N\)) pixel is determined by the phase \(\psi_{i}\) of the corresponding oscillator. The color is white (black) when the phase difference, \(\Delta\psi_{i}=\psi_{i}-\psi_{1}\), is 0 (\(\pi\)). The color is on a gray scale when the phase difference is \(0<\Delta\psi_{i}<\pi\).
Figure 2: Schematic illustration of conventional and virtual oscillator networks. (a) In the conventional oscillator network, the oscillators are initially uncoupled (left). Therefore, the phase of each oscillator is arbitrary. When the oscillators interact with appropriate weights [\(w_{ij}^{(1)}\)], the phases saturate to values corresponding to the pattern to be recognized (middle). When the weight changes [\(w_{ij}^{(2)}\)], the phases change so that the corresponding pattern resembles one of the memorized patterns (right). (b) In a virtual oscillator network, we drive an oscillation of a single oscillator and divide its output into \(N\) parts. The \(i\)th part is regarded as an output from the \(i\)th virtual oscillator. First [top of (b)], we measure the \(N\) outputs. The corresponding pattern in this step is arbitrary because there is no correlation among the oscillators. Second [middle of (b)], an external force is added to the oscillator. This force is a linear combination of the outputs in the first step with appropriate weights [\(w_{ij}^{(1)}\)]. The phase of each part eventually saturates to a value corresponding to the pixel color in the pattern to be recognized. Third [bottom of (b)], the second step is repeated while the force is a linear combination of the outputs in the second step with weights \(w_{ij}^{(2)}\). Eventually, the phases saturate to the values corresponding to the memorized pattern most resembling the pattern to be recognized.
phase of each part is arbitrary, and therefore, the pattern arising from it is random. The measured data should be stored in a computer in order for it to be used in the next step.
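The bookkeeping of this first step, storing the measured trace and regarding its \(N\) equal segments as the virtual oscillators, amounts to a single reshape; a sketch in which the segment length and names are our choices:

```python
import numpy as np

def split_into_parts(trace, N):
    """Divide a single measured oscillation trace into N equal parts; row j is
    the output y_j of the j-th virtual oscillator (top of Fig. 2(b))."""
    T = len(trace) // N
    return np.asarray(trace[:N * T]).reshape(N, T)
```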
Second, we excite another oscillation and divide the measured data into \(N\) parts again. At the initial time of each part, the phase, as well as the pattern determined from it, is arbitrary, as shown in the middle of Fig. 2(b). This time, however, we apply an external force to the oscillator that is proportional to a linear combination of the measured data from the first step with the weights of Eq. (1). In this study, the external force is a torque excited by an external magnetic field; the field applied during the \(i\)th part of the oscillation is given by
\[H_{i}^{(1)}=\mathcal{H}\sum_{j=1}^{N}w_{ij}^{(1)}y_{j}^{(1)}, \tag{3}\]
where \(\mathcal{H}\) denotes the amplitude and \(y_{j}^{(1)}\) is the output from the \(j\)th oscillator measured in the first step [see also Methods for the detailed definition of \(y_{j}^{(1)}\) in the numerical simulations]. Therefore, Eq. (3) is an oscillating function with the frequency of the oscillator. Because of the application of the magnetic field, the phase in each part eventually saturates to a certain value, and the pattern to be recognized is output, as shown in the middle of Fig. 2(b). Note that the output signal of this process should be stored in a computer.
Third, we perform a measurement similar to the one in the second step, but the magnetic field applied during the \(i\)th part is replaced by
\[H_{i}^{(2)}=\mathcal{H}^{\prime}\sum_{j=1}^{N}w_{ij}^{(2)}y_{j}^{(2)}, \tag{4}\]
where \(\mathcal{H}^{\prime}\) denotes the amplitude, while \(y_{j}^{(2)}\) is the output from the \(j\)th oscillator measured at the second step (see also Methods pertaining to the numerical simulations). The weights \(w_{ij}^{(2)}\) are given by Eq. (2). The phase at the end of each part saturates to a value corresponding to the memorized pattern most resembling the pattern to be recognized, as shown in the bottom of Fig. 2(b); i.e., the associative memory operation is completed.
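As a rough illustration of how these two inputs are assembled, the short Python sketch below builds \(H_{i}^{(1)}\) and \(H_{i}^{(2)}\) from the stored output vectors. The Hebbian-type weights used here, \(w_{ij}^{(1)}=\xi_{i}^{\rm R}\xi_{j}^{\rm R}\) and \(w_{ij}^{(2)}=N_{\rm m}^{-1}\sum_{m}\xi_{i}^{m}\xi_{j}^{m}\), are assumptions made only for this sketch (the actual weights are defined by Eqs. (1) and (2) earlier in the text), and the amplitudes are passed in as plain numbers.

```python
import numpy as np

def field_step2(xi_R, y1, H_amp):
    """Eq. (3): H_i^(1) = H_amp * sum_j w_ij^(1) y_j^(1), with assumed w_ij^(1) = xi_R_i xi_R_j.

    xi_R : (N,) array of +/-1 encoding the pattern to be recognized.
    y1   : (N,) array of the stored first-step outputs evaluated at the current time.
    """
    w1 = np.outer(xi_R, xi_R)
    return H_amp * (w1 @ y1)

def field_step3(xi_mem, y2, H_amp_prime):
    """Eq. (4): H_i^(2) = H'_amp * sum_j w_ij^(2) y_j^(2), with assumed
    w_ij^(2) = (1/N_m) sum_m xi_i^m xi_j^m (Hebbian rule over the memorized patterns).

    xi_mem : (N_m, N) array of +/-1 memorized patterns.
    y2     : (N,) array of the stored second-step outputs evaluated at the current time.
    """
    N_m = xi_mem.shape[0]
    w2 = (xi_mem.T @ xi_mem) / N_m
    return H_amp_prime * (w2 @ y2)
```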
There are several differences between the conventional and virtual oscillator networks (see also Methods for the models). For example, the oscillators in the conventional oscillator network interact instantaneously, and their phase differences saturate to values corresponding to pixel colors as a result of mutual synchronization. On the other hand, the oscillators in the virtual oscillator network do not interact with each other instantaneously. As can be seen in Eqs. (3) and (4), the oscillator outputs from the previous steps are used in the magnetic field in the current step. From this perspective, the virtual oscillator network is similar to a feedforward neural network because the information on the oscillator phases in one step is sent to the oscillation in the next step. At the same time, we should note that the weights in the virtual oscillator network are fixed, as in the case of the conventional oscillator network. This is in contrast with a feedforward neural network used in deep learning, in which the weights are updated by backpropagation. Thus, the virtual oscillator network can be regarded as a hybrid combination of a coupled oscillator network and a feedforward neural network. In the discussion below, we will reveal that the feedforward inputs cause forced synchronization among the divided parts and result in the associative memory operation. Before that, however, we must demonstrate that this virtual oscillator network can actually perform the associative memory operation.
### Equation of motion of oscillator
As an oscillator in the virtual oscillator network, we use a vortex STO, which has various advantages for practical applications and has been frequently used in spintronics experiments on bio-inspired computing [24, 25, 26, 27]. An STO consists of a ferromagnetic/nonmagnetic multilayer on the nanometer scale, as schematically shown in Fig. 3(a). A vortex of magnetic moments appears when the diameter and thickness of a cylinder-shaped ferromagnet are on the order of 100 and 1 nm, respectively. When an electric current and/or magnetic field is applied to the STO, the magnetic moments show precessions around their equilibrium direction. According to a recent experiment on chaos excitation in an STO [28], we assume that the force added to the virtual oscillator network corresponds to a torque excited by a magnetic field, as mentioned above. It has been shown both experimentally and theoretically that the dynamics in a vortex STO are well described by the Thiele equation [29, 30, 31, 32, 33, 34, 35, 36], which is the equation of motion for the center of the vortex structure, called the vortex core (see also Methods for the Thiele equation):
\[-G\mathbf{e}_{z}\times\mathbf{\dot{X}}-|D|\left(1+\xi s^{2}\right)\mathbf{\dot{X}}-\kappa\left(1+\zeta s^{2}\right)\mathbf{X}+a_{J}Jp_{z}\mathbf{e}_{z}\times\mathbf{X}+ca_{J}JR_{0}p_{x}\mathbf{e}_{x}+c\mu^{*}\mathbf{e}_{z}\times\mathbf{H}=\mathbf{0}, \tag{5}\]
where \(\mathbf{X}=(X,Y,0)\) represents the position of the vortex core in the \(xy\) plane. While the physical meanings and the values of many parameters are explained in Methods, two quantities should be explained here. The first is the current density \(J\), which
causes a limit-cycle oscillation of the vortex core. The other is the external magnetic field \(\mathbf{H}\), which is used to excite a torque. It is useful to notice that Eq. (5) can be approximated as (see also Methods for the analytical solution of the Thiele equation)
\[\dot{s}=as-bs^{3}-\frac{c\mu^{*}}{GR}H_{y}\cos\psi, \tag{6}\]
\[\dot{\psi}=\frac{\kappa}{G}\left(1+\zeta s^{2}\right)+\frac{c\mu^{*}}{GRs}H_{y }\sin\psi, \tag{7}\]
where \(s=|\mathbf{X}|/R\) (\(0\leq s\leq 1\)) is the distance of the vortex core from the center of the ferromagnet normalized by the disk radius \(R\), while \(\psi=\tan^{-1}(Y/X)\) is the phase. Here, \(a=(|D|\kappa/G^{2})[(J/J_{\mathrm{c}})-1]\) and \(b=(|D|\kappa/G^{2})(\xi+\zeta)\), where \(J_{\mathrm{c}}=|D|\kappa/(Ga_{J}p_{z})\). The magnetic field \(\mathbf{H}\) is assumed to have only a \(y\) component \(H_{y}\). Note that Eqs. (6) and (7) are similar to the equation of motion of the Stuart-Landau oscillator [37]. Therefore, the vortex core shows a limit-cycle oscillation around the disk center in the \(xy\) plane with an oscillation amplitude \(s_{0}=\sqrt{a/b}\) when \(J\) exceeds a threshold value \(J_{\mathrm{c}}\), while the terms related to \(H_{y}\) act as a perturbation. The connection to such a fundamental nonlinear oscillator model indicates that our results are also valid for various oscillators in nature and engineering. Figure 3(b) shows an example of the unperturbed vortex dynamics (without the magnetic field), showing an approximately circular oscillation of the vortex core around the disk center. The phase difference of the oscillation was used to define the colors in the patterns in the associative memory operation. Readers should note that the plots in Fig. 3(b), as well as the results of the numerical simulations shown below, were obtained by solving Eq. (5), while the approximate equations, Eqs. (6) and (7), are used in the model analyses described below.
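A minimal numerical sketch of the free-running reduced dynamics, Eqs. (6) and (7) with \(H_{y}=0\), is given below. The dimensionless values of \(a\), \(b\), \(\kappa/G\), and \(\zeta\) are illustrative placeholders rather than the material parameters listed in Methods, and a simple Euler integrator is used.

```python
import numpy as np

# Free-running reduced dynamics (Eqs. (6)-(7) with H_y = 0):
#   ds/dt   = a s - b s^3
#   dpsi/dt = (kappa/G) (1 + zeta s^2)
# Placeholder (assumed) parameter values; not the material parameters of Methods.
a, b, kappa_over_G, zeta = 0.05, 0.14, 1.3, 0.1

def integrate_core(s_init=0.05, psi_init=0.0, dt=1e-2, n_steps=50_000):
    s, psi = s_init, psi_init
    for _ in range(n_steps):
        s += dt * (a * s - b * s**3)
        psi += dt * kappa_over_G * (1.0 + zeta * s**2)
    return s, psi

s_final, _ = integrate_core()
print(s_final, np.sqrt(a / b))   # s relaxes toward the limit-cycle amplitude s0 = sqrt(a/b)
```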
### Demonstration of associative memory
Figure 3(c) shows the time evolution of the phase difference, \(\Delta\psi_{i}\), obtained by solving Eq. (5) with Eq. (3) substituting for \(H_{y}\). Note that this solution corresponds to the second step in Fig. 2(b). The phase differences saturate to \(0\) or \(\pi\) within a few hundred nanoseconds. Snapshots of patterns corresponding to this time evolution of the phases are shown in Fig. 3(d). The patterns eventually settle to the one to be recognized. Next, we solve Eq. (5) with Eq. (4) substituting for \(H_{y}\), which corresponds to the third step in Fig. 2(b). Here, Eq. (2) in Eq. (4) is for the three memorized patterns in Fig. 1(a). Figures 3(e) and 3(f) show the time evolution of the phase differences and snapshots of the corresponding patterns. Recall that the information on the phases corresponding to the colors of the pixels in the pattern to be recognized is included in the magnetic field in Eq. (4) through \(y_{j}^{(2)}\). Consequently, even though the initial pattern is random, the oscillator phases finally saturate to values corresponding to one of the memorized patterns [Fig. 3(f)].
The associative memory operation becomes more difficult when there are similar memorized patterns. To clarify this point, let us examine what happens when the number of memorized patterns is increased from the three in Fig. 1(a) to the ten shown in Fig. 4(a). The added patterns do not affect the second step in Fig. 2(b). For the association corresponding to the third step in Fig. 2(b), the magnetic field, defined by Eq. (4), is changed by these new memorized patterns. As a result, the final pattern output resembles none of the memorized ones [Fig. 4(b)].
This failure of the associative memory operation is due to two reasons. The first is that the pattern "7" is similar to the pattern "1", which should be the one associated. When "7" is excluded from the memorized patterns, the association succeeds, as shown in Fig. 4(c). The second reason is that the number of memorized patterns is large. As shown in Fig. 4(d), the association succeeds when the memorized patterns include only "1" and "7". Therefore, we conclude that an association may fail when the memorized patterns include similar patterns and the number of memorized patterns is large.
To quantify the similarity between patterns \(A\) and \(B\), we introduce the degree of overlap:
\[\mathcal{O}(\boldsymbol{\xi}^{A},\boldsymbol{\xi}^{B})\equiv\frac{1}{N}\bigg{|} \sum_{i=1}^{N}\xi_{i}^{A}\xi_{i}^{B}\bigg{|}, \tag{8}\]
where \(\boldsymbol{\xi}^{A}=(\xi_{1}^{A},\cdots,\xi_{N}^{A})\) is defined from the color of the \(i\)th pixel of pattern \(A\) [\(\xi_{i}^{A}=+(-)1\) when the \(i\)th pixel is white (black)]. The overlap becomes \(1\) when the two patterns are completely identical or their black and white colors are all exchanged (see also Methods for the definitions of color and overlap). For example, in the case shown in Figs. 1 and 3, the degree of overlap between the pattern to be recognized and the memorized pattern "0" is \(\mathcal{O}(\boldsymbol{\xi}^{R},\boldsymbol{\xi}^{1})=18/60=0.30\). It is \(\mathcal{O}(\boldsymbol{\xi}^{R},\boldsymbol{\xi}^{2})=44/60\simeq 0.73\) for pattern "1", and \(\mathcal{O}(\boldsymbol{\xi}^{R},\boldsymbol{\xi}^{3})=6/60=0.10\) for pattern "2" (the memorized patterns are labelled as \(m=1,2,3,\cdots\) while the examples of memorized patterns in this work are "0", "1", "2", etc; thus, the label \(m\) and the corresponding number are off by one). Since the degree of overlap between the pattern to be recognized and "1" is large in the examples in Figs. 1 and 3, pattern "1" should be associated in this case. On the other hand, in the example shown in Fig. 4,
Figure 3: Description of STO and demonstration of associative memory by a virtual oscillator network. (a) Schematic illustration of vortex spin torque oscillator and (b) vortex-core dynamics driven by electric current. The STO has a cylindrical shape, and the \(z\) axis is orthogonal to the circular plane. Magnetic moments, shown as colored arrows in the top ferromagnet, form a circular structure. The black dot around which the moments turn is the vortex core. Electric current is injected into the STO; positive current flows from bottom to top in the figure. When the electric current density \(J\) exceeds a threshold value, the vortex core oscillates around the disk center. The output signals from the STO during the first (second) step in Fig. 2(b) are stored, and their linear combination with weights \(w_{ij}^{(1)}\) [\(w_{ij}^{(2)}\)] defined from the pattern to be recognized (memorized patterns) is used as the magnetic field during the second (third) step. For simplicity, the dynamics in the absence of the magnetic field is shown. The components of the vortex-core’s position, \(X/R\) and \(Y/R\), oscillate around the disk center, and the trajectory is approximately a circle. The distance of the vortex-core’s position from the disk center, \(s\), is approximately a constant value, \(s_{0}\). The phase measured from the \(x\) axis is denoted as \(\psi\). (c) Time evolutions of the 59 phase differences, \(\Delta\psi_{i}\) (\(i=2,3,\cdots,60\)), and (d) snapshots of generating a pattern to be recognized on 60 pixels. (e) Time evolutions of the phase difference and (f) snapshots of the corresponding pattern for association from memorized patterns.
Figure 4: Problem of associative memory operation when the similarity between the memorized patterns is high and the number of patterns is large. (a) Ten (\(N_{\text{m}}=10\)) memorized patterns, "0", "1", ..., "9". (b) Time evolution of the phase difference during the association and snapshots of the corresponding pattern. In this case, the memorized patterns include both "1" and "7". Because of their similarity, the pattern does not finally saturate to "1". (c) When "7" is removed from the memorized patterns (\(N_{\text{m}}=9\)), the association is successful, even though there are nine remaining memorized patterns. (d) The association is successful when the memorized patterns include only "1" and "7".
the overlap between the pattern to be recognized [Fig. 1(b)] and "7" is also relatively large, i.e., \(\mathcal{O}(\boldsymbol{\xi}^{R},\boldsymbol{\xi}^{8})=32/60\simeq 0.53\). In addition, the overlap between the memorized patterns "1" and "7", \(\mathcal{O}(\boldsymbol{\xi}^{2},\boldsymbol{\xi}^{8})=28/60\simeq 0.47\), is also relatively large compared with those between the other patterns; for example, the overlap between "1" and "8" is \(\mathcal{O}(\boldsymbol{\xi}^{2},\boldsymbol{\xi}^{9})=2/60\simeq 0.03\) (see also Supplementary Information, where the overlaps of the ten memorized patterns are summarized). Accordingly, when the memorized patterns include "1" and "7", the virtual oscillator network cannot associate a correct pattern, and the final pattern produced corresponds to none of the memorized ones. Similarly, when the number of memorized patterns is large, there might be patterns having large overlaps and the association fails.
In summary, we have shown that the virtual oscillator network based on the algorithm in Fig. 2(b) can perform the associative memory operation. Its accuracy, however, is low when the memorized patterns include some patterns having large overlaps and there is a large number of memorized patterns. Note that the maximum number of patterns that can be memorized by a neural network is approximately \(N/(2\log N)\) [8]. It would be of interest if such a formula could be derived for virtual oscillator networks in the future.
We examined the associative memory operation for various cases, i.e., for different patterns to be recognized, and studied the rate of the accurate association; see Supplementary Information.
## Discussion
Here we discuss the principles of the associative memory operation analytically by using Eqs. (6) and (7). As mentioned above, the operation consists of three steps, and in each step, the oscillator output is divided into \(N\) parts. In what follows, we denote the phase of the vortex core during the \(i\)th part of the \(k\)th step as \(\psi_{i}^{(k)}\). We also assume that the oscillation amplitude \(s_{0}\) is approximately constant because the current density is fixed. Therefore, the oscillation frequency, \(f=\Omega/(2\pi)=[\kappa/(2\pi G)](1+\zeta s_{0}^{2})\), is also approximately constant (see also Methods for the analytical solution of the Thiele equation).
The phase in the second step obeys,
\[\dot{\psi}_{i}^{(2)}=\Omega+\frac{c\mu^{*}}{GRs_{0}}\mathcal{H}\sum_{\ell=1}^{N}\xi_{i}^{\rm R}\xi_{\ell}^{\rm R}y_{\ell}^{(1)}\sin\psi_{i}^{(2)}. \tag{9}\]
Thus, the phase difference between the \(i\)th and \(j\)th parts obeys,
\[\dot{\psi}_{i}^{(2)}-\dot{\psi}_{j}^{(2)}=\frac{c\mu^{*}}{GRs_{0}}\mathcal{H}\left(\sum_{\ell=1}^{N}\xi_{\ell}^{\rm R}y_{\ell}^{(1)}\right)\left(\xi_{i}^{\rm R}\sin\psi_{i}^{(2)}-\xi_{j}^{\rm R}\sin\psi_{j}^{(2)}\right). \tag{10}\]
The steady state condition on the phase difference leads to
\[\xi_{i}^{\rm R}\sin\psi_{i}^{(2)}-\xi_{j}^{\rm R}\sin\psi_{j}^{(2)}=0. \tag{11}\]
Note that \(\xi_{i}^{\rm R}=+(-)1\) when the color at the \(i\)th pixel of the pattern to be recognized is white (black). Therefore, \(\psi_{i}^{(2)}\) and \(\psi_{j}^{(2)}\) will be in-phase [\(\psi_{i}^{(2)}=\psi_{j}^{(2)}\)] or anti-phase [\(\psi_{i}^{(2)}=\psi_{j}^{(2)}\pm\pi\)] when the colors of the \(i\)th and \(j\)th pixels are the same or opposite, respectively. As a result, the phase differences in the second step saturate to 0 or \(\pi\), corresponding to the white or black pixels in the pattern to be recognized. Note that this synchronization is caused by a feedforward input from the first step, which corresponds to the second term on the right-hand side of Eq. (9). Here, the term \(\sum_{\ell=1}^{N}\xi_{\ell}^{\rm R}y_{\ell}^{(1)}\) in Eq. (9) is the sum of the \(N\) oscillator outputs \(y_{\ell}^{(1)}\) in the first step, multiplied by the factor \(\xi_{\ell}^{\rm R}\) determining the pixel color of the pattern to be recognized, and is common for all \(i\) in Eq. (9). Equation (9) also includes a factor \(\xi_{i}^{\rm R}\), which determines the sign of the input. Given these facts, the feedforward input has only two values, depending on the value of \(\xi_{i}^{\rm R}\). The phase synchronization among the \(N\) parts in the second step is the result of forced synchronization with respect to this feedforward input, and the phase difference has only two values, 0 or \(\pi\), depending on the value of \(\xi_{i}^{\rm R}\). This mechanism is in contrast with that of the previous work [21], where mutual synchronization is the origin of the associative memory operation. Also, the method is different from those of the previous works [38, 39]. In Ref. [38], forced synchronization of the frequency with respect to an external signal was studied, while the input signal in the present work is generated by the oscillator output itself and phase synchronization plays the central role in the associative memory operation. In Ref. [39], a delayed feedback was used to generate the input signal, while the input signal in the present work is generated by multiplying the oscillator outputs by appropriate weights to perform the associative memory operation.
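The forced synchronization described above can be reproduced with a toy model in which each virtual oscillator obeys \(\dot{\psi}_{i}=\Omega+\epsilon\,\xi_{i}\,h(t)\sin\psi_{i}\), where the common oscillating drive \(h(t)\) stands in for the stored sum in Eq. (9). The sketch below is only a caricature of Eq. (9): the drive strength \(\epsilon\) and the use of a pure cosine drive are assumptions, while the frequency is taken from the 4.48-ns period quoted in Methods.

```python
import numpy as np

Omega = 2 * np.pi / 4.48            # rad/ns, oscillation period ~4.48 ns
eps = 0.2                           # 1/ns, assumed drive strength
xi = np.array([+1, +1, -1, -1])     # pixel colors of four virtual oscillators
rng = np.random.default_rng(1)
psi = rng.uniform(0.0, 2.0 * np.pi, xi.size)   # arbitrary initial phases

dt, t = 0.005, 0.0                  # ns
while t < 600.0:
    h = np.cos(Omega * t)           # common feedforward drive (stands in for the stored sum)
    psi += dt * (Omega + eps * xi * h * np.sin(psi))
    t += dt

# Cosine of the phase difference with respect to the first part:
# ~ +1 (in-phase) for the same color, ~ -1 (anti-phase) for the opposite color.
print(np.round(np.cos(psi - psi[0]), 2))
```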
We also note that, when \(y_{\ell}^{(1)}\) is a simple trigonometric function, its linear combination, \(\sum_{\ell=1}^{N}\xi_{\ell}^{\rm R}y_{\ell}^{(1)}\), is also a trigonometric function with the same frequency and a different phase. According to the above discussion, the phase of the term \(\sum_{\ell=1}^{N}\xi_{\ell}^{\rm R}y_{\ell}^{(1)}\) does not play any role to excite forced synchronization among the \(N\) parts. Thus, the term \(\sum_{\ell=1}^{N}\xi_{\ell}^{\rm R}y_{\ell}^{(1)}\) could be replaced by,
for example, \(y_{1}^{(1)}\). In this case, it is unnecessary to measure the other \((N-1)\) outputs \(y_{\ell}^{(1)}\) (\(\ell=2,3,\cdots,N\)) in the first step in Fig. 2(b), although we solved the equation of motion for \(N\) virtual oscillators to clarify the similarities and differences between the second and third steps. When the \((N-1)\) parts in the first step are omitted for simplicity, the power consumption to drive the oscillator in the virtual oscillator network is proportional to \(2N+1\), where \(2N\) comes from the second and third steps in Fig. 2(b). On the other hand, the power consumption in the conventional oscillator network is proportional to \(2N\) because \(N\) oscillators are driven two times, as implied in Fig. 2(a). For a large \(N\), the power consumption of the two oscillator networks is comparable. The time required for the operation increases linearly as \(N\) increases, which is not suitable for practical applications, although the same might be true for a conventional (coupled) oscillator network because the relaxation time of the phase will also depend on the number of oscillators. However, the virtual oscillator network has an advantage from the viewpoint of reliability, as discussed below.
Next, we focus on the third step, where the phase during the \(i\)th part obeys
\[\dot{\psi}_{i}^{(3)}=\Omega+\frac{c\mu^{*}}{GRs_{0}}\mathcal{H}^{\prime}\frac{1}{N_{\rm m}}\sum_{m=1}^{N_{\rm m}}\sum_{\ell=1}^{N}\xi_{i}^{m}\xi_{\ell}^{m}y_{\ell}^{(2)}\sin\psi_{i}^{(3)}. \tag{12}\]
Since the oscillators in the second step are in the synchronized state, the output \(y_{\ell}^{(2)}\) can be expressed as \(y_{\ell}^{(2)}=\xi_{\ell}^{\rm R}\xi_{1}^{\rm R}y_{1}^{(2)}\), where \(y_{1}^{(2)}\) is the output of the first part in the second step. We substitute this relation into Eq. (12) and assume that
\[\sum_{\ell=1}^{N}\xi_{\ell}^{m}\xi_{\ell}^{\rm R}\simeq\delta_{m,\mathcal{A}} \sum_{\ell=1}^{N}\xi_{\ell}^{m}\xi_{\ell}^{\rm R}, \tag{13}\]
where the symbol \(\mathcal{A}\) corresponds to a pattern in the memorized patterns that resembles the pattern to be recognized. The assumption (13) means that only a pattern having a large degree of overlap with the pattern to be recognized contributes to the feedforward input. The other memorized patterns, which are greatly different from the pattern to be recognized, do not contribute to the feedforward input because of their small overlap. When the assumption is satisfied, Eq. (12) becomes
\[\dot{\psi}_{i}^{(3)}=\Omega+\frac{c\mu^{*}}{GRs_{0}}\mathcal{H}^{\prime}\frac{1}{N_{\rm m}}y_{1}^{(2)}\xi_{1}^{\rm R}\left(\sum_{\ell=1}^{N}\xi_{\ell}^{\mathcal{A}}\xi_{\ell}^{\rm R}\right)\xi_{i}^{\mathcal{A}}\sin\psi_{i}^{(3)}. \tag{14}\]
Equation (14) is similar to Eq. (9), and therefore, the steady-state condition of the phase difference between the \(i\)th and \(j\)th parts in the third step is given by
\[\xi_{i}^{\mathcal{A}}\sin\psi_{i}^{(3)}-\xi_{j}^{\mathcal{A}}\sin\psi_{j}^{(3)}=0. \tag{15}\]
Equation (15) means that in-phase or anti-phase synchronization between the \(N\) parts occurs, and the phase differences in the third step saturate to \(0\) or \(\pi\) corresponding to the white or black colors in a memorized pattern most resembling the one to be recognized.
The operation principle is based on Eq. (13). Equation (13) is satisfied if there is only one pattern that has a large degree of overlap with the pattern to be recognized. On the other hand, if there are other patterns having large overlaps with the pattern to be recognized, Eq. (13) is not satisfied. In this case, Eq. (15) is not necessarily satisfied, and the colors in the steady state in the third step might be different from the pattern most resembling the one to be recognized or they might be gray (neither black nor white); see also Supplementary Information.
Our analysis also assumed that the oscillation frequencies of the \(N\) parts are the same. This assumption is a natural one because each part is obtained from a single oscillator. Technically speaking, the oscillation frequency in each part could be varied by changing the magnitude of the electric current. If the oscillation frequencies of the \(i\)th and \(j\)th parts, denoted as \(\Omega_{i}/(2\pi)\) and \(\Omega_{j}/(2\pi)\), are different, the right-hand side of Eq. (10) has an additional term \(\Omega_{i}-\Omega_{j}\). In such a case, the phase difference is not well defined because \(\psi_{i}\) and \(\psi_{j}\) oscillate with different frequencies. Even if we introduce an instantaneous phase by, for example, making a Hilbert transformation, as was done in experiments [40], the phase difference still does not necessarily saturate to \(0\) or \(\pi\). In such a case, the associative memory operation fails. Therefore, there is no reason to change the oscillation frequency in each part. This fact also indicates an advantage of using the virtual oscillator network. In the conventional oscillator network, variations in the oscillation frequency naturally appear because inhomogeneities in the parameters of the oscillators are unavoidable, and such variations lead to the failure of the associative memory operation [21]. The virtual oscillator network does not have such variations and thus would be a more reliable associative memory. A weak point of the present proposal is, on the other hand, that the method requires a computer to store the output signal in each step, which is not preferable for practical applications. We leave this issue for future work.
In conclusion, we described the concept of the associative memory operation by a virtual oscillator network and performed numerical simulations. The operation consists of three steps, where the output of one step is sent to the next step with weights defined by the Hebbian rule. In this sense, the virtual oscillator network can be regarded as a hybrid combination of a coupled oscillator network and a feedforward neural network. The network successfully associated black-and-white patterns with a few memorized patterns. However, it failed to make an association when the number of memorized patterns was large (ten compared to three) and some of the memorized patterns resembled each other. We also developed a theoretical analysis and clarified that the origin of the associative memory operation is forced synchronization driven by feedforward input. Either in-phase or anti-phase synchronization was excited among the oscillators and provides appropriate correspondence between the oscillator phases and the colors in the patterns. The virtual oscillator network is more reliable than a conventional oscillator network, which is affected by unavoidable inhomogeneities among the oscillators.
## Methods
### Definitions of color and overlap
By convention, the first pixel (the pixel in the top left-hand corner of a pattern) is always white. The pattern should be regarded as the same even when all of the black and white pixels are swapped for each other. Mathematically, this means that \(\sum_{i=1}^{N}\xi_{i}^{A}\xi_{i}^{B}=N\) when the patterns \(A\) and \(B\) are completely the same, and \(\sum_{i=1}^{N}\xi_{i}^{A}\xi_{i}^{B}=-N\) when patterns \(A\) and \(B\) represent the same pattern but their black and white colors are completely swapped. According to this definition of the same figure, the maximum number of differing pixels between two patterns is \(N/2\); in this case, the degree of overlap is zero (see also the discussion on noise in Supplementary Information).
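A minimal sketch of the overlap of Eq. (8), with the convention above, is the following; the example values in the comment use the \(N=60\) pixel patterns discussed in the main text.

```python
import numpy as np

def overlap(xi_A, xi_B):
    """Degree of overlap of Eq. (8) for +/-1 patterns of equal length.

    The absolute value makes a pattern and its fully black/white-swapped
    version equivalent (overlap 1), as described above.
    """
    xi_A, xi_B = np.asarray(xi_A), np.asarray(xi_B)
    return abs(int(np.dot(xi_A, xi_B))) / xi_A.size

# Example: for N = 60, an overlap of 44/60 ~ 0.73 corresponds to 52 matching
# (or, equivalently, 52 mismatching) pixels, since (52 - 8)/60 = 44/60.
```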
### Models of conventional and virtual oscillator networks
The conventional oscillator network for the associative memory operation [21] is based on the Kuramoto model [37]. The Kuramoto model describes the oscillator dynamics with a generalized phase, \(\theta\). Moreover, the oscillators interact instantaneously, and the phase of the \(i\)th oscillator obeys
\[\dot{\theta}_{i}=\omega+\mathcal{D}\sum_{j=1}^{N}w_{ij}\sin\left(\theta_{i}-\theta_{j}\right), \tag{16}\]
where \(\omega/(2\pi)\) is the oscillation frequency while \(\mathcal{D}\) is the interaction strength. For simplicity, we will assume that all oscillators share the same values of \(\omega\) and \(\mathcal{D}\). The weight \(w_{ij}\) is given by Eq. (1) or (2) depending on the step of the procedure. In the \(LC\)-circuit model [21], \(\mathcal{D}w_{ij}\) is proportional to the transconductance. The phase difference between the \(i\)th and \(j\)th oscillators obeys
\[\dot{\theta}_{i}-\dot{\theta}_{j}=\mathcal{D}\left[\sum_{\ell=1}^{N}w_{i\ell}\sin\left(\theta_{i}-\theta_{\ell}\right)-\sum_{\ell=1}^{N}w_{j\ell}\sin\left(\theta_{j}-\theta_{\ell}\right)\right]. \tag{17}\]
In a limiting case of only two oscillators (\(N=2\)), the phase difference obeys
\[\dot{\theta}_{1}-\dot{\theta}_{2}=2\mathcal{D}w_{12}\sin\left(\theta_{1}-\theta_{2}\right), \tag{18}\]
and the in-phase (anti-phase) synchronization of \(\theta_{1}\) and \(\theta_{2}\) is a stable fixed point when \(\mathcal{D}w_{12}\) is negative (positive). The phase differences of \(\theta_{i}-\theta_{j}=0,\pi\) are always fixed points even when there are a large number of oscillators (\(N\geq 3\)). Accordingly, the phase differences in the conventional oscillator network saturate to the in-phase or anti-phase state, which thereby enables the associative memory operation.
In the presence of frequency variations, the right-hand side of Eq. (17) has an additional term \(\omega_{i}-\omega_{j}\). In this case, the phase difference is not stabilized, and this instability leads to an inaccurate associative memory operation [21].
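A toy integration of the conventional, instantaneously coupled network of Eq. (16) is sketched below. Identical frequencies, dimensionless units, and Hebbian-type weights \(w_{ij}=\xi_{i}\xi_{j}\) with a negative \(\mathcal{D}\) are assumptions of this sketch; it is not a model of the \(LC\)-circuit implementation of Ref. [21].

```python
import numpy as np

def simulate_conventional(xi, omega=1.0, D=-0.5, dt=0.01, n_steps=20_000, seed=0):
    """Integrate Eq. (16) with assumed weights w_ij = xi_i xi_j.

    Per the pairwise argument above, D*w_ij < 0 favors in-phase locking, so with
    D < 0 same-color oscillators (w_ij = +1) lock in-phase and opposite colors
    lock anti-phase.
    """
    rng = np.random.default_rng(seed)
    w = np.outer(xi, xi).astype(float)
    theta = rng.uniform(0.0, 2.0 * np.pi, xi.size)
    for _ in range(n_steps):
        coupling = (w * np.sin(theta[:, None] - theta[None, :])).sum(axis=1)
        theta = theta + dt * (omega + D * coupling)
    return np.cos(theta - theta[0])   # +1: in-phase with oscillator 1, -1: anti-phase

print(np.round(simulate_conventional(np.array([+1, -1, +1, -1])), 2))
```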
The Thiele equation is slightly different from the Kuramoto model in the following ways. First, the Thiele equation uses the phase \(\psi\), which describes the vortex core's position in the \(xy\) plane, instead of a generalized phase. This is because the quantity measured in experiments is the vortex core's position, and the phase synchronization studied in the experiments [40] corresponds to that of \(\psi\), not a generalized phase \(\theta\). Note that we can introduce a generalized phase analytically as \(\theta=\psi+[\zeta\kappa/(Gb)]\ln(s/s_{0})\) with a phase sensitivity function \(\mathbf{Z}=(-\sin\theta+[\zeta\kappa/(Gb)]\cos\theta,\cos\theta+[\zeta\kappa/ (Gb)]\sin\theta,0)/s_{0}\). The analysis is mostly unchanged with the generalized phase, so we decided to use \(\psi\) for simplicity. Second, the equation of motion for the phase difference, Eq. (10), includes a term \(\sin\psi_{i}-\sin\psi_{j}\) whereas the Kuramoto model often uses an interacting term proportional to \(\sin(\theta_{i}-\theta_{j})\). More generally, the interaction term in the Kuramoto model can be assumed to be a function of the phase difference, \(\theta_{i}-\theta_{j}\) after applying an averaging technique with respect to a fast variable (see Ref. [37] for details). The difference between
the two models might however be insignificant; notice that, by using formulas, \(\sin x-\sin y=2\cos[(x+y)/2]\sin[(x-y)/2]\) and \(\sin x+\sin y=2\sin[(x+y)/2]\cos[(x-y)/2]\) and applying the averaging technique, the interaction term in our model can be approximated as a function of \(\theta_{i}-\theta_{j}\). Third, as mentioned above, the input term in the virtual oscillator network consists of the oscillator output from the previous step, while the interaction in the Kuramoto model is instantaneous. Because of these differences, the associative memory operation by the virtual oscillator network is significantly different from those of conventional coupled oscillator networks on which previous experiments and the theoretical analyses have been conducted.
### Parameters in the Thiele equation
Spin torque oscillators (STOs) mainly consist of a ferromagnetic metal/insulating layer/ferromagnetic metal trilayer. The first ferromagnetic layer of the trilayer is called the free layer and is where the magnetic vortex forms. The second ferromagnetic layer having a uniform magnetization is called the reference layer. When electric current is injected into STOs, spin-transfer torque [41, 42, 43] is excited on the magnetic moments in the free layer and drives their dynamics [35, 36]. The output signal from the STOs depends on the relative angle between the magnetizations in the free and reference layers.
The definitions and physical meanings of the parameters in Eq. (5) are as follows. The parameters \(G=2\pi pML/\gamma\) and \(D=-(2\pi\alpha ML/\gamma)[1-(1/2)\ln(R_{0}/R)]\) consist of the polarity \(p(=\pm 1)\) of the vortex core, the saturation magnetization \(M\), the thickness \(L\) of the ferromagnet, the gyromagnetic ratio \(\gamma\), the Gilbert damping constant \(\alpha\), and the vortex radius \(R_{0}\). The chirality \(c(=\pm 1)\) of the vortex core also appears in Eq. (5). The parameters \(\kappa\) and \(\zeta\) relate to a magnetic potential energy defined as \(W=(\kappa/2)[1+(\zeta/2)s^{2}]|\mathbf{X}|^{2}\). The dimensionless parameter \(\xi\) is introduced to describe the nonlinear damping in a highly excited state [35]. The parameter \(\kappa\) relates to the material parameters as \(\kappa=(10/9)4\pi M^{2}L^{2}/R\) [35]. The parameter \(a_{J}=\pi\hbar P/(2e)\) includes the reduced Planck constant \(\hbar\), spin polarization \(P\) of the electric current, and the elementary charge \(e(>0)\). The vector \(\mathbf{p}=(p_{x},0,p_{z})\) is the unit vector pointing in the magnetization direction in the reference layer. Here, we assume that \(\mathbf{p}\) lies in the \(xz\) plane, by convention. As a result, the output signal from the vortex STO is proportional to the \(y\) component of the vortex core's position. The parameter \(\mu^{*}\) is \(\pi MLR\).
The material parameters used in this study were taken from typical experiments and simulations [35, 36, 44]: \(M=1300\) emu/cm\({}^{3}\), \(\gamma=1.764\times 10^{7}\) rad/(Oe s), \(\alpha=0.01\), \(L=5\) nm, \(R=187.5\) nm, \(R_{0}=10\) nm, \(P=0.7\), \(\xi=2.0\), and \(\zeta=0.1\). The polarity and chirality were assumed to be \(p=+1\) and \(c=+1\), for simplicity. The magnetization direction in the reference layer was \(\mathbf{p}=(\sin 60^{\circ},0,\cos 60^{\circ})\). An electric current \(I\) of 1 mA corresponded to a current density \(J\) of 0.9 MA/cm\({}^{2}\). The electric current in the numerical simulations was set to 4.0 mA.
We do not include the field-like torque in the Thiele equation, which would be expressed as \(-cbJRp_{x}\mathbf{e}_{y}\) in Eq. (5); see, for example, Ref. [45]. This is because its magnitude was not visible in an experiment using a CoFeB/MgO based STO [23]. One might consider injecting the input through the field-like torque, instead of the torque due to the external magnetic field as we have done. However, the modulation of the field-like torque requires that of the electric current, which leads to the modulation of the frequency of the STO. Since the advantage of our proposal is that the frequency is unique during the operation, we prefer not to use the field-like torque for injecting the input.
### Analytical solution of the Thiele equation
The Gilbert damping constant \(\alpha\) is often small; in such cases, \(|D|/G\simeq\alpha\ll 1\). Also, the radius \(R_{0}\) of the vortex core is much smaller than the disk radius, \(R\). Therefore, by neglecting terms related to \(R_{0}\) and higher-order terms of \(\alpha\), we can approximate Eq. (5) as Eqs. (6) and (7) in terms of \(s=|\mathbf{X}|/R\) and \(\psi=\tan^{-1}(Y/X)\). The approximated Thiele equation without magnetic field is
\[\dot{s}=as-bs^{3}, \tag{19}\]
\[\dot{\psi}=\frac{\kappa}{G}\left(1+\zeta s^{2}\right). \tag{20}\]
These equations are identical to the Stuart-Landau equation [37], which was introduced by Landau to describe the evolution of turbulence phenomenologically and was derived from hydrodynamics by Stuart. This equation provides one of the simplest examples of a Hopf bifurcation. A stable solution of \(s\) is \(s_{0}=\sqrt{a/b}\) (0) for \(a>(<)0\), or equivalently, \(J/J_{\mathrm{c}}>(<)1\). When \(J/J_{\mathrm{c}}>1\), i.e., the current density \(J\) exceeds a threshold value \(J_{\mathrm{c}}\), the vortex core oscillates around the disk center with an oscillation amplitude \(s_{0}\) and the frequency \(f=[\kappa/(2\pi G)](1+\zeta s_{0}^{2})\). Note that the oscillation frequency depends on the current density \(J\) through the term \(s_{0}^{2}=a/b\) (\(a\propto J\)), which has been confirmed by both experiments and simulations [35, 36]. Even in the presence of the magnetic field, the oscillation frequency remains \(f\), if the input strength is weak.
The solution of \(s\) obtained from the exact Thiele equation, Eq. (5), shows a small oscillation around \(s_{0}\)[46]. This means that the trajectory of a limit-cycle oscillation is approximately circular but also has a small amplitude modulation. This small
modulation is caused by the term \(ca_{J}JR_{0}p_{x}\mathbf{e}_{x}\) in Eq. (5), which breaks the axial symmetry of the dynamics around the \(z\)-axis. The deviation of \(s\) from \(s_{0}\) is, however, negligible, and the oscillation trajectory is approximately circular, as shown in Fig. 3(b). Therefore, it is reasonable to omit the term from Eqs. (6) and (7). Note that this term arises from the in-plane component \(p_{x}\) of the magnetization in the reference layer. \(p_{x}\) plays a role in experiments for the following reason. Recall that the output signal measured in experiments depends on the relative angle of the magnetizations in the free and reference layers. Since the vortex core is located in the \(xy\) plane, a finite \(p_{x}\) is necessary to detect its position. On the other hand, the \(z\) component \(p_{z}\) is also necessary because the spin-transfer torque originating from it excites the limit-cycle oscillation of the vortex core. In fact, the threshold current density \(J_{\mathrm{c}}=|D|\kappa/(Ga_{J}p_{z})\) is inversely proportional to \(p_{z}\); therefore, if \(p_{z}\) is zero, \(J_{\mathrm{c}}\) becomes infinite and the oscillation cannot be excited. In experiments [28, 40], the magnetization initially pointed in an in-plane direction, where \(p_{z}=0\). A finite \(p_{z}\) was induced by applying an external magnetic field in the \(z\) direction.
According to Eqs. (6) and (7), one might consider that the magnetic field changes the value of \(s\) from \(s_{0}\) and modifies the oscillation frequency. Such a frequency shift is, however, negligibly small, as discussed in the following. First, recall that the frequency of the magnetic field applied during the second step is the frequency of the vortex core without the magnetic field because it consists of the output during the first step. The fact that the phases in the second step saturate to \(0\) or \(\pi\), as shown in Fig. 3(c), indicates that forced phase synchronization occurs, and the frequency of the vortex core in the second step is the same as that in the first step. Second, let us roughly estimate the frequency shift caused by the application of the magnetic field. The change of \(s\) by the magnetic field will be maximized when the phase of the magnetic field \(H_{\mathrm{y}}\) in Eq. (6) is the same as \(\psi\). In this case, the magnitude of the last term in Eq. (6), averaged over a precession period \(\tau=1/f\), is about \([c\mu^{*}/(2GR)]H_{\mathrm{y}}\tau\sim(\gamma/2)\mathcal{H}\tau\). The period \(\tau\) is about \(5\) ns while \(\mathcal{H}\) is on the order of \(1\) Oe; see the next section. Accordingly, the shift \(\Delta s\) of \(s\) by the application of the magnetic field is less than \(0.1\) at maximum. As mentioned, the oscillation frequency is proportional to \(1+\zeta s^{2}\). Using \(\zeta=0.1\) and \(s_{0}\simeq 0.6\), estimated from Fig. 3(b), the frequencies without and with \(\Delta s\), which are proportional to \(1+\zeta s_{0}^{2}\) and \(1+\zeta(s_{0}+\Delta s)^{2}\), respectively, differ by only about \(1\) % at maximum. Therefore, we consider that the frequency modulation by the application of the magnetic field is negligible.
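The last estimate amounts to simple arithmetic, reproduced in the following snippet with the assumed values \(\zeta=0.1\), \(s_{0}\simeq 0.6\), and a maximal shift \(\Delta s=0.1\).

```python
# Back-of-the-envelope check of the frequency shift estimated above.
zeta, s0, ds = 0.1, 0.6, 0.1          # assumed values taken from the text
f_ratio = (1 + zeta * (s0 + ds) ** 2) / (1 + zeta * s0 ** 2)
print(f"relative frequency change ~ {100 * (f_ratio - 1):.1f} %")   # at the percent level
```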
One might be interested in the applicability of the Thiele equation. While the original Thiele equation assumes a translation symmetry in an infinite space, a finite-size effect of the nanostructure may restrict the applicability of the equation. Therefore, the Thiele equation had been applied to analyses of small-amplitude dynamics [47]. There have been, at the same time, several efforts to make the equation applicable to large-amplitude dynamics. For example, adding nonlinear frequency and damping terms is one approach [35, 36], which is also used in the present work, where the additional terms are characterized by the dimensionless parameters \(\xi\) and \(\zeta\). Adding further higher-order nonlinear terms has also been investigated recently [48, 49, 50]. It was also shown that the Thiele equation is applicable to analyze small-amplitude dynamics, and effort has been made to extrapolate it to large-amplitude dynamics, such as vortex-core expulsion, although there are some limitations [51]. In the present study, we use the model developed in Refs. [35, 36] for the following reasons. First, the applicability of the model to wide ranges of parameters has been verified by comparison with experiments [35, 36, 44]. Second, adding higher-order nonlinear terms does not change the main conclusion of this work. These terms might change, for example, the current dependence of the oscillation frequency. In the present work, however, the frequency is kept constant, and thus, adding such terms does not play a central role in the associative memory operation. Third, the Thiele equation with the present approximation clarifies the connection between spintronics and other research fields such as nonlinear science and computer science. This is because the equation can be reduced to the Stuart-Landau equation, as mentioned above. The Stuart-Landau equation has a long history, as in the case of the Thiele equation, and has been frequently used in nonlinear science [37, 52]. The present work indicates that the Stuart-Landau oscillator can be emulated in nanostructures and therefore promotes communication between spintronics and other research fields. Therefore, although we understand that there have been great efforts [48, 49, 50, 53] on the validity and applicability of the Thiele equation, we use the model developed in Refs. [35, 36]. Note that the Oersted field generated by the current, discussed in these previous works, does not play a role in the associative memory operation because the current magnitude is kept constant during the operation. Also, since the external magnetic field induces forced synchronization, a frequency shift due to an external magnetic field studied in the previous work [48] does not exist in the present algorithm.
### Details of the numerical simulations
The associative memory operation in the virtual oscillator network consists of three steps. The initial state of the vortex core in each step is prepared by adding a thermal activation to the Thiele equation and solving it in the absence of magnetic field, as is done in Ref. [45]. The torque due to the thermal activation gives an additional term, \(-\eta_{x}\mathbf{e}_{x}-\eta_{y}\mathbf{e}_{y}\), to the left-hand side of Eq. (5), which obeys the fluctuation-dissipation theorem,
\[\langle\eta_{i}(t)\eta_{j}(t^{\prime})\rangle=2k_{\mathrm{B}}T|D|\delta_{ij} \delta(t-t^{\prime}), \tag{21}\]
where the temperature \(T\) is \(300\) K. The solution of the Thiele equation in each step is divided into \(N=60\) parts, where the time width of each part is denoted as \(\tilde{t}\). In the experiment [23], a certain time period was inserted between these parts to remove
their correlation. In contrast, our numerical simulations used parallel computations, wherein the initial state of each part was randomly prepared using the method described above. The value of \(\tilde{t}\) was changed depending on the number of memorized patterns, as well as the number of noisy pixels in the pattern to be recognized. For example, \(\tilde{t}\) is 750 ns in Fig. 3(c). For all cases, \(\tilde{t}\) was divided into \(\tilde{n}=\tilde{t}/t_{\rm p}\) parts, where \(t_{\rm p}=0.125\) ns.
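The thermal torque of Eq. (21) can be generated with the standard Euler-Maruyama discretization, as in the sketch below; treating the noise as white over a fixed integration step \(\Delta t\), each Cartesian component is drawn with variance \(2k_{\rm B}T|D|/\Delta t\). The discretization itself is an assumption of this sketch, not a prescription from the text.

```python
import numpy as np

def thermal_torque(kBT_absD, dt, rng):
    """Draw (eta_x, eta_y) for one integration step of width dt.

    kBT_absD : the product k_B T |D| entering Eq. (21).
    The 1/dt scaling is the usual Euler-Maruyama discretization of white noise,
    so that <eta_i eta_j> reproduces 2 k_B T |D| delta_ij delta(t - t') on average.
    """
    sigma = np.sqrt(2.0 * kBT_absD / dt)
    return rng.normal(0.0, sigma, size=2)
```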
Now let us explain the meanings of \(y_{\ell}^{(1)}\) and \(y_{\ell}^{(2)}\) in Eqs. (3) and (4). Since they are defined in a similar manner, we will describe only \(y_{\ell}^{(1)}\). When defining the magnetic field in Eq. (3), it is convenient to reset the time origin for each part; i.e., each of the \(N\) parts runs from \(t=0\) to \(t=\tilde{t}\). Remember that the output from the STO is proportional to the \(y\) component of the vortex core's position, \(Y\). We denote the solution of the normalized \(y\) component, \(y=Y/R\) (\(-1\leq y\leq 1\)), during the \(\ell\)th part in the first step as \(y_{\ell}\). Then, \(y_{\ell}^{(1)}\) is constructed from \(y_{\ell}\) as follows,
\[y_{\ell}^{(1)}=\sum_{n=0}^{\tilde{n}-1}y_{\ell}(nt_{\rm p})\left\{\Theta(t-nt_ {\rm p})-\Theta[t-(n+1)t_{\rm p}]\right\}, \tag{22}\]
where \(\Theta(t)\) is a step function. Note that \(\Theta(t-nt_{\rm p})-\Theta[t-(n+1)t_{\rm p}]\) is 1 for \(nt_{\rm p}\leq t<(n+1)t_{\rm p}\) and is zero for the other times; thus, it has a pulse shape. Equation (22) means that the input strength is constant for \(nt_{\rm p}\leq t<(n+1)t_{\rm p}\) and is proportional to \(y_{\ell}(t)\) at \(t=nt_{\rm p}\). \(\tilde{n}\) is the number of input pulses. There are two reasons to shape the output \(y\) into pulses. The first one relates to the numerical simulations. In this work, the Thiele equation was solved with a time increment of \(\Delta t=0.005\) ns, which is shorter than the pulse width \(t_{\rm p}\). It was, however, impractical to store the output at each \(\Delta t\) step because the amount of data would have been huge. Second, there is a technical limitation in real experiments on the measurable time step. The value we used, \(t_{\rm p}=0.125\) ns, is close to the shortest possible time step in an experiment [23]. For these reasons, we define \(y_{\ell}^{(1)}\) used in the magnetic field, Eq. (3), as a pulse input. At the same time, we emphasize that \(t_{\rm p}\) is much shorter than an oscillation period of the vortex core, \(1/f=4.48\) ns (\(f=223\) MHz). In addition, the pulse-shaped \(y_{\ell}^{(1)}\)s are continuously injected. Therefore, the magnetic field can be approximately regarded as a continuously oscillating signal with respect to the STO.
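The sample-and-hold shaping of Eq. (22) can be written compactly as below; the stored waveform is represented by a callable, \(t_{\rm p}=0.125\) ns, and the sinusoidal placeholder in the usage example is an assumption used only for illustration.

```python
import numpy as np

def pulse_shaped_output(y_stored, t, t_p=0.125):
    """Eq. (22): hold y_stored(n * t_p) constant over the n-th pulse of width t_p.

    y_stored : callable returning the stored normalized output at a given time (ns).
    t        : array of times (ns) within one part, 0 <= t < t_tilde.
    """
    n = np.floor(np.asarray(t) / t_p)
    return y_stored(n * t_p)

# Usage example with a placeholder waveform at f = 223 MHz (period ~4.48 ns):
t = np.arange(0.0, 10.0, 0.005)
y_staircase = pulse_shaped_output(lambda tt: np.sin(2 * np.pi * 0.223 * tt), t)
```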
The strength of the input \(\mathcal{H}\) in the second step is 1.0 Oe, while that in the third step is \(\mathcal{H}^{\prime}=N_{\rm m}\times 0.2\) Oe. Here, we increase \(\mathcal{H}^{\prime}\) as the number \(N_{\rm m}\) of memorized patterns increases. This is because the time necessary to reach a steady state becomes long as \(N_{\rm m}\) increases; therefore, to perform the numerical simulations efficiently, the input strength should be made to increase with \(N_{\rm m}\).
|
2309.12292 | Constraining dark energy cosmologies with spatial curvature using
Supernovae JWST forecasting | Recent cosmological tensions, in particular, to infer the local value of the
Hubble constant $H_0$, have developed new independent techniques to constrain
cosmological parameters in several cosmologies. Moreover, even when the
concordance Cosmological Constant Cold Dark Matter ($\Lambda$CDM) model has
been well constrained with local observables, its physics has shown deviations
from a flat background. Therefore, to explore a possible deviation from a flat
$\Lambda$CDM model that could explain the $H_0$ value in tension with other
techniques, in this paper we study new cosmological constraints in spatial
curvature dark energy models. Additionally, to standard current Supernovae Type
Ia (SNIa) catalogs, we extend the empirical distance ladder method through an
SNIa sample using the capabilities of the James Webb Space Telescope (JWST) to
forecast SNIa up to $z \sim 6$, with information on the star formation rates at
high redshift. Furthermore, we found that our constraints provide an
improvement in the statistics associated with $\Omega_{m}$ when combining SNIa
Pantheon and SNIa Pantheon+ catalogs with JW forecasting data. | Pablo M. Maldonado Alonso, Celia Escamilla-Rivera, Rodrigo Sandoval-Orozco | 2023-09-21T17:53:54Z | http://arxiv.org/abs/2309.12292v2 | # Constraining dark energy cosmologies with spatial curvature using Supernovae JWST forecasting
###### Abstract
Recent cosmological tensions, in particular, to infer the local value of the Hubble constant \(H_{0}\), have developed new independent techniques to constrain cosmological parameters in several cosmologies. Moreover, even when the concordance Cosmological Constant Cold Dark Matter (\(\Lambda\)CDM) model has been well constrained with local observables, its physics has shown deviations from a flat background. Therefore, to explore a possible deviation from a flat \(\Lambda\)CDM model that could explain the \(H_{0}\) value in tension with other techniques, in this paper we study new cosmological constraints in spatial curvature dark energy models. Additionally, to standard current Supernovae Type Ia (SNIa) catalogs, we extend the empirical distance ladder method through an SNIa sample using the capabilities of the James Webb Space Telescope (JWST) to forecast SNIa up to \(z\sim 6\), with information on the star formation rates at high redshift. Furthermore, we found that our constraints provide an improvement in the statistics associated with \(\Omega_{m}\) when combining SNIa Pantheon and SNIa Pantheon+ catalogs with JW forecasting data.
## 1 Introduction
The first direct evidence of the late time cosmic acceleration was obtained through measurements of Type Ia Supernovae (SNIa) [1; 2]. Over the years, subsequent observations confirmed this result, such as the cosmic microwave background (CMB) [3], baryon acoustic oscillations (BAO) [4; 5], and weak gravitational lensing [6]. However, the capability of supernovae to probe the accelerating expansion remains invaluable since these objects are bright enough to be seen at large distances. Furthermore, SNIa are common enough to be found in large quantities, and their properties make them standardizable with a precision of \(\sim 0.1\) mag in brightness or \(\sim 5\%\) in distance per object [7]. Also, the increasing number of SNIa observations has considerably reduced the associated statistical errors and the uncertainties in estimating cosmological parameters dominated by them [8; 9].
Nevertheless, the nature of this cosmic acceleration is one of the current inquiries in precision cosmology since still we do not fully understand the component with which it is associated, the _dark energy_. However, due to the well-constrained \(\Lambda\)-Cold Dark Matter (\(\Lambda\)CDM) model, this dark energy could be evidence for a component with a negative Equation-of-State (EoS) constant value [10; 11; 12] or a dynamical EoS [13; 14; 15]. Furthermore, dark energy can be associated with components that can be derived from first principles in alternative theories of gravity [16; 17; 18] and extended theories of gravity [19; 20], showing a late cosmic acceleration.
On the nature of dark energy, several missions have been working to find better cosmological constraints along with better systematics and increasing data baselines. Some of them, such as large-scale structure (LSS) observations with measurements from the Dark Energy Survey (DES) [21], the Dark Energy Spectroscopic Instrument (DESI) [22], the Legacy Survey of Space and Time (LSST) on the Vera Rubin Observatory [23], and Euclid [24], among
others, have extended the concordance cosmological model to include EoS parameters of dark energy with some shifts within \(1\sigma\).
In particular, the recently launched James Webb Space Telescope (JWST) is a very interesting experiment that can help to elucidate the nature of dark energy. JWST is a space-based observatory with a 6.5-meter primary mirror, operating at visible and infrared wavelengths and equipped with four main science instruments: a near-infrared camera, a near-infrared spectrograph, a near-infrared imager and slitless spectrograph, and a mid-infrared camera and spectrograph [25]. It is expected that it will have an estimated lifespan of 20 years in which the research will be focused on several astrophysical and cosmology areas such as galaxy formation in the early universe [26; 27; 28; 29], exoplanet detection [30; 31; 32], metallicity and chemical exploration [33; 34; 35], and life detection [36; 37]. All these potential and current observations could allow us to explore physics further than before, such as testing dark energy models with structure formation [38; 39], corroborating the Cepheid calibrations in the distance ladder [40], and adding more SNIa observations [9], cosmic chronometers [41], and X-ray/UV quasars [42; 43] to their constraint analysis.
Recently, numerous studies related to the implications of JWST to cosmology have been developed. In [9] a JWST simulated sample of SNIa within a redshift range \(2\lesssim z\lesssim 6\), was employed to constrain standard cosmological parameters. Using combinations of the mock sample and SN Pantheon dataset [44] it was possible to constrain dark energy models with constant EoS. This analysis was performed using two different forms of the intrinsic evolution of the standardized SNIa luminosity distance. On one hand, it is assumed a linear redshift dependence of magnitude evolution. On the other hand, we can consider other logarithmic evolutions. Analysing the cases with and without systematic evolution, it was found that the addition of the simulated SNIa sample would successfully remove the evolutionary effects. However, even though the SN Pantheon dataset size was increased by a factor of \(\approx 16\) data points, it is still not able to constrain the systematic evolution and the cosmological parameters as effectively as the very high redshift SN data.
A further study about the first galaxies discovered by JWST was carried out in [45]. According to several works [46; 47; 48; 49; 50; 51; 52], there are some common aspects within the structure morphology, which indicate that galaxies discovered by JWST do not have enough time to evolve into what is observed today. It is important to notice that this study is within the framework of the standard cosmological \(\Lambda\)CDM model. Therefore, the new JWST dataset includes near and mid-infrared images and near-infrared spectra to perform analyses based on cosmographic theories of the angular size-redshift relationship [45]. The \(\Lambda\)CDM interpretation of JWST observations is compared with the interpretation based on Zwicky's static universe model [53], where the origin of the cosmological redshift can be explained through the photon-energy loss mechanism. However, the redshifted objects detected by the JWST are not aligned with such an interpretation, although more data from this mission should be gathered before any final conclusion.
As a step forward, using the capabilities of the JWST described, in this work, we develop the forecasting of SNIa up to \(z\sim 6\), with information on the Star Formation Rates (SFR) at high redshift. Once this data is at hand, we perform a statistical analysis combined with SN Pantheon [44] and SN Pantheon+ to constrain spatial curvature dark energy cosmologies. We base our cosmological models on bidimensional EoS parameterisations, which preserve the expanding and accelerating behaviour at late times. Our goal is to show that
a simple deviation in the spatial curvature of a dark energy EoS model can verify a well-constrained analysis with SNIa JWST forecasting.
This paper is divided as follows: In Sec. 2 we summarise the theory behind dark energy bidimensional parameterisations inspired in Taylor series around the cosmological scale factor \(a\). All of these parameterisations are described through their normalised \(E(z)\) Friedmann evolution equation, including the curvature term. Furthermore, we are going to consider standard \(\Lambda\)CDM and \(w\)CDM models in addition to the dark energy cosmologies to proceed with comparisons between them. Also, we include the latest constraints reported in the literature so far. In Sec. 3 we present the methodology employed for observables. We include the description of current SNIa data baselines and how we can proceed with their forecasting using JWST characteristics. In Appendix A we describe the technicalities behind this forecasting. The results on new constraints for the models described are developed in Sec. 4. Finally, the conclusions are presented in Sec. 5.
## 2 Standard dark energy parameterisations
The standard cosmological scenario \(\Lambda\)CDM is a remarkable fit for most cosmological data. Nonetheless, we are still searching for the nature of inflation, dark matter, and dark energy. Physical evidence for these components comes only from astrophysical and cosmological observations [54]. Therefore, an increase in experimental sensitivity can produce deviations from the standard \(\Lambda\)CDM scenario that could lead to a deeper understanding of the gravity theory. If it is not a consequence associated with systematic errors, the cosmological tensions [55] existing between the different experimental probes could indicate a failure of the \(\Lambda\)CDM model, and a better cosmological model should be able to be found.
In this section, we are going to describe, in addition to the \(\Lambda\)CDM model with EoS \(w=-1\), five bidimensional dark energy parameterisations of \(w(z)\), which can be constant or redshift-dependent. Notice that to describe dark energy, we need to achieve cosmic acceleration with a negative pressure at late times [11].
* \(\Lambda\)**CDM model.** In this model, the universe is composed of cosmological fluids with different EoS's \(w\) that contribute to the energy constraint. At present cosmic times, the non-relativistic matter contribution, \(\Omega_{m}\simeq 0.27\), is the sum of the ordinary baryonic matter term, \(\Omega_{b}\simeq 0.044\), and the (cold) dark matter term, \(\Omega_{c}\simeq 0.22\). Dark energy (\(\Omega_{\Lambda}\simeq 0.73\)) is described by \(w=-1\), associated with a cosmological constant \(\Lambda\) or a vacuum energy density [56]. Radiation represents a negligible contribution, \(\Omega_{r}\simeq 9\times 10^{-5}\), but it dominated the early cosmic stages, after the end of the inflationary stage and before matter-radiation decoupling [57]. Additionally, \(\Lambda\)CDM can be characterized with a flat geometry, which corresponds to an energy density parameter, \(\Omega_{\Lambda}=1-\Omega_{m}-\Omega_{r}\), where the only parameter to be constrained is \(\Omega_{m}\). The cosmological evolution for this model can be expressed as \[E(z)\equiv\frac{H(z)}{H_{0}}=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{r}(1+z)^{4}+ \Omega_{\Lambda}},\] (1) where \(H_{0}\) is the Hubble constant today. We also consider the non-flat \(\Lambda\)CDM cosmological model, as an extension of the \(\Lambda\)CDM model but with curvature \(k\neq 0\), with its constraint equation as \(\Omega_{k}=1-\Omega_{m}-\Omega_{r}-\Omega_{\Lambda}\)
where \(\Theta=(\Omega_{m},\Omega_{\Lambda})\) is the vector of free parameters. The evolution for this case can be written as \[E(z)=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{r}(1+z)^{4}+\Omega_{k}(1+z)^{2}+\Omega_{\Lambda}}.\] (2) Then, the model-dependent luminosity distance can be calculated according to the \(\Omega_{k}\) value [58]: \[D_{L}(z)=\left\{\begin{aligned} &\frac{c}{H_{0}}(1+z)\frac{\sinh\left(\sqrt{\Omega_{k}}\int_{0}^{z}\frac{dz^{\prime}}{E(z^{\prime})}\right)}{\sqrt{\Omega_{k}}},&\Omega_{k}>0\\ &\frac{c}{H_{0}}(1+z)\int_{0}^{z}\frac{dz^{\prime}}{E(z^{\prime})},&\Omega_{k}=0\\ &\frac{c}{H_{0}}(1+z)\frac{\sin\left(\sqrt{-\Omega_{k}}\int_{0}^{z}\frac{dz^{\prime}}{E(z^{\prime})}\right)}{\sqrt{-\Omega_{k}}},&\Omega_{k}<0,\end{aligned}\right.\] (3) where \(c\) is the speed of light and \(E(z)\) is the background evolution equation of the cosmological model. The base test for \(\Lambda\)CDM is the analysis provided by the Planck collaboration from CMB anisotropies, which finds the base parameter values \(\Omega_{m}=0.316\pm 0.008\) and \(H_{0}=67.27\pm 0.6\) km s\({}^{-1}\) Mpc\({}^{-1}\)[59] in the context of a flat \(\Lambda\)CDM. Using late-time data, it was found in [58] that a combination of SNIa, BAO, and a quasar sample gives \(\Omega_{m}=0.300\pm 0.012\), with a fixed \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), while using a non-flat \(\Lambda\)CDM model the results were \(\Omega_{m}=0.364\pm 0.021\) and \(\Omega_{\Lambda}=0.829\pm 0.035\), indicating a slight deviation from the flat background. Furthermore, with a Cosmic Chronometers (CC - \(H(z)\)) sample it was found that \(H_{0}=66.7\pm 5.3\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{m}=0.33^{+0.08}_{-0.06}\) for the same flat model [60]. Using only the SNIa Pantheon+ compilation [7], the values for the flat \(\Lambda\)CDM model are \(H_{0}=73.6\pm 1.1\) km s\({}^{-1}\) Mpc\({}^{-1}\), assuming a Gaussian prior, with \(\Omega_{m}=0.334\pm 0.018\), while for the non-flat \(\Lambda\)CDM the results are \(\Omega_{m}=0.306\pm 0.057\) and \(\Omega_{\Lambda}=0.625\pm 0.084\), in concordance with the flat counterpart at \(2\sigma\).
* \(w\)**CDM model.** The simplest extension of the \(\Lambda\)CDM model is the one in which \(w\neq-1\), yet still constant in time, meaning that \(w(z)=w_{0}\). From [9, 58] we express \(E(z)\) for this model as \[E(z)=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{k}(1+z)^{2}+\Omega_{\Lambda}(1+z)^{3(1+w_{0})}}.\] (4) Non-flat \(w\)CDM has \(\Theta=(\Omega_{m},\Omega_{\Lambda},w_{0})\) as free parameters. However, under the flatness assumption, \(\Omega_{k}=0\), so the only free parameters are \(\Theta=(\Omega_{m},w_{0})\). This model reduces to \(\Lambda\)CDM when \(w_{0}=-1\). Using SNIa, BAO, and quasars, the values obtained in [58] were \(\Omega_{m}=0.369^{+0.022}_{-0.023}\) and \(w_{0}=-1.283^{+0.094}_{-0.027}\), corresponding to a deviation from the \(\Lambda\)CDM model by more than \(1\sigma\) with the same fixed \(H_{0}\) value. Using a non-flat \(w\)CDM instead results in \(\Omega_{m}=0.280^{+0.041}_{-0.037}\), \(\Omega_{\Lambda}=1.662^{+0.041}_{-0.048}\) and \(w_{0}=-0.667^{+0.024}_{-0.027}\), with a difference from the flat model of more than \(3\sigma\) reported using only SNIa and quasars. Furthermore, adding BAO to the quasar sample [61] results in \(\Omega_{m}=0.31\pm 0.03\) and \(w_{0}=-1.00^{+0.14}_{-0.13}\)
that is consistent with \(\Lambda\)CDM, assuming a Gaussian prior of \(H_{0}=67.32\pm 4.7\) km s\({}^{-1}\) Mpc\({}^{-1}\). When using only SNIa [7], the flat \(w\)CDM model gives \(\Omega_{m}=0.309^{+0.063}_{-0.069}\), \(H_{0}=73.5\pm 1.1\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(w_{0}=-0.90\pm 0.14\), consistent with the \(\Lambda\)CDM model.
* **Chevallier-Polarski-Linder (CPL) model.** One of the most used redshift-dependent parameterisations corresponds to the Chevallier-Polarski-Linder [62; 63] proposal: \(w(z)=w_{0}+w_{a}z/(1+z)\). Here, \(w(z)=w_{0}+w_{a}\) at \(z=\infty\) and \(w(z)=w_{0}\) at \(z=0\), but it diverges in the future for \(z\rightarrow(-1)^{+}\). In this bidimensional model, \(w_{0}\) denotes the dark energy EoS today, and \(w_{a}\) describes its evolution. This parameterisation has several advantages, including its good behaviour at high redshift, its linear form at low redshift, a simple physical interpretation, and its accuracy in reconstructing a scalar field EoS [63]. The normalised Hubble parameter for this model can be written as \[E(z)=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{k}(1+z)^{2}+\Omega_{\Lambda}(1+z)^{3(1+w_{0}+w_{a})}\exp\!\left(\frac{-3w_{a}z}{1+z}\right)}.\] (5) For the non-flat and flat cases, we consider as free parameters (\(\Omega_{m}\), \(\Omega_{\Lambda}\), \(w_{0}\), \(w_{a}\)) and (\(\Omega_{m}\), \(w_{0}\), \(w_{a}\)), respectively. This model reduces to \(\Lambda\)CDM with \(w_{0}=-1\) and \(w_{a}=0\). Using SNIa, BAO, and quasars, a deviation from \(\Lambda\)CDM was discussed in [58], with \(\Omega_{m}=0.354^{+0.032}_{-0.030}\), \(w_{0}=-1.323^{+0.103}_{-0.112}\) and \(w_{a}=0.745^{+0.483}_{-0.974}\) for a flat CPL model, values that do not correspond to a confirmation of the \(\Lambda\)CDM model. Using quasars and \(H(z)\) measurements for a non-flat CPL parameterisation it was found that \(\Omega_{m}=0.44\pm 0.10\), \(\Omega_{k}=-0.36\pm 0.24\), \(H_{0}=71.8^{+4.6}_{-7.7}\), with \(w_{0}=-1.2\pm 1.0\), and \(w_{a}=-5.0^{+9.0}_{-2.0}\)[64], showing a clear deviation of more than \(2\sigma\) from the flat \(\Lambda\)CDM model. Using SNIa [7] for the flat CPL with a Gaussian prior in \(H_{0}\) results in \(H_{0}=73.3\pm 1.1\) km s\({}^{-1}\) Mpc\({}^{-1}\), with \(\Omega_{m}=0.403^{+0.054}_{-0.098}\), \(w_{0}=-0.93\pm 0.15\) and \(w_{a}=-0.1^{+0.9}_{-2.0}\). The latter corresponds to a flat \(\Lambda\)CDM confirmation. Furthermore, adding BAO and CMB to the previous SNIa sample results in \(H_{0}=67.41^{+0.52}_{-0.82}\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}=0.316^{+0.009}_{-0.005}\) and \(w_{0}=1.267^{+0.196}_{-0.191}\) and \(w_{a}=-3.771^{+2.113}_{-2.496}\)[58].
* **Jassal-Bagla-Padmanabhan (JBP) model.** In [65] the parameterisation \(w(z)=w_{0}+w_{a}z/(1+z)^{2}\), for which \(w(0)=w_{0}\), \(w^{\prime}(0)=w_{a}\), and \(w(\infty)=w_{0}\), is presented, motivated by explaining the accelerated universe while covering both CMB and SNIa measurements. This model was proposed to solve the high-\(z\) issues within the CPL parameterisation [11]. The behaviour of this function allows us to have the same EoS at the present epoch and at high \(z\), with a rapid variation at small redshifts. Considering the corresponding curvature term, the expression for \(E(z)\) for this model can be written as \[E(z)=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{k}(1+z)^{2}+\Omega_{\Lambda}(1+z)^{3(1+w_{0})}\exp\!\left[\frac{3w_{a}z^{2}}{2(1+z)^{2}}\right]},\] (6) where \(\Theta=(\Omega_{m},\Omega_{\Lambda},w_{0},w_{a})\) for the non-flat JBP model, and \(\Theta=(\Omega_{m},w_{0},w_{a})\) for a flat JBP model. Using SNIa, BAO, and quasars, the JBP model is tested, obtaining \(\Omega_{m}=0.354^{+0.032}_{-0.030}\), \(w_{0}=-1.371\pm 0.141\) and \(w_{a}=1.127^{+1.293}_{-1.547}\)[58], which is considered a deviation from
the \(\Lambda\)CDM expectation for the \(w_{0}\) value. In [66], using SNIa, BAO, CMB, and Gamma-Ray Bursts (GRB), the values \(\Omega_{m}=0.27\pm 0.03\), \(w_{0}=-1.02\pm 0.04\), and \(w_{a}=0.22\pm 0.23\) were obtained for a flat JBP model, in concordance with a flat \(\Lambda\)CDM at \(1\sigma\).
* **Exponential model.** In [67], five one-parameter dark energy parameterisations were examined with several datasets, in particular data from CMB observations, the Joint Light-curve Analysis of SNIa observations (JLA), BAO distance measurements, and \(H(z)\). It was concluded that a one-parameter dark energy model can provide a solution to the \(H_{0}\) tension between local measurements and the Planck indirect ones. Moreover, it was determined which of the five models best fits the data used. This model, relatively close to \(\Lambda\)CDM, is the one with an EoS of the form \(w(z)=w_{0}\exp[z/(1+z)]/(1+z)\), where \(w(0)=w_{0}\) and \(w(z)=0\) for both \(z=\infty\) and \(z\rightarrow{(-1)}^{+}\). As a result, the normalised Hubble parameter can be written as \[E(z)=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{k}(1+z)^{2}+\Omega_{\Lambda}(1+z)^{3}\exp\!\left[3w_{0}\left(\exp\!\left(\frac{z}{1+z}\right)-1\right)\right]}.\] (7) For the non-flat exponential model we have \(\Theta=(\Omega_{m},\Omega_{\Lambda},w_{0})\), while for the flat exponential model \(\Theta=(\Omega_{m},w_{0})\). Using SNIa, quasars, and BAO, the values \(\Omega_{m}=0.359^{+0.023}_{-0.024}\) and \(w_{0}=-1.271^{+0.092}_{-0.107}\) were obtained for the exponential model [58], showing again a deviation from a flat \(\Lambda\)CDM. In [68] the exponential model is constrained using CMB, SNIa, BAO, and distance measurements from H II galaxies, resulting in \(H_{0}=70.9\pm 7.0\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}=0.284\pm 0.006\) and \(w_{0}=-1.202^{+0.027}_{-0.026}\), imposing a local Gaussian prior on \(H_{0}\).
* **Barboza-Alcaniz (BA) model.** In [69], a dark energy parameterisation given by \(w(z)=w_{0}+w_{a}z(1+z)/(1+z^{2})\) was proposed. This is a well-behaved function of redshift throughout the entire cosmic evolution, \(z\in[-1,\infty]\), with \(w(z)=w_{0}+w_{a}\) for \(z=\infty\) and \(w(z)=w_{0}\) when \(z\rightarrow(-1)^{+}\). This smooth function allows one to define regions in the \((w_{0},w_{a})\) plane associated with several classes of dark energy models, which can be excluded or confirmed based on constraints from observational data. Thus, it was shown that both quintessence and phantom behaviours have fully acceptable regimes. The \(E(z)\) for the non-flat case of this model can be written as \[E(z)=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{k}(1+z)^{2}+\Omega_{\Lambda}(1+z)^{3(1+w_{0})}\left(1+z^{2}\right)^{\frac{3w_{a}}{2}}}.\] (8) The free parameter sets for the non-flat BA model and flat BA model are \(\Theta=(\Omega_{m},\Omega_{\Lambda},w_{0},w_{a})\) and \(\Theta=(\Omega_{m},w_{0},w_{a})\), respectively. The analysis using BAO, SNIa, and quasars in [58] showed a deviation from the flat \(\Lambda\)CDM, with \(\Omega_{m}=0.307^{+0.044}_{-0.055}\), \(w_{0}=-1.303^{+0.115}_{-0.106}\) and \(w_{a}=1.010^{+0.152}_{-0.466}\), meaning that the quasar sample is responsible for a deviation from the standard model. In [66], using CMB, BAO, SNIa, and GRB yields \(\Omega_{m}=0.28\pm 0.03\), \(w_{0}=-1.13\pm 0.04\) and \(w_{a}=0.37\pm 0.1\), a smaller deviation from the \(\Lambda\)CDM model, while using only BAO and CMB the standard model is recovered with \(\Omega_{m}=0.29\pm 0.04\), \(w_{0}=-1.06\pm 0.11\) and \(w_{a}=0.35\pm 0.12\).
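As a practical reference for the expressions above, the short Python sketch below evaluates the CPL background evolution of Eq. (5), which contains \(\Lambda\)CDM (\(w_{0}=-1\), \(w_{a}=0\)) and \(w\)CDM (\(w_{a}=0\)) as special cases, together with the curvature-dependent luminosity distance of Eq. (3). The function and variable names are our own illustrative choices (not part of any released code of this analysis), radiation is neglected as in Eqs. (4)-(8), and other parameterisations can be obtained by substituting the corresponding \(E(z)\).

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def E_cpl(z, Om, OL, w0=-1.0, wa=0.0):
    # Normalised Hubble rate of Eq. (5); wa = 0 gives wCDM and, additionally,
    # w0 = -1 gives LambdaCDM (radiation neglected, as in Eqs. (4)-(8)).
    Ok = 1.0 - Om - OL
    de = OL * (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * z / (1.0 + z))
    return np.sqrt(Om * (1.0 + z) ** 3 + Ok * (1.0 + z) ** 2 + de)

def luminosity_distance(z, H0, Om, OL, w0=-1.0, wa=0.0):
    # Luminosity distance in Mpc, Eq. (3), branching on the sign of Omega_k.
    Ok = 1.0 - Om - OL
    dc, _ = quad(lambda zp: 1.0 / E_cpl(zp, Om, OL, w0, wa), 0.0, z)
    if Ok > 1e-8:
        dm = np.sinh(np.sqrt(Ok) * dc) / np.sqrt(Ok)
    elif Ok < -1e-8:
        dm = np.sin(np.sqrt(-Ok) * dc) / np.sqrt(-Ok)
    else:
        dm = dc
    return (C_KM_S / H0) * (1.0 + z) * dm

def distance_modulus(z, H0, Om, OL, w0=-1.0, wa=0.0):
    # mu = 5 log10(D_L / 10 pc), with D_L in Mpc.
    return 5.0 * np.log10(luminosity_distance(z, H0, Om, OL, w0, wa)) + 25.0
```

With these functions, the distance moduli entering the \(\chi^{2}\) definitions of the next section can be computed for any of the parameterisations discussed above.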
## 3 Data treatment: observations and forecastings
In this section, we will perform the statistical analysis for the dark energy models with and without curvature including three different datasets: SNIa Pantheon and SNIa Pantheon+ samples, along with the extracted simulated data from JWST.
* **Pantheon (PN)**[44]: The Pantheon compilation is a combination of measurements of SNIa distances combining both low and high redshifts, from \(z\sim 0.01\) up to \(z=2.26\). This sample has shown an improvement in the photometric calibrations on the distance ladder through the light curves, which transform observable quantities into distances, adding up to a total of 1048 data points.
* **Pantheon+ (PN\({}^{+}\))**[70; 71]: Pantheon+ is a collection of 18 different SNIa samples based on the Pantheon compilation described above, adding new data points collected from different surveys such as: the Foundation Supernova Survey [72], the Swift Optical/Ultraviolet Supernova Archive (SOUSA) [73], the Lick Observatory Supernova Search LOSS1 [74], the second sample LOSS2 [75], and DES [70]. As a result, Pantheon+ consists of 1701 light curves of 1550 distinct SNIa spanning redshifts from \(z=0.001\) up to 2.26. For \(H_{0}\), a value of \(73.30\pm 1.04\) km s\({}^{-1}\) Mpc\({}^{-1}\) is assumed. This sample is represented in Figure 1 in blue color. Cosmological parameter constraints have been carried out using the affine-invariant ensemble sampler for Markov Chain Monte Carlo (MCMC) module emcee2, which uses random number generation to explore the parameter space based on the probability function \(P\propto\exp\bigl{(}-\chi^{2}/2\bigr{)}\), minimizing the quantity Footnote 2: emcee.readthedocs.io/en/stable/
\[\chi^{2}_{\text{SNIa}}(\Theta)=\Delta\mu^{T}(z,\Theta)\cdot C^{-1}_{\text{SNIa }}\cdot\Delta\mu(z,\Theta)+\ln\bigg{(}\frac{S}{2\pi}\bigg{)}, \tag{1}\]
where \(\Delta\mu(z,\Theta)=\mu(z)_{\text{data}}-\mu(z,\Theta)_{\text{model}}\), \(C_{\text{SNIa}}\) is the covariance matrix of the PN (or PN+) sample, \(S\) is the sum of all the components of \(C^{-1}_{\text{SNIa}}\), \(\mu(z)_{\text{data}}\) is the distance modulus of the PN (PN+) data, and \(\mu(z,\Theta)_{\text{model}}\) is the distance modulus for a cosmological model with a parameter set \(\Theta\)[76].
* **JW mock sample**: the simulated JWST SNIa dataset (described in Appendix A), whose covariance matrix has (+) and (-) signs assigned randomly for all the off-diagonal entries. The off-diagonal value is the mean value of the Pantheon covariance matrix [8], and the \(0.15^{2}\) value in the diagonal is the assumed error considering the local determination of the observation errors extrapolated [9]. The covariance matrix has
the following form
\[C_{\rm JW}=\underbrace{\left(\begin{array}{ccccc}0.15^{2}&\pm 0.002^{2}&\pm 0.002^{2}&\ldots&\pm 0.002^{2}\\ \pm 0.002^{2}&0.15^{2}&\pm 0.002^{2}&\ldots&\pm 0.002^{2}\\ \pm 0.002^{2}&\pm 0.002^{2}&0.15^{2}&\ldots&\pm 0.002^{2}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \pm 0.002^{2}&\ldots&\ldots&\ldots&0.15^{2}\end{array}\right)}_{n\times n}, \tag{10}\]
where \(n\times n\) denotes the dimension of the matrix and (\(\pm\)) the random assignment of (\(+\)) and (-) signs. In this case,
\[\chi^{2}_{\rm JW}(\Theta)=\Delta\mu^{T}(z,\Theta)\cdot C^{-1}_{\rm JW}\cdot \Delta\mu(z,\Theta)+\ln\bigg{(}\frac{S}{2\pi}\bigg{)}, \tag{11}\]
where \(\Delta\mu(z,\Theta)=\mu(z)_{\rm data}-\mu(z,\Theta)_{\rm model}-\Gamma_{0}\), \(C_{\rm JW}\) is the covariance matrix presented in Eq.(10), \(S\) is the sum of all the components of \(C^{-1}_{\rm JW}\), \(\mu(z)_{\rm data}\) is the distance modulus of the JW data, \(\mu(z,\Theta)_{\rm model}\) is the distance modulus for a cosmological model with parameter set \(\Theta\), and \(\Gamma_{0}\) is a magnitude bias added to account for a possible systematic luminosity difference between the JW and PN (PN+) datasets. Notice that, in comparison with [9], our Eq.(11) does not include logarithmic systematics in the forecasting.
For more technical details about the methodology followed here, see Appendix A. A schematic numerical implementation of the covariance construction and \(\chi^{2}\) functions above is sketched below.
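The following Python sketch assembles the mock JW covariance of Eq. (10) and the \(\chi^{2}\) functions of Eqs. (1) and (11), together with the log-probability \(P\propto\exp(-\chi^{2}/2)\) that emcee samples. All names (`build_jw_covariance`, `mu_of_z`, etc.) are illustrative and not taken from the released analysis code; the random-sign matrix is kept symmetric here, which Eq. (10) does not state explicitly.

```python
import numpy as np

def build_jw_covariance(n, diag=0.15**2, offdiag=0.002**2, seed=0):
    # Mock JW covariance of Eq. (10): 0.15^2 on the diagonal, +/- 0.002^2
    # off-diagonal with randomly assigned signs (made symmetric here).
    rng = np.random.default_rng(seed)
    signs = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), k=1)
    cov = offdiag * (signs + signs.T)
    np.fill_diagonal(cov, diag)
    return cov

def chi2_sn(mu_data, mu_model, cov_inv, gamma0=0.0):
    # Delta_mu^T C^{-1} Delta_mu + ln(S / 2 pi), with S the sum of the entries
    # of C^{-1}; gamma0 is the magnitude offset applied to the JW sample only.
    dmu = mu_data - mu_model - gamma0
    return dmu @ cov_inv @ dmu + np.log(cov_inv.sum() / (2.0 * np.pi))

def log_probability(theta, z_pn, mu_pn, cinv_pn, z_jw, mu_jw, cinv_jw, mu_of_z):
    # P ~ exp(-chi^2 / 2); mu_of_z(z, *pars) is the model distance modulus
    # (vectorised over z) and the last component of theta is taken as Gamma_0.
    *pars, gamma0 = theta
    chi2 = chi2_sn(mu_pn, mu_of_z(z_pn, *pars), cinv_pn)
    chi2 += chi2_sn(mu_jw, mu_of_z(z_jw, *pars), cinv_jw, gamma0=gamma0)
    return -0.5 * chi2

# Example usage with emcee (placeholder arrays):
# sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability,
#     args=(z_pn, mu_pn, cinv_pn, z_jw, mu_jw, cinv_jw, mu_of_z))
```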
## 4 Results: Cosmological constraints
In this section, we discuss the constraints for the dark energy models previously described. Tables 1 and 2 report the values of the cosmological parameters involved in each
Figure 1: _Left:_ Histogram of the Pantheon+ data (blue) and the extracted JW (red). _Right:_ Hubble Diagram of the Pantheon data (blue) and the extracted mock JW (red).
flat and non-flat model, respectively. Additionally, we used the latest SH0ES measurement of the Hubble constant [71], obtained from the local distance ladder via Cepheid calibration, \(H_{0}=73.04\pm 1.04\) km s\({}^{-1}\) Mpc\({}^{-1}\), which was introduced as a fixed value in our analyses. Also, an absolute magnitude \(M=-19.263\) was assumed, except for flat \(\Lambda\)CDM where \(M\) was taken as a free parameter. The optimal constraints on the cosmological parameters were derived using the emcee code. All Confidence Levels (C.L.) presented in this work correspond to 68.3 and 95%, i.e., to 1 and \(2\sigma\) respectively. Finally, in the presented results the value of \(\Omega_{k}\) is calculated directly from \(\Omega_{k}=1-\Omega_{m}-\Omega_{\Lambda}\), combining the marginalized distributions of each fractional density using a modified version of getdist3.
Footnote 3: getdist.readthedocs.io
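One possible way of propagating the MCMC chains into a marginalised \(\Omega_{k}\) posterior with getdist is sketched below; the toy `chain` array stands in for the post-burn-in emcee samples of \((\Omega_{m},\Omega_{\Lambda})\), and the getdist calls follow its documented interface, which we assume here.

```python
import numpy as np
from getdist import MCSamples

# Toy stand-in for the post-burn-in emcee chain: columns (Omega_m, Omega_Lambda).
rng = np.random.default_rng(1)
chain = np.column_stack([rng.normal(0.31, 0.01, 5000), rng.normal(0.70, 0.02, 5000)])

samples = MCSamples(samples=chain, names=["omegam", "omegal"],
                    labels=[r"\Omega_m", r"\Omega_\Lambda"])
omegak = 1.0 - chain[:, 0] - chain[:, 1]          # Omega_k = 1 - Omega_m - Omega_Lambda
samples.addDerived(omegak, name="omegak", label=r"\Omega_k")
samples.updateBaseStatistics()
print(samples.getInlineLatex("omegak", limit=1))  # marginalised 68% constraint
```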
### \(\Lambda\)CDM model
The constraints for this model are given in Figure 2. As we can notice, using the Pantheon sample gives a relatively lower value of \(\Omega_{m}\) than using the Pantheon+ sample for the flat-\(\Lambda\)CDM model. Comparing \(\Omega_{m}=0.290\pm 0.008\) with \(\Omega_{m}=0.311^{+0.010}_{-0.009}\) shows this tendency, which persists with the addition of the JW data. Furthermore, considering the non-flat model results in a higher \(\Omega_{m}\) estimation for the Pantheon+ sample.
The non-flat \(\Lambda\)CDM model constrained by the JW simulated data tends to reduce the curvature estimation towards flatness, \(\Omega_{k}\sim 0\). Let us keep in mind that the curvature constraints are negative for all four dataset combinations. There is a deviation from a flat universe with \(\Omega_{k}=-0.0092\pm 0.0091\) using the Pantheon compilation. Additionally, the flat \(\Lambda\)CDM model constrains \(\Gamma_{0}\) for the JW mock sample to values no larger than \(\Gamma_{0}=0.028\pm 0.018\) mag, which implies that no significant offset is expected for the calibrated sample.
Figure 2: 1-\(2\sigma\) C.L results for the \(\Lambda\)CDM model using SNIa Pantheon–PN (red color), SNIa Pantheon+–PN\({}^{+}\) (orange color), SNIa Pantheon & JW mock data – PN+JW (green color) and SNIa Pantheon+ & JW mock data – PN\({}^{+}\)+JW (purple color) baselines: _Left:_ Flat case. _Right:_ Non-flat case. Notice that the C.L for \(M\) is not associated with \(\Gamma_{0}\).
### \(w\)CDM model
The constraints for this model are given in Figure 3. As we can notice, there is a correlation between the \(w_{0}\) and \(\Omega_{m}\) values in the flat model, while in the non-flat version of the model this correlation vanishes, and for the Pantheon and JW datasets the uncertainties in the \(w_{0}-\Omega_{m}\) parameter space are large. For this model, the fractional matter density has a similar value using Pantheon and Pantheon+ once JW is included, \(\Omega_{m}\sim 0.333\). The curvature estimate moves closer to \(\Omega_{k}=0\) when the JW simulated data (generated assuming a flat model) are included, as expected. Furthermore, notice that we have a \(1\sigma\) deviation from a flat model using only the Pantheon+ sample, with \(\Omega_{k}=0.27^{+0.17}_{-0.11}\), although both SN samples prefer a non-flat universe.
Additionally, \(\Gamma_{0}\) is constrained to \(\Gamma_{0}=0.036^{+0.020}_{-0.022}\) mag using Pantheon+ and JW for the flat model. This means that there is no significant deviation in the cosmological fits for the \(w\)CDM model when using the JW mock sample compared to the observed SN samples.
### Chevallier-Polarski-Linder (CPL) model
The constraints for this model are given in Figure 4. It is interesting to note that the systematics improve for this model using Pantheon+ in comparison to the previous SN catalog, Pantheon. This is expected due to the density of data points at lower redshifts, where the CPL model can be well constrained. For the flat model and using Pantheon data, we recover the \(\Lambda\)CDM model with \(w_{0}=-1.111^{+0.110}_{-0.124}\) and \(w_{a}=-1.252^{+1.354}_{-1.709}\), both at \(1\sigma\). Something similar happens when we include the JW sample. This trend is confirmed when using Pantheon+ data, e.g. using Pantheon+ and JW results in \(\Omega_{m}=0.323^{+0.022}_{-0.023}\), \(w_{0}=-1.035^{+0.047}_{-0.056}\) and \(w_{a}=0.130^{+0.432}_{-0.37}\). However, in the non-flat model, the estimations change when using solely SN measurements. The curvature estimate deviates by more than \(1\sigma\): for Pantheon
Figure 3: 1-2\(\sigma\) C.L results for the \(w\)CDM model using SNIa Pantheon–PN (orange color), SNIa Pantheon+–PN\({}^{+}\) (orange color), SNIa Pantheon & JW mock data – PN+JW (green color) and SNIa Pantheon+ & JW mock data – PN\({}^{+}\)+JW (purple color) baselines: _Left:_ Flat case. _Right:_ Non-flat case.
\(\Omega_{k}=0.31^{+0.22}_{-0.13}\), and for Pantheon+ \(\Omega_{k}=0.447^{+0.14}_{-0.19}\). These results also deviate from \(\Lambda\)CDM, since neither \(w_{0}\) nor \(w_{a}\) recovers the fiducial values \(w_{0}=-1\) and \(w_{a}=0\).
In this model, all the \(\Gamma_{0}\) constraints are lower than the simulated 0.15 mag error, and therefore we do not expect any systematic effect on the cosmological parameters from the simulated magnitudes in the JW sample. The largest estimate, in comparison to the previous models, was found using the Pantheon+ sample, for which \(\Gamma_{0}=0.039\pm 0.022\) mag.
### Jassal-Bagla-Padmanabhan (JBP) model
The constraints for this model are given in Figure 5. As we can notice, the flat version of this model recovers the correlation between the \(\Omega_{m}\) and \(w_{0}\) parameters for all the datasets, while \(w_{a}\) is determined with a large uncertainty. Confirmation of \(\Lambda\)CDM occurs for the Pantheon and JW combinations, with \(w_{0}=-1\) and \(w_{a}=0\), while this does not happen using Pantheon+, which results in a clear deviation with \(w_{a}=1.403^{+0.650}_{-0.845}\). It is worth mentioning that the only negative \(w_{a}\) value is obtained using the Pantheon dataset, \(w_{a}=-0.721^{+1.787}_{-2.492}\). The Pantheon, Pantheon+, and Pantheon & JW combinations all yield a \(w_{a}\) determination error larger than the average. For the fractional matter density, with Pantheon data we obtain a larger estimation than with Pantheon+, with \(\Omega_{m}>0.3\) for Pantheon, and \(\Omega_{m}<0.3\) for Pantheon+ and its combination with JW.
For the non-flat model, we obtain different results, as none of the combinations recovers a flat universe. With Pantheon+ data we obtain \(\Omega_{k}=0.48^{+0.20}_{-0.10}\), while the JW mock data brings the estimation closer to \(\Omega_{k}=0\) without reaching flatness. It is worth noticing that the Pantheon combinations result in negative \(w_{a}\) values, while the Pantheon+ combinations are of the opposite sign at \(1\sigma\), with a large uncertainty.
Figure 4: 1-2\(\sigma\) C.L results for the CPL model using SNIa Pantheon–PN (orange color), SNIa Pantheon+–PN\({}^{+}\) (yellow color), SNIa Pantheon & JW mock data – PN+JW (green color) and SNIa Pantheon+ & JW mock data – PN\({}^{+}\)+JW (purple color) baselines: _Left:_ Flat case. _Right:_ Non-flat case.
For the JBP model, the largest \(\Gamma_{0}\) determination is obtained using Pantheon+ and JW for the non-flat version, \(\Gamma_{0}=0.040\pm 0.022\) mag, which again disfavours a systematic error in the magnitude determination for the JW mock data.
### Exponential model
The constraints for this model are given in Figure 6. As we can notice, a correlation is present in the parameter space between \(w_{0}\) and \(\Omega_{m}\) for all dataset combinations. For the flat model, all the combinations result in \(w_{0}\sim-1\) within \(1\sigma\), with \(\Omega_{m}\sim 0.3\). For the non-flat version, it is worth noticing that the combinations using Pantheon data result in curvature estimates close to those expected for a flat universe, with the Pantheon-only case being the closest to flatness. Using the Pantheon+ dataset alone results in a larger deviation from flatness, \(\Omega_{k}=0.35^{+0.23}_{-0.11}\). This can be alleviated using the JW mock data: \(\Omega_{k}=0.135^{+0.095}_{-0.072}\) still deviates by more than \(1\sigma\) but is close to the range expected for a flat universe. Similar to the flat model, the fractional matter density is close to \(\Omega_{m}\sim 0.3\), the lowest value being obtained with Pantheon+, \(\Omega_{m}=0.262\pm 0.055\). In the case of \(w_{0}\), the results are slightly lower than \(w_{0}=-1\), with Pantheon+ again giving the lowest estimate, \(w_{0}=-1.86^{+0.74}_{-0.48}\).
For the Exponential model, the determination \(\Gamma_{0}=0.036\pm 0.021\) mag in combination with Pantheon+ and JW indicates that the cosmological parameters of this model are not affected by the systematics in the JW mock data.
Figure 5: 1-2\(\sigma\) C.L results for the JBP model using SNIa Pantheon–PN (orange color), SNIa Pantheon+–PN\({}^{+}\) (yellow color), SNIa Pantheon & JW mock data – PN+JW (green color) and SNIa Pantheon+ & JW mock data – PN\({}^{+}\)+JW (purple color) baselines: _Left:_ Flat case. _Right:_ Non-flat case.
### Barboza-Alcaniz (BA) model
The constraints for this model are given in Figure 7. As we can notice, for the flat model the lowest value of \(\Omega_{m}\) is the one obtained with the Pantheon+ and JW combination, \(\Omega_{m}=0.280^{+0.047}_{-0.081}\). Regarding the values of \(w_{0}\) and \(w_{a}\) for the flat models, all the combinations are consistent at \(1\sigma\) with the \(\Lambda\)CDM model, as all the results fall in the range \(w_{0}\sim-1\) and \(w_{a}=0\). Nevertheless, it is interesting to notice that the only negative value of \(w_{a}\) is the one obtained using the Pantheon compilation, \(w_{a}=-0.723^{+1.047}_{-1.531}\). In general, this model agrees with a flat \(\Lambda\)CDM at \(1\sigma\).
Meanwhile, the non-flat model shows larger deviations from the \(\Lambda\)CDM model, as the curvature estimate is separated from flatness by more than \(1\sigma\) for all the combinations except the Pantheon compilation, for which \(\Omega_{k}=0.20^{+0.29}_{-0.24}\). The other results prefer \(\Omega_{k}>0\).
In this model, the largest \(\Gamma_{0}\) estimate is obtained for the non-flat model with the Pantheon+ and JW mock data combination, \(\Gamma_{0}=0.039\pm 0.021\) mag, similar to the previous models, indicating that the cosmological parameter inference is not affected by the error in magnitude of the simulated dataset.
Figure 6: 1-2\(\sigma\) C.L results for the exponential model using SNIa Pantheon–PN (orange color), SNIa Pantheon+–PN\({}^{+}\) (yellow color), SNIa Pantheon & JW mock data – PN+JW (blue color) and SNIa Pantheon+ & JW mock data – PN\({}^{+}\)+JW (purple color) baselines: _Left:_ Flat case. _Right:_ Non-flat case.
## 5 Conclusions
In this paper, we studied new cosmological constraints in spatial curvature dark energy models. We extend the distance ladder method through an SNIa sample using the capabilities of JWST to forecast SNIa up to \(z\sim 6\), considering the information on the star formation rates at high \(z\). Comparing the results shown in Tables 1 and 2, notice that flat \(\Lambda\)CDM, flat \(w\)CDM, and flat exponential are the only models in which the value of \(\Omega_{m}\) obtained using the Pantheon sample is less than \(\Omega_{m}\) for Pantheon+, i.e. \(\Omega_{m,\rm PN}<\Omega_{m,\rm PN^{+}}\).
However, in our analysis, when including the JW mock data, the flat \(\Lambda\)CDM, non-flat \(\Lambda\)CDM, non-flat \(w\)CDM, and non-flat exponential are the cosmological models in which \(\Omega_{m}\) for the combination PN+JW is less than \(\Omega_{m}\) for PN\({}^{+}\)+JW, i.e. \(\Omega_{m,\rm PN+JW}<\Omega_{m,\rm PN^{+}+JW}\), showing a lower value of this parameter when more SNe at \(z<1\) are included.
Regarding the JW forecasting, all models have \(\Gamma_{0,\rm PN^{+}+JW}>\Gamma_{0,\rm PN+JW}\). More SNe at \(z<1\) (e.g. Pantheon+) seem to raise the value of the \(\Gamma_{0}\) parameter associated with JWST. Therefore, according to our \(\Delta\mu\) definition below Eq.(11), it is statistically better to employ Pantheon with the JW forecasting, because the vector uncertainty is lower in comparison to the one obtained with the Pantheon+ catalog. However, notice that the JW sample has been calibrated with Pantheon, and therefore \(\Gamma_{0}\) shows a preference for this SN sample.
We have studied the possibility of non-zero curvature in standard dark energy models, as demonstrated in [9]. The addition of our JW forecasting leads to an improvement in the statistics associated with \(\Omega_{m}\). It is expected that the JWST will observe more luminous structures with a well-treated morphology, which can help us find more robust statistics in dark energy cosmologies.
Figure 7: 1-2\(\sigma\) C.L results for the BA model using SNIa Pantheon–PN (orange color), SNIa Pantheon+–PN\({}^{+}\) (yellow color), SNIa Pantheon & JW mock data – PN+JW (green color) and SNIa Pantheon+ & JW mock data – PN\({}^{+}\)+JW (purple color) baselines: _Left:_ Flat case. _Right:_ Non-flat case.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model (flat) & Dataset & \(\Omega_{M}\) & \(w_{0}\) & \(w_{a}\) & \(\Gamma_{0}\) \\ \hline \multirow{8}{*}{\(\Lambda\)CDM} & PN & \(0.290\pm 0.008\) & & & & \\ & PN+JW & \(0.295\pm 0.007\) & & & \(0.005^{+0.018}_{-0.018}\) \\ & PN\({}^{+}\) & \(0.311^{+0.010}_{-0.009}\) & & & & \\ & PN\({}^{+}\)+JW & \(0.312\pm 0.008\) & & & \(0.028\pm 0.018\) \\ \hline \multirow{8}{*}{\(w\)CDM} & PN & \(0.320^{+0.040}_{-0.045}\) & \(-1.073^{+0.100}_{-0.113}\) & & \\ & PN+JW & \(0.327^{+0.023}_{-0.021}\) & \(-1.086^{+0.057}_{-0.065}\) & & \(0.025\pm 0.021\) \\ & PN+ & \(0.334^{+0.036}_{-0.041}\) & \(-1.053^{+0.091}_{-0.095}\) & & \\ & PN\({}^{+}\)+JW & \(0.322^{+0.023}_{-0.022}\) & \(-1.025^{+0.052}_{-0.062}\) & & \(0.036^{+0.020}_{-0.022}\) \\ \hline \multirow{8}{*}{CPL} & PN & \(0.380^{+0.046}_{-0.058}\) & \(-1.111^{+0.110}_{-0.124}\) & \(-1.252^{+1.354}_{-1.709}\) & \\ & PN+JW & \(0.336^{+0.021}_{-0.022}\) & \(-1.065^{+0.061}_{-0.064}\) & \(-0.399^{+0.668}_{-0.798}\) & \(0.022^{+0.023}_{-0.022}\) \\ \cline{1-1} & PN\({}^{+}\) & \(0.343^{+0.054}_{-0.065}\) & \(-1.081^{+0.103}_{-0.118}\) & \(0.150^{+0.638}_{-0.936}\) & \\ \cline{1-1} & PN\({}^{+}\)+JW & \(0.323^{+0.022}_{-0.023}\) & \(-1.035^{+0.047}_{-0.056}\) & \(0.130^{+0.432}_{-0.537}\) & \(0.039\pm 0.022\) \\ \hline \multirow{8}{*}{JBP} & PN & \(0.358^{+0.064}_{-0.091}\) & \(-1.088^{+0.140}_{-0.162}\) & \(-0.721^{+1.787}_{-2.492}\) & \\ & PN+JW & \(0.310^{+0.036}_{-0.048}\) & \(-1.106^{+0.075}_{-0.079}\) & \(0.873^{+1.216}_{-1.491}\) & \(0.029^{+0.023}_{-0.022}\) \\ \cline{1-1} & PN\({}^{+}\) & \(0.268^{+0.095}_{-0.121}\) & \(-1.013^{+0.142}_{-0.153}\) & \(1.182^{+0.707}_{-1.073}\) & \\ \cline{1-1} & PN\({}^{+}\)+JW & \(0.280^{+0.039}_{-0.051}\) & \(-1.045^{+0.062}_{-0.063}\) & \(1.403^{+0.650}_{-0.845}\) & \(0.038^{+0.022}_{-0.021}\) \\ \hline \multirow{8}{*}{Exp} & PN & \(0.304^{+0.060}_{-0.066}\) & \(-1.044^{+0.128}_{-0.163}\) & & \\ & PN+JW & \(0.312^{+0.026}_{-0.025}\) & \(-1.062^{+0.060}_{-0.073}\) & & \(0.025^{+0.022}_{-0.020}\) \\ \cline{1-1} & PN\({}^{+}\) & \(0.321^{+0.050}_{-0.058}\) & \(-1.034^{+0.118}_{-0.129}\) & & \\ \cline{1-1} & PN\({}^{+}\)+JW & \(0.306^{+0.026}_{-0.025}\) & \(-1.001^{+0.058}_{-0.063}\) & & \(0.035^{+0.021}_{-0.023}\) \\ \hline \multirow{8}{*}{BA} & PN & \(0.383^{+0.061}_{-0.111}\) & \(-1.124\pm 0.159\) & \(-0.723^{+1.047}_{-1.531}\) & \\ & PN+JW & \(0.301^{+0.049}_{-0.092}\) & \(-1.045^{+0.109}_{-0.083}\) & \(0.301^{+0.254}_{-0.595}\) & \(0.028\pm 0.022\) \\ \cline{1-1} & PN\({}^{+}\) & \(0.323^{+0.076}_{-0.110}\) & \(-1.044^{+0.162}_{-0.156}\) & \(0.158^{+0.334}_{-0.614}\) & \\ \cline{1-1} & PN\({}^{+}\)+JW & \(0.280^{+0.047}_{-0.081}\) & \(-0.980^{+0.099}_{-0.075}\) & \(0.343^{+0.164}_{-0.311}\) & \(0.037\pm 0.021\) \\ \hline \end{tabular}
\end{table}
Table 1: Best-fit cosmological parameters at \(1\sigma\) for the six flat models obtained by combining the following catalogs: Pantheon (PN) and Pantheon+ (PN\({}^{+}\)) with the JW simulated sample (JW). Empty cells denote parameters not defined in the model.
## Acknowledgments
The Authors thank J. Vinko and E. Regos for their insights on using JWST forecasting data. Also, we would like to acknowledge funding from PAPIIT UNAM Project TA100122. CE-R acknowledges the Royal Astronomical Society as FRAS 10147. PMA and RS are supported by the CONACyT National Grant. The computational calculations have been carried out using facilities procured through the Cosmostatistics National Group project. This article is based upon work from COST Action CA21136 Addressing observational tensions in cosmology with systematics and fundamental physics (CosmoVerse) supported by COST (European Cooperation in Science and Technology).
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model (non-flat) & Dataset & \(\Omega_{M}\) & \(\Omega_{\Lambda}\) & \(w_{0}\) & \(w_{a}\) & \(\Gamma_{0}\) & \(\Omega_{k}\) \\ \hline \multirow{7}{*}{\(\Lambda\)CDM} & PN & \(0.332\pm 0.043\) & \(0.761\pm 0.049\) & & & \(-0.092\pm 0.091\) \\ & PN+JW & \(0.306\pm 0.010\) & \(0.725\pm 0.017\) & & \(0.018\pm 0.019\) & \(-0.031\pm 0.022\) \\ & PN+JW & \(0.328\pm 0.037\) & \(0.710\pm 0.041\) & & & \(-0.038\pm 0.076\) \\ & PN+JW & \(0.312\pm 0.010\) & \(0.695\pm 0.017\) & & \(0.027\pm 0.019\) & \(-0.008\pm 0.023\) \\ \hline \multirow{7}{*}{\(w\)CDM} & PN & \(0.302\pm 0.051\) & \(0.59^{+0.11}_{-0.20}\) & \(-1.29^{+0.38}_{-0.18}\) & & \(0.11^{+0.21}_{-0.16}\) \\ & PN+JW & \(0.333^{+0.022}_{-0.019}\) & \(0.619^{+0.066}_{-0.073}\) & \(-1.19^{+0.16}_{-0.11}\) & & \(0.023\pm 0.021\) & \(0.048\pm 0.058\) \\ & PN+JW & \(0.292\pm 0.045\) & \(0.435^{+0.008}_{-0.15}\) & \(-1.58^{+0.46}_{-0.29}\) & & \(0.27^{+0.17}_{-0.11}\) \\ & PN+JW & \(0.334^{+0.022}_{-0.018}\) & \(0.595^{+0.005}_{-0.079}\) & \(-1.158^{+0.124}_{-0.152}\) & & \(0.032\pm 0.021\) & \(0.071\pm 0.061\) \\ \hline \multirow{7}{*}{CPL} & PN & \(0.280\pm 0.077\) & \(0.412^{+0.060}_{-0.16}\) & \(-1.537^{+0.346}_{-0.451}\) & \(-3.6^{+5.4}_{-2.9}\) & & \(0.31^{+0.22}_{-0.13}\) \\ & PN+JW & \(0.358^{+0.022}_{-0.017}\) & \(0.533^{+0.054}_{-0.077}\) & \(-1.238^{+0.126}_{-0.143}\) & \(-2.02^{+1.4}_{-1.3}\) & \(0.025\pm 0.021\) & \(0.109^{+0.061}_{-0.065}\) \\ & PN+JW & \(0.239\pm 0.059\) & \(0.314^{+0.038}_{-0.11}\) & \(-2.059^{+0.465}_{-0.520}\) & \(-1.6^{+5.7}_{-2.7}\) & & \(0.447^{+0.14}_{-0.09}\) \\ & PN+JW & \(0.344^{+0.029}_{-0.020}\) & \(0.539^{+0.069}_{-0.070}\) & \(-1.17^{+0.16}_{-0.11}\) & \(-0.34^{+1.5}_{-0.1}\) & \(0.037\pm 0.021\) & \(0.116\pm 0.057\) \\ \hline \multirow{7}{*}{JBP} & PN & \(0.292\pm 0.097\) & \(0.50^{+0.12}_{-0.24}\) & \(-1.50^{+0.62}_{-0.21}\) & \(-2.1^{+4.8}_{-2.6}\) & & \(0.21^{+0.29}_{-0.21}\) \\ & PN+JW & \(0.358^{+0.031}_{-0.016}\) & \(0.493^{+0.055}_{-0.12}\) & \(-1.372^{+0.225}_{-0.281}\) & \(-3.03^{+0.49}_{-2.1}\) & \(0.031\pm 0.022\) & \(0.149^{+0.092}_{-0.061}\) \\ & PN+JW & \(0.204\pm 0.072\) & \(0.315^{+0.044}_{-0.15}\) & \(-2.320^{+0.781}_{-0.151}\) & \(2.1^{+6.7}_{-4.3}\) & & \(0.48^{+0.20}_{-0.10}\) \\ & PN+JW & \(0.328^{+0.056}_{-0.012}\) & \(0.517^{+0.076}_{-0.16}\) & \(-1.424^{+0.277}_{-0.347}\) & \(0.5^{+2.4}_{-1.4}\) & \(0.040\pm 0.022\) & \(0.155^{+0.10}_{-0.088}\) \\ \hline \multirow{7}{*}{Exp} & PN & \(0.289\pm 0.068\) & \(0.67^{+0.16}_{-0.32}\) & \(-1.24^{+0.50}_{-0.18}\) & & \(0.04^{+0.30}_{-0.24}\) \\ & PN+JW & \(0.335^{+0.035}_{-0.021}\) & \(0.573^{+0.08}_{-0.12}\) & \(-1.33^{+0.31}_{-0.19}\) & & \(0.028\pm 0.022\) & \(0.092^{+0.068}_{-0.076}\) \\ & PN+JW & \(0.262\pm 0.055\) & \(0.391^{+0.046}_{-0.018}\) & \(-1.86^{+0.74}_{-0.48}\) & & \(0.35^{+0.23}_{-0.13}\) \\ & PN+JW & \(0.340^{+0.022}_{-0.018}\) & \(0.526^{+0.073}_{-0.12}\) & \(-1.38^{+0.22}_{-0.22}\) & & \(0.036\pm 0.021\) & \(0.134^{+0.005}_{-0.072}\) \\ \hline \multirow{7}{*}{BA} & PN & \(0.287^{+0.15}_{-0.086}\) & \(0.52^{+0.11}_{-0.23}\) & \(-1.328^{+0.324}_{-0.409}\) & \(-1.1^{+2.6}_{-1.7}\) & & \(0.20^{+0.29}_{-0.24}\) \\ & PN+JW & \(0.363^{+0.029}_{-0.014}\) & \(0.465^{+0.044}_{-0.090}\) & \(-1.447^{+0.205}_{-0.255}\) & \(-2.2^{+2.9}_{-1.3}\) & \(0.031\pm 0.022\) & \(0.172^{+0.075}_{-0.054}\) \\ \cline{1-1} & PN+JW & \(0.240\pm 0.069\) & \(0.332^{+0.046}_{-0.13}\) & \(-1.990^{+0.556}_{-0.658}\) & \(-0.7^{+3.2}_{-1.3}\) & & \(0.43^{+0.19}_{-0.10}\) \\ \cline{1-1} & PN+JW & 
\(0.347^{+0.042}_{-0.012}\) & \(0.453^{+0.053}_{-0.10}\) & \(-1.522^{+0.254}_{-0.280}\) & \(-0.69^{+1.8}_{-0.92}\) & \(0.039\pm 0.021\) & \\ \hline \end{tabular}
\end{table}
Table 2: Best-fit cosmological parameters at \(1\sigma\) for the six non-flat models obtained by combining the following catalogs: Pantheon (PN) and Pantheon+ (PN\({}^{+}\)) with the JW simulated sample (JW). Empty cells denote parameters not defined in the model.
## Appendix A SNIa JWST baseline forecasting: data and priors
The simulated data set used is derived from the FLARE project, which has the goal of searching for supernovae from Population III stars at redshift \(z\geq 10\)[77] by using the characteristics of the JWST in an area of 0.05 square degrees. Furthermore, the project employs four broadband NIRCam filters (F150W, F200W, F322W2, F444W), with exposure times that can reach \(10\sigma\) limiting magnitudes of \(m\gtrsim 27\) in these filters [9]. By using Monte Carlo methods it was found that, for a specific JWST observing project, at least 200 SNe Ia could be observed. Therefore, the mock sample is constructed using a flat \(\Lambda\)CDM model with \(H_{0}=71.66\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{m}=0.3\), as derived using Pantheon data [8]. To ensure consistency between the local and distant samples, we consider a Gaussian error associated with the supernova distance moduli of 0.15 mag, extrapolated to a larger uncertainty at higher redshift, although this is an oversimplification of typical distance measurements [80]. For the FLARE project [78] the simulations were done using the standard cosmology and an SFR that depends on the redshift at which the simulation is performed. The occurrence rate is also calculated, in order to estimate the detection depth needed for the telescope to perform the observations.
The SFR functions used are explicitly redshift-dependent. For low \(z\lesssim 3\), the function used is [81]:
\[\mathrm{SFR}(z)=K\frac{(a+bz)h}{1+(z/c)^{d}}, \tag{10}\]
where \(h=H_{0}/(100\,\mathrm{km\,s^{-1}\,Mpc^{-1}})\), \(a=0.017\), \(b=0.13\), \(c=3.3\), and \(d=5.3\). The constant \(K\) is constrained by the observed SN rates. For higher redshifts, between \(3<z\leq 8\), the function used is:
\[\mathrm{SFR}(z)\propto(1+z)^{-3.6}, \tag{11}\]
and for \(z>8\):
\[\mathrm{SFR}(z)\propto(1+z)^{-10.4}. \tag{12}\]
Additionally, other SFRs are proposed for the whole interval [82]:
\[\mathrm{SFR}(z)=0.015\frac{(1+z)^{2.7}}{1+[(1+z)/2.9]^{5.6}}, \tag{13}\]
or [83]:
\[\mathrm{SFR}(z)=\frac{C}{10^{A(z-z_{0})}+10^{B(z-z_{0})}}, \tag{14}\]
using the constants \(C=0.18\), \(A=-0.997\), \(B=0.241\), and \(z_{0}=1.243\). All the SFR parameterisations above give a similar redshift dependence, with a peak in star formation at \(z\sim 2\). The volume rate was calculated using the observed SN rate per redshift bin as [78]:
\[\dot{N}_{\mathrm{SN}}(z)=\frac{\dot{n}_{\mathrm{SLSN}}(z)}{1+z}\frac{\mathrm{ d}V}{\mathrm{d}z}, \tag{15}\]
where the comoving SN rate is given by:
\[\dot{n}_{\mathrm{SLSN}}(z)=\varepsilon(z)\mathrm{SFR}(z), \tag{16}\]
which expresses the comoving rate of SNe explicitly. Here \(\varepsilon(z)\) is an efficiency factor that takes into account the metallicity dependence, and it can be studied through GRBs [78]. The expected number of SNe can then be calculated as:
\[N_{\rm SN}=\Omega T\int_{z}^{z+\Delta z}\frac{\dot{n}_{\rm SLSN}(z)}{1+z}\frac{ \mathrm{d}V}{\mathrm{d}z}\mathrm{d}z, \tag{10}\]
resulting in a number per unit redshift interval in the survey area, where \(\Omega\) is the survey area and \(T\) is the dedicated observation time. The redshift dependence also has to account for the number of SNIa progenitors (since SNe Ia require a white dwarf) and the delay due to stellar evolution in such binary systems.
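A schematic Python implementation of this rate calculation is given below. The constants used to join the \(z\lesssim 3\), \(3<z\leq 8\), and \(z>8\) branches continuously, and the overall normalisation \(K\), are assumptions of this sketch (in the forecasting they are fixed by the observed SN rates), while the comoving volume element \(dV/dz\) and the efficiency \(\varepsilon(z)\) are left as user-supplied callables.

```python
import numpy as np
from scipy.integrate import quad

def sfr(z, K=1.0, h=0.7166):
    # Piecewise SFR used in the forecasting: the low-z form for z <~ 3, with the
    # z > 3 and z > 8 power laws matched continuously at the break points.
    if z <= 3.0:
        a, b, c, d = 0.017, 0.13, 3.3, 5.3
        return K * (a + b * z) * h / (1.0 + (z / c) ** d)
    if z <= 8.0:
        return sfr(3.0, K, h) * ((1.0 + z) / 4.0) ** (-3.6)
    return sfr(8.0, K, h) * ((1.0 + z) / 9.0) ** (-10.4)

def expected_sn(z_lo, z_hi, dVdz, efficiency, area, time):
    # N_SN = Omega * T * int dz eps(z) SFR(z)/(1+z) dV/dz, following the
    # expressions above; `area` and `time` play the roles of Omega and T.
    integrand = lambda z: efficiency(z) * sfr(z) / (1.0 + z) * dVdz(z)
    value, _ = quad(integrand, z_lo, z_hi)
    return area * time * value
```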
For the simulation of the mock data sets, the code developed by [84] is used to create light curves at different redshifts, taking into account the calculated rate and the detection capabilities of the observatory [78]. For every SN, the simulation computes the redshift \(z\), the luminosity distance \(D_{L}\), the moment of maximum light \(t_{\rm max}\), the V-band absolute magnitude \(M_{\rm V}\), the stretch, and the color. It is important to mention that the assumptions of the FLARE project imply that the JWST will take deep observations for at least three years with a 90-day cadence, allowing the discovery of between 5 and 20 supernova events, meaning at least fifty in the redshift range \(1<z<4\)[9]. Additionally, the simulation takes into account that the ideal SNIa observation window runs from two weeks before maximum light up to one month after, during which the spectrum has the ideal quality [85]. The output of the simulations is the Hubble diagram of apparent magnitudes, assuming detection with the telescope's NIRCam.
|
2307.16558 | Canonical Gradings of Monads | We define a notion of grading of a monoid T in a monoidal category C,
relative to a class of morphisms M (which provide a notion of M-subobject). We
show that, under reasonable conditions (including that M forms a factorization
system), there is a canonical grading of T. Our application is to graded monads
and models of computational effects. We demonstrate our results by
characterizing the canonical gradings of a number of monads, for which C is
endofunctors with composition. We also show that we can obtain canonical grades
for algebraic operations. | Flavien Breuvart, Dylan McDermott, Tarmo Uustalu | 2023-07-31T10:37:41Z | http://arxiv.org/abs/2307.16558v1 | # Canonical Gradings of Monads
###### Abstract
We define a notion of grading of a monoid \(\mathsf{T}\) in a monoidal category \(\mathcal{C}\), relative to a class of morphisms \(\mathcal{M}\) (which provide a notion of \(\mathcal{M}\)-subobject). We show that, under reasonable conditions (including that \(\mathcal{M}\) forms a factorization system), there is a canonical grading of \(\mathsf{T}\). Our application is to graded monads and models of computational effects. We demonstrate our results by characterizing the canonical gradings of a number of monads, for which \(\mathcal{C}\) is endofunctors with composition. We also show that we can obtain canonical grades for algebraic operations.
## 1 Introduction
This paper is motivated by quantitative modelling of computational effects from mathematical programming semantics. It is standard in this domain to model notions of computational effect, such as nondeterminism or manipulation of external state, by (strong) monads [12]. In many applications, however, it is useful to be able to work with quantified effects, e.g., how many outcomes a computation may have, or to what degree it may read or overwrite the state. This is relevant, for example, for program optimizations or analyses to assure that a program can run within allocated resources. Quantification of effectfulness is an old idea and goes back to type-and-effect systems [9]. Mathematically, notions of quantified effect can be modelled by graded (strong) monads [14, 11, 5].
It is natural to ask if there are systematic ways for refining a non-quantitative model of some effect into a quantitative version, i.e., for producing a graded monad from a monad. In this paper, we answer this question in the affirmative. We show how a monad on a category can be graded with any class of subfunctors (intuitively, predicates on computations) satisfying reasonable conditions, including that it forms a factorization system on some monoidal subcategory of the endofunctor category. Moreover, this grading is canonical, namely universal in a certain 2-categorical sense. We also show that algebraic operations of the given monad give rise to _flexibly graded_ algebraic operations [6] of the canonically graded monad. Instead of working concretely with monads on a category, we work abstractly with monoids in a (skew) monoidal category equipped with a factorization system.
The structure of the paper is this. In Section 2, we introduce the idea of grading by subobjects for general objects and instantiate this for grading of functors. We then proceed to gradings of monoids and monads in Section 3. In Section 4, we explore the specific interesting case of grading monads canonically by subsets of their sets of shapes. In Section 5, we explain the emergence of canonical flexibly graded algebraic operations for canonical gradings of monads. One longer proof is in Appendix A.
We introduce the necessary concepts regarding the classical topics of monads, monoidal categories and factorization systems. For additional background on the more specific concepts of graded monad and skew monoidal category, which we also introduce, we refer to [5, 3] and [15, 8] as entry points. |
2305.00543 | Calibration Error Estimation Using Fuzzy Binning | Neural network-based decisions tend to be overconfident, where their raw
outcome probabilities do not align with the true decision probabilities.
Calibration of neural networks is an essential step towards more reliable deep
learning frameworks. Prior metrics of calibration error primarily utilize crisp
bin membership-based measures. This exacerbates skew in model probabilities and
portrays an incomplete picture of calibration error. In this work, we propose a
Fuzzy Calibration Error metric (FCE) that utilizes a fuzzy binning approach to
calculate calibration error. This approach alleviates the impact of probability
skew and provides a tighter estimate while measuring calibration error. We
compare our metric with ECE across different data populations and class
memberships. Our results show that FCE offers better calibration error
estimation, especially in multi-class settings, alleviating the effects of skew
in model confidence scores on calibration error estimation. We make our code
and supplementary materials available at: https://github.com/bihani-g/fce | Geetanjali Bihani, Julia Taylor Rayz | 2023-04-30T18:06:14Z | http://arxiv.org/abs/2305.00543v2 | # Calibration Error Estimation Using Fuzzy Binning
###### Abstract
Neural network-based decisions tend to be overconfident, where their raw outcome probabilities do not align with the true decision probabilities. Calibration of neural networks is an essential step towards more reliable deep learning frameworks. Prior metrics of calibration error primarily utilize crisp bin membership-based measures. This exacerbates skew in model probabilities and portrays an incomplete picture of calibration error. In this work, we propose a Fuzzy Calibration Error metric (FCE) that utilizes a fuzzy binning approach to calculate calibration error. This approach alleviates the impact of probability skew and provides a tighter estimate while measuring calibration error. We compare our metric with ECE across different data populations and class memberships. Our results show that FCE offers better calibration error estimation, especially in multi-class settings, alleviating the effects of skew in model confidence scores on calibration error estimation. We make our code and supplementary materials available at: [https://github.com/bihani-g/fce](https://github.com/bihani-g/fce).
Keywords: Language Models, Calibration, Fine-tuning, Fuzzy theory, Classification, Natural Language Processing
## 1 Introduction
Neural network-based decision-making systems have evolved rapidly in the recent decade. Within the domain of natural language processing, deep learning has shaped the current evolution in language modeling. These neural network-based language models are trained on large text corpora and can be
fine-tuned across a wide range of NLP tasks and further improved using synthetic semantic enhancement schemes [1], yielding state-of-the-art performance [2; 3; 4; 5]. Ideally, a neural model should output reliable and confident prediction probabilities. But recent works have shown that neural networks are unreliable and output highly overconfident predictions, resulting in over-estimation of the model's confidence in decisions [6; 7; 8]. This leads to model miscalibration, i.e. a lack of alignment between a model's decision probabilities and its actual likelihood of correctness. This lack of calibration can severely impact the trustworthiness of a model's decisions.
A widely adopted measure of the degree of miscalibration is Expected Calibration Error (ECE) [9], used to measure neural network reliability [10; 11; 12]. The highly overconfident output prediction probabilities of neural networks result in a left-skewed probability distribution [13]. Since ECE utilizes a fixed-width crisp binning scheme, this skew results in higher probability bins largely contributing to the calibration error estimation, while lower probability bins are ignored [13; 14; 15]. To overcome these limitations, prior works have proposed alternative binning strategies such as equal-frequency binning [14], adaptive binning [15], replacing binning with smoothed kernel density estimation [16], and more. Most calibration error estimation techniques rely on crisp binning, which discards edge probabilities (probabilities that typically lie on the bin edge) that could have contributed to a more accurate calibration error estimation. Although some works have utilized fuzzification of prediction probabilities for downstream NLP tasks [17], the calibration impacts of such fuzzification are yet to be studied. We hypothesize that fuzzifying the binning scheme would allow edge probabilities to contribute toward more accurate calibration error estimation. Moreover, fuzzy binning would increase the visibility of lower probability scores by allowing them to have partial membership in higher probability bins, minimizing the skew problem in calibration error estimation.
Towards testing this hypothesis, we propose a new metric for estimating calibration error, i.e. Fuzzy Calibration Error (FCE), that utilizes fuzzy binning instead of crisp binning to allow edge probability contributions and minimize skew in calculating calibration error. We perform empirical evaluation across different classification settings, comparing FCE with the baseline calibration error estimation metric ECE.
Our results show that, unlike ECE, FCE better captures miscalibration in lower probability bins and provides a tighter and less skewed estimate of calibration error. These improvements are more visible in multi-class settings, where the skew in confidence scores exacerbates the calibration error estimation problem.
The contributions of this work are summarized as follows:
* We propose Fuzzy Calibration Error (FCE) metric which uses fuzzy binning to account for edge probabilities and minimize skew in calibration error estimation
* We perform empirical evaluation across a wide range of classification settings and show the benefits of using FCE over ECE in minimizing the impacts of probability skew on calibration error estimation
## 2 Background
### Neural Network Calibration
Neural network calibration refers to the process of adjusting a neural network model's output probabilities to reflect the true probabilities of the events it is predicting. With the increased application of neural network architectures in high-risk real-world settings, their calibration has become an extensively studied topic in recent years [18; 19; 20]. Recent research has focused on improving the calibration of neural networks, particularly in the context of deep learning. Various methods have been proposed to achieve better calibration, including temperature scaling [6], isotonic regression [21], and histogram binning [22].
### Expected Calibration Error
Expected calibration error (ECE) is a scalar measure of calibration error that calculates the weighted average of the difference between the accuracy of a model and its average confidence level over a set of bins defined by the predicted probabilities. Estimation of expected accuracy from finite samples is done by grouping predictions into \(M\) interval bins (each of size \(\frac{1}{M}\)), and the accuracy of each bin is calculated. Let \(B_{m}\) be a bin containing samples whose prediction confidence lies within the interval \(I_{m}=\left(\frac{m-1}{M},\frac{m}{M}\right]\). Then the accuracy of \(B_{m}\), where \(y_{i}\) and \(\hat{y}_{i}\) portray predicted and true class labels, is calculated as shown in Eq. 1.
\[\mathrm{acc}\left(B_{m}\right)=\frac{1}{\left|B_{m}\right|}\sum_{i\in B_{m}} \mathbf{1}\left(\hat{y}_{i}=y_{i}\right) \tag{1}\]
The average predicted confidence of \(B_{m}\), is calculated as shown in Eq. 2, where \(\hat{p}_{i}\) refers to the prediction probability of the \(i^{th}\) instance in \(B_{m}\).
\[\mathrm{conf}\left(B_{m}\right)=\frac{1}{\left|B_{m}\right|}\sum_{i\in B_{m}} \hat{p}_{i} \tag{2}\]
In an ideal scenario, for a perfectly calibrated model, \(\mathrm{acc}\left(B_{m}\right)=\mathrm{conf}\left(B_{m}\right)\) for all \(m\) bins where \(m\in\{1,\ldots,M\}\).
Finally, ECE is calculated as shown in Eq. 3, where \(n\) is total number of samples [9].
\[\mathrm{ECE}=\sum_{m=1}^{M}\frac{\left|B_{m}\right|}{n}\mid\mathrm{acc}\left(B _{m}\right)-\mathrm{conf}\left(B_{m}\right) \tag{3}\]
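For reference, Eqs. (1)-(3) translate directly into a few lines of numpy. The sketch below is only an illustrative implementation (function and argument names are ours); it uses the equal-width crisp binning described above.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=10):
    # ECE of Eq. (3): samples are grouped into n_bins equal-width confidence
    # bins and the weighted |accuracy - confidence| gap is accumulated.
    conf = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(conf)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += (in_bin.sum() / n) * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece
```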
## 3 Fuzzy Calibration Error
In this work, we propose Fuzzy Calibration Error (FCE), a metric that transforms raw prediction probabilities into soft bin membership values for calibration error estimation. This transformation has two benefits:
1. Allows edge probability contributions when calculating calibration error
2. Minimize probability skew effects by increasing visibility of lower probability bins in calibration error estimation
To perform fuzzification, we utilize trapezoidal membership functions to map raw softmax prediction probabilities to fuzzy bin membership values. The difference between crisp and fuzzy binning of model prediction probabilities is shown in Figure 1 for \(M=3\) bins, and can be extended to any number of bins \(M>3\). While ECE only allows for crisp membership within each bin, FCE offers a more flexible binning approach, with partial memberships allowed across multiple bins.
Fuzzy Calibration Error (\(FCE\)) calculates the weighted average of the difference between accuracy and average model confidence over a set of \(M\) fuzzy bins. Estimation of expected accuracy from finite samples is done by grouping predictions into \(M\) fuzzy bins, and the accuracy of each bin is calculated. Let \(B_{m}\) be a bin containing samples whose prediction confidence lies within the interval \(I_{m}=\left(\frac{m-1}{M},\frac{m}{M}\right]\). Then the accuracy for bin \(B_{m}\), where \(y_{i}\) and \(\hat{y}_{i}\) portray predicted and true class labels, is calculated as shown in Eq. 4.
\[\mathrm{acc}\,_{fuzzy}(B_{m})=\frac{1}{|\mu_{fuzzy}(B_{m})|}\sum_{i\in B_{m}} \mu_{fuzzy}(B_{m})(\hat{y}_{i}=y_{i}) \tag{4}\]
Then, the average fuzzy predicted confidence of \(B_{m}\), is calculated as shown in Eq. 5.
\[\mathrm{conf}\,_{fuzzy}(B_{m})=\frac{1}{|\mu_{fuzzy}(B_{m})|}\sum_{i\in B_{m}} \mu_{fuzzy}(B_{m})\cdot\hat{p}_{i} \tag{5}\]
Figure 1: Crisp binning (Top left) and fuzzy binning (Bottom left) of prediction probabilities, where the number of bins \(M=3\). An example of the difference in bin assignment based on \(\hat{p_{i}}\) in crisp vs fuzzy binning (Right).
Finally, FCE is calculated as shown in Eq. 6. Unlike ECE where the average is taken over the number of samples in \(B_{m}\) i.e., \(n\), we take the average over the total fuzzy membership in \(B_{m}\) i.e., \(\sum_{m=1}^{M}\mu_{fuzzy}(B_{m})\).
\[FCE=\frac{1}{\sum_{m=1}^{M}\mu_{fuzzy}(B_{m})}\sum_{m=1}^{M}|\mu (B_{m})|\cdot|\text{acc}\,_{fuzzy}(B_{m})-\text{conf}\,_{fuzzy}(B_{m})| \tag{6}\]
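A possible numpy implementation of Eqs. (4)-(6) is sketched below. The exact trapezoidal membership functions of Figure 1 are not given in closed form in the text, so the breakpoint choice here (controlled by `spread`, with the bins forming a partition of unity) is an assumption of ours rather than the precise definition used for the reported experiments. Replacing the trapezoids by crisp indicator functions recovers the ECE of Eq. (3).

```python
import numpy as np

def trapezoid(p, a, b, c, d):
    # Trapezoidal membership with breakpoints a <= b <= c <= d.
    p = np.asarray(p, dtype=float)
    rise = np.ones_like(p) if b <= a else np.clip((p - a) / (b - a), 0.0, 1.0)
    fall = np.ones_like(p) if d <= c else np.clip((d - p) / (d - c), 0.0, 1.0)
    return np.minimum(rise, fall)

def fuzzy_calibration_error(confidences, predictions, labels, n_bins=3, spread=None):
    # FCE of Eq. (6): every sample contributes to every bin with a trapezoidal
    # membership weight instead of a crisp 0/1 assignment.
    p = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    width = 1.0 / n_bins
    spread = width / 4.0 if spread is None else spread  # overlap between neighbouring bins
    memberships = []
    for m in range(n_bins):
        lo, hi = m * width, (m + 1) * width
        a = 0.0 if m == 0 else lo - spread
        b = 0.0 if m == 0 else lo + spread
        c = 1.0 if m == n_bins - 1 else hi - spread
        d = 1.0 if m == n_bins - 1 else hi + spread
        memberships.append(trapezoid(p, a, b, c, d))
    total = sum(mu.sum() for mu in memberships)
    fce = 0.0
    for mu in memberships:
        w = mu.sum()
        if w > 0:
            acc = (mu * correct).sum() / w
            conf = (mu * p).sum() / w
            fce += w * abs(acc - conf)
    return fce / total
```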
## 4 Experiments
To evaluate the impact of fuzzy binning on calibration error estimation, we perform empirical evaluations across different classification settings. We fine-tune large language models for text classification and measure their calibration performance.
### Experimental Setup
**Datasets** We consider three text classification datasets to run our analyses, which vary in terms of class distributions, briefly described below.
* 20 Newsgroups (20NG): The 20 Newsgroups dataset [23] is a collection of newsgroup documents containing approximately \(20,000\) documents with an (almost) balanced class distribution across 20 newsgroups/topics.
* AGNews (AGN): The AG's news topic classification dataset [24] is a collection of approximately \(128,000\) news articles from 4 sources. This dataset is widely used in clustering, classification, and information retrieval.
* IMDb: The IMDb Movie reviews dataset [25] is a collection of \(50,000\) movie reviews from the Internet Movie Database (IMDb). Each review is assigned either a positive or negative label, and the data is widely used to train models for binary sentiment classification tasks.
We further simulate varying data resource settings to compare miscalibration across different fine-tuning regimes. This is achieved by using a limited portion of the training data to perform fine-tuning, and has been done in prior works [26].
**Metrics** To evaluate calibration across different fine-tuning setups, we use ECE (refer to Eq. 3), FCE (refer to Eq. 6), and overconfidence (OF), described below.
* Overconfidence (OF): Overconfidence is the expectation of model prediction probabilities \(\hat{p}_{i}\) (confidence scores) over incorrect predictions and is calculated as shown in Eq. 7. \[\text{OF}=\frac{1}{|k|}\sum_{i\in incorrect}\hat{p}_{i}\] (7) Here \(k\) is the total number of incorrect predictions made by a given model.
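A direct numpy transcription of Eq. (7), with illustrative names (reading \(k\) as the number of incorrectly classified samples), is:

```python
import numpy as np

def overconfidence(confidences, predictions, labels):
    # Eq. (7): mean confidence over the incorrectly classified samples.
    conf = np.asarray(confidences, dtype=float)
    wrong = np.asarray(predictions) != np.asarray(labels)
    return conf[wrong].mean() if wrong.any() else 0.0
```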
**Fine-tuning Setup**
We implement text classification using a fine-tuned BERT [27]. Since the focus of our work is not to create the most accurate fine-tuned model but to compare the efficacy of ECE and FCE across skewed prediction probabilities, we only fine-tune over one epoch and collect miscalibrated prediction probabilities.
### Results
**Fuzzy binning in FCE better captures lower probability bins and edge probabilities:** While ECE bins are highly impacted by the leftward skew in prediction probabilities, FCE yields a more uniformly distributed binning scheme. This can be seen in Fig. 2, where the primary contributors of ECE calculations are the higher probability bins, barely including lower probability bins in calculations. On the other hand, FCE is more uniformly spread across the probability range, better capturing lower probability bins and offering immunity against highly skewed prediction probabilities.
**Model overconfidence in multi-class classification settings is low but continuously increasing:** Refer to Fig. 3 to observe the changes in overconfidence in model predictions. Although a multi-class classification dataset like 20 Newsgroups results in lower overconfidence in predictions in limited data regimes, as compared to datasets with fewer classes, this overconfidence increases as the number of samples during fine-tuning increases. On the other hand, datasets with fewer classes, i.e., AGNews and IMDb, output highly overconfident predictions in limited data regimes, but this overconfidence plateaus as one keeps adding more samples.

Figure 3: Variation in model overconfidence (OF) across different sample sizes

Figure 2: Variation in calibration error estimated using ECE and FCE across different bin sizes (top to bottom) and class distributions (left vs right)
**Unlike ECE, FCE is not sensitive to the binning strategy and underlying data used for training:** ECE is a highly sensitive calibration error estimation metric, and is easily influenced by slight changes in data and/or binning strategies. Table 1 shows variations in \(\Delta\), which calculates the average difference in estimated calibration error when binning is performed using fewer bins (\(M\in[2..7]\)) versus more bins (\(M\in[8..15]\)). While ECE displays larger variations in calibration error estimation due to binning choices, FCE is fairly immune to these choices and shows minimal \(\Delta\) in most cases. Further, Fig. 4 shows that the distribution of ECE across probability bins is highly variable, and usually leftward skewed. On the other hand, FCE bins are more evenly distributed and, as shown in Table 1, output more conservative calibration error estimates.
| Fine-tuning samples | ECE | \(\Delta_{ECE}\) | FCE | \(\Delta_{FCE}\) |
| --- | --- | --- | --- | --- |
| **AGNews** | | | | |
| 100 | 15.41 | **2.36** | 32.50 | 0.00 |
| 1000 | 3.33 | 0.63 | 11.41 | 0.46 |
| 5000 | 0.71 | 0.41 | 7.77 | 0.71 |
| 10000 | 0.80 | 0.78 | 6.86 | 0.66 |
| **IMDB** | | | | |
| 100 | 5.00 | **1.71** | 22.50 | 0.00 |
| 1000 | 3.42 | 1.51 | 12.01 | 0.24 |
| 5000 | 1.49 | 0.23 | 7.41 | 0.82 |
| 10000 | 0.26 | 0.22 | 8.01 | 0.84 |
| **20 Newsgroups** | | | | |
| 100 | 1.31 | 0.20 | 5.90 | 0.00 |
| 1000 | 29.21 | **4.47** | 38.83 | 0.27 |
| 5000 | 9.99 | 1.54 | 24.05 | 0.11 |
| 10000 | 2.28 | 1.30 | 16.18 | 0.39 |

ECE, FCE, \(\Delta_{ECE}\) and \(\Delta_{FCE}\) values are scaled by a factor of 10.

Table 1: Variations in ECE and FCE across different fine-tuning settings. Here, \(\Delta\) calculates the average difference in estimated calibration error when binning is performed using fewer bins (\(M\in[2..7]\)) versus more bins (\(M\in[8..15]\)).
## 5 Conclusion
Overconfidence in neural networks leads to the problem of erroneous estimation of calibration error. ECE, a widely adopted metric for measuring calibration error across model decisions, has recently come under scrutiny for being biased towards high probability bins. To address this limitation, we propose a new calibration error metric, i.e., Fuzzy Calibration Error (FCE). This metric transforms raw model confidence scores into fuzzy bin memberships, allowing more visibility of lower probability bins within the calibration error calculations. Our results show that FCE offers a tighter estimate of calibration error, and the benefits of this metric are more prominent in multi-class classification settings, where skew in model confidence largely affects calibration error estimation using ECE.
Acknowledgments. This work was partially supported by the Department of Justice grant #15PJDP-21-GK-03269-MECP.
|
2305.19774 | Ambiguity in solving imaging inverse problems with deep learning based
operators | In recent years, large convolutional neural networks have been widely used as
tools for image deblurring, because of their ability in restoring images very
precisely. It is well known that image deblurring is mathematically modeled as
an ill-posed inverse problem and its solution is difficult to approximate when
noise affects the data. Really, one limitation of neural networks for
deblurring is their sensitivity to noise and other perturbations, which can
lead to instability and produce poor reconstructions. In addition, networks do
not necessarily take into account the numerical formulation of the underlying
imaging problem, when trained end-to-end. In this paper, we propose some
strategies to improve stability without losing to much accuracy to deblur
images with deep-learning based methods. First, we suggest a very small neural
architecture, which reduces the execution time for training, satisfying a green
AI need, and does not extremely amplify noise in the computed image. Second, we
introduce a unified framework where a pre-processing step balances the lack of
stability of the following, neural network-based, step. Two different
pre-processors are presented: the former implements a strong parameter-free
denoiser, and the latter is a variational model-based regularized formulation
of the latent imaging problem. This framework is also formally characterized by
mathematical analysis. Numerical experiments are performed to verify the
accuracy and stability of the proposed approaches for image deblurring when
unknown or not-quantified noise is present; the results confirm that they
improve the network stability with respect to noise. In particular, the
model-based framework represents the most reliable trade-off between visual
precision and robustness. | Davide Evangelista, Elena Morotti, Elena Loli Piccolomini, James Nagy | 2023-05-31T12:07:08Z | http://arxiv.org/abs/2305.19774v1 | # Ambiguity in solving imaging inverse problems with deep learning based operators
###### Abstract
In recent years, large convolutional neural networks have been widely used as tools for image deblurring, because of their ability to restore images very precisely. It is well known that image deblurring is mathematically modeled as an ill-posed inverse problem and its solution is difficult to approximate when noise affects the data. Indeed, one limitation of neural networks for deblurring is their sensitivity to noise and other perturbations, which can lead to instability and produce poor reconstructions. In addition, networks do not necessarily take into account the numerical formulation of the underlying imaging problem, when trained end-to-end. In this paper, we propose some strategies to improve stability without losing too much accuracy when deblurring images with deep-learning based methods. First, we suggest a very small neural architecture, which reduces the execution time for training, satisfying a green AI need, and does not extremely amplify noise in the computed image. Second, we introduce a unified framework where a pre-processing step balances the lack of stability of the following, neural network-based, step. Two different pre-processors are presented: the former implements a strong parameter-free denoiser, and the latter is a variational model-based regularized formulation of the latent imaging problem. This framework is also formally characterized by mathematical analysis. Numerical experiments are performed to verify the accuracy and stability of the proposed approaches for image deblurring when unknown or not-quantified noise is present; the results confirm that they improve the network stability with respect to noise. In particular, the model-based framework represents the most reliable trade-off between visual precision and robustness.
Neural Networks Stability Image Deblurring Deep Learning Inverse Problems in Imaging
## 1 Introduction
Image restoration is a discipline within the field of image processing focusing on the removal or reduction of distortions and artifacts from images. This topic is of interest in a wide range of applications, including medical imaging, satellite and aerial imaging, and digital photography. In this last case, blurring of images is quite frequent and several factors can cause it. To give some examples, Gaussian blur is caused by the diffraction of light passing through a lens and is more prevalent in images captured with low-aperture lenses or in situations where the depth of field is shallow, whereas motion blur is due to handheld camera movements or low lighting conditions and slow shutter speeds [1, 2, 3]. Noise also seriously affects images; it is usually introduced by the acquisition systems.
Researchers have developed a number of algorithms for reducing blur and noise and image restoration is a very active field of research where new methods are continuously being proposed and developed. Such methodologies can be classified into two main categories: model-based and learning-based. The model-based techniques assume that the degradation process is known and it is mathematically described as an inverse problem [4]. The learning-based methods learn a map between the degraded and clean images during the training phase and use it to deblur new corrupted images [5].
**Model-based mathematical formulation.**
In model-based approaches, denoting by \(\mathcal{X}\) the compact and locally connected subset of \(\mathbb{R}^{n}\) containing the ground truth sharp images \(\mathbf{x}^{gt}\), the relation between \(\mathbf{x}^{gt}\in\mathcal{X}\) and its blurred and noisy observation \(\mathbf{y}^{\delta}\) is formulated as:
\[\mathbf{y}^{\delta}=K\mathbf{x}^{gt}+\mathbf{e},\] (P)
where \(K\) is the known blurring operator and \(\mathbf{e}\) represents noise on the image. We can say that, with very high probability, \(||\mathbf{e}||\leq\delta\). In this setting, the goal of model-based image deblurring methods is to compute a sharp and unobstructed image \(\mathbf{x}\) given \(\mathbf{y}^{\delta}\) and \(K\), by solving the linear inverse problem. When noise is present, problem (P) is typically reformulated into an optimization problem, where a data fit measure, namely \(\mathcal{F}\), is minimized. Since the blurring operator \(K\) is known to be severely ill-conditioned, a regularization term \(\mathcal{R}\) is added to the data-fidelity term \(\mathcal{F}\) to avoid noise propagation. The resulting optimization problem is formulated as:
\[\mathbf{x}^{*}=\arg\min_{\mathbf{x}\in\mathcal{X}}\mathcal{F}(K\mathbf{x}, \mathbf{y}^{\delta})+\lambda\mathcal{R}(\mathbf{x}), \tag{1}\]
where \(\lambda>0\) is the regularization parameter. This optimization problem can be solved using different iterative methods depending on the specific choice for \(\mathcal{F}\) and \(\mathcal{R}\) [6, 1, 7]. We remark that \(\mathcal{F}\) is set as the least-squares function in case of Gaussian noise, whereas the regularization function \(\mathcal{R}\) can be tuned by the users according to the imaging properties they desire to enforce. Recently, plug-and-play techniques plug a denoiser, usually a neural network, into an iterative procedure to solve the minimization problem [8, 9, 10]. The value of \(\lambda\) can also be selected by automatic routines, image-by-image [11, 12]. These features make model-based approaches mathematically explainable, flexible, and robust. However, a disadvantage is that the final result strongly depends on a set of parameters that are difficult to set up properly.
**Deep learning-based formulation.**
In the last decade, deep learning algorithms have been emerging as good alternatives to model-based approaches. Disregarding any mathematical blurring operator, convolutional neural networks (NNs) can be trained to identify patterns characterizing blur on images, thus they can learn several kinds of blur and adapt to each specific imaging task. Large and complex convolutional neural networks, called UNet, have been proposed to achieve high levels of accuracy, by automatically tuning and defining their inner filters and proper transformations for blur reduction, without needing any parameter setting [13, 14, 15, 16]. Indeed, the possibility to process large amounts of data in parallel makes networks highly efficient for image processing tasks and prone to play a key role in the development of new and more advanced techniques in the future.
However, challenges and limitations in using neural networks are known in the literature. Firstly, it is difficult to understand and precisely interpret how they make decisions and predictions, as they act as unexplainable black boxes mapping the input image \(\mathbf{y}^{\delta}\) towards \(\mathbf{x}^{gt}\) directly. Secondly, neural networks are prone to overfitting, which occurs when they become too specialized for the training samples and perform poorly on new, unseen images. Lastly, the high performance of neural networks is typically evaluated only in the so-called _in-domain_ case, i.e. the test procedure is performed on images sharing exactly the same corruption with the training samples, hence the impact of unquantified perturbations (as noise can be) has not been widely studied yet. In other words, the robustness of NN-based image deblurring with respect to unknown noise is not guaranteed [17, 18, 19, 20].
**Contributions of the article.**
Motivated by the poor stability but high accuracy of NN-based approaches in solving inverse imaging problems such as deblurring, this paper proposes strategies to improve stability while maintaining good accuracy, acting similarly to what regularization functions do in the model-based approach. Building on a result showing a trade-off between stability and accuracy, we propose to use a very small neural network, in place of the UNet, which is less accurate but much more stable than larger networks. Since it has only a few parameters to identify, it consumes relatively little time and energy, thus meeting the green AI principles.
Moreover, we propose two new NN-based schemes, embedding a pre-processing step to face the network instability when solving deblurring problems as in (P). The first scheme, denoted as FiNN, applies a model-free low-pass filter to the datum, before passing it as input to the NN. This is a good approach to be applied whenever unknown noise is present, because it does not need any model information or parameter tuning. The second scheme, called Stabilized Neural Network (StNN), exploits an estimation of the noise statistics and the mathematical modeling of both noise and image corruption process. Figure 1 shows a draft of the proposed frameworks, whose robustness is evaluated from a theoretical perspective and tested on an image data set.
**Structure of the article.**
The work is organized as follows. In Section 2, we formulate the NN-based action as an image reconstructor for problem (P). In Section 3 we show our experimental set-up and motivate our work with some experiments; we then state our proposals and derive their main properties in Section 4. Finally, in Section 5 we report the results of some experiments to test the methods and empirically validate the theoretical analysis, before concluding with final remarks in Section 6.
## 2 Solving imaging inverse problems with Deep Learning based operators
As stated in (P), image restoration is mathematically modeled as an inverse problem deriving from the discretization of Fredholm integral equations. Such problems are ill-posed, and the noise on the data is amplified in the numerically computed solution of \(\mathbf{y}^{\delta}=K\mathbf{x}^{gt}+\mathbf{e}\). A rigorous theoretical analysis of the solution of such problems with variational techniques, which can be formulated as in equation (1), has been performed, both in the continuous and discrete settings, and regularization techniques have been proposed to limit the noise spread in the solution [21, 1].
To the best of our knowledge, a similar analysis for deep learning based algorithms is not present in the literature, and it is largely unclear how these algorithms behave in the presence of noise on the data. In this paper we use some of the mathematical tools defined and proved in [20] and we propose here some techniques to limit noise spread. More details about the proposed mathematical framework in a more general setting can be found in [20].
In the following, if not differently stated, as a vector norm we consider the Euclidean norm. We first formalize the concept of reconstructor associated to (P) with the following definition.
**Definition 2.1**.: Denoting by \(Rg(K)\) the range of \(K\), we call \(\mathcal{Y}^{\delta}=\{\mathbf{y}^{\delta}\in\mathbb{R}^{n};\inf_{\mathbf{y} \in Rg(K)}||\mathbf{y}-\mathbf{y}^{\delta}||\leq\delta\}\) the set of corrupted images according to \(\delta\geq 0\). Any continuous function \(\psi:\mathcal{Y}^{\delta}\rightarrow\mathbb{R}^{n}\), mapping \(\mathbf{y}^{\delta}=K\mathbf{x}^{gt}+\mathbf{e}\) (where \(||\mathbf{e}||\leq\delta\) with \(\delta\geq 0\)) to an \(\mathbf{x}\in\mathbb{R}^{n}\), is called a reconstructor.
Figure 1: A graphical draft highlighting the introduction of pre-processing steps Fi and St defining the proposed frameworks FiNN and StNN, respectively.
The associated _reconstructing error_ is
\[\mathcal{E}_{\psi}(\mathbf{x}^{gt},\mathbf{y}^{\delta}):=||\psi(\mathbf{y}^{ \delta})-\mathbf{x}^{gt}||. \tag{2}\]
**Definition 2.2**.: We quantify the accuracy of the reconstructor \(\psi\), by defining the measure \(\eta>0\) as:
\[\eta=\sup_{\mathbf{x}^{gt}\in\mathcal{X}}||\psi(K\mathbf{x}^{gt})-\mathbf{x}^ {gt}||=\sup_{\mathbf{x}^{gt}\in\mathcal{X}}\mathcal{E}_{\psi}(\mathbf{x}^{gt}, \mathbf{y}^{0}). \tag{3}\]
We say that \(\psi\) is \(\eta^{-1}\)-accurate [21].
We now consider a neural network as a particular reconstructor.
**Definition 2.3**.: Given a neural network architecture \(\mathcal{A}=(\nu,S)\) where \(\nu=(\nu_{0},\nu_{1},\ldots,\nu_{L})\in\mathbb{N}^{L+1}\), \(\nu_{L}=n\), is the width of each layer and \(S=(S_{1,1},\ldots,S_{L,L}),S_{j,k}\in\mathbb{R}^{\nu_{j}\times\nu_{k}}\) is the set of matrices representing the skip connections, we define the parametric family \(\Xi_{\theta}^{\mathcal{A}}\) of neural network reconstructors with architecture \(\mathcal{A}\), parameterized by \(\theta\in\mathbb{R}^{s}\), as:
\[\Xi_{\theta}^{\mathcal{A}}=\{\psi_{\theta}:\mathcal{Y}^{\delta}\to\mathbb{R}^ {n};\theta\in\mathbb{R}^{s}\} \tag{4}\]
where \(\psi_{\theta}(\mathbf{y}^{\delta})=\mathbf{z}^{L}\) is given by:
\[\begin{cases}\mathbf{z}^{0}=\mathbf{y}^{\delta}\\ \mathbf{z}^{l+1}=\rho(W^{l}\mathbf{z}^{l}+\mathbf{b}^{l}+\sum_{k=1}^{l}S_{l,k} \mathbf{z}^{k})\quad\forall l=0,\ldots,L-1\end{cases} \tag{5}\]
and \(W^{l}\in\mathbb{R}^{\nu_{l+1}\times\nu_{l}}\) is the weight matrix, \(\mathbf{b}^{l}\in\mathbb{R}^{\nu_{l+1}}\) is the bias vector.
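To make Definition 2.3 concrete, the following minimal NumPy sketch evaluates the recursion in Eq. (5); the dictionary-based representation of the skip matrices \(S_{l,k}\) and the choice of ReLU for the activation \(\rho\) are illustrative assumptions, not part of the definition.

```python
import numpy as np

def reconstructor_forward(y_delta, weights, biases, skips,
                          rho=lambda t: np.maximum(t, 0.0)):
    """Forward pass of Eq. (5): weights[l], biases[l] are W^l and b^l;
    skips[(l, k)] is the optional matrix S_{l,k} feeding layer k into layer l."""
    z = [np.asarray(y_delta, dtype=float)]        # z^0 = y^delta
    for l in range(len(weights)):
        pre = weights[l] @ z[l] + biases[l]
        for k in range(1, l + 1):                 # skip connections from earlier layers
            if (l, k) in skips:
                pre = pre + skips[(l, k)] @ z[k]
        z.append(rho(pre))                        # z^{l+1}
    return z[-1]                                  # psi_theta(y^delta) = z^L
```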
We now analyze the performance of NN-based reconstructors when noise is added to their input.
**Definition 2.4**.: Given \(\delta\geq 0\), the \(\delta\)-stability constant \(C_{\psi_{\theta}}^{\delta}\) of an \(\eta^{-1}\)-accurate reconstructor is defined as:
\[C_{\psi_{\theta}}^{\delta}=\sup_{\begin{subarray}{c}\mathbf{x}^{gt}\in \mathcal{X}\\ ||\mathbf{e}||\leq\delta\end{subarray}}\frac{\mathcal{E}_{\psi}(\mathbf{x}^{ gt},\mathbf{y}^{\delta})-\eta}{||\mathbf{e}||_{2}}. \tag{6}\]
From Definition 2.4 we observe that the stability constant quantifies how the noise in the data is amplified:

\[||\psi_{\theta}(\mathbf{y}^{0}+\mathbf{e})-\mathbf{x}||_{2}\leq\eta+C_{\psi_{\theta}}^{\delta}||\mathbf{e}||_{2}\quad\forall\mathbf{x}\in\mathcal{X},\;\forall\mathbf{e}\in\mathbb{R}^{n},||\mathbf{e}||_{2}\leq\delta, \tag{7}\]

with \(\mathbf{y}^{0}\) the noiseless datum. This motivates the following definition:
**Definition 2.5**.: Given \(\delta\geq 0\), a neural network reconstructor \(\psi_{\theta}\) is said to be \(\delta\)-stable if \(C_{\psi_{\theta}}^{\delta}\in[0,1)\).
The next theorem states an important relation between the stability constant and the accuracy of a neural network as a solver of an inverse problem.
**Theorem 2.1**.: _Let \(\psi_{\theta}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) be an \(\eta^{-1}\)-accurate reconstructor. Then, for any \(x^{gt}\in\mathcal{X}\) and for any \(\delta>0\), \(\exists\,\hat{\mathbf{e}}\in\mathbb{R}^{n}\) with \(||\hat{\mathbf{e}}||\leq\delta\) such that_
\[C_{\psi_{\theta}}^{\delta}\geq\frac{||K^{\dagger}\hat{\mathbf{e}}||-2\eta}{|| \hat{\mathbf{e}}||} \tag{8}\]
_where \(K^{\dagger}\) is the Moore Penrose pseudo-inverse of \(K\)._
For the proof see [20].
We emphasize that, even if neural networks used as reconstructors do not use any information on the operator \(K\), the stability of \(\psi_{\theta}\) is related to the pseudo-inverse of that operator.
## 3 Experimental setting
Here we describe our particular setting using neural networks as reconstructors for a deblurring application.
### Network architectures
We have considered three different neural network architectures for deblurring: the widely used UNet [22], the recently proposed NAFNet [23] and a green AI inspired 3L-SSNet [24].
The UNet and NAFNet architectures are complex, multi-scale networks, with similar overall structure but very different behavior. As shown in Figure 2, both UNet and NAFNet are multi-resolution networks, where the input is sequentially processed by a sequence of blocks \(B_{1},\ldots,B_{n_{i}}\), \(i=1,\ldots,L\) and downsampled after that. After \(L-1\) downsampling steps, the image is then sequentially upsampled again to the original shape through a sequence of blocks, symmetrically to what happened in the downsampling phase. At each resolution level \(i=1,\ldots,L\), the corresponding image in the downsampling phase is concatenated to the first block in the upsampling phase, to keep the information through the network. Moreover, a skip connection has also been added between the input and the output layer of the model to simplify the training, as described in [24]. The left-hand side of Figure 2 shows that the difference between UNet and NAFNet is in the structure of each block. In particular, the blocks in UNet are simple Residual Convolutional Layers, defined as a concatenation of Convolutions, ReLU, BatchNormalizations and a skip connection. On the other hand, each block in NAFNet is considerably more complex, containing a long sequence of gates, convolutional and normalization layers. The key property of NAFNet, as described in [23], is that no activation function is used in the blocks, since they have been substituted by non-linear gates, thus obtaining improved expressivity and more training efficiency.
The 3-layer Single-Scale Network (3L-SSNet) is a very simple model defined, as suggested by its name, by just three convolutional layers, each of them composed of a linear filter, followed by a ReLU activation function and a BatchNormalization layer. Since by construction the network works on single-scale images (the input is never downsampled to a low-resolution level, as is common in image processing), the kernel size is crucial to increase the receptive field of the model. For this reason, we considered a 3L-SSNet with width \([128,128,128]\) and kernel size \([9\times 9,5\times 5,3\times 3]\), respectively.
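A possible PyTorch realization of this architecture is sketched below; the single-channel input, the ordering of the ReLU and BatchNormalization operations, and the final one-channel output convolution are our own assumptions, since only the widths and kernel sizes are specified above.

```python
import torch.nn as nn

class ThreeLayerSSNet(nn.Module):
    """Sketch of the 3L-SSNet: three single-scale convolutional blocks with widths
    [128, 128, 128] and kernels [9x9, 5x5, 3x3]; the grayscale input and the final
    3x3 convolution mapping back to one channel are assumptions."""
    def __init__(self, in_channels=1, width=128):
        super().__init__()
        def block(c_in, c_out, k):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=k, padding=k // 2),
                nn.ReLU(inplace=True),
                nn.BatchNorm2d(c_out),
            )
        self.body = nn.Sequential(
            block(in_channels, width, 9),
            block(width, width, 5),
            block(width, width, 3),
            nn.Conv2d(width, in_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.body(x)
```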
### Data set
As a data set for our experiments we choose the widely-used GoPro [25], which is composed of a large number of photographic images acquired from a GoPro camera. All the images have been cropped into \(256\times 256\) patches (with no overlapping), converted into grayscale and normalized into [0,1].
We synthesize the blurring of each image according to (P) by considering a Gaussian corrupting effect, implemented with the \(11\times 11\) Gaussian kernel \(\mathcal{G}\) defined as
\[\mathcal{G}_{i,j}=\begin{cases}e^{-\frac{1}{2}\frac{i^{2}+j^{2}}{\sigma_{G}^{2}}}&(i,j)\in\{-5,\ldots,5\}^{2}\\ 0&\text{otherwise}\end{cases} \tag{9}\]
with variance \(\sigma_{G}=1.3\). The kernel is visualized in Figure 3, together with one of the GoPro images and its blurred counterpart.
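As a concrete illustration, the snippet below builds the kernel of Eq. (9) and applies the forward model (P); the unit-sum normalization of the kernel, the symmetric boundary handling of the convolution, and the example noise level \(\sigma=0.025\) are illustrative choices of ours.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(radius=5, sigma=1.3):
    """Build the (2*radius+1) x (2*radius+1) Gaussian kernel of Eq. (9),
    normalized to unit sum so that the blur preserves the image mean."""
    ii, jj = np.meshgrid(np.arange(-radius, radius + 1),
                         np.arange(-radius, radius + 1), indexing="ij")
    G = np.exp(-0.5 * (ii**2 + jj**2) / sigma**2)
    return G / G.sum()

def blur(x, kernel):
    """Apply the blurring operator K of problem (P) as a 2D convolution."""
    return convolve2d(x, kernel, mode="same", boundary="symm")

# Example observation y^delta = K x + e with Gaussian noise:
# x = ...                                   # a [0,1] grayscale image (2D array)
# y_delta = blur(x, gaussian_kernel()) + 0.025 * np.random.randn(*x.shape)
```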
Figure 2: A diagram representing the UNet and NAFNet architectures.
### Neural networks training and testing
To train a Neural Network for deblurring, the set of available images has been split into train and test subsets, with \(N_{\mathbb{D}}=2503\) and \(N_{\mathbb{T}}=1111\) images respectively. Then we consider a set \(\mathbb{D}=\{(\mathbf{y}_{i}^{\delta},\mathbf{x}_{i}^{gt});\ \mathbf{x}_{i}^{gt}\in\mathcal{S} \}_{i=1}^{N_{\mathbb{D}}}\), for a given \(\delta\geq 0\). Since we set a Mean Squared Error (MSE) loss function, a NN-based reconstructor is uniquely defined as the solution of:
\[\min_{\psi_{\theta}\in\mathcal{F}_{\phi}^{\delta}}\sum_{i=1}^{N_{\mathbb{D}}}|| \psi_{\theta}(\mathbf{y}_{i}^{\delta})-\mathbf{x}_{i}^{gt}||_{2}^{2}. \tag{10}\]
Each network has been trained by performing 50 epochs of Adam optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.9\) and a learning rate of \(10^{-3}\). We focus on the next two experiments.
**Experiment A**. In this experiment we train the neural networks on images only corrupted by blur (\(\delta=0\)). With the aim of checking the network accuracy, defined as in Section 2, we test on noiseless images (_in-domain tests_). Then, to verify Theorem 2.1, we consider test images with added Gaussian noise, with \(\sigma=0.025\) (_out-of-domain tests_).
**Experiment B**. A common practice for enforcing network stability is _noise injection_ [26], consisting in training a network by adding noise components to the input. In particular, we have added a noise vector \(\mathbf{e}\sim\mathcal{N}(0,\sigma^{2}I)\), with \(\sigma=0.025\). To test the stability of the proposed frameworks with respect to noise, we test with higher noise levels than those used in training.
### Robustness of the end-to-end NN approach
Preliminary results obtained from experiment A are shown in Figure 4. The first row displays the reconstructions obtained from in-domain tests, where we can appreciate the accuracy of all three considered architectures. In the second row we can see the results obtained from out-of-domain tests, where the noise on the input data strongly corrupts the solution of the ill-posed inverse problem computed by UNet and NAFNet. Confirming what is stated by Theorem 2.1, the best result is obtained with the very light 3L-SSNet, which is the only one able to handle the noise.
## 4 Improving noise-robustness in deep learning based reconstructors
As observed in Section 3, merely using a neural network to solve an inverse problem is an unstable routine. To enforce the robustness of \(\psi_{\theta}\) reconstructors, we propose to modify the Deep Learning based approach by introducing a suitable operator, defined in the following as a _stabilizer_, into the reconstruction process.
**Definition 4.1**.: A continuous function \(\phi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is called a \(\delta\)-stabilizer for a neural network reconstructor \(\psi_{\theta}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) if \(\forall\,e\in\mathbb{R}^{n}\) with \(||e||\leq\delta\), \(\exists\,L_{\phi}^{\delta}\in[0,1)\) and \(\exists\,e^{\prime}\in\mathbb{R}^{n}\) with \(||e^{\prime}||=L_{\phi}^{\delta}||e||\) such that:
\[\phi(K\mathbf{x}+\mathbf{e})=\phi(K\mathbf{x})+\mathbf{e}^{\prime}. \tag{11}\]
In this case, the reconstructor \(\bar{\psi}_{\theta}=\psi_{\theta}\circ\phi\) is said to be \(\delta\)-stabilized. The smallest constant \(L_{\phi}^{\delta}\) for which the definition holds is the stability constant \(C_{\phi}^{\delta}\) of \(\phi\).
Figure 3: _From left to right: ground truth clean image, blurring kernel, blurred corrupted image._

Intuitively, applying a pre-processing \(\phi\) with \(L_{\phi}^{\delta}<1\) reduces the perturbation of the input data, by converting a noise of amplitude bounded by \(\delta\) into a corruption with norm bounded by \(\delta L_{\phi}^{\delta}\). This intuition has been mathematically explained in [20], Proposition 4.2, where a relationship between the stability constant of the stabilized reconstructor \(\bar{\psi}_{\theta}\) and the stability constant of \(\psi_{\theta}\) has been proved. In particular, if \(\bar{\psi}_{\theta}=\psi_{\theta}\circ\phi\) is a \(\delta\)-stabilized reconstructor and \(L^{\delta}_{\psi_{\theta}}\), \(L^{\delta}_{\phi}\) are the local Lipschitz constants of \(\psi_{\theta}\) and \(\phi\), respectively, then:
\[C^{\delta}_{\bar{\psi}_{\theta}}\leq L^{\delta}_{\psi_{\theta}}L^{\delta}_{ \phi}. \tag{12}\]
As a consequence, if \(L^{\delta}_{\phi}<1\), then the stability constant of \(\bar{\psi}_{\theta}\) is smaller than the Lipschitz constant of \(\psi_{\theta}\), which implies that \(\bar{\psi}_{\theta}\) is more stable to input perturbations.
We underline that the \(\delta\)-stabilizers \(\phi\) are effective if they preserve the characteristics and the details of the input image \(\mathbf{y}^{\delta}\). In this paper we focus on the two following proposals of \(\delta\)-stabilizers \(\phi\).
### Stabilized Neural Network (StNN) based on the imaging model
If the blurring operator \(K\) is known, it can be exploited to derive a \(\delta\)-stabilizer function \(\phi\). We argue that information on \(K\) will contribute to improve the reconstruction accuracy. Specifically, we consider an iterative algorithm, converging to the solution of (1), represented by the scheme:
\[\begin{cases}\mathbf{x}^{(0)}\in\mathbb{R}^{n}\\ \mathbf{x}^{(k+1)}=\mathcal{T}_{k}(\mathbf{x}^{(k)};\mathbf{y}^{\delta})\end{cases} \tag{13}\]
where \(\mathcal{T}_{k}\) is the action of the \(k\)-th iteration of the algorithm. Given a positive integer \(M\in\mathbb{N}\) and a fixed starting iterate \(\mathbf{x}^{(0)}\), let us define the \(\delta\)-stabilizer:
\[\phi_{M}(\mathbf{y}^{\delta})=\bigcirc_{k=0}^{M-1}\mathcal{T}_{k}(\mathbf{x}^ {(k)};\mathbf{y}^{\delta}). \tag{14}\]
By definition, \(\phi_{M}\) maps a corrupted image \(\mathbf{y}^{\delta}\) to the solution computed by the iterative solver in \(M\) iterations.
Setting as objective function in (1) the Tikhonov-regularized least-squared function:
\[\arg\min_{\mathbf{x}\in\mathbb{R}^{n}}\frac{1}{2}||K\mathbf{x}-\mathbf{y}^{ \delta}||_{2}^{2}+\lambda||\mathbf{x}||_{2}^{2}, \tag{15}\]
Figure 4: Results from experiment A with the three considered neural networks. Upper row: reconstruction from no noisy data. Lower row: reconstruction from noisy data (\(\delta=0.025\)).
the authors in [20] showed that it is possible to choose \(M\) such that \(L^{\delta}_{\phi_{M}}<1\). Hence, given \(\delta\) and \(\mathcal{F}^{\mathcal{A}}_{\theta}\), it is always possible to use \(\phi_{M}\) as a pre-processing step, stabilizing \(\psi_{\theta}\). We refer to \(\bar{\psi}_{\theta}=\psi_{\theta}\circ\phi_{M}\) as _Stabilized Neural Network_ (StNN). In the numerical experiments presented in Section 5, we use the Conjugate Gradient Least Squares (CGLS) iterative method [11] for the solution of (15).
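As a concrete illustration of \(\phi_{M}\), the sketch below runs \(M\) CGLS iterations on the Tikhonov problem (15) and then hands the result to the trained network. The operator callables `blur_op` and `blur_adjoint`, the network handle `psi_theta`, and the value of \(\lambda\) are placeholders of ours, not the paper's code.

```python
import numpy as np

def cgls_tikhonov(K, Kt, y, lam, n_iter, x0=None):
    """n_iter CGLS iterations for min_x ||K x - y||^2 + lam ||x||^2, written as a
    least-squares problem for the stacked operator A = [K; sqrt(lam) I].
    K and Kt are callables implementing K and its adjoint K^T."""
    x = np.zeros_like(y) if x0 is None else x0.copy()
    s = np.sqrt(lam)
    r_top, r_bot = y - K(x), -s * x            # residual of the augmented system
    d = Kt(r_top) + s * r_bot                  # d = A^T r
    p = d
    norm_d = np.sum(d * d)
    for _ in range(n_iter):
        q_top, q_bot = K(p), s * p             # q = A p
        alpha = norm_d / (np.sum(q_top * q_top) + np.sum(q_bot * q_bot))
        x = x + alpha * p
        r_top, r_bot = r_top - alpha * q_top, r_bot - alpha * q_bot
        d = Kt(r_top) + s * r_bot
        norm_d_new = np.sum(d * d)
        p = d + (norm_d_new / norm_d) * p
        norm_d = norm_d_new
    return x

# StNN sketch: phi_M followed by the trained network psi_theta
# x_pre = cgls_tikhonov(blur_op, blur_adjoint, y_delta, lam=1e-2, n_iter=M)
# x_hat = psi_theta(x_pre)
```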
### Filtered Neural Network (FiNN)
The intuition that a pre-processing step should reduce the noise present in the input data naturally leads to our second proposal, implemented by a Gaussian denoising filter. The Gaussian filter is a low-pass filter that reduces the impact of noise on the high frequencies [27]. Thus, the resulting pre-processed image is a low-frequency version of \(\mathbf{y}^{\delta}\) and the neural network \(\psi_{\theta}\in\mathcal{F}^{\mathcal{A}}_{\theta}\) has to recover the high frequencies corresponding to the image details. Let \(\phi_{\mathcal{G}}\) represent the operator that applies the Gaussian filter to the input. We will refer to the reconstructor \(\bar{\psi}_{\theta}=\psi_{\theta}\circ\phi_{\mathcal{G}}\) as _Filtered Neural Network_ (FiNN).
Note that, even if FiNN is employed to reduce the impact of the noise and consequently to stabilize the network solution, its \(L^{\delta}_{\phi}\) constant is not smaller than one. In fact, for any \(\mathbf{e}\in\mathbb{R}^{n}\) with \(||\mathbf{e}||\leq\delta\), it holds:
\[\phi_{\mathcal{G}}(K\mathbf{x}+\mathbf{e})=\phi_{\mathcal{G}}(K\mathbf{x})+ \phi_{\mathcal{G}}(\mathbf{e}) \tag{16}\]
as a consequence of the linearity of \(\phi_{\mathcal{G}}\).
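In code, the FiNN pre-processing amounts to a single low-pass filtering step before the network; the filter width and the form of the network handle below are illustrative assumptions.

```python
from scipy.ndimage import gaussian_filter

def finn_reconstruct(y_delta, psi_theta, sigma_filter=1.0):
    """FiNN sketch: Gaussian low-pass pre-processing followed by the trained network.
    psi_theta is assumed to be a callable mapping a 2D image to its deblurred estimate;
    sigma_filter is an illustrative choice."""
    y_filtered = gaussian_filter(y_delta, sigma=sigma_filter)   # phi_G(y^delta)
    return psi_theta(y_filtered)                                # psi_theta(phi_G(y^delta))
```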
## 5 Results
In this Section we present the results obtained in our deblurring experiments described in Section 3. To evaluate and compare the deblurred images, we use visual inspection on a selected test image and exploit the Structural Similarity index (SSIM) [28] on the test set.
### Results of experiment A
We show and comment on the results obtained on experiment A described in Section 3.3. We remark that the aim of these tests is to measure the accuracy of the three considered neural reconstructors and of the stabilizers proposed in Section 4, and to verify their sensitivity to noise in the input data; in a word, how these reconstructors handle the ill-posedness of the imaging inverse problem.
For this purpose, we visually compare the reconstructions of a single test image by the UNet and 3L-SSNet in Figure 5. The first row (which replicates some of the images of Figure 4) shows the results of the deep learning based reconstructors, where the out-of-domain images are clearly damaged by the noise. The FiNN and, particularly, the StNN stabilizer drastically reduce noise, producing accurate results even for out-of-domain tests.
In order to analyze the accuracy and stability of our proposals, we compute the empirical accuracy \(\hat{\eta}^{-1}\) and the empirical stability constant \(\hat{C}^{\delta}_{\psi}\), respectively defined as:
\[\hat{\eta}^{-1}=\Big{(}\sup_{\mathbf{x}\in\mathcal{S}_{\mathcal{T}}}||\psi(K \mathbf{x})-\mathbf{x}||_{2}\Big{)}^{-1} \tag{17}\]
and
\[\hat{C}^{\delta}_{\psi}=\sup_{\mathbf{x}\in\mathcal{S}_{\mathcal{T}}}\frac{|| \psi(K\mathbf{x}+\mathbf{e})-\mathbf{x}||_{2}-\hat{\eta}}{||\mathbf{e}||_{2}} \tag{18}\]
where \(\mathcal{S}_{\mathcal{T}}\subseteq\mathcal{X}\) is the test set and \(\mathbf{e}\) is a noise realization from \(\mathcal{N}(0,\sigma^{2}I)\) with \(||e||_{2}\leq\delta\) (different for any datum \(x\in\mathcal{S}_{\mathcal{T}}\)).
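The following sketch shows how the empirical quantities of Eqs. (17)-(18) can be estimated in practice; the callables `psi` (reconstructor) and `K` (blurring operator) and the Gaussian noise model with \(\sigma=0.025\) are the only assumptions.

```python
import numpy as np

def empirical_accuracy_and_stability(psi, K, test_images, sigma=0.025):
    """Estimates eta_hat^-1 (Eq. 17) from noiseless data and C_hat (Eq. 18) from one
    Gaussian noise realization per test image."""
    clean_errors = [np.linalg.norm(psi(K(x)) - x) for x in test_images]
    eta_hat = max(clean_errors)
    ratios = []
    for x in test_images:
        e = sigma * np.random.randn(*x.shape)
        err_noisy = np.linalg.norm(psi(K(x) + e) - x)
        ratios.append((err_noisy - eta_hat) / np.linalg.norm(e))
    return 1.0 / eta_hat, max(ratios)        # (empirical accuracy, C_hat)
```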
The computed values are reported in Table 1. Focusing on the estimated accuracies, the results confirm that UNet is the most accurate network, followed by NAFNet and 3L-SSNet, as expected. As a consequence of Theorem 2.1, the values of the stability constant \(\hat{C}^{\delta}_{\psi}\) are in reverse order: the most accurate is the least stable (notice the very high value of \(\hat{C}^{\delta}_{\psi}\) for NN!). By applying the stabilizers, the accuracy is slightly lower but the stability is highly improved (in most of the cases the constant is less than one), confirming the efficacy of the proposed solutions to handle noise and, at the same time, maintain good image quality. In particular, StNN is a stable reconstructor independently of the architecture.
To analyse the stability of the test set with respect to noise, we have plotted in Figure 6, for each test image, \(\mathcal{E}_{\psi}(\mathbf{x}^{gt},\mathbf{y}^{\delta})-\hat{\eta}\) vs. \(\|\mathbf{e}\|\), where the reconstruction error is defined in (2). With green and red dots we have plotted the experiments with stability constant less and greater than one, respectively, and with the blue dashed line the bisector. We notice that the values reported in Table 1 for the empirical stability constant computed as a supremum (see Equation (18)) are not outliers, but are representative of the results of the whole test set.

Figure 5: Results from experiment A with UNet and 3L-SSNet.

Figure 6: Results from experiment A. Plot of \(\mathcal{E}_{\psi}(\mathbf{x}^{gt},\mathbf{y}^{\delta})-\hat{\eta}\) vs. \(\|\mathbf{e}\|\) for all the test images. The blue dashed line represents the bisector.
### Results of experiment B
In this experiment we used noise injection in the neural networks training, as described in Section 3.3. This quite common strategy reduces the networks' accuracy but improves their stability with respect to noise. However, we show that the reconstructions are not totally satisfactory when we test on out-of-domain images, i.e. when input images are affected by noise of different intensities with respect to training.
Figure 7 displays the reconstructions obtained by testing with both in-domain (on the left) and out-of-domain (on the right) images. Even if the NN reconstructions (column 4) are not as damaged by noise as in experiment A (see Figure 4), noise artifacts are clearly visible, especially in UNet and NAFNet. Both proposed stabilizers act effectively and remove most of the noise. We observe that the restorations obtained with FiNN are smoother but also more blurred with respect to the ones computed by StNN.
An overview of the tests is displayed by the boxplots of the SSIM values sketched in Figure 8. The light blue, orange and green boxes represent the results obtained with NN, FiNN and StNN methods, respectively. They confirm that the neural networks performance worsens with noisy data (see the different positions of light blue boxes from the left to the right column), whereas the proposed frameworks including FiNN and StNN are far more stable.
Figure 8: Boxplots for the SSIM values in experiment B. The light blue, orange and green boxplots represent the results computed by NN, FiNN and StNN, respectively.
In Figure 9 we plot, for one image in the test set, the absolute error between the reconstruction and the true image vs. the noise standard deviation \(\sigma\). The upper row shows the results from experiment A (we remark that in this experiment we trained the networks on noiseless data). The NN error (blue line) is out of range for very small values of \(\sigma\) for both UNet and NAFNet, whereas the 3L-SSNet is far more stable. In all the cases, the orange and green lines show that FiNN and StNN improve the reconstruction error. In particular, StNN performs best in all these tests.
Concerning experiment B (in the lower row of the figure), it is very interesting to notice that when the noise is smaller than the training one (corresponding to \(\sigma=0.025\)) the NN methods are the best performing for all the considered architectures. When \(\sigma\simeq 0.05\) the behaviour changes and the stabilized methods are more accurate.
## 6 Conclusions
Starting from the consideration that the most popular neural networks used for image deblurring, such as the family of convolutional UNets, are very accurate but unstable with respect to noise in the test images, we have proposed two different approaches to get stability without losing too much accuracy. The first one is a very light neural architecture, called 3L-SSNet, and the second one is to stabilize the deep learning framework by introducing a pre-processing step. Numerical results on the GoPro dataset have demonstrated the efficiency and robustness of the proposed approaches, under several settings encompassing in-domain and out-of-domain testing scenarios. The 3L-SSNet outperforms UNet and NAFNet in every test where the noise on test images exceeds the noise on the training set, combining the desired characteristics of execution speed (in a green AI perspective) and high stability. The FiNN proposal increases the stability of the NN-based restoration (its SSIM values do not change remarkably in all the experiments), but the restored images appear too smooth and a few small details are lost. The StNN proposal, exploiting a model-based formulation of the underlying imaging process, achieves the highest SSIM values in the most challenging out-of-domain cases, confirming its great theory-grounded potential. It represents, indeed, a good compromise between stability and accuracy. We finally remark that the proposed approach can be simply extended to other imaging applications modeled as an inverse problem, such as super-resolution, denoising, or tomography, where the neural networks learning the map from the input to the ground truth image cannot efficiently handle noise in the input data.
This work represents one step further in shedding light on the black-box essence of NN-based image processing.
Acknowledgments. This work was partially supported by the US National Science Foundation, under grants DMS 2038118 and DMS 2208294.
Conflict of Interests. The authors declare no conflict of interest.
Figure 9: Plots of the absolute error vs. the variance \(\sigma\) of the noise for one image in the test set. Upper row: experiment A. Lower row: experiment B. |
2309.06581 | Zero-Shot Visual Classification with Guided Cropping | Pretrained vision-language models, such as CLIP, show promising zero-shot
performance across a wide variety of datasets. For closed-set classification
tasks, however, there is an inherent limitation: CLIP image encoders are
typically designed to extract generic image-level features that summarize
superfluous or confounding information for the target tasks. This results in
degradation of classification performance, especially when objects of interest
cover small areas of input images. In this work, we propose CLIP with Guided
Cropping (GC-CLIP), where we use an off-the-shelf zero-shot object detection
model in a preprocessing step to increase focus of zero-shot classifier to the
object of interest and minimize influence of extraneous image regions. We
empirically show that our approach improves zero-shot classification results
across architectures and datasets, favorably for small objects. | Piyapat Saranrittichai, Mauricio Munoz, Volker Fischer, Chaithanya Kumar Mummadi | 2023-09-12T20:09:12Z | http://arxiv.org/abs/2309.06581v1 | # Zero-Shot Visual Classification with Guided Cropping
###### Abstract
Pretrained vision-language models, such as CLIP, show promising zero-shot performance across a wide variety of datasets. For closed-set classification tasks, however, there is an inherent limitation: CLIP image encoders are typically designed to extract generic image-level features that summarize superfluous or confounding information for the target tasks. This results in degradation of classification performance, especially when objects of interest cover small areas of input images. In this work, we propose CLIP with Guided Cropping (GC-CLIP), where we use an off-the-shelf zero-shot object detection model in a preprocessing step to increase focus of zero-shot classifier to the object of interest and minimize influence of extraneous image regions. We empirically show that our approach improves zero-shot classification results across architectures and datasets, favorably for small objects.
## 1 Introduction
Conventional supervised learning for closed-set classification tasks involves training Deep Neural Networks (DNNs) on labelled datasets [5]. The resulting models are inherently limited by the class definitions of a specific task. In contrast, recent research focuses on open-vocabulary zero-shot classification models [6; 16]. Pretrained with large-scale image-text datasets, these models have more generic class concepts as the definitions can be introduced by textual prompts of natural language.
CLIP is one of the most popular models for open-vocabulary classification [16]. Its architecture comprises image and text encoders which encode input images and texts into a shared latent space. These encoders are trained with contrastive losses such that dot product similarity scores between image and text encodings indicate how likely input images and texts correspond to one another.
One limitation of CLIP lies in the fact that its encoders are designed to be generic, in the sense that its image encodings encompass the entire information of a given image regardless of the target task. While this behavior is desirable for some problems, it simultaneously poses a limitation for closed-set object classification tasks where only certain labels and image contents are of interest. In these cases, encoding the entire image content can lead to suboptimal performance, particularly for small objects. For example, in Figure 1(a), the large water region in the image dominates similarity scores between image and text encodings of water-related classes, leading to an incorrect zero-shot prediction.
Our central question is: How can we reduce non-discriminative and extraneous information from the image encodings? We observe that reducing the area of context regions by cropping input images around objects of interest can be beneficial. Figure 1(b) illustrates that the cropped image with reduced water regions decreases similarity scores of incorrect water-related classes and results in the dominant similarity score of the correct class (i.e., canoe).
One straightforward approach to reduce influence from non-discriminative information automatically is to directly adopt open-vocabulary object detection models for the zero-shot classification task. These models produce object bounding boxes and _locally_ categorize them based on any given text prompts [12; 7]. However, we speculate that these approaches are not directly optimal for image classification tasks which they are not designed for. In this regard, we conduct an experiment to extend one of the most recent open-vocabulary object detection models, OWL-ViT [12], for a classification setting where each sample belongs to only one class. We observe that, while OWL-ViT shows reasonable performance on bounding box estimation, its zero-shot classification performance is poor compared to standard zero-shot CLIP baselines (more details in section 5.6).
In this work, we aim to improve the zero-shot object classification performance of CLIP by guiding its focus to the object of interest and reducing the influence of unrelated visual information. Instead of using OWL-ViT for classification directly, we propose to employ it as a bounding box extraction module such that cropped input images are processed by CLIP, as shown in Figure 1(b). We refer to this approach as CLIP with Guided Cropping (GC-CLIP). We show that classification performance depends on the chosen cropping scales, which is especially significant on images with small objects.
Our contributions are as follows: We provide empirical evidence that generic CLIP encoders can lead to suboptimal performance in the zero-shot closed-set classification task, particularly on images with small objects. We propose a method to improve CLIP zero-shot classification using bounding boxes estimated from OWL-ViT. We conduct experiments to show that our approach outperforms a direct OWL-ViT based classifier as well as zero-shot CLIP baselines across different scenarios. Finally, we conduct ablation studies to understand the conditions under which our approach works well.
## 2 Related Works
**Zero-Shot and Open-Vocabulary Classification** Zero-shot classification enables trained models to recognize inputs of unseen categories based on externally provided concepts. Earlier works define these concepts in terms of attribute combinations [14; 15; 1; 9; 13; 10]. However, in open-world applications, it is generally not possible to represent all categories based on limited combinations of trained attributes. Hence, recent research focuses on open-vocabulary classification, in which categories are represented by text prompts. In this regard, images and text prompts can be projected by image/text encoders into a joint embedding space so that their similarities can be computed. CLIP [16] and ALIGN [6] encourage similarity between image-text pairs based on contrastive losses. [11] improves zero-shot performance by using multiple text prompts per category based on queries from large language models. Florence [20] considers more modalities in addition to images and texts.
Figure 1: Logits from CLIP (ViT-B/32) before and after cropping around objects of interest
While these models perform well in open-world scenarios, their performance can be limited under the closed-set assumption. As their encoders are designed for open-world applications, they may encode information which is harmful for the closed-set classification task. In this work, we aim to alleviate this.
**Open-Vocabulary Object Detection** The concept of open-vocabulary has also been investigated in object detection tasks, in which object bounding boxes are produced given input text prompts [4; 22; 8; 7; 21]. ViLD [4] trains object detection based on knowledge distillation from pretrained open-vocabulary classification models. In OWL-ViT [12], simple modifications of standard vision transformers are fine-tuned with large-scale image-text datasets for object detection. GLIPv2 [21] extends models to handle various localization tasks.
Object detection models have the innate ability to not only localize, but classify localized objects based on local information. The question may therefore be raised, whether they are in general sufficient to solve the zero-shot classification task alone. In section 5.6, we conducted experiments based on OWL-ViT, a recent off-the-shelf model, and demonstrate its poor performance on classification tasks. In this work, we use open-vocabulary object detection models only for bounding box extraction.
## 3 Background
**Problem Formulation** Given a test dataset \(\{(x_{i},y_{i})\}_{i=1}^{N_{s}}\), where \(x_{i}\in\mathcal{X}=\mathbb{R}^{w\times w}\) and \(y_{i}\in\mathcal{Y}=\{1,2,\dots,N_{c}\}\) is an image and its corresponding label, our zero-shot classification task is to construct a prediction function \(F:\mathcal{X}\rightarrow\mathcal{Y}\) based on pretrained open-vocabulary models to maximize the likelihood \(P(\hat{y}|x)=P(F(x)|x)\). The prediction function based on CLIP is described in this section, while our approach is presented in Section 4.
**Conventional CLIP** CLIP [16] is a multi-modal model designed for open-vocabulary classification. It consists of an image encoder \(G\) and a text encoder \(H\). To perform closed-set classification, a text prompt \(p_{j}^{cls}\) needs to be defined for each class \(j\in\mathcal{Y}\). Then, an embedding of each prompt can be obtained by: \(e_{j}^{text}=H(p_{j}^{cls})\). During inference, an input image \(x_{i}\) will be projected into its image embedding \(e_{i}^{image}=G(x_{i})\) so that its classification logit \(l_{i}^{CLIP}\) can be computed as:
\[l_{i}^{CLIP}=(E^{text})^{T}e_{i}^{image}=\begin{bmatrix}e_{1}^{text}&e_{2}^{ text}&\dots&e_{N_{c}}^{text}\end{bmatrix}^{T}e_{i}^{image}. \tag{1}\]
Each entry \(l_{ij}^{CLIP}\) of the logit indicates the similarity score between the (embedded) input image and the \(j\)-th prompt. The final class prediction can then be obtained as \(\hat{y}_{i}=\arg\max_{j\in\mathcal{Y}}l_{ij}^{CLIP}\).
Figure 2: Guided Cropping pipeline to obtain a guided cropped image with margin ratio \(\alpha\)
Above, we assume that one prompt is available per class. However, it has been shown recently that using multiple prompts per class can improve performance [11]. In this case, each \(e_{j}^{text}\) from equation 1 can be replaced with the average embedding computed from all available text prompts of class \(j\).
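A minimal sketch of this zero-shot procedure, including the multi-prompt averaging, is given below. It assumes the interface of the official CLIP package; the L2 normalization of the embeddings follows standard CLIP practice even though Eq. (1) is written as a plain dot product.

```python
import torch
import clip  # assumes the official CLIP package (github.com/openai/CLIP)

@torch.no_grad()
def zero_shot_logits(image, class_prompts, model, preprocess, device="cpu"):
    """Compute l_i^CLIP of Eq. (1); class_prompts[j] is a list of prompts for class j,
    whose embeddings are averaged as described above."""
    x = preprocess(image).unsqueeze(0).to(device)
    e_img = model.encode_image(x)
    e_img = e_img / e_img.norm(dim=-1, keepdim=True)
    text_embs = []
    for prompts in class_prompts:
        tok = clip.tokenize(prompts).to(device)
        e_txt = model.encode_text(tok)
        e_txt = e_txt / e_txt.norm(dim=-1, keepdim=True)
        text_embs.append(e_txt.mean(dim=0))
    E_text = torch.stack(text_embs)           # (N_c, d)
    return (E_text @ e_img.T).squeeze(1)      # one similarity score per class

# model, preprocess = clip.load("ViT-B/32", device="cpu")
# logits = zero_shot_logits(pil_image, prompts_per_class, model, preprocess)
# y_hat = int(logits.argmax())
```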
## 4 Methodology
### CLIP with Guided Cropping
Conventionally, the image embedding \(e_{i}^{image}\) is computed directly from the full image \(x_{i}\) without any task-specific constraints. For closed-set classification, especially when the object of interest is small, this implies that potentially unrelated information is also encoded into \(e_{i}^{image}\), which may lead to suboptimal performance. Minimizing the amount of unrelated concept information in image embeddings is desirable in this case. Our approach, CLIP with Guided Cropping (GC-CLIP), achieves this by using bounding box estimates provided by OWL-ViT.
OWL-ViT is an open-vocabulary object detection model [12]. It takes an image and text prompts of target classes as inputs and produces as output a set of bounding boxes together with their scores and classes. In this work, we only use OWL-ViT as a bounding box extraction module, as its class predictions are not accurate enough (see section 5.6). The overall GC-CLIP pipeline is shown in Figure 2. We only consider the top-k classes (we use k=5) to refine the preliminary CLIP predictions. This is reasonable since, with high probability, these top-k classes contain the correct class (see appendix A.3).
**Candidate box extraction** We detect bounding boxes of each top-k class with OWL-ViT independently. We found that this is more robust to misdetections, resulting in better performance compared to detecting bounding boxes of all classes at once (see appendix A.5). Formally, a set of bounding box candidates \(B_{i}\) for an image \(x_{i}\) can be obtained based on OWL-ViT as follows:
\[B_{i}=\bigcup_{j\in J_{i}^{k}}b_{ij}=\bigcup_{j\in J_{i}^{k}}OWL(x_{i},p_{j}^{ det}) \tag{2}\]
where \(J_{i}^{k}\subseteq\mathcal{Y}\) is the set of top-k classes with respect to \(l_{i}^{CLIP}\), \(p_{j}^{det}\) is a text prompt for detection of class \(j\) and \(OWL\) is the OWL-ViT detection function returning a max-score bounding box with respect to an input image and a prompt. All bounding boxes are adjusted to squares to avoid skewing images when they are, afterward, transformed into a CLIP-compatible image size (e.g., \(224\times 224\)).
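In code, this step only needs a detector wrapper that returns the highest-scoring box for a given prompt; `owl_detect` below is such a placeholder (any OWL-ViT interface can play this role), and the box format is an assumption.

```python
def extract_candidate_boxes(image, topk_classes, det_prompts, owl_detect):
    """Eq. (2): query OWL-ViT once per top-k class and keep the max-score box per class.
    owl_detect(image, prompt) is a placeholder returning (box_xyxy, score) for the
    highest-scoring detection of that prompt, or (None, 0.0) if nothing is found."""
    candidates = []
    for j in topk_classes:
        box, score = owl_detect(image, det_prompts[j])
        if box is not None:
            candidates.append((j, box, score))
    # primary box b_i^0: the candidate with the highest detection score
    primary = max(candidates, key=lambda c: c[2]) if candidates else None
    return candidates, primary
```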
Figure 3: Each green square corresponds to a final bounding box \(b^{\alpha}\) (or \(b^{\alpha_{k}}\)) which will be used to crop the original image \(x_{i}\) to produce the logit for the final prediction. \(\Delta w\) is the width difference between the original image and the primary box \(b_{i}^{0}\). \(\alpha\) and \(\alpha_{k}\) are margin ratios.

**Box selection** Next, we need to pick one bounding box from \(B_{i}\). We start from the primary box \(b_{i}^{0}\in B_{i}\), which has the highest estimated score from OWL-ViT. In our experiments, we found that using the primary box directly is generally suboptimal, as its crop may be too tight around the target object. It is therefore beneficial to slightly enlarge the box (see section 5.3). Given that \(b_{i}^{0}\) has width \(w_{b_{i}^{0}}\) and \(x_{i}\) has width \(w\), the box is enlarged to an \(\alpha\)-margin box \(b_{i}^{\alpha}\) uniformly in all directions to the size of \(w_{b_{i}^{0}}+\alpha(w-w_{b_{i}^{0}})\), where \(\alpha\in[0,1]\) is called the margin ratio (see Figure 3(a)). For the enlargement, if a box edge exceeds the image boundary in one direction, the enlargement is compensated in the opposite direction. In cases with box augmentation, multiple values of \(\alpha\) can be employed (see section 4.2).
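The enlargement described above can be sketched as follows; the `[x0, y0, x1, y1]` pixel-coordinate convention and the square image of side `image_size` are assumptions of this illustration.

```python
def enlarge_box(box, image_size, alpha):
    """alpha-margin box: grow a square box [x0, y0, x1, y1] uniformly so that its width
    becomes w_box + alpha * (image_size - w_box); if an edge exceeds the image boundary,
    the growth is compensated by shifting the box back inside."""
    x0, y0, x1, y1 = box
    w_box = x1 - x0
    grow = 0.5 * alpha * (image_size - w_box)
    x0, y0, x1, y1 = x0 - grow, y0 - grow, x1 + grow, y1 + grow
    # shift back inside the image if an edge falls outside
    dx = max(0.0, -x0) - max(0.0, x1 - image_size)
    dy = max(0.0, -y0) - max(0.0, y1 - image_size)
    return [x0 + dx, y0 + dy, x1 + dx, y1 + dy]
```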
**Logit computation** This selected box \(b_{i}^{\alpha}\) is used to crop \(x_{i}\) and resize it to a CLIP-compatible image size \(w\times w\), resulting in a preprocessed image \(x_{i}^{\alpha}\). The new top-k logit \(l_{i}^{GC\_CLIP(k)}\) is computed based on \(x_{i}^{\alpha}\) as follows:
\[l_{i}^{GC\_CLIP(k)}=\left[e_{j^{1}}^{text}\quad e_{j^{2}}^{text}\quad\dots \quad e_{j^{k}}^{text}\right]^{T}G(x_{i}^{\alpha}), \tag{3}\]
where \(j^{1},j^{2},\dots,j^{k}\in J_{i}^{k}\). The final class prediction is the class within \(J_{i}^{k}\) corresponding to the maximum entry of \(l_{i}^{GC\_CLIP(k)}\).
### Test-Time Box Augmentation
While prediction can be performed directly on a raw/preprocessed input image, this can lead to noisy predictions from CLIP. Small non-semantic changes in images can cause changes in predictions, making CLIP outputs difficult to analyze. We show this behavior by processing 10 random crops (90%-100% of the original widths) of the same image with CLIP. One would expect that the standard deviations of its predicted true-label probabilities should be low and its final class predictions should not change across different crops. However, we notice from Figure 4(a) that the standard deviations can be relatively high (around 0.2), while the average true-label probability is 0.55. In addition, only around 60% of test samples have no changes in final class predictions across crops (see Figure 4(b)). These results indicate significant sensitivity of CLIP to non-semantic changes. Therefore, instead of computing logits from raw/preprocessed images only, we can perform a simple test-time augmentation to help mitigate this issue. In this work, we investigate two augmentation strategies.
Random Crop Box Augmentation (RAug)With RAug, we augment a single input (raw or preprocessed) image into \(N_{aug}\) total images by cropping the input image with \(N_{aug}\) boxes of random widths within \([\beta w,w]\), while \(\beta\in(0,1)\). The augmented images are used to compute multiple predicted logits as per equation 3, which can then be averaged to produce the final logit score.
Multi-Margin Box Augmentation (MAug)In some cases, it is beneficial to consider context information as long as it does not dominate object information. With MAug, we first obtain the primary box \(b_{i}^{0}\). Then, instead of using a single margin ratio \(\alpha\) as in section 4.1, we perform an object-centric augmentation by using \(N_{aug}\) bounding boxes obtained from multiple margin ratios, distributed uniformly from 0 to 1 (see Figure 2(b)). In other words, the set of all final boxes used in this augmentation is \(\left\{b_{i}^{\alpha_{k}}|\alpha_{k}=\frac{k}{N_{aug}-1},k\in\{0,1,\dots,N_{aug}-1\}\right\}\). Similarly, logits computed from images cropped by these final boxes are then averaged to get the final logit score.
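Both augmentation strategies amount to generating \(N_{aug}\) boxes, computing Eq. (3) for each, and averaging the logits. A sketch of the two box generators, under the same square-box assumptions as above and reusing the earlier `enlarge_box` sketch through the `enlarge` argument:

```python
import numpy as np

def raug_boxes(w, n_aug=11, beta=0.9, rng=None):
    """RAug: n_aug random square crops with side length in [beta*w, w]."""
    rng = rng or np.random.default_rng(0)
    boxes = []
    for _ in range(n_aug):
        side = rng.uniform(beta * w, w)
        x0 = rng.uniform(0, w - side)
        y0 = rng.uniform(0, w - side)
        boxes.append((x0, y0, x0 + side, y0 + side))
    return boxes

def maug_boxes(primary_box, w, enlarge, n_aug=11):
    """MAug: object-centric boxes at margin ratios alpha_k = k/(n_aug-1)."""
    return [enlarge(primary_box, w, k / (n_aug - 1)) for k in range(n_aug)]
```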
Figure 4: Results when forwarding multiple random crops of the same images (from ImageNetS919 dataset) to CLIP (ViT-B/32) demonstrating CLIP sensitivity to non-semantic changes.
It must be noted that, with MAug, regions close to the target object are covered by more boxes compared to regions far from the object. Therefore, the augmentation allows some context information to be considered but with lower importance compared to object information.
## 5 Experiments
In this section, we conduct experiments to demonstrate that utilizing CLIP with Guided Cropping can improve zero-shot classification performance. In addition, several ablation studies are also conducted to understand its failure modes and the conditions under which our approach works well.
### Setup
DatasetsWe would like to study classification scenarios in which object sizes in images are controllable. In this work, two datasets are employed. (1) ImageNetS [2]: this dataset is an extension of ImageNet [17] and originally designed for unsupervised semantic segmentation. We use the validation split of the dataset in which pixel-wise segmentation annotations are available. It contains 12,419 samples of 919 classes in total. We construct a subset with target objects of small sizes, called ImageNetS919-SM, containing 2,334 samples whose object sizes are no more than 20% of the full image size. (2) CUB [18]: this dataset is a benchmark for fine-grained classification consisting of 200 bird types. We evaluate our models on its test split of 5,794 samples. Similarly, based on bounding box annotations of the dataset, we construct its subset whose target object sizes are less than 20% of the full image size resulting in CUB-SM containing 1,390 samples. More details of our dataset splitting and example images of these datasets can be found in the appendix A.1.
BaselinesCLIP [16] is used as the main architecture of all baselines. We conduct experiments with two classification prompt types similar to [11] (1) Category: Each class has a single prompt of its category name (2) Descriptions: Each class has multiple prompts queried automatically from GPT-3 according to [11]. In the latter case, the final logit value for a given class is computed by averaging the logit values obtained from all prompts for that class.
ImplementationWe apply our Guided Cropping and box augmentation on top of each baseline. For Guided Cropping variations, the margin ratio \(\alpha\) of 0.2 is used unless otherwise specified. We perform box augmentation with \(N_{aug}=11\). For RAug, \(\beta=0.9\) is used. The high value of \(\beta\) makes RAug augmented boxes less likely to crop object contents away. CLIP backbones studied in this work are ViT-B/32, ViT-B/16 and ViT-L/14. For OWL-ViT, its backbone is ViT-B/32 for all experiments. Category names are used as prompts to perform detection with OWL-ViT. The code of our implementation will be publicly available upon paper acceptance.
### Zero-Shot Classification Performance
In this section, we evaluate zero-shot classification performance of different model configurations on various datasets including both unconstrained object sizes (full dataset) and small-object variants (with -SM suffix). The results are shown in Table 1.
Considering datasets with unconstrained object sizes, ImageNetS919 and CUB, our Guided Cropping performance is generally comparable to (or slightly better than) non-Guided Cropping baselines. This is expected since many samples in these cases could have objects whose sizes already dominate the scene. On the other hand, both box augmentations consistently improve classification performance in all cases indicating that raw predictions from CLIP models are indeed noisy. Smoothing their predictions with box augmentations helps our methods to be more robust to this noise.
Considering results on datasets with small object sizes, ImageNetS919-SM and CUB-SM, our Guided Cropping demonstrates consistent improvement over baselines across different model configurations. This trend can also be noticed regardless of the prompt types. This indicates that our approach, as expected, is more beneficial for images with small target objects. This is reasonable since small-object images leave more space for context information, which should be reduced before image encoding. Another interesting observation is that employing GC-CLIP with Multi-Margin augmentation (MAug) generally achieves better performance. This suggests that retaining context cues with lower importance can complement the focus on the object of interest, leading to definite and correct decisions.
It must be noted that, in this experiment, we integrate our Guided Cropping on top of zero-shot models. A question may arise: how does our Guided Cropping affect pretrained supervised models? We conduct an experiment and find that pretrained supervised models benefit less from cropping with small bounding boxes (see appendix A.2). This is expected since supervised models can exploit unrelated contexts as shortcuts [3] to gain performance on in-distribution samples.
### Importance of Margin Ratio
Margin ratio (\(\alpha\)) mentioned in section 4.1 controls how much primary boxes from OWL-ViT are enlarged before they are used to crop input images. Varying margin ratios can help us understand how CLIP reacts to Guided Cropping from \(\alpha=0.0\) (crop with a raw OWL-ViT box) to \(\alpha=1.0\) (no Guided Cropping at all). In this section, we study our models with different margin ratios on ImageNetS919-SM. The results are shown in Figure 5. We mainly discuss results from GC-CLIP and GC-CLIP+RAug here as these configurations utilize a single margin ratio.
According to the results, when Guided Cropping is applied (\(\alpha<1\)), classification accuracies are generally better than the accuracies without Guided Cropping (\(\alpha=1\)). This confirms the benefit of GC-CLIP. It must be noted that there are some consistent performance drops when the values of \(\alpha\) are too small (e.g., when \(\alpha\in[0.0,0.1]\)). This suggests that overly tight bounding boxes can degrade classification performance. One explanation of this observation is that, in order to recognize
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Prompt} & Guided & \multirow{2}{*}{Box Aug.} & \multicolumn{4}{c}{Dataset} \\ & & Cropping & & ImageNetS919 & CUB & ImageNetS919-SM & CUB-SM \\ \hline \multirow{8}{*}{Category} & \multirow{8}{*}{Category} & - & - & \(63.62\) & \(51.83\) & \(52.83\) & \(49.57\) \\ & & - & Random Crop & \(64.42\) & \(52.45\) & \(53.47\) & \(50.79\) \\ & & ✓ & - & \(63.61\) & \(52.40\) & \(55.18\) & \(51.44\) \\ & & ✓ & Random Crop & \(64.46\) & **53.12** & **56.00** & \(52.81\) \\ & & ✓ & Multi-Margin & **64.66** & **53.12** & **56.00** & **53.09** \\ \cline{2-8} & \multirow{8}{*}{Descriptions} & - & - & \(68.54\) & \(53.05\) & \(55.70\) & \(50.14\) \\ & & - & Random Crop & \(69.15\) & \(53.62\) & \(57.33\) & \(50.79\) \\ & & ✓ & - & \(68.59\) & \(54.07\) & \(58.61\) & **53.38** \\ & & ✓ & Random Crop & \(69.07\) & \(54.47\) & \(59.08\) & \(53.09\) \\ & & ✓ & Multi-Margin & **69.62** & **54.56** & **60.07** & \(52.95\) \\ \hline \hline \multirow{8}{*}{Category} & \multirow{8}{*}{Category} & - & - & \(68.60\) & \(56.51\) & \(57.75\) & \(55.54\) \\ & & - & Random Crop & \(68.81\) & \(56.89\) & \(58.05\) & \(57.41\) \\ \cline{1-1} & & ✓ & - & \(68.06\) & \(56.09\) & \(58.65\) & \(55.97\) \\ \cline{1-1} & & ✓ & Random Crop & \(68.19\) & \(56.78\) & \(58.35\) & \(57.12\) \\ \cline{1-1} & & ✓ & Multi-Margin & **68.94** & **57.30** & **59.81** & **57.63** \\ \cline{1-1} \cline{2-8} & \multirow{8}{*}{Descriptions} & - & - & \(72.67\) & \(57.78\) & \(61.61\) & \(56.55\) \\ \cline{1-1} & & - & Random Crop & \(73.17\) & \(58.87\) & \(62.13\) & \(57.99\) \\ \cline{1-1} & & ✓ & - & \(72.61\) & \(58.70\) & \(63.28\) & **59.35** \\ \cline{1-1} & & ✓ & Random Crop & \(72.86\) & \(58.99\) & \(63.32\) & \(58.78\) \\ \cline{1-1} & & ✓ & Multi-Margin & **73.49** & **59.34** & **64.05** & \(59.06\) \\ \hline \hline \multirow{8}{*}{Category} & \multirow{8}{*}{Category} & - & - & \(75.15\) & \(63.08\) & \(64.78\) & \(62.16\) \\ & & - & Random Crop & \(75.30\) & \(63.32\) & \(64.70\) & \(62.59\) \\ \cline{1-1} & & ✓ & - & \(75.00\) & \(62.96\) & \(66.02\) & \(62.16\) \\ \cline{1-1} & & ✓ & Random Crop & \(75.04\) & \(63.24\) & \(66.54\) & \(62.73\) \\ \cline{1-1} & & ✓ & Multi-Margin & **75.71** & **63.63** & **66.92** & **63.17** \\ \cline{1-1} \cline{2-8} & \multirow{8}{*}{Descriptions} & - & - & \(78.48\) & \(64.65\) & \(67.78\) & \(63.17\) \\ \cline{1-1} & & - & Random Crop & \(78.65\) & \(64.60\) & \(67.65\) & **63.96** \\ \cline{1-1} & & ✓ & - & \(78.32\) & \(64.67\) & \(69.07\) & \(63.31\) \\ \cline{1-1} & & ✓ & Random Crop & \(78.28\) & **64.88** & \(69.41\) & **63.96** \\ \cline{1-1} & & ✓ & Multi-Margin & **79.06** & \(64.76\) & **69.88** & \(62.95\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Zero-shot classification accuracies from different datasets and model configurations.
an object, models need to see the object shape clearly. Overly tight bounding boxes can leave the models with unclear information about object boundaries, leading to performance drops.
### Understanding Object Size Conditions
In section 5.2, we only conduct experiments on small-object images under one object size condition (i.e., maximum relative object sizes \(<20\%\) of the total image areas). In this section, we would like to explore how our approach performs under different object size conditions. Therefore, we vary the maximum relative object sizes of the ImageNetS919 dataset from 5% to 100% for our evaluation. Details of the samples in individual conditions are given in appendix A.1.
The results are shown in Figure 6 (see appendix A.4 for the results of other backbones). Considering the cases without any object size constraints (i.e., x-axis = 1.0), applying Guided Cropping does not significantly impact the performance (the same observation as in Table 1). However, as the maximum object sizes decrease, accuracy gaps between conventional CLIP and GC-CLIP become larger. The gaps are also more significant when MAug is applied for box augmentation instead of RAug. This experiment highlights the small-object conditions under which our approach works well.
### Qualitative Evaluation
In this section, we qualitatively evaluate GC-CLIP by visualizing samples whose predictions change relative to CLIP. Improved samples are shown in Figure 7(a). Reasonable improvements can be noticed among these samples. For example, in the ship image, land and sea are context covering large regions. Considering these contexts excessively makes standard CLIP incorrectly predict the target object as an amphibious vehicle. In contrast, GC-CLIP recognizes the image by focusing on the primary box around the vehicle. This reduces distracting visual information when encoding the image, leading to a correct prediction.
Figure 5: Zero-shot accuracies on ImageNetS919-SM evaluated with different margin ratios.
Figure 6: Accuracies (ViT-B/32) on subsets of ImageNetS919 with various object size conditions.
On the other hand, image samples whose predictions are incorrectly changed by GC-CLIP are shown in Figure 7(b). These samples fail potentially due to the distance between target objects and important contexts. While MAug augmentation allows some context to be considered during prediction, a large distance between the target object and the context reduces the importance of that context for the model (fewer boxes cover it). For example, in the space shuttle image, the target object is tiny, so the ground is an important context for distinguishing a missile from a space shuttle (which is usually launched vertically). However, the large distance between the ground and the object box reduces the effect of the ground in GC-CLIP. Strategies to weight contexts dynamically can be investigated in future works.
### Can we use OWL-ViT directly as a classifier?
Theoretically, OWL-ViT also has the capability to minimize information outside target object boundaries and could be used for zero-shot classification. In this section, we show that, when OWL-ViT is adopted directly as a classifier, its performance on our classification task is still limited.
In order to use OWL-ViT as a classifier, we need to transform its outputs from sets of bounding box locations, scores and class labels into class-wise logits. In this regard, given an input image, the prediction logit of a class can be obtained as follows: first, we check whether any bounding boxes exist for that class. If so, the class logit is assigned the maximum score of its corresponding bounding boxes. Otherwise, its logit is zero. This simple extension encourages classes whose bounding boxes have high scores to receive high logits.
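This conversion is a max-over-boxes per class; a minimal sketch, assuming the detector output has already been flattened into (class_id, score) pairs:

```python
import numpy as np

def detections_to_logits(detections, num_classes):
    """Class logit = maximum box score for that class, 0 if no box was predicted."""
    logits = np.zeros(num_classes)
    for class_id, score in detections:
        logits[class_id] = max(logits[class_id], score)
    return logits

# Example: three boxes over a 5-class problem.
print(detections_to_logits([(2, 0.7), (2, 0.4), (4, 0.1)], num_classes=5))
# -> [0.   0.   0.7  0.   0.1]
```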
We evaluate this classifier on ImageNetS919 dataset and obtain 20.34% and 40.78% as top-1 and top-10 accuracies respectively. Here, the performance is still much lower compared to our baseline performance in Table 1 indicating poor classification accuracy of this classifier.
The poor performance of this classifier can be understood by visualizing incorrectly predicted samples in Figure 8. While OWL-ViT gives reasonable bounding boxes, its class predictions are inaccurate. The actual classes are likely to be confused with other classes with fine-grained differences. For example, the model misclassifies an image of a tiger shark as a snoek fish, whose shape indeed closely resembles that of a shark. This significant degradation on fine-grained details confirms that OWL-ViT is not well suited to be used as a classifier on standard classification benchmarks.
Figure 8: Examples of failure modes of the OWL-ViT based classifier.
Figure 7: Predictions of CLIP (with RAug) and GC-CLIP (with MAug) with ViT-B/32 on ImageNetS919 samples. Red boxes represent primary boxes \(b^{0}\) estimated from our GC-CLIP.
Conclusion
In this work, we identify a limitation of CLIP in the zero-shot closed-set object classification task. As its image encoder is designed to encode generic image representations, it is prone to encoding non-discriminative context information into image features, leading to performance degradation, particularly for small objects. We propose GC-CLIP, an approach to reduce the effect of potentially non-discriminative information based on object bounding boxes estimated by a zero-shot object detection model. We empirically demonstrate that our approach outperforms baselines, especially for image samples with small objects. On the basis of ablation studies, we analyze the conditions in which our approach performs well. We hope this work sheds new light on the behavior of large-scale open-vocabulary models for classification and guides future research to improve these models.
2309.07847 | Thermodynamic entropy production in the dynamical Casimir effect | This paper address the question of thermodynamic entropy production in the
context of the dynamical Casimir effect. Specifically, we study a scalar
quantum field confined within a one-dimensional ideal cavity subject to
time-varying boundary conditions dictated by an externally prescribed
trajectory of one of the cavity mirrors. The central question is how the
thermodynamic entropy of the field evolves over time. Utilizing an effective
Hamiltonian approach, we compute the entropy production and reveal that it
exhibits scaling behavior concerning the number of particles created in the
short-time limit. Furthermore, this approach elucidates the direct connection
between this entropy and the emergence of quantum coherence within the mode
basis of the field. In addition, by considering a distinct approach based on
the time evolution of Gaussian states we examine the long-time limit of entropy
production within a single mode of the field. This approach results in
establishing a connection between the thermodynamic entropy production in a
single field mode and the entanglement between that particular mode and all
other modes. Consequently, by employing two distinct approaches, we
comprehensively address both the short-term and long-term dynamics of the
system. Our results thus link the irreversible dynamics of the field, as
measured by entropy production and induced by the dynamical Casimir effect, to
two fundamental aspects of quantum mechanics: coherence and entanglement. | Gustavo de Oliveira, Lucas C. Céleri | 2023-09-14T16:41:28Z | http://arxiv.org/abs/2309.07847v2 | # Thermodynamic entropy production in the dynamical Casimir effect
###### Abstract
This paper addresses the question of thermodynamic entropy production in the context of the dynamical Casimir effect. Specifically, we study a scalar quantum field confined within a one-dimensional ideal cavity subject to time-varying boundary conditions dictated by an externally prescribed trajectory of one of the cavity mirrors. The central question is how the thermodynamic entropy of the field evolves over time. Utilizing an effective Hamiltonian approach, we compute the entropy production and reveal that it exhibits scaling behavior concerning the number of particles created in the short-time limit. Furthermore, this approach elucidates the direct connection between this entropy and the emergence of quantum coherence within the mode basis of the field. In addition, by considering a distinct approach based on the time evolution of Gaussian states, we examine the long-time limit of entropy production within a single mode of the field. This approach results in establishing a connection between the thermodynamic entropy production in a single field mode and the entanglement between that particular mode and all other modes. Consequently, by employing two distinct approaches, we comprehensively address both the short-term and long-term dynamics of the system. Our results thus link the irreversible dynamics of the field, as measured by entropy production and induced by the dynamical Casimir effect, to two fundamental aspects of quantum mechanics: coherence and entanglement.
## I Introduction
While the fundamental laws of physics exhibit time-reversal symmetry, we encounter irreversible phenomena in our surroundings when dealing with complex systems. In classical physics, irreversibility is primarily characterized by the second law of thermodynamics, which asserts that the thermodynamic entropy of a closed system cannot decrease over time [1]. When fluctuations come into play, stronger principles known as fluctuation theorems emerge [2; 3], and irreversible processes are those in which entropy tends to increase on average.
When considering quantum systems, various approaches have emerged in the pursuit of comprehending thermodynamics from a microscopic perspective. Some of these developments include information theory [4], statistical physics [5], and axiomatic theories [6]. For a comprehensive exploration of entropy production in both classical and quantum systems, we recommend Ref. [7] and its associated references.
We are focusing on the thermodynamics of closed quantum systems, where the time evolution follows a unitary process. This implies that the von Neumann entropy remains constant over time. As a result, this measure is inadequate for quantum thermodynamic entropy because it contradicts the well-established experimental observation that, in general, spontaneous processes tend to increase entropy. Furthermore, it fails to respect the fundamental thermodynamic relation. To tackle this fundamental issue, we turn to the diagonal entropy, as defined in Ref. [8] as
\[S_{d}(\hat{\rho})=-\sum_{n}p_{n}\ln p_{n}, \tag{1}\]
with \(p_{n}\) representing the diagonal elements of the system's density matrix \(\hat{\rho}\) in the energy eigenbasis. This quantity has been proposed as the thermodynamic entropy for closed quantum systems since it exhibits several interesting properties, including extensivity, positivity, and the property of vanishing as the temperature approaches zero [8]. Furthermore, it possesses a crucial characteristic: it increases for every process, whether unitary or not, that induces transitions in the energy eigenbasis. Only when the system's Hamiltonian changes slowly enough will the diagonal entropy remain unchanged. This aligns with our intuition based on the classical definition of thermodynamic entropy, which does not increase for quasistatic processes [9; 10].
It is worth noting that a closely related quantity known as the observational entropy is defined as a coarse-grained version of the diagonal entropy [11]. Therefore, the findings presented here also apply within the context of observational entropy.
Information theory has also given rise to a novel approach to thermodynamics, as elucidated by a recent work [12]. In this approach, physical quantities are defined as those invariant under the action of a gauge group, and the emerging concept of entropy precisely aligns with the diagonal entropy discussed above. This alignment resonates with the fact that the gauge-invariant definition of heat is intricately tied to transitions within the energy eigenbasis [12]. This observation also establishes a connection between our findings and another cornerstone of physics, the gauge principle.
We can think about this entropy as a measure of the randomness within the energy eigenbasis. Imagine that we only have access to energy measurements of a quantum system, a common limitation when dealing with systems of a sufficiently large dimension where quantum state tomography becomes impractical [13]. In a general process, whether unitary or not, transitions between energy levels are induced, leading to the development of quantum coherence and potentially entanglement among different parts of the system. The diagonal entropy quantifies the information loss resulting from our limited set of measurements. We refer the reader to Ref. [8] for more details regarding this quantity, including its relation to thermodynamics. The aim of the present work is to apply this concept to a quantum field within the context of the dynamical Casimir effect, and explore the relationship between entropy production and quantum properties such as coherence and entanglement.
Specifically, we consider a quantum scalar field confined within a one-dimensional cavity with mirrors in relative motion, a scenario commonly examined in the context of the dynamical Casimir effect [14; 15; 16; 17]. Under specific conditions, this effect predicts the creation of particles from the vacuum due to the dynamic changes of the boundary conditions imposed by the mirror motion. Over the past five decades, numerous developments have appeared in this field, encompassing the impact of imperfect mirrors [18; 19; 20; 21], distinct geometries [22; 23; 24; 25; 26], gravitational field effects [27; 28], nonlinear interactions [29; 30; 31], and entanglement dynamics [32; 33]. For a comprehensive overview, interested readers are directed to a recent review [34].
However, despite these extensive developments, the irreversible dynamics of the quantum field in this scenario have not been explored, to the best of our knowledge. This work aims to begin addressing this gap by focusing on irreversibility, as measured by the increase in quantum thermodynamic entropy -- the diagonal entropy-- associated with the field's dynamics. In other words, how much entropy is generated in the field due to the nonstationary boundary conditions imposed by the motion of the cavity mirrors? We provide answers to this question through two distinct approaches. Firstly, we employ an effective Hamiltonian theory based on Ref. [35] to calculate the entropy of the total field within the short-time regime. We demonstrate that the entropy increase is intrinsically tied to the generation of quantum coherence within the system's energy eigenbasis, aligning with the gauge theory developed in Ref. [12]. In the second part of the paper, we adopt a different approach to investigate the long-term field dynamics, allowing us to compute the diagonal entropy for a single mode. Interestingly, this entropy is governed by the entanglement between the selected mode and all other modes. These two distinct approaches enable us to connect the irreversibility of field dynamics with two fundamental quantum features: coherence and entanglement.
## II The dynamical Casimir effect
Let us consider a one-dimensional ideal cavity whose mirrors are located at positions \(x=0\) and \(x=L(t)\), with \(L(t)\) being an externally prescribed trajectory. Confined in this cavity, we have a massless real scalar field \(\phi(x,t)\) satisfying the wave equation
\[\left(\partial_{t}^{2}-\partial_{x}^{2}\right)\phi(x,t)=0. \tag{2}\]
Given the ideal nature of the mirrors (perfect reflectors), the boundary conditions imposed on the field take the Dirichlet form
\[\phi(0,t)=\phi(L(t),t)=0. \tag{3}\]
The set of complex-valued solutions \(\{\phi_{i}\}\) to Eq. (2) under the restrictions imposed by the non-stationary boundary conditions (3) spans a linear vector space \(\mathcal{S}\) with an invariant bilinear form
\[(\phi_{1},\phi_{2})=i\int_{0}^{L(t)}\mathrm{d}x\ [\phi_{1}^{*}\partial_{t} \phi_{2}-\phi_{2}\partial_{t}\phi_{1}^{*}] \tag{4}\]
satisfying all the properties of an inner product except for positive definiteness. This last obstacle hinders the use of Eq. (4) for the field's decomposition into orthonormal solutions on \(\mathcal{S}\). Nevertheless, for that purpose we can always choose any subspace \(\mathcal{S}^{+}\subset\mathcal{S}\), as long as it satisfies the following properties: (_i_) the product (4) is positive definite on \(\mathcal{S}^{+}\); (_ii_) \(\mathcal{S}=\mathcal{S}^{+}\oplus\overline{\mathcal{S}^{+}}\) (with the bar designating the complex conjugate of the space) and (_iii_) for all \(f^{+}\in\mathcal{S}^{+}\) and \(f^{-}\in\overline{\mathcal{S}^{+}}\), we have \((f^{+},f^{-})=0\)[36].
From the last considerations, if we assume the cavity at the interval \(t\leq 0\) to be in a static configuration (with constant mirror position \(L(t\leq 0)=L_{0}\)), the classical field can be written as
\[\phi(x,t\leq 0)=\sum_{k}\left[b_{k}f_{k}^{\mathrm{in}}(x,t)+b_{k}^{*}f_{k}^{ \mathrm{in*}}(x,t)\right], \tag{5}\]
where the set \(\{f_{k}^{\mathrm{in}}(x,t)\}\) is an orthonormal basis on \(\mathcal{S}^{+}\) while \(\{b_{k}\}\) is a set of complex coefficients. Since the mirrors are at rest, one can use the time translation symmetry of the wave equation as a natural criterion to select \(\mathcal{S}^{+}\) as the space of solutions that oscillates with purely positive frequencies
\[f_{k}^{\mathrm{in}}(x,t)=\frac{1}{\sqrt{\pi k}}\sin\left(\omega_{k}^{\mathrm{ in}}x\right)e^{-i\omega_{k}^{\mathrm{in}}t},\quad\mathrm{for}\ t\leq 0, \tag{6}\]
where \(\omega_{k}^{\mathrm{in}}=k\pi/L_{0}\) with \(k=\{1,2,\ldots\}\).
The quantum description of the field is then obtained by means of the usual field quantization prescription. The coefficients \(b_{k}\) and \(b_{k}^{*}\) are promoted to annihilation and creation operators \(\hat{b}_{k}\) and \(\hat{b}_{k}^{\dagger}\) satisfying the standard commutation relations
\[\left[\hat{b}_{k},\hat{b}_{j}^{\dagger}\right]=\delta_{kj}\ \mathrm{and}\ \left[\hat{b}_{k},\hat{b}_{j}\right]=\left[\hat{b}_{k}^{\dagger},\hat{b}_{j}^{ \dagger}\right]=0. \tag{7}\]
The initial vacuum state \(|0;\mathrm{in}\rangle\) is defined as the state annihilated by all \(\hat{b}_{k}\), whereas a general particle state can be constructed by the application of the creation operator \(\hat{b}_{k}^{\dagger}\) on this vacuum state
\[|\mathbf{n};\mathrm{in}\rangle=|n_{k_{1}},n_{k_{2}},\dots;\mathrm{in}\rangle=\prod_ {i}\frac{1}{\sqrt{n_{k_{i}}!}}\left(\hat{b}_{k_{i}}^{\dagger}\right)^{n_{k_{i}} }|0;\mathrm{in}\rangle\,,\]
with \(n_{k_{i}}\) representing the number of particles in the \(k_{i}\)-th mode.
For \(t>0\), when the mirror starts to move, the quantum field can still be decomposed in terms of the initial operators \(\hat{b}_{k}\) and \(\hat{b}_{k}^{\dagger}\) in the form
\[\hat{\phi}(x,t>0)=\sum_{k}\left[\hat{b}_{k}f_{k}(x,t)+\hat{b}_{k}^{\dagger}f_{ k}^{*}(x,t)\right], \tag{8}\]
as long as the new set of mode functions \(\{f_{k}(x,t)\}\) satisfies the conditions: (i) the wave equation (2), (ii) the time-dependent boundary condition (3), and (iii) the initial condition \(f_{k}(x,0)=f_{k}^{\mathrm{in}}(x,0)\). In this regard, we proceed by expanding the mode function in a series with respect to an _instantaneous basis_\(\{\varphi_{k}(x,t)\}\) as
\[f_{k}(x,t)=\frac{1}{\sqrt{2\omega_{k}^{\mathrm{in}}}}\sum_{j}Q_{j}^{(k)}(t) \varphi_{j}(x,t), \tag{9}\]
where
\[\varphi_{j}(x,t):=\sqrt{\frac{2}{L(t)}}\sin\left[\omega_{j}(t)x\right]\ \ \mathrm{with}\ \omega_{j}(t)=\frac{j\pi}{L(t)}. \tag{10}\]
Moreover the Fourier coefficients \(Q_{j}^{(k)}(t)\) introduced in Eq. (9) must satisfy the differential equation1
Footnote 1: The set of differential equations (11) can be obtained by substituting Eq. (9) into the wave equation (2) and integrating the resulting expression from \(0\) to \(L(t)\).
\[\ddot{Q}_{j}^{(k)}+\omega_{j}^{2}(t)Q_{j}^{(k)}=\sum_{l}\left[2\lambda(t)g_{jl}\dot{Q}_{l}^{(k)}+\dot{\lambda}(t)g_{jl}Q_{l}^{(k)}-\lambda^{2}(t)h_{jl}Q_{l}^{(k)}\right], \tag{11}\]
together with the initial conditions
\[Q_{j}^{(k)}(0)=\delta_{jk},\qquad\dot{Q}_{j}^{(k)}(0)=-i\omega_{k}^{\mathrm{ in}}\delta_{kj}, \tag{12}\]
where the upper dot indicates the total time derivative, \(\lambda(t)=\dot{L}(t)/L(t)\), and the coefficients \(g_{kj}\) and \(h_{kj}\) are defined for \(j\neq k\) as
\[g_{jk}=(-1)^{j-k}\frac{2kj}{j^{2}-k^{2}},\quad\mathrm{and}\quad h_{jk}=\sum_{ l}g_{jl}g_{kl}. \tag{13}\]
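As an illustration (not part of the original analysis), Eq. (11) can be integrated numerically once the mode sum is truncated; since its coefficients are real, the real and imaginary parts of \(Q_{j}^{(k)}\) evolve independently and can be propagated as two real systems. The sketch below assumes \(L_{0}=1\) and a user-supplied trajectory \(L(t)\) together with its first two derivatives.

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupling_matrices(n_modes):
    """g_jk = (-1)^(j-k) 2jk/(j^2 - k^2) for j != k, and h = g g^T (Eq. 13)."""
    idx = np.arange(1, n_modes + 1, dtype=float)
    jj, kk = np.meshgrid(idx, idx, indexing="ij")
    with np.errstate(divide="ignore", invalid="ignore"):
        g = (-1.0) ** (jj - kk) * 2.0 * jj * kk / (jj ** 2 - kk ** 2)
    np.fill_diagonal(g, 0.0)
    return g, g @ g.T

def evolve_mode(k, n_modes, L, Ldot, Lddot, t_final):
    """Truncated Eq. (11) for the real and imaginary parts of Q_j^(k) (L0 = 1)."""
    g, h = coupling_matrices(n_modes)
    j = np.arange(1, n_modes + 1)

    def rhs(t, y):
        Q, Qd = y[:n_modes], y[n_modes:]
        lam = Ldot(t) / L(t)
        lamdot = Lddot(t) / L(t) - lam ** 2
        omega2 = (j * np.pi / L(t)) ** 2
        Qdd = -omega2 * Q + 2 * lam * (g @ Qd) + lamdot * (g @ Q) - lam ** 2 * (h @ Q)
        return np.concatenate([Qd, Qdd])

    e_k = np.eye(n_modes)[k - 1]
    zero = np.zeros(n_modes)
    sol_re = solve_ivp(rhs, (0.0, t_final), np.concatenate([e_k, zero]), rtol=1e-8)
    sol_im = solve_ivp(rhs, (0.0, t_final), np.concatenate([zero, -k * np.pi * e_k]), rtol=1e-8)
    return sol_re, sol_im   # Q_j^(k)(t) = Re + i Im, from the two runs

# Example trajectory L(t) = 1 + eps*sin(2*pi*t) (parametric resonance at 2*omega_1).
eps = 0.01
sol_re, sol_im = evolve_mode(k=1, n_modes=12,
                             L=lambda t: 1 + eps * np.sin(2 * np.pi * t),
                             Ldot=lambda t: 2 * np.pi * eps * np.cos(2 * np.pi * t),
                             Lddot=lambda t: -(2 * np.pi) ** 2 * eps * np.sin(2 * np.pi * t),
                             t_final=5.0)
```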
The first noticeable aspect of the provided description is that the mode expansion (9) fundamentally depends on the choice of the basis functions \(\varphi_{k}(x,t)\). This occurs because when the time dependence of the boundary condition (3) is taken into account, the natural criterion of selecting solutions with purely positive frequency is no longer available and there is no unambiguous choice for \(\mathcal{S}^{+}\). Consequently, during the cavity motion, the expansion of the field in terms of creation and annihilation operators becomes arbitrary, implying the nonexistence of a preferred choice for a vacuum state. Thus, unless we can specify a measurement process, the usual notion of particle loses its well-defined meaning, and only when the cavity comes to rest we can associate a definite particle interpretation to the quanta described by these operators [35].
If the cavity returns to a static configuration after some interval of time \(T\) (with a final constant mirror position \(L(t\geq T)=L_{T}\)), one can reintroduce a preferred choice for the mode functions as
\[f_{k}^{\mathrm{out}}(x,t)=\frac{1}{\sqrt{\pi k}}\sin\left(\omega_{k}^{ \mathrm{out}}x\right)e^{-i\omega_{k}^{\mathrm{out}}t}\quad\mathrm{for}\ t\geq T \tag{14}\]
with purely positive frequencies \(\omega_{k}^{\mathrm{out}}=k\pi/L_{T}\). Consequently, the initial operators \(\hat{b}_{k}\) and \(\hat{b}_{k}^{\dagger}\) cease to have a physical significance and the field is now decomposed as
\[\hat{\phi}(x,t\geq T)=\sum_{k}\left[\hat{a}_{k}f_{k}^{\mathrm{out}}(x,t)+\hat{a }_{k}^{\dagger}f_{k}^{\mathrm{out}*}(x,t)\right], \tag{15}\]
with the set operators \(\hat{a}_{k}\) and \(\hat{a}_{k}^{\dagger}\) satisfying analogous commutation relations as in Eq. (7) and defining a new vacuum state \(|0;\mathrm{out}\rangle\) as the state annihilated by all \(\hat{a}_{k}\).
As pointed out in Ref. [32], although both sets \(\{f_{k}^{\mathrm{in}},f_{k}^{\mathrm{in}*}\}\) and \(\{f_{k}^{\mathrm{out}},f_{k}^{\mathrm{out}*}\}\) form a basis for the space of solutions \(\mathcal{S}\), they represent different decompositions into the subspaces \(\mathcal{S}^{+}\) and \(\overline{\mathcal{S}^{+}}\). The two sets of mode functions (6) and (14) should then be related by a linear transformation
\[f_{k}^{\mathrm{in}}=\sum_{j}\left[\alpha_{jk}f_{j}^{\mathrm{out}}+\beta_{jk}f_ {j}^{\mathrm{out}*}\right], \tag{16}\]
where \(\alpha_{jk}\) and \(\beta_{jk}\) are complex numbers called Bogoliubov coefficients. Inserting Eq. (16) into the field decomposition (5), and comparing with Eq. (15), we obtain the set of Bogoliubov transformations
\[\hat{a}_{j}=\sum_{k}\left[\alpha_{kj}\hat{b}_{k}+\beta_{kj}^{*}\hat{b}_{k}^{ \dagger}\right]. \tag{17}\]
Observe that the vacua defined by \(\hat{a}_{k}\) and \(\hat{b}_{k}\) are not equivalent in general. As a consequence, computing the number of particles defined by the final operators \(\hat{a}_{k}\) and \(\hat{a}_{k}^{\dagger}\) with respect to the initial vacuum \(|0;\mathrm{in}\rangle\) results in
\[N=\langle 0;\mathrm{in}|\sum_{j}\hat{a}_{j}^{\dagger}\hat{a}_{j}|0;\mathrm{in} \rangle=\sum_{kj}|\beta_{jk}|^{2}. \tag{18}\]
In general, \(\beta_{jk}\) is non-zero when time-dependent boundary conditions are imposed on the field. This last equation characterizes the DCE as the quantum field phenomenon of particle creation from the vacuum due to the
time-dependent nature of the imposed boundary conditions.
Our aim here is to study the entropy generated in the field due to this effect. To start, the next section introduces an effective Hamiltonian approach [37; 38; 21; 35] to describe the field dynamics. This will be important for us to compute the evolved state and, consequently, the entropy generated by the particle creation process. A limitation of this technique is that it only allows us to study the short-time dynamics of the system as it relies on perturbation theory. Nonetheless, it grants us access to the entire state, enabling the exploration of the relationship between irreversibility and the emergence of quantum coherence.
## III Effective Hamiltonian approach
In this section, we introduce an effective Hamiltonian for the DCE following the developments presented in Ref. [35]. To accomplish this, we begin by expanding the field operator \(\hat{\phi}\) and its conjugate momentum \(\hat{\pi}=\partial_{t}\hat{\phi}\) in terms of the instantaneous basis defined in Eq. (10)
\[\hat{\phi}(x,t) =\sum_{k}\hat{q}_{k}(t)\varphi_{k}(x,t), \tag{19a}\] \[\hat{\pi}(x,t) =\sum_{k}\hat{p}_{k}(t)\varphi_{k}(x,t), \tag{19b}\]
where the operators \(\hat{q}_{k}(t)\) and \(\hat{p}_{k}(t)\) are defined as
\[\hat{q}_{k}(t) :=\int_{0}^{L(t)}\mathrm{d}x\ \hat{\phi}(x,t)\varphi_{k}(x,t), \tag{20a}\] \[\hat{p}_{k}(t) :=\int_{0}^{L(t)}\mathrm{d}x\ \hat{\pi}(x,t)\varphi_{k}(x,t). \tag{20b}\]
Comparing Eqs. (19) with the field operator (5) and its time derivative, the expressions for \(\hat{q}_{k}(t)\) and \(\hat{p}_{k}(t)\) can be computed
\[\hat{q}_{k}(t\leq 0) =\frac{1}{\sqrt{2\omega_{k}^{\mathrm{in}}}}\left[\hat{b}_{k}e^{- i\omega_{k}^{\mathrm{in}}t}+\hat{b}_{k}^{\dagger}e^{i\omega_{k}^{\mathrm{in}}t} \right], \tag{21a}\] \[\hat{p}_{k}(t\leq 0) =i\sqrt{\frac{\omega_{k}^{\mathrm{in}}}{2}}\left[\hat{b}_{k}^{ \dagger}e^{i\omega_{k}^{\mathrm{in}}t}-\hat{b}_{k}e^{-i\omega_{k}^{\mathrm{in }}t}\right]. \tag{21b}\]
For \(t>0\) the cavity is in motion and an effective description of the field dynamics can be obtained by introducing the decomposition [35]
\[\hat{q}_{k}(t) =\frac{1}{\sqrt{2\omega_{k}(t)}}\left[\hat{a}_{k}(t)e^{-i\Omega_{ k}(t)}+\hat{a}_{k}^{\dagger}(t)e^{i\Omega_{k}(t)}\right], \tag{22a}\] \[\hat{p}_{k}(t) =i\sqrt{\frac{\omega_{k}(t)}{2}}\left[\hat{a}_{k}^{\dagger}(t)e^{ i\Omega_{k}(t)}-\hat{a}_{k}(t)e^{-i\Omega_{k}(t)}\right], \tag{22b}\]
where \(\Omega_{k}(t)=\int_{0}^{t}dt^{\prime}\omega_{k}(t^{\prime})\) and the _instantaneous_ annihilation and creation operators \(\hat{a}_{k}(t)\) and \(\hat{a}_{k}^{\dagger}(t)\) satisfy the standard equal times commutation relations
\[\left[\hat{a}_{k}(t),\hat{a}_{k}^{\dagger}(t)\right]=\delta_{kj};\left[\hat{a }_{k}(t),\hat{a}_{k}(t)\right]=\left[\hat{a}_{k}^{\dagger}(t),\hat{a}_{k}^{ \dagger}(t)\right]=0.\]
Here, the name instantaneous refers to the physical interpretation that if we freeze the system at some instant \(t_{0}\), the operators \(\hat{a}_{k}(t_{0})\) and \(\hat{a}_{k}^{\dagger}(t_{0})\) must describe the particle notion for the field as if the cavity mirror had stopped at position \(L(t_{0})\). One can recognize the initial and final operators to be \(\hat{b}_{k}:=\hat{a}_{k}(t=0)\) and \(\hat{a}_{k}:=\hat{a}_{k}(t=T)\).
Taking the time derivative of Eqs. (19) along with Eqs. (22) and, after some algebra (see Appendix A for details), we obtain the following set of differential equations for the annihilation operator
\[\dot{\hat{a}}_{j}(t)=\sum_{k}\left[A_{kj}(t)\hat{a}_{k}(t)+B_{kj}^{*}(t)\hat{a }_{k}^{\dagger}(t)\right]. \tag{23}\]
The equation for the creation operator is obtained by simply taking the transpose complex conjugate of this last equation. In this equation, we defined the coefficients
\[\begin{split} A_{kj}(t)&\\ B_{kj}(t)&\\ \end{split} \tag{24}\]
with
\[\mu_{kj}(t)\coloneqq-\left(\sqrt{\frac{j}{k}}g_{jk}+\frac{1}{2}\delta_{jk} \right)\frac{\dot{L}(t)}{L(t)}. \tag{25}\]
Identifying Eq. (23) as the Heisenberg equation of motion for the annihilation operator, it is straightforward to write down the effective Hamiltonian in the Schrödinger picture as2
Footnote 2: Although Hamiltonian (26) differs from that in Ref. [35] due to the absence of a term proportional to \(\omega_{k}(t)\), both descriptions are equivalent, since this contribution is contained in the exponential terms in Eq. (22).
\[\hat{H}_{\text{eff}}(t)=\frac{i}{2}\sum_{jk}\Bigg{[}A_{kj}(t)\hat{b}_{j}^{ \dagger}\hat{b}_{k}+B_{kj}^{*}(t)\hat{b}_{j}^{\dagger}\hat{b}_{k}^{\dagger}- \text{h.c.}\Bigg{]}, \tag{26}\]
where "h.c." stands for hermitian conjugate.
Here, we can clearly see the existence of two different contributions. The terms containing the coefficients \(B_{kj}^{*}\) and \(B_{kj}\) govern the process of creation and annihilation of pairs of particles, while the ones proportional to \(A_{kj}^{*}\) and \(A_{kj}\) are responsible for scattering of particles between distinct modes.
From this Hamiltonian we can compute the time evolution of any initial density matrix and, therefore, the thermodynamic entropy given in Eq. (1). This will be done in the sequence.
### The density operator
To investigate the entropy production within the proposed scheme, one first needs to obtain an explicit expression for the system's density operator \(\hat{\rho}\) after the cavity returns to its stationary configuration. This can be achieved by finding solutions to the dynamical equation
\[\dot{\hat{\rho}}(t)=-i\left[\hat{H}_{\text{eff}}(t),\hat{\rho}(t)\right]. \tag{27}\]
However, the complex structure of the effective Hamiltonian poses inherent challenges in solving Eq. (27). To overcome this issue, we narrow our focus to a specific category of problems where the equation of motion for the cavity mirror assumes the following form
\[L(t)=L_{0}\left[1+\epsilon l(t)\right], \tag{28}\]
where \(l(t)\) is a smooth function of order unity --as well as its first time derivative--, while \(\epsilon\ll 1\) is a small amplitude.
Since the coefficients in Eq. (25) are proportional to \(\dot{L}(t)/L(t)\), it is straightforward to see that the Hamiltonian coefficients given in Eqs. (24) are proportional to \(\epsilon\). As a result, the formal solution to Eq. (27) up to second order in \(\epsilon\) reads
\[\hat{\rho}(T)=\hat{\rho}(0)-i\int_{0}^{T}dt^{\prime}\left[\hat{H} _{\text{eff}}(t^{\prime}),\hat{\rho}(0)\right] \tag{29}\] \[-\int_{0}^{T}dt^{\prime}\int_{0}^{t^{\prime}}dt^{\prime\prime} \left[\hat{H}_{\text{eff}}(t^{\prime}),\left[\hat{H}_{\text{eff}}(t^{\prime \prime}),\hat{\rho}(0)\right]\right].\]
We are interested in the particular case of the initial vacuum state \(\hat{\rho}(0)=\ket{0;\text{in}}\bra{\text{in};0}\), since we want to study the thermodynamics of the particle creation process. It is convenient to write the evolved state in terms of the initial operators \(\hat{b}_{k}\) and \(\hat{b}_{k}^{\dagger}\), which are related to the operators \(\hat{a}_{k}(t)\) and \(\hat{a}_{k}^{\dagger}(t)\) by the instantaneous version of the Bogoliubov coefficients \(\alpha_{kj}(t)\) and \(\beta_{kj}(t)\).
By substituting the transformations (17) into the set of differential equations (23), we obtain a recursive relation for the Bogoliubov coefficients in terms of powers of \(\epsilon\). Up to first order, the resulting coefficients are given by
\[\alpha_{kj}(t) =\delta_{kj}+\int_{0}^{t}dt^{\prime}A_{kj}(t^{\prime}), \tag{30a}\] \[\beta_{kj}(t) =\int_{0}^{t}dt^{\prime}\;B_{kj}(t^{\prime}), \tag{30b}\]
which implies
\[\hat{a}_{k}(t)=\hat{b}_{k}+\sum_{j}\left(\tilde{\alpha}_{jk}(t)\hat{b}_{j}+ \beta_{jk}^{*}(t)\hat{b}_{j}^{\dagger}\right),\]
where \(\tilde{\alpha}_{kj}(t)=\int_{0}^{t}dt^{\prime}A_{kj}(t^{\prime})\). A direct calculation from Eq. (29) leads us to the following expression for the system's density operator up to second order in \(\epsilon\)
\[\hat{\rho}(T) =\hat{\rho}(0)-\tfrac{1}{2}\sum_{kj}\Bigg{\{}\beta_{kj}^{*}\left( \hat{b}_{k}^{\dagger}\hat{b}_{j}^{\dagger}\hat{\rho}(0)\right)-\tfrac{1}{4} \sum_{nm}\Bigg{[}\beta_{mn}\beta_{kj}^{*}\left(\hat{b}_{k}^{\dagger}\hat{b}_{ j}^{\dagger}\hat{\rho}(0)\hat{b}_{m}\hat{b}_{n}\right)-\beta_{mn}\beta_{kj}^{*} \left(\hat{b}_{m}\hat{b}_{n}\hat{b}_{k}^{\dagger}\hat{b}_{j}^{\dagger}\hat{ \rho}(0)\right)\] \[+\beta_{mn}^{*}\beta_{kj}^{*}\left(\hat{b}_{m}^{\dagger}\hat{b}_{ n}^{\dagger}\hat{b}_{k}^{\dagger}\hat{b}_{j}^{\dagger}\hat{\rho}(0)\right)+2\tilde{ \alpha}_{mn}^{*}\beta_{kj}^{*}\left(\hat{b}_{m}^{\dagger}\hat{b}_{n}\hat{b}_{k }^{\dagger}\hat{b}_{j}^{\dagger}\hat{\rho}(0)\right)\Bigg{]}+\text{h.c.} \Bigg{\}}. \tag{31}\]
Considering the initial vacuum state, the number of particles created inside the cavity due to the DCE takes the form
\[N(T) = \text{Tr}\left\{\sum_{k}\hat{\rho}(T)\hat{b}_{k}^{\dagger}\hat{b}_{k} \right\}=\sum_{kj}|\beta_{kj}|^{2}, \tag{32}\]
in agreement with Eq. (18), thus showing the consistency of our calculations.
We are now ready to discuss the entropy production due to the particle creation process.
### Entropy production
As discussed earlier, we consider the diagonal entropy [8]
\[S_{d}(\hat{\rho})=-\sum_{\mathbf{n}}\rho_{\text{diag}}^{(\mathbf{n})}\ln\rho_{\text{ diag}}^{(\mathbf{n})}, \tag{33}\]
as the main figure of merit for characterizing irreversibility. In this equation, \(\rho_{\text{diag}}^{(\mathbf{n})}=\bra{\text{in};\mathbf{n}}\hat{\rho}\ket{\mathbf{n}; \text{in}}\) represent the diagonal elements of the system's density operator in the initial energy eigenbasis.
From the expression of the density operator shown in Eq. (31), the diagonal entropy can be directly computed,
resulting in
\[S_{d}(T) = -\left[1-\tfrac{1}{2}N(T)\right]\ln\left[1-\tfrac{1}{2}N(T)\right] \tag{34}\] \[- \sum_{kj}\tfrac{1}{2}|\beta_{kj}(T)|^{2}\ln\tfrac{1}{2}|\beta_{kj} (T)|^{2}.\]
We first observe that the entropy production depends on the number of particles created inside the cavity. Secondly, we note that this entropy production is exactly equal to the creation of quantum coherence in the energy eigenbasis of the field. To see this, let us consider the relative entropy of coherence [39]
\[C(\hat{\rho})=S(\hat{\rho}_{d})-S(\hat{\rho}),\]
which is a measure of the amount of quantum coherence in a given basis. Here \(S(\hat{\rho})=-\operatorname{Tr}\hat{\rho}\ln\hat{\rho}\) designates the von Neumman entropy of \(\hat{\rho}\) while \(\hat{\rho}_{d}\) is the diagonal operator built from the diagonal elements of \(\hat{\rho}\) in the selected basis. Since we are interested in the amount of entropy produced during time evolution, we pick up the initial energy eigenbasis to measure coherence. This is fully justified since we are interested in thermodynamics. Under this choice, we directly see that \(S(\hat{\rho}_{d})=S_{d}(\hat{\rho})\). Since our evolution is unitary and the initial state is pure, we have \(S(\hat{\rho})=0\), thus implying that
\[C(\hat{\rho})=S_{d}(T). \tag{35}\]
Note that, differently from Eq. (34), such a result is a general one, independent of the perturbation theory used here.
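This identity is easy to verify numerically for a finite-dimensional toy model (an illustration only, not the cavity field): take a pure state, evolve it with an arbitrary unitary, and compare the diagonal entropy in the reference basis with the relative entropy of coherence.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d = 6

# Random Hermitian generator and the corresponding unitary evolution.
H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (H + H.conj().T) / 2
U = expm(-1j * H)

# Pure initial state: a vector of the reference basis, playing the role of |0; in>.
psi0 = np.zeros(d, dtype=complex)
psi0[0] = 1.0
psi = U @ psi0
rho = np.outer(psi, psi.conj())

# Diagonal entropy in the reference basis.
p = np.real(np.diag(rho))
p = p[p > 1e-12]
S_d = -np.sum(p * np.log(p))

# Von Neumann entropy of the evolved (still pure) state: essentially zero,
# so the relative entropy of coherence C = S(rho_d) - S(rho) equals S_d.
eigs = np.linalg.eigvalsh(rho)
eigs = eigs[eigs > 1e-12]
S_vn = -np.sum(eigs * np.log(eigs))
print(S_d, S_vn)
```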
This result implies that we will observe irreversibility (positive entropy production) for every process that creates quantum coherence in the energy eigenbasis of the system. Therefore, reversible processes must be those that are performed slowly enough in order to not induce transitions among the energy eigenstates. This result is in agreement with the discussions presented in Refs. [8; 9; 10; 12], where both entropy production and heat are associated with processes that generate coherence.
In order to illustrate our results, let us consider that the moving mirror performs harmonic oscillations of the form
\[l(t)=\sin(p\omega_{1}^{\text{in}}t), \tag{36}\]
where \(p\) is an integer, while \(\omega_{1}^{\text{in}}\) is the first unperturbed field frequency.
For simplicity, we define the small dimensionless time \(\tau=\epsilon\omega_{1}^{\text{in}}T/2\) and assume the case in which the mirror returns to its initial position at time \(t=T\) after performing a certain number of complete cycles (\(p\omega_{1}^{\text{in}}T=2\pi m\) with \(m=1,2,\dots\) ). Using Eqs. (13) and (36), we directly obtain
\[|\beta_{kj}(\tau)|=\left\{\begin{array}{rl}\sqrt{kj}\ \tau&\text{if}\ p=k+j,\\ \frac{2\sqrt{kj}\epsilon p}{p^{2}-(k+j)^{2}}\sin\left[\frac{2(k+j)\tau}{ \epsilon}\right]&\text{if}\ p\neq k+j.\end{array}\right. \tag{37}\]
By dropping the rapid oscillating terms, the number of particles created takes the form
\[N(\tau)=\frac{1}{6}p(p^{2}-1)\tau^{2}, \tag{38}\]
in agreement with Ref. [40]. Note that the above expression is valid under perturbation theory involving time, and, therefore, it is a good approximation only when \(\tau\ll 1\).
In this case, the diagonal entropy, our focus of interest here, reduces to
\[S_{d}(\tau) = \frac{1}{2}N(\tau)\Bigg{[}1-\ln\frac{1}{2}N(\tau) \tag{39}\] \[+ \ln\frac{p(p^{2}-1)}{6}-\frac{6\operatorname{v}(p)}{p(p^{2}-1)} \Bigg{]},\]
with
\[\operatorname{v}(p)=\sum_{k=1}^{p-1}(p-k)k\ln(p-k)k.\]
Figure 1 shows the diagonal entropy for this particular case. As it is clear from the figure, entropy will be produced in the field for every value of the mirror frequency \(p\), except for \(p=1\), where the number of created particles vanishes.
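For reference, Eq. (39) together with Eq. (38) is straightforward to evaluate; the short sketch below (an illustration, valid only for \(\tau\ll 1\)) reproduces the trend shown in Figure 1.

```python
import numpy as np

def v(p):
    """v(p) = sum_{k=1}^{p-1} (p - k) k ln[(p - k) k]."""
    k = np.arange(1, p)
    return float(np.sum((p - k) * k * np.log((p - k) * k))) if p > 1 else 0.0

def n_created(p, tau):
    """Eq. (38): N(tau) = p (p^2 - 1) tau^2 / 6."""
    return p * (p ** 2 - 1) * tau ** 2 / 6.0

def s_diag(p, tau):
    """Eq. (39); S_d vanishes for p = 1, where no particles are created."""
    N = n_created(p, tau)
    if N == 0.0:
        return 0.0
    return 0.5 * N * (1.0 - np.log(0.5 * N)
                      + np.log(p * (p ** 2 - 1) / 6.0)
                      - 6.0 * v(p) / (p * (p ** 2 - 1)))

for p in (1, 2, 4, 6, 8):
    print(p, s_diag(p, 0.05))
```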
The technique employed in this section, based on the effective Hamiltonian, enabled us to calculate the system's entropy production through the time evolution of the density operator. This establishes a direct link between entropy production and the emergence of quantum coherence in the field. Nevertheless, our current analysis is confined to the short-time limit. In the subsequent section, we shift to the Heisenberg picture and quantify entropy production in relation to the time evolution of Gaussian states. This approach permits an exploration
Figure 1: **Entropy production**. Entropy as a function of \(\tau\) for distinct values of the mirror oscillating frequency.
of the contribution to entropy production arising from the generation of entanglement between a single mode and the remainder of the field. Therefore, we see that these two approaches are complementary to each other.
## IV Gaussian state approach
The last section presented an analysis of the entropy production constrained to the short-time regime of the entire system. Now, we introduce a different approach that enables us to analyze entropy production in a specific mode across all time intervals. Additionally, this method facilitates the exploration of the entropy dynamics and its connection with the entanglement between the selected mode and all other modes in the system.
To achieve this goal we follow the techniques outlined in Ref. [40] where the dynamics of the system during the cavity motion is described in the Heisenberg picture. In this approach, the field is decomposed in terms of the Fourier coefficients \(Q_{j}^{(k)}(t)\) through Eq. (8) along with the mode function (9). Consequently, the dynamics of the system is determined by solving the infinite set of coupled differential equations (11) for the Fourier coefficients, with each equation encompassing an infinite number of time-dependent terms.
The problem can be simplified if we consider the special case of parametric resonance, i.e., when one of the mirrors undergoes small oscillations at twice the fundamental frequency of the unperturbed field. Therefore, we impose the following form for the mirror trajectory
\[L(t)=L_{0}\left[1+\epsilon\sin\left(2\omega_{1}^{\text{in}}t\right)\right]. \tag{40}\]
If the mirror returns to its initial position \(L_{0}\) after some interval of time \(T\), then \(\omega_{k}^{\text{in}}=\omega_{k}^{\text{out}}=\omega_{k}\) and the right-hand side of Eq. (11) vanishes. Under these considerations, it is possible to write
\[Q_{j}^{(k)}(t\geq T)=\sqrt{\frac{\omega_{k}}{\omega_{j}}}\left(\alpha_{kj}e^{ -i\omega_{j}t}+\beta_{kj}e^{i\omega_{j}t}\right), \tag{41}\]
where \(\alpha_{kj}\) and \(\beta_{kj}\) are the Bogoliubov coefficients defined in Eq. (17).
Since we impose the field to be weakly perturbed by the mirror oscillations (40), it is natural to search for solutions to \(Q_{j}^{(k)}(t)\) by allowing the Bogoliubov coefficients in Eq. (41) to be functions that vary slowly in time, i.e., \(\dot{\alpha}_{kj},\dot{\beta}_{kj}\sim\epsilon\). Then, by substituting Eq. (41) into Eq. (11), ignoring terms proportional to \(\epsilon^{2}\) (like \(\ddot{\alpha}_{kj},\ddot{\beta}_{kj}\) and \(\lambda^{2}\)) and employing the method of slowly varying amplitudes [42], it is possible to obtain a set of coupled first order differential equations with time independent coefficients in terms of \(\alpha_{kj}\) and \(\beta_{kj}\). For \(k=1\), this set takes the form [41]
\[\frac{\mathrm{d}\alpha_{1j}}{\mathrm{d}\tau} =-\sqrt{3}\alpha_{3j}-\beta_{1j}, \tag{42a}\] \[\frac{\mathrm{d}\beta_{1j}}{\mathrm{d}\tau} =-\alpha_{1j}-\sqrt{3}\beta_{3j}, \tag{42b}\]
whereas for \(k>2\) we obtain
\[\frac{\mathrm{d}\alpha_{kj}}{\mathrm{d}\tau} =\sqrt{k(k-2)}\alpha_{(k-2),j}-\sqrt{k(k+2)}\alpha_{(k+2),j}, \tag{43a}\] \[\frac{\mathrm{d}\beta_{kj}}{\mathrm{d}\tau} =\sqrt{k(k-2)}\beta_{(k-2),j}-\sqrt{k(k+2)}\beta_{(k+2),j}. \tag{43b}\]
Because of the initial conditions \(\alpha_{kj}(0)=\delta_{kj}\) and \(\beta_{kj}(0)=0\), all the coefficients with at least one even index vanish.
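These slowly-varying-amplitude equations are well suited for direct numerical integration once the mode ladder is truncated; the sketch below (an illustration with an arbitrary cutoff \(k_{max}\)) propagates \(\alpha_{kj}\) and \(\beta_{kj}\) for a fixed odd second index \(j\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def bogoliubov_resonance(j, tau_final, k_max=101):
    """Integrate Eqs. (42)-(43) for fixed j over the odd modes k = 1, 3, ..., k_max."""
    ks = np.arange(1, k_max + 1, 2)
    n = len(ks)
    pos = {int(k): i for i, k in enumerate(ks)}

    def rhs(tau, y):
        a, b = y[:n], y[n:]
        da, db = np.zeros(n), np.zeros(n)
        # k = 1 (Eq. 42): coupled to mode 3 and to its own beta coefficient.
        da[0] = -np.sqrt(3.0) * a[pos[3]] - b[0]
        db[0] = -a[0] - np.sqrt(3.0) * b[pos[3]]
        # k >= 3 (Eq. 43): coupled to modes k - 2 and k + 2 (truncated at k_max).
        for i in range(1, n):
            k = ks[i]
            up_a = a[i + 1] if k + 2 <= k_max else 0.0
            up_b = b[i + 1] if k + 2 <= k_max else 0.0
            da[i] = np.sqrt(k * (k - 2)) * a[i - 1] - np.sqrt(k * (k + 2)) * up_a
            db[i] = np.sqrt(k * (k - 2)) * b[i - 1] - np.sqrt(k * (k + 2)) * up_b
        return np.concatenate([da, db])

    y0 = np.zeros(2 * n)
    y0[pos[j]] = 1.0                       # alpha_kj(0) = delta_kj, beta_kj(0) = 0
    sol = solve_ivp(rhs, (0.0, tau_final), y0, rtol=1e-8, atol=1e-10)
    return ks, sol.y[:n, -1], sol.y[n:, -1]

ks, alpha_1, beta_1 = bogoliubov_resonance(j=1, tau_final=2.0)
print(alpha_1[:3], beta_1[:3])
```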
Complete solutions to the set of equations (42) and (43) were obtained in Ref. [40] in terms of the hypergeometric function. Nonetheless, in this section we will be interested in computing the diagonal entropy generated in particular modes of the field in the regime of parametric resonance (40). As a result, for reasons that will become clear later, it will be sufficient to pay attention only to the asymptotic behavior of the Bogoliubov coefficients with the first index equal to \(1\).
For \(\tau\ll 1\), their expressions read
\[\alpha_{1(2\mu+1)} =(\mu+1)K_{\mu}J_{\mu}\ \tau^{\mu}+\mathcal{O}(\tau^{\mu+2}), \tag{44a}\] \[\beta_{1(2\mu+1)} =-K_{\mu}J_{\mu}\ \tau^{\mu+1}+\mathcal{O}(\tau^{\mu+3}), \tag{44b}\]
with \(J_{\mu}=(2\mu)!/2^{\mu}(\mu!)^{2}\) and \(K_{\mu}=(-1)^{\mu}\sqrt{2\mu+1}/(\mu+1)\), whereas for \(\tau\gg 1\)
\[\alpha_{1(2\mu+1)} \approx\frac{2}{\pi}\frac{(-1)^{\mu}}{\sqrt{2\mu+1}}, \tag{45a}\] \[\beta_{1(2\mu+1)} \approx\frac{2}{\pi}\frac{(-1)^{\mu}}{\sqrt{2\mu+1}}, \tag{45b}\]
with \(\mu=0,1,2,\ldots\).
Now we are ready to write down the reduced density operator for the considered mode and to address the question of the dynamics of the entropy production and its relation to entanglement.
### Reduced density operator
The reduced density operator of mode \(m\) is given by
\[\hat{\rho}_{m}=\operatorname{Tr}_{\{k\}/m}\hat{\rho}, \tag{46}\]
where \(\operatorname{Tr}_{\{k\}/m}\) denotes the trace of the total density operator \(\hat{\rho}\) over all the modes except the \(m\)-th one.
Now, from the previous section, we can see that the time evolution of the field can be described by an effective quadratic time-dependent Hamiltonian. We know that the time evolution governed by quadratic Hamiltonians transforms any Gaussian state into another Gaussian state, which is completely characterized by its covariance matrix.
As the vacuum state belongs to the class of Gaussian states, it is in fact possible to describe our initial state in terms of the Wigner function for the \(m\)-th mode, which reads
\[W_{m}(\mathbf{q})=\frac{1}{2\pi\sqrt{\det\mathbf{\Sigma}_{m}}}e^{-\frac{1}{2}(\mathbf{q}-\langle\mathbf{q}\rangle)^{T}\mathbf{\Sigma}_{m}^{-1}(\mathbf{q}-\langle\mathbf{q}\rangle)},\]
where \(\mathbf{q}=(\hat{q}_{m},\hat{p}_{m})\) is the quadrature operator with components
\[\hat{q}_{m} =\frac{1}{\sqrt{2}}\left(\hat{a}_{m}^{\dagger}+\hat{a}_{m}\right), \tag{47a}\] \[\hat{p}_{m} =\frac{i}{\sqrt{2}}\left(\hat{a}_{m}^{\dagger}-\hat{a}_{m}\right). \tag{47b}\]
\(\mathbf{\Sigma}_{m}\) stands for the covariance matrix
\[\mathbf{\Sigma}_{m}\equiv\begin{pmatrix}\sigma_{m}^{q}&\sigma_{m}^{qp}\\ \sigma_{m}^{qp}&\sigma_{m}^{p}\end{pmatrix} \tag{48}\]
with
\[\sigma_{m}^{q} =\langle\hat{q}_{m}^{2}\rangle-\langle\hat{q}_{m}\rangle^{2}, \tag{49a}\] \[\sigma_{m}^{p} =\langle\hat{p}_{m}^{2}\rangle-\langle\hat{p}_{m}\rangle^{2},\] (49b) \[\sigma_{m}^{qp} =\frac{1}{2}\langle\hat{p}_{m}\hat{q}_{m}+\hat{q}_{m}\hat{p}_{m }\rangle-\langle\hat{q}_{m}\rangle\langle\hat{p}_{m}\rangle. \tag{49c}\]
Since we are interested in the diagonal entropy, we focus on the diagonal components of the density operator in the energy eigenbasis. For the special case of an initially vacuum state \(\ket{0;\textit{in}}\), these diagonal terms can be written as functions of the covariance matrix elements [40]
\[\rho_{m}^{(n)} =\frac{2\left[\left(2\sigma_{m}^{q}-1\right)\left(2\sigma_{m}^{p }-1\right)\right]^{n/2}}{\left[\left(2\sigma_{m}^{q}+1\right)\left(2\sigma_{m} ^{p}+1\right)\right]^{(n+1)/2}}\] \[\times\mathrm{P}_{n}\left(\frac{4\sigma_{m}^{q}\sigma_{m}^{p}-1} {\sqrt{(4(\sigma_{m}^{q})^{2}-1)(4(\sigma_{m}^{p})^{2}-1)}}\right), \tag{50}\]
where \(\mathrm{P}_{n}\) is the Legendre polynomial of order \(n\) and \(\rho_{m}^{(n)}=\bra{\mathrm{in};n}\hat{\rho}_{m}\ket{n;\mathrm{in}}\) is the \(n\)-th diagonal element of the reduced density operator in the initial energy eigenbasis.
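A sketch of how Eq. (50) can be evaluated in practice is given below; complex intermediate values appear whenever \(2\sigma_{m}^{q}<1\) and cancel in the final result, so the computation is carried out in complex arithmetic and the real part is kept. The variance values used in the example are arbitrary illustrations, not outputs of the mode equations.

```python
import numpy as np
from numpy.polynomial import legendre

def gaussian_diag_elements(sq, sp, n_max=200):
    """Diagonal elements rho_m^(n), n = 0..n_max, from Eq. (50)."""
    sq, sp = complex(sq), complex(sp)
    x = (4 * sq * sp - 1) / np.sqrt((4 * sq ** 2 - 1) * (4 * sp ** 2 - 1))
    num = (2 * sq - 1) * (2 * sp - 1)
    den = (2 * sq + 1) * (2 * sp + 1)
    n = np.arange(n_max + 1)
    eye = np.eye(n_max + 1)
    # P_n evaluated at the (possibly complex) argument of Eq. (50).
    p_n = np.array([legendre.legval(x, eye[m]) for m in n])
    rho = 2 * num ** (n / 2) / den ** ((n + 1) / 2) * p_n
    return np.clip(rho.real, 0.0, 1.0)        # imaginary parts cancel; clip round-off

def diagonal_entropy(probs):
    p = probs[probs > 1e-15]
    return float(-np.sum(p * np.log(p)))

# Example: an illustrative single mode with sigma^q = 0.45 and sigma^p = 1.5.
probs = gaussian_diag_elements(0.45, 1.5)
print(probs.sum(), diagonal_entropy(probs))    # the probabilities should sum to ~1
```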
By expressing the quadrature operators (47) in terms of the initial operators \(\hat{b}_{k}\) and \(\hat{b}_{k}^{\dagger}\) defined in Eq. (17), the variances can be directly computed, resulting in
\[\sigma_{m}^{q} =\frac{1}{2}\sum_{k}\left|\alpha_{km}\!-\!\beta_{km}\right|^{2}, \tag{51a}\] \[\sigma_{m}^{p} =\frac{1}{2}\sum_{k}\left|\alpha_{km}\!+\!\beta_{km}\right|^{2} \tag{51b}\]
where \(m\) is an odd integer and the cross term \(\sigma_{m}^{qp}\) is identically zero for our choice of the initial state.
By taking the time derivatives of these last equations and inserting the recursive relations (42) and (43), one can show that
\[\frac{\mathrm{d}\sigma_{m}^{q}}{\mathrm{d}\tau} =-\left[\alpha_{1m}\!-\!\beta_{1m}\right]^{2} \tag{52a}\] \[\frac{\mathrm{d}\sigma_{m}^{p}}{\mathrm{d}\tau} =+\left[\alpha_{1m}\!+\!\beta_{1m}\right]^{2}, \tag{52b}\]
which depend only on the Bogoliubov coefficients with the first index equal to \(1\) (as pointed out at the beginning of the section). Moreover, because of the definitions (47), the solutions of the differential equations (52) must satisfy the initial conditions \(\sigma_{m}^{q}(0)=\sigma_{m}^{p}(0)=1/2\).
We now analyze the solutions to these equations in two distinct regimes, the short-time and the long-time.
### Short-time regime
The short time limit is defined by \(\tau\ll 1\). Inserting Eqs. (44) into Eqs. (52) and integrating over \(\tau\), we obtain
\[\left.\begin{matrix}\sigma_{2\mu+1}^{q}\\ \sigma_{2\mu+1}^{p}\end{matrix}\right\}=\frac{1}{2}\mp\tau^{2\mu+1}J_{\mu}^{2}\left[1\mp K_{\mu}^{2}\tau+\mathcal{O}(\tau^{2})\right],\]
with \(J_{\mu}\) and \(K_{\mu}\) defined in Eq. (44).
Plugging Eqs. (53) into Eq. (50) leads to the following expression for the diagonal components of the reduced density operator
\[\rho_{2\mu+1}^{(n)} =(-1)^{n}i^{n}J_{\mu}^{n}\tau^{n(2\mu+1)}\left(1-K_{\mu}^{4}\tau^{2}\right)^{n/2} \tag{53}\] \[\times\left[1-(n+1)J_{\mu}^{2}\tau^{2\mu+2}\left(K_{\mu}^{2}-\frac{1}{2}J_{\mu}^{2}\tau^{2\mu}\right)\right]\] \[\times P_{n}\left[i\tau\left(K_{\mu}^{2}-J_{\mu}^{2}\tau^{2\mu}\right)\right]+\mathcal{O}(\tau^{2\mu+3}).\]
This expression is what we need to compute the diagonal entropy of the \((2\mu+1)\)-th mode. Up to second order in \(\tau\) we obtain, for \(\mu=0\), the following result
\[S_{d}^{1}(\tau\ll 1)=\frac{1}{2}N_{1}(\tau)\bigg{[}1-\ln\frac{1}{2}N_{1}(\tau) \bigg{]},\]
while for any other value of \(\mu\), we have
\[S_{d}^{2\mu+1}(\tau\ll 1)=N_{2\mu+1}(\tau)\bigg{[}1-\ln N_{2\mu+1}(\tau) \bigg{]}+\mathcal{O}(\tau^{2\mu+3}),\]
where \(N_{2\mu+1}(\tau)=K_{\mu}^{2}J_{\mu}^{2}\tau^{2\mu+2}+\mathcal{O}(\tau^{2\mu+3})\) is the number of particles created in the corresponding mode.
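As a check of these closed forms, the short sketch below (assuming the coefficients \(J_{\mu}\) and \(K_{\mu}\) of Eq. (44) are known) evaluates the leading-order particle number and the corresponding short-time diagonal entropy; the factor of \(1/2\) distinguishing the resonant mode \(\mu=0\) follows the two expressions above.

```python
import numpy as np

def short_time_entropy(tau, J_mu, K_mu, mu):
    """Leading-order diagonal entropy of mode m = 2*mu + 1 for tau << 1."""
    N = (K_mu * J_mu) ** 2 * tau ** (2 * mu + 2)   # created particles, leading order
    N_eff = 0.5 * N if mu == 0 else N              # resonant mode carries the extra 1/2
    return N_eff * (1.0 - np.log(N_eff))
```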
Hence, at short times, the entropy of each mode increases with the number of created particles, in complete agreement with the findings outlined in the preceding section. Moreover, the current methodology enables an exploration of the long-time dynamics of the entropy production, and we turn to such an analysis in the following subsection.
### Long-time regime
The long-time limit is defined by \(\tau\gg 1\). In this case, by substituting Eqs. (45) into Eqs. (52), we obtain the time derivatives of the system's quadrature variances as
\[\frac{\mathrm{d}}{\mathrm{d}\tau}\sigma_{2\mu+1}^{q} \approx 0 \tag{54a}\] \[\frac{\mathrm{d}}{\mathrm{d}\tau}\sigma_{2\mu+1}^{p} \approx\frac{16}{\pi^{2}(2\mu+1)}. \tag{54b}\]
The specific integration constants for Eqs. (54) vary for each mode and depend on the complete form of the Bogoliubov coefficients [40], but the general behavior is the same: both quadrature variances start from the same value \(1/2\) at \(\tau=0\) and end up assuming distinct asymptotic behaviors at \(\tau\gg 1\), with \(\sigma_{m}^{q}\) decreasing to a constant value, whereas \(\sigma_{m}^{p}\) increases almost linearly in time.
It is now straightforward to compute the single-mode reduced density matrix as
\[\rho_{m}^{(n)}(\tau\gg 1)=C_{m}^{(n)}\,\left[\det\mathbf{\Sigma}_{m}(\tau)\right]^{ -1/2}+\mathcal{O}(1/\tau) \tag{55}\]
where
\[C_{m}^{(n)}=\frac{1}{\sqrt{1+T_{m}}}\left(\frac{1-T_{m}}{\sqrt{1-T_{m}^{2}}} \right)^{n}\mathrm{P}_{n}\left(\frac{1}{\sqrt{1-T_{m}^{2}}}\right) \tag{56}\]
is a positive real coefficient with \(T_{m}=1/2\sigma_{m}^{q}\).
From the above expressions, we can compute the diagonal entropy associated with the \(m\)-th field mode as
\[S_{d}^{m}(\tau\gg 1)\approx S_{R}^{m}(\tau)+[\det\mathbf{\Sigma}_{m}(\tau)]^{- 1/2}\mathcal{S}_{m}, \tag{57}\]
where \(\mathcal{S}_{m}=-\sum_{n}C_{m}^{(n)}\ln C_{m}^{(n)}\) and \(S_{R}^{m}(\tau)=\frac{1}{2}\ln\det\mathbf{\Sigma}_{m}(\tau)\) is the Rényi-2 entropy of the \(m\)-th mode [43]. It can be shown that the second term in Eq. (57) diverges logarithmically with the system dimension \(\mathcal{N}\). This is expected since we are considering a field theory and the number of degrees of freedom of the system is infinite. Moreover, we must remember that entropy is defined up to a multiplicative and an additive constant. Hence, this last term does not affect the dynamical behavior of the entropy.
For the resonant mode \(m=1\), we obtain \(\sigma_{1}^{q}\to 2/\pi^{2}\)[40] and \(\sigma_{1}^{p}\to 16\tau/\pi^{2}\), leading to the Rényi-2 entropy
\[S_{R}^{1}(\tau)\approx\frac{1}{2}\ln\left(\frac{32}{\pi^{4}}\tau\right),\]
which is in agreement with Ref. [32]3. In the case of the subsequent mode \(m=3\), now \(\sigma_{3}^{q}\to 38/9\pi^{2}\) and \(\sigma_{3}^{p}\to 16\tau/3\pi^{2}\), so we obtain
Footnote 3: Here, the argument in the Rényi-2 entropy differs from Ref. [32] by a factor of 4. This occurs because the variances defined in the last reference are twice as large as the ones in Eq. (49).
\[S_{R}^{3}(\tau)\approx\frac{1}{2}\ln\left(\frac{608}{27\pi^{4}}\tau\right).\]
Now, since the global state of the field is pure (an initially pure state evolving unitarily), \(S_{R}^{m}(\tau)\) quantifies the amount of entanglement between the \(m\)-th mode and all the remaining ones. Equation (57) therefore tells us that the asymptotic behavior of the diagonal entropy is fundamentally determined by the generation of entanglement between the considered mode and all the others.
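The relation (57) can also be verified numerically by summing \(-\sum_{n}\rho_{m}^{(n)}\ln\rho_{m}^{(n)}\) with the elements of Eq. (50) and comparing the result with the Rényi-2 term. The sketch below does this for the resonant mode, reusing `rho_diagonal` from the earlier snippet; the chosen time and truncation are illustrative values, not quantities fixed by the text.

```python
import numpy as np

def diagonal_entropy(sigma_q, sigma_p, n_max=400):
    rho = rho_diagonal(sigma_q, sigma_p, n_max)   # from the previous sketch
    rho = rho[rho > 1e-15]                        # discard numerically empty levels
    return -np.sum(rho * np.log(rho))

tau = 50.0                                        # an arbitrary late time
sq, sp = 2 / np.pi**2, 16 * tau / np.pi**2        # asymptotic variances of the m = 1 mode
S_d = diagonal_entropy(sq, sp)
S_R = 0.5 * np.log(sq * sp)                       # (1/2) ln det Sigma (cross term = 0)
print(S_d, S_R, S_d - S_R)
```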
## V Conclusions
This article considers the problem of thermodynamic entropy production within the framework of the dynamical Casimir effect, exploring two distinct approaches. The first approach, employing an effective Hamiltonian description of the field dynamics, provides a connection between entropy production and the generation of quantum coherence in the field's mode basis in the short-time limit. The second approach, which relies on the reduced density operator of an individual mode and is valid for all times, establishes a connection between entropy growth and entanglement generation between the selected mode and all the others.
Although both approaches can only be compared in the short-time regime, where both predict that entropy increases with the number of created particles, they provide different but complementary information about the dynamics of the entropy production due to the dynamical Casimir effect.
In summary, the production of thermodynamic entropy in the field due to the dynamical Casimir effect is governed by the generation of quantum coherence in the field's mode basis and entanglement between the modes. Since our initial state is stationary (vacuum), the diagonal entropy cannot decrease [8] and, therefore, neither can the coherence nor the entanglement.
These results can be understood as follows. A coupling between all the field modes arises due to the nontrivial boundary conditions imposed on the field by the motion of the mirror. Such an interaction induces transitions among the modes, and these transitions lie at the root of the generation of quantum coherence and quantum entanglement. Although the evolution is unitary, irreversibility, which is characterized by entropy production, also arises due to these transitions, as discussed in Refs. [8; 9; 10; 12]. Reversible processes are defined in the limit where the motion is so slow that there is no particle creation, no scattering and, thus, no entropy production. Note that in the considered context, in which we have a resonant cavity trapping the field, there are motions for which no particles will be created and, thus, no entropy will be produced. This is a point that deserves a deeper investigation.
Our research enhances the comprehension of the thermodynamics of quantum fields under non-trivial boundary conditions and explores the impact of quantum coherence and entanglement on such phenomena. Despite this, numerous questions remain unanswered.
An interesting question that directly emerges concerns the split of the energy into work and heat, where the latter is associated with the irreversible aspect of the process, while the former should be related to the energy that can be extracted from the field after the process [45; 46]. Another related issue involves the statistical description of the field in terms of stochastic entropy production and the fluctuation theorems [47]. Furthermore, what role do multiple quantum coherence and multipartite entanglement play in entropy production? How do the thermalization properties of the field dynamics fit into this picture? Lastly, a question arises regarding whether heat and work adhere appropriately to the equivalence principle [48]. These are some of the pertinent questions that
will be the focus of future investigations.
###### Acknowledgements.
This work was supported by the National Institute for the Science and Technology of Quantum Information (INCT-IQ), Grant No. 465469/2014-0, by the National Council for Scientific and Technological Development (CNPq), Grants No 308065/2022-0, and by Coordination of Superior Level Staff Improvement (CAPES).
## Appendix A Derivation of the effective Hamiltonian
### Dynamical equations for the instantaneous creation and annihilation operators
From Eq. (2), the dynamical equations of motion for a quantum scalar field and its conjugate momentum can be written as
\[\partial_{t}\hat{\phi}(x,t) =\hat{\pi}(x,t) \tag{36a}\] \[\partial_{t}\hat{\pi}(x,t) =\partial_{x}^{2}\hat{\phi}(x,t). \tag{36b}\]
By combining Eqs. (19) and (22), one can express the fields \(\hat{\phi}\) and \(\hat{\pi}\) and their corresponding time derivatives as
\[\hat{\phi} =\sum_{k}\frac{1}{\sqrt{2\omega_{k}}}\left(\hat{a}_{k}e^{-i\Omega_{k}}+\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}\right)\varphi_{k}, \tag{37a}\] \[\hat{\pi} =i\sum_{k}\sqrt{\frac{\omega_{k}}{2}}\left(\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}-\hat{a}_{k}e^{-i\Omega_{k}}\right)\varphi_{k},\] (37b) \[\partial_{t}\hat{\phi} =\sum_{k}\frac{1}{\sqrt{2\omega_{k}}}\left(\hat{a}_{k}e^{-i\Omega_{k}}+\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}\right)\left(\partial_{t}\varphi_{k}-\frac{\dot{\omega}_{k}}{2\omega_{k}}\varphi_{k}\right)\] \[+\sum_{k}\frac{1}{\sqrt{2\omega_{k}}}\left(\dot{\hat{a}}_{k}e^{-i\Omega_{k}}+\dot{\hat{a}}_{k}^{\dagger}e^{i\Omega_{k}}\right)\varphi_{k}+\hat{\pi},\] (37c) \[\partial_{t}\hat{\pi} =i\sum_{k}\sqrt{\frac{\omega_{k}}{2}}\left(\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}-\hat{a}_{k}e^{-i\Omega_{k}}\right)\left(\partial_{t}\varphi_{k}+\frac{\dot{\omega}_{k}}{2\omega_{k}}\varphi_{k}\right)\] \[+i\sum_{k}\sqrt{\frac{\omega_{k}}{2}}\left(\dot{\hat{a}}_{k}^{\dagger}e^{i\Omega_{k}}-\dot{\hat{a}}_{k}e^{-i\Omega_{k}}\right)\varphi_{k}+\partial_{x}^{2}\hat{\phi}, \tag{37d}\]
where, for conciseness, we have suppressed the notation of time and spatial dependence in all terms in (37). Comparing (36) with (37c) and (37d), we can isolate the time derivative of the operators \(\hat{a}_{k}\) and \(\hat{a}_{k}^{\dagger}\) by computing
\[\int_{0}^{L}\mathrm{d}x\varphi_{j}\left(\partial_{t}\hat{\phi}-\hat{\pi}\right)=\sum_{k}\frac{1}{\sqrt{2\omega_{k}}}\left(\dot{\hat{a}}_{k}e^{-i\Omega_{k}}+\dot{\hat{a}}_{k}^{\dagger}e^{i\Omega_{k}}\right)\delta_{kj}\] \[-\sum_{k}\frac{1}{\sqrt{2\omega_{k}}}\left(\hat{a}_{k}e^{-i\Omega_{k}}+\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}\right)\left(G_{kj}+\frac{\dot{\omega}_{k}}{2\omega_{k}}\delta_{kj}\right)=0 \tag{38}\]
and
\[\int_{0}^{L}\mathrm{d}x\varphi_{j}\left(\partial_{t}\hat{\pi}-\partial_{x}^{2}\hat{\phi}\right) \tag{39}\] \[=i\sum_{k}\sqrt{\frac{\omega_{k}}{2}}\left(\dot{\hat{a}}_{k}^{\dagger}e^{i\Omega_{k}}-\dot{\hat{a}}_{k}e^{-i\Omega_{k}}\right)\delta_{kj}\] \[+i\sum_{k}\sqrt{\frac{\omega_{k}}{2}}\left(\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}-\hat{a}_{k}e^{-i\Omega_{k}}\right)\left(G_{jk}+\frac{\dot{\omega}_{j}}{2\omega_{j}}\delta_{kj}\right)=0,\]
where we used \(\int_{0}^{L}\mathrm{d}x\,\varphi_{k}\varphi_{j}=\delta_{kj}\) and \(G_{kj}\coloneqq\int_{0}^{L}\mathrm{d}x\,\varphi_{k}\partial_{t}\varphi_{j}\). By defining \(\mu_{kj}=\sqrt{\frac{\omega_{j}}{\omega_{k}}}\left(G_{kj}+\frac{\dot{\omega}_{k}}{2\omega_{k}}\delta_{kj}\right)\) we obtain from (38) and (39) the following equations
\[\dot{\hat{a}}_{j}e^{-i\Omega_{j}}+\dot{\hat{a}}_{j}^{\dagger}e^{i\Omega_{j}} =\sum_{k}\mu_{kj}\left(\hat{a}_{k}e^{-i\Omega_{k}}+\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}\right), \tag{40a}\] \[\dot{\hat{a}}_{j}e^{-i\Omega_{j}}-\dot{\hat{a}}_{j}^{\dagger}e^{i\Omega_{j}} =\sum_{k}\mu_{jk}\left(\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}-\hat{a}_{k}e^{-i\Omega_{k}}\right). \tag{40b}\]
From the last system, it is easy to isolate \(\dot{\hat{a}}_{j}(t)\) and \(\dot{\hat{a}}_{j}^{\dagger}(t)\) as
\[\dot{\hat{a}}_{j}(t) =\sum_{k}\left[A_{kj}(t)\hat{a}_{k}(t)+B_{kj}^{*}(t)\hat{a}_{k}^{\dagger}(t)\right], \tag{41a}\] \[\dot{\hat{a}}_{j}^{\dagger}(t) =\sum_{k}\left[A_{kj}^{*}(t)\hat{a}_{k}^{\dagger}(t)+B_{kj}(t)\hat{a}_{k}(t)\right], \tag{41b}\]
with
\[A_{kj}(t) =\frac{1}{2}\left[\mu_{kj}(t)-\mu_{jk}(t)\right]e^{-i[\Omega_{k}( t)-\Omega_{j}(t)]}, \tag{42a}\] \[B_{kj}(t) =\frac{1}{2}\left[\mu_{kj}(t)+\mu_{jk}(t)\right]e^{-i[\Omega_{k}( t)+\Omega_{j}(t)]}. \tag{42b}\]
Since \(\omega_{k}(t)=k\pi/L(t)\) and using the definition (10) we can calculate
\[G_{kj}(t) =g_{kj}\frac{\dot{L}(t)}{L(t)}, \tag{43}\] \[\frac{\dot{\omega}_{k}(t)}{\omega_{k}(t)} =-\frac{\dot{L}(t)}{L(t)}, \tag{44}\]
where \(g_{kj}\) has the same form as expressed in (13). So we obtain \(\mu_{kj}(t)=-\left(\sqrt{\frac{j}{k}}g_{jk}+\frac{1}{2}\delta_{kj}\right)\frac{\dot{L}(t)}{L(t)}\) as in Eq. (25).
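The following sketch assembles \(\mu_{kj}\) and the coefficients \(A_{kj}\), \(B_{kj}\) of Eq. (42) for a basis truncated to the first `K` modes. The coefficient matrix `g` of Eq. (13), the instantaneous length `L`, its time derivative `Ldot`, and the accumulated phases `Omega` are assumed to be supplied by the user for the time of interest.

```python
import numpy as np

def coupling_matrices(g, L, Ldot, Omega):
    """mu, A, B for K instantaneous modes, labelled k, j = 1..K (row, column)."""
    K = g.shape[0]
    idx = np.arange(1, K + 1, dtype=float)
    # mu_kj = -( sqrt(j/k) g_jk + (1/2) delta_kj ) * Ldot / L,  Eq. (25)
    mu = -(np.sqrt(idx[None, :] / idx[:, None]) * g.T + 0.5 * np.eye(K)) * Ldot / L
    # A_kj = (mu_kj - mu_jk)/2 * exp(-i(Omega_k - Omega_j)); B_kj has the plus signs, Eq. (42)
    A = 0.5 * (mu - mu.T) * np.exp(-1j * (Omega[:, None] - Omega[None, :]))
    B = 0.5 * (mu + mu.T) * np.exp(-1j * (Omega[:, None] + Omega[None, :]))
    return mu, A, B
```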
### Effective Hamiltonian
To find the effective Hamiltonian that generates the dynamical equations (42) we begin by considering the most general quadratic operator
\[\hat{H}(t)=\sum_{kl}\left[\mathcal{A}_{kl}(t)\hat{a}_{k}^{\dagger}(t) \hat{a}_{l}^{\dagger}(t)+\mathcal{B}_{kl}(t)\hat{a}_{k}^{\dagger}(t)\hat{a}_{l}(t)\right. \tag{45}\] \[\left.+\mathcal{C}_{kl}(t)\hat{a}_{l}^{\dagger}(t)\hat{a}_{k}(t)+ \mathcal{D}_{kl}(t)\hat{a}_{k}(t)\hat{a}_{l}(t)\right],\]
which is: (i) hermitian, by satisfying the conditions \(\mathcal{A}_{kl}(t)=\mathcal{D}_{kl}^{\ast}(t)\), \(\mathcal{B}_{kl}(t)=\mathcal{C}_{kl}^{\ast}(t)\) and (ii) invariant over an index change, with the conditions \(\mathcal{A}_{kl}(t)=\mathcal{A}_{lk}(t)\), \(\mathcal{D}_{kl}(t)=\mathcal{D}_{lk}(t)\), \(\mathcal{B}_{kl}(t)=\mathcal{C}_{lk}(t)\) and \(\mathcal{B}_{lk}(t)=\mathcal{C}_{kl}(t)\).
Suppressing the notation for time dependence, the correspondent Heisenberg equation of motion for the annihilation and creation operators is therefore
\[\dot{\hat{a}}_{j}=i\left[\hat{H},\hat{a}_{j}\right]=i\sum_{kl} \left(\mathcal{A}_{kl}\Big{[}\hat{a}_{k}^{\dagger}\hat{a}_{l}^{ \dagger},\hat{a}_{j}\Big{]}+\mathcal{B}_{kl}\Big{[}\hat{a}_{k}^{\dagger}\hat{ a}_{l},\hat{a}_{j}\Big{]}\right.\] \[\left.\qquad\qquad\qquad\qquad+\mathcal{C}_{kl}\Big{[}\hat{a}_{l }^{\dagger}\hat{a}_{k},\hat{a}_{j}\Big{]}+\mathcal{D}_{kl}[\hat{a}_{k}\hat{ a}_{l},\hat{a}_{j}]\right)\] \[=-i\sum_{k}\bigg{[}\left(\mathcal{A}_{kj}+\mathcal{A}_{jk}\right) \hat{a}_{k}^{\dagger}+\left(\mathcal{B}_{jk}+\mathcal{C}_{kj}\right)\hat{a}_ {k}\bigg{]} \tag{111}\]
and
\[\dot{\hat{a}}_{j}^{\dagger}=i\left[\hat{H},\hat{a}_{j}^{\dagger} \right]=i\sum_{kl}\Biggl{(}\mathcal{A}_{kl}\Big{[}\hat{a}_{k}^{\dagger}\hat{a} _{l}^{\dagger},\hat{a}_{j}^{\dagger}\Big{]}+\mathcal{B}_{kl}\Big{[}\hat{a}_{k }^{\dagger}\hat{a}_{l},\hat{a}_{j}^{\dagger}\Big{]}\] \[\qquad\qquad\qquad\qquad\qquad+\mathcal{C}_{kl}\Big{[}\hat{a}_{l }^{\dagger}\hat{a}_{k},\hat{a}_{j}^{\dagger}\Big{]}+\mathcal{D}_{kl}\Big{[} \hat{a}_{k}\hat{a}_{l},\hat{a}_{j}^{\dagger}\Big{]}\Biggr{)}\] \[=i\sum_{k}\bigg{[}\left(\mathcal{D}_{kj}+\mathcal{D}_{jk}\right) \hat{a}_{k}+\left(\mathcal{B}_{kj}+\mathcal{C}_{jk}\right)\hat{a}_{k}^{ \dagger}\bigg{]}. \tag{112}\]
Comparing (10a) with (111) and (10b) with (112), we obtain the following system
\[-i\left[\mathcal{A}_{kj}(t)+\mathcal{A}_{jk}(t)\right] =-2i\mathcal{A}_{kj}(t)=B_{kj}^{\ast}(t)\] \[i\left[\mathcal{D}_{kj}(t)+\mathcal{D}_{jk}(t)\right] =2i\mathcal{D}_{kj}(t)=B_{kj}(t)\] \[i\left[\mathcal{B}_{kj}(t)+\mathcal{C}_{jk}(t)\right] =2i\mathcal{B}_{kj}(t)=A_{kj}^{\ast}(t).\]
Inserting the last coefficients into Eq. (100), one obtains the following expression for the effective Hamiltonian
\[\hat{H}_{H}(t)=\frac{i}{2}\sum_{jk}\Bigg{[}A_{kj}(t)\hat{a}_{j}^{ \dagger}(t)\hat{a}_{k}(t)+B_{kj}^{\ast}(t)\hat{a}_{j}^{\dagger}(t)\hat{a}_{k}^ {\dagger}(t)-\text{h.c.}\Bigg{]}, \tag{114}\]
where the subscript \(H\) conveys that the operator is represented in the Heisenberg picture of quantum mechanics.
Moving to the Schrödinger picture, the last Hamiltonian takes the form
\[\hat{H}_{S}(t)=\frac{i}{2}\sum_{jk}\Bigg{[}A_{kj}(t)\hat{b}_{j}^{ \dagger}\hat{b}_{k}+B_{kj}^{\ast}(t)\hat{b}_{j}^{\dagger}\hat{b}_{k}^{\dagger} -\text{h.c.}\Bigg{]}, \tag{115}\]
where the Heisenberg annihilation (and creation) operators are defined as \(\hat{a}_{k}(t)=\hat{U}_{S}^{\dagger}(t)\hat{b}_{k}\hat{U}_{S}(t)\), with \(\hat{U}_{S}(t)\) being the time evolution operator generated by the Hamiltonian (115).
|
2309.16792 | Agent Coordination via Contextual Regression (AgentCONCUR) for Data
Center Flexibility | A network of spatially distributed data centers can provide operational
flexibility to power systems by shifting computing tasks among electrically
remote locations. However, harnessing this flexibility in real-time through the
standard optimization techniques is challenged by the need for sensitive
operational datasets and substantial computational resources. To alleviate the
data and computational requirements, this paper introduces a coordination
mechanism based on contextual regression. This mechanism, abbreviated as
AgentCONCUR, associates cost-optimal task shifts with public and trusted
contextual data (e.g., real-time prices) and uses regression on this data as a
coordination policy. Notably, regression-based coordination does not learn the
optimal coordination actions from a labeled dataset. Instead, it exploits the
optimization structure of the coordination problem to ensure feasible and
cost-effective actions. A NYISO-based study reveals large coordination gains
and the optimal features for the successful regression-based coordination. | Vladimir Dvorkin | 2023-09-28T18:39:42Z | http://arxiv.org/abs/2309.16792v2 | # Agent Coordination via Contextual Regression (AgentCONCUR) for Data Center Flexibility
###### Abstract
A network of spatially distributed data centers can provide operational flexibility to power systems by shifting computing tasks among electrically remote locations. However, harnessing this flexibility in real-time through the standard optimization techniques is challenged by the need for sensitive operational datasets and substantial computational resources. To alleviate the data and computational requirements, this paper introduces a coordination mechanism based on contextual regression. This mechanism, abbreviated as AgentCONCUR, associates cost-optimal task shifts with public and trusted contextual data (e.g., real-time prices) and uses regression on this data as a coordination policy. Notably, regression-based coordination does not learn the optimal coordination actions from a labeled dataset. Instead, it exploits the optimization structure of the coordination problem to ensure feasible and cost-effective actions. A NYISO-based study reveals large coordination gains and the optimal features for the successful regression-based coordination.
Contextual learning, data centers, feature selection, regression, sustainable computing, system coordination
## I Introduction
Coordinated operations of bulk power systems and coupled infrastructures allow for leveraging their complementarity and offsetting inefficiencies, thus leading to enhanced performance. Coordination schemes synchronize grid operations with distribution [1], natural gas [2], and district heating [3] systems, and more recently, a large coordination potential has emerged from the networks of data centers (NetDC) [4]. Their unique coordination advantage is in _spatial flexibility_, which distributed data centers provide by shifting computing tasks among electrically remote locations. This flexibility resource will be important for future power grids, as electricity demand of data centers is rapidly growing, and is expected to reach 35 GW by 2030 in the U.S. alone [5]. Even at present, the coordination potential is significant: training a single GPT-3 language model - the core of the popular ChatGPT chatbot - consumes as much as 1.3 GWh [6]. Allocating such energy-intensive tasks in the NetDC is thus likely to predetermine the dispatch cost and emission intensity in adjacent grids.
The growing environmental footprint of computing has encouraged large internet companies to optimize NetDC operations in a carbon-aware manner. Using online emission monitoring tools, such as WattTime.org and ElectricityMaps.com, they smartly time and allocate computing tasks in regions with the least emission footprint [7, 8]. However, the sole reliance on limited emission data is a form of _grid-agnostic_ coordination, which respects NetDC constraints yet ignores those of power grids. For _grid-aware_ coordination, the literature offers three coordination mechanisms: demand response [4], enhanced market design [9], and co-optimization of grid and NetDC operations [10]. In practice, participation of data centers in demand response is very limited due to performance concerns [4]. While the second mechanism integrates the spatial flexibility within market-clearing algorithms and even features robust market properties [11], it remains challenging to fully represent complex data center objectives (e.g., quality of service) and constraints (e.g., latency) via a single utility function. The latter co-optimization mechanism models the _ideal_ power-NetDC coordination with the potential for the full representation of operational constraints, akin to coordination models for conventional energy infrastructures [1, 2, 3]. However, large data requirements and short operational time frames hinder such coordination in practice.
This paper develops a new, regression-based mechanism for grid-aware coordination of power systems and NetDC, termed AgentCONCUR. Unlike optimization-based coordination, the regression solely acts on available contextual grid information, while approximating the optimal decision-making of the two systems. As such, AgentCONCUR resembles industry practices in [7] and [8] by relying on limited grid data, while also leveraging the optimization structure of the ideal coordination. Specifically, this paper contributes by:
1) Developing a bilevel co-optimization of the power grid and NetDC operations, where power system decision-making is constrained by that of NetDC. Similar to grid-aware models in [9, 10, 11], this model takes the power system perspective, but it represents the NetDC as a separate optimization problem with customer-oriented objectives and constraints (e.g., latency). This co-optimization provides the ideal solution.
2) Devising a contextual regression policy that efficiently approximates the ideal coordination. The policy feasibility and cost-consistency are ensured by the new training optimization, which inherits the ideal optimization structure. Using sufficiently many operational scenarios in training allows for robust and cost-consistent performance across testing scenarios. Furthermore, the proposed training allows for the optimal coordination feature selection, such that the coordination can be made possible at different data-intensity requirements.
3) Performing a case study on the New York ISO system to estimate the cost-saving potential in the peak-hour coordination and its approximation by AgentCONCUR. Our results
reveal practical trade-offs between the amount of contextual information (features) and the efficiency of coordination.
This paper also contributes to decision-focused learning. While the prior work focused on contextual _data_ predictions, e.g., demand [12] or wind power generation [13] data, here we contextually predict the coordination _decisions_ instead.
In the remainder, Section II details decision-making of power grid and NetDC operators, and then presents the ideal coordination. Section III introduces the contextual regression approach for AgentCONCUR. Section IV applies AgentCONCUR to New York ISO system and Section V concludes.
_Notation:_ Lower- and upper-case letters denote vectors and matrices, respectively. For some matrix \(A\), \(a_{ij}\) denotes its element at position \((i,j)\). Symbol \({}^{\top}\) stands for transposition, and \({}^{\star}\) denotes the optimal value. Vectors 0 and 1 are of zeros and ones, respectively. Operator \(\langle\cdot\rangle_{\mathrm{F}}\) is the Frobenius inner product, and \(\|\cdot\|_{p}\) denotes the \(p-\)norm.
## II Optimizing Power and NetDC Coordination
We consider the power-NetDC coordination problem, where agents interface as pictured in Fig. 1. The NetDC operator chooses the spatial allocation of computing tasks among data centers, where the tasks come from a spatially distributed population of users. The allocation criterion is the minimum of network _latency_ - a time delay between sending, executing and sending back the result of a computational task for all users. The resulting task allocation shapes electricity demand, which is then used in the optimal power flow (OPF) problem for power system dispatch. The two problems can thus be solved in a coordinated manner to minimize the dispatch cost.
The coordination is performed by means of spatial shifts of computing tasks using _virtual links_ connecting data centers into a network [9]. These shifts must be coordinated to satisfy both power system and NetDC objectives. To enable such coordination, we formulate the following bilevel optimization, where the power system operator acts as a leader, whose decision space is constrained by the NetDC operator, acting as a follower:
\[\underset{x,\varphi}{\text{minimize}} c_{\text{opf}}(x) \triangleright\text{OPF cost}\] subject to \[x\in\mathcal{X}_{\text{opf}}(y) \triangleright\text{OPF feasibility}\] \[\underset{y}{\text{minimize}} c_{\text{net-dc}}(y) \triangleright\text{Latency loss}\] subject to \[y\in\mathcal{Y}_{\text{net-dc}}(\varphi)\triangleright\text{NetDC feasibility}\]
where the task shift \(\varphi\) is the _coordination variable_. The lower-level problem takes the request \(\varphi\) as input and minimizes the latency loss by selecting the new task allocation \(y\). The optimized allocation then enters the power flow equations, and the power system operator computes the new dispatch \(x\) which minimizes the cost. The optimal solution \(\varphi^{\star}\) achieves coordination that is cost-optimal and feasible for the two systems.
The rest of this section details decision-making of the two systems, and then presents the bilevel coordination problem.
### _Power System Optimization_
The operational setting builds on the standard DC-OPF problem [14], which computes the least-cost generation dispatch \(p\in\mathbb{R}^{b}\), within limits \(\underline{p},\overline{p}\in\mathbb{R}^{b}_{+}\), that satisfies electricity net demand, i.e., load \(d\in\mathbb{R}^{b}_{+}\) minus non-dispatchable renewable generation \(r\in\mathbb{R}^{b}_{+}\). The dispatch cost is modeled using a quadratic function with the first- and second-order coefficients \(c\in\mathbb{R}^{b}_{+}\) and \(C\in\mathbb{S}^{b}_{+}\), respectively. The power flows are computed using the matrix of power transfer distribution factors \(F\in\mathbb{R}^{b\times l}\), and must respect the line capacity \(\overline{f}\in\mathbb{R}^{l}_{+}\). In rare cases, when generation is insufficient to satisfy all loads, we model load shedding \(\ell\in\mathbb{R}^{b}\) with the most expensive cost \(s\gg c\). In this notation, the OPF problem is:
\[\underset{p,\ell}{\text{minimize}} c^{\top}p+p^{\top}Cp+s^{\top}\ell\] (1a) subject to \[\mathbb{1}^{\top}(p+r+\ell-d-\Gamma\vartheta)=0, \tag{1b}\] \[|F(p+r+\ell-d-\Gamma\vartheta)|\leqslant\overline{f},\] (1c) \[\underline{p}\leqslant p\leqslant\overline{p},\ 0\leqslant\ell\leqslant d, \tag{1d}\]
which minimizes the total dispatch cost (1a) subject to the power balance equation (1b), minimum and maximum power flow, generation, and load shedding limits in (1c)-(1d), respectively. Modelling Power-NetDC coordination, we distinguish between conventional loads \(d\) and power consumption by data centers \(\Gamma\vartheta\) in constraints (1b) and (1c), where auxiliary matrix \(\Gamma\in\mathbb{R}^{b\times n}\) converts computing loads \(\vartheta\in\mathbb{R}^{n}\) of \(n\) data centers into electrical loads. Although restrictive, this linear conversion model is consistent with power consumption models under different utilization regimes of data centers [15].
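A hedged convex-programming sketch of problem (1), for a fixed data-center loading \(\vartheta\), is given below; all input arrays are assumed to be NumPy data with the dimensions introduced above, and the choice of modeling layer (cvxpy) is ours, not prescribed by the text.

```python
import cvxpy as cp

def solve_opf(c, C, s, F, f_max, p_min, p_max, d, r, Gamma, theta):
    """DC-OPF of Eq. (1) for a given data-center electricity load Gamma @ theta."""
    p = cp.Variable(len(c))            # generation dispatch
    ell = cp.Variable(len(d))          # load shedding
    net = p + r + ell - d - Gamma @ theta
    constraints = [
        cp.sum(net) == 0,              # (1b) power balance
        cp.abs(F @ net) <= f_max,      # (1c) line flow limits
        p >= p_min, p <= p_max,        # (1d) generation limits
        ell >= 0, ell <= d,            # (1d) shedding limits
    ]
    cost = c @ p + cp.quad_form(p, C) + s @ ell   # C assumed symmetric PSD
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return p.value, ell.value, cost.value
```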
### _NetDC Optimization_
The NetDC operator allocates computing tasks of \(m\) users among \(n\) data centers. For some computing demand \(\delta\in\mathbb{R}^{m}\), allocation \(W\in\mathbb{R}^{n\times m}\) is optimized to satisfy conservation conditions \(\vartheta_{i}=\sum_{j=1}^{m}w_{ij}\) and \(\delta_{j}=\sum_{i=1}^{n}w_{ij}\), enforced
Fig. 1: Interfaces and notation of the power system, network of data centers (NetDC), and communication network between data centers and users.
for each data center \(i\) and user \(j\), respectively. The goal is to minimize the latency, which is proportional to geodesic distance \(G\in\mathbb{R}^{n\times m}\) between users and data centers [16]. The proxy function for the aggregated latency is then defined as
\[\mathcal{L}:\mathbb{R}^{n\times m}\mapsto\mathbb{R},\quad\mathcal{L}(W)=\sum_{i =1}^{n}\sum_{j=1}^{m}g_{ij}w_{ij}. \tag{2}\]
The optimal task allocation problem then becomes:
\[\underset{W,\vartheta\geqslant 0}{\text{minimize}} \mathcal{L}(W)+\varrho\|W\|_{2}^{2}\] (3a) subject to \[W^{\top}\mathds{1}=\delta, \tag{3b}\] \[W\mathds{1}=\vartheta, \tag{3c}\]
which minimizes latency subject to task conservation conditions (3b) and (3c). The objective function (3a) additionally includes a quadratic term that evenly allocates tasks among equally remote data centers, using a small parameter \(\varrho>0\).
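A minimal cvxpy sketch of the latency-optimal allocation (3); the distance matrix \(G\), the demand \(\delta\), and the regularization weight follow the notation above and are assumed to be given.

```python
import cvxpy as cp

def allocate_tasks(G, delta, rho=1e-5):
    """Latency-optimal task allocation, problem (3)."""
    n, m = G.shape
    W = cp.Variable((n, m), nonneg=True)
    theta = cp.Variable(n, nonneg=True)
    constraints = [cp.sum(W, axis=0) == delta,    # (3b) user demand served
                   cp.sum(W, axis=1) == theta]    # (3c) data-center loading
    latency = cp.sum(cp.multiply(G, W))           # latency proxy L(W) of Eq. (2)
    cp.Problem(cp.Minimize(latency + rho * cp.sum_squares(W)), constraints).solve()
    return W.value, theta.value
```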
Although the data center loading \(\vartheta^{\star}\) is latency-optimal, it is ignorant of the processes in the power system and may shape an expensive electricity demand allocation \(\Gamma\vartheta^{\star}\). In this case, consider a task shift request \(\varphi\in\mathbb{R}^{k}\) along \(k=\frac{n(n-1)}{2}\) virtual links available in a fully connected NetDC. Also, consider the incidence matrix \(A\in\mathbb{R}^{n\times k}\) of the _directed_ NetDC, where
\[a_{ij}=\begin{cases}+1,&\text{if }i=n\\ -1,&\text{if }i=n^{\prime}\end{cases}\qquad\forall j=(n,n^{\prime})\in 1,\dots,k.\]
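A small helper illustrating this construction is given below; the ordering of the \(k=n(n-1)/2\) virtual links is an arbitrary, illustrative choice.

```python
import numpy as np
from itertools import combinations

def incidence_matrix(n):
    """Incidence matrix A of the fully connected, directed NetDC."""
    links = list(combinations(range(n), 2))   # link j connects data centers (i, i')
    A = np.zeros((n, len(links)))
    for j, (i, i_prime) in enumerate(links):
        A[i, j], A[i_prime, j] = 1.0, -1.0
    return A, links
```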
Then, given some nominal solution \(W^{\star},\vartheta^{\star}\) from (3), and some exogenous task shift request \(\varphi\), the tasks are re-allocated in a latency-aware fashion by solving the following optimization:
\[\underset{W,\vartheta\geqslant 0}{\text{minimize}} \frac{1}{2}\|\mathcal{L}(W-W^{\star})\|_{2}^{2}\] (4a) subject to \[W^{\top}\mathds{1}=\delta, \tag{4b}\] \[W\mathds{1}=\vartheta,\] (4c) \[A\varphi=\vartheta-\vartheta^{\star},\] (4d) \[\mathcal{L}(W-W^{\star})\leqslant\alpha\mathcal{L}(W^{\star}). \tag{4e}\]
The problem seeks a new task allocation \(W\) that deviates the least from the latency-optimal solution \(W^{\star}\). Indeed, the objective function (4a) minimizes the latency loss, subject to the task conservation requirements (4b)-(4c). Equation (4d) re-distributes the nominal loading \(\vartheta^{\star}\) with respect to exogenous task shift \(\varphi\). The last constraint (4e) ensures that the increase in aggregated latency does not exceed an \(\alpha-\)fraction of the nominal latency. Thus, problem (4) does not permit task shifts \(\varphi\) that increase the network latency beyond the allowable amount.
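For completeness, a cvxpy sketch of the re-allocation problem (4), taking the nominal solution from (3), the incidence matrix \(A\), a shift request \(\varphi\), and the latency budget \(\alpha\) as given inputs.

```python
import cvxpy as cp

def reallocate(G, delta, W_star, theta_star, A, phi, alpha):
    """Latency-aware task re-allocation under a shift request phi, problem (4)."""
    n, m = G.shape
    W = cp.Variable((n, m), nonneg=True)
    theta = cp.Variable(n, nonneg=True)
    lat_change = cp.sum(cp.multiply(G, W - W_star))    # L(W - W*)
    L_star = float((G * W_star).sum())                 # nominal latency L(W*)
    constraints = [
        cp.sum(W, axis=0) == delta,                    # (4b)
        cp.sum(W, axis=1) == theta,                    # (4c)
        A @ phi == theta - theta_star,                 # (4d)
        lat_change <= alpha * L_star,                  # (4e)
    ]
    cp.Problem(cp.Minimize(0.5 * cp.square(lat_change)), constraints).solve()
    return W.value, theta.value
```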
### _Bilevel Optimization for Power and NetDC Coordination_
Since the vector of spatial task shifts \(\varphi\) affects the OPF costs in (1) and the latency optimality loss in (4) simultaneously, \(\varphi\) is modeled as a coordination variable between power system and NetDC operators. The cost-optimal and feasible task shift is found by solving the following bilevel optimization:
\[\underset{p,\ell,\varphi}{\text{minimize}}\quad c^{\top}p+p^{\top}Cp+s^{\top}\ell\] (BL.U) subject to \[\mathbb{1}^{\top}(p+r+\ell-d-\Gamma\vartheta)=0,\] \[|F(p+r+\ell-d-\Gamma\vartheta)|\leqslant\overline{f},\] \[\underline{p}\leqslant p\leqslant\overline{p},\ 0\leqslant\ell\leqslant d,\] \[\vartheta\in\underset{W,\vartheta}{\text{argmin}}\quad \frac{1}{2}\|\mathcal{L}(W-W^{\star})\|_{2}^{2}\] (BL.L) subject to \[W^{\top}\mathds{1}=\delta,\quad:\mu\] \[W\mathds{1}=\vartheta,\quad:\nu\] \[A\varphi=\vartheta-\vartheta^{\star},\quad:\kappa\] \[w_{ij}\geqslant 0,\,\forall i,j,\quad:\omega\] \[\mathcal{L}(W-W^{\star})\leqslant\alpha\mathcal{L}(W^{\star}),\quad:\gamma\]
which identifies the cost-optimal task shift \(\varphi\) in (BL.L), anticipating the response of NetDC in (BL.L) in terms of new data center loading \(\vartheta\). Here, the colon signs define the dual variables associated with each constraint in (BL.L). The common solution strategy for this problem is to replace (BL.L) with its Karush-Kuhn-Tucker (KKT) conditions [17], yielding a mixed-integer formulation detailed in Appendix A.
## III Agent Coordination via Contextual Regression (AgentCONCUR)
Solving the bilevel program (BL) in real time is challenging because of large data requirements, the lack of coordination interfaces between the power grid and NetDC operators, and the computational burden of the bilevel problem, which may fail to provide the solution within operational time frames. To bypass these coordination challenges, we adopt a _contextual_ regression approach, which consists of two stages. At the first (planning) stage, a regression policy is optimized to associate the cost-optimal task shifts with contextual information that is easy to access in real time. At the second (real-time) stage, the trained regression policy instantly maps the contextual information into task shifts. The contextual information includes partial data that is strongly correlated with grid conditions, such as aggregated load and generation statistics, electricity prices correlated with costs, and power flows correlated with bus injections. Such contextual information is available online from many power system operators worldwide, e.g., [18].
Towards formulating the problem, let \(x\) denote a feature vector collecting contextual data, and \(\phi(x)\) denote the regression policy. We focus on affine policies, i.e.,
\[\phi(x)\triangleq\beta_{0}+\beta_{1}x\in\mathbb{R}^{k},\]
where \(\beta=(\beta_{0},\beta_{1})\) are regression parameters. Once they are optimized, for some feature realization \(\widehat{x}\), the coordination in real-time proceeds as follows:
\[\varphi=\begin{cases}\phi(\widehat{x}),&\text{if feasible for NetDC and OPF}\\ 0,&\text{otherwise}.\end{cases}\]
That is, implement the regression solution if the task shifts are feasible for NetDC operations and also produce an OPF-feasible electricity load profile. Otherwise, proceed with a typically more expensive yet feasible non-coordinated solution.
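In code, the real-time stage reduces to a single affine map followed by a feasibility check; the function `is_feasible` below stands for the NetDC and OPF feasibility tests of problems (4) and (1) and is deliberately left abstract.

```python
import numpy as np

def real_time_shift(beta0, beta1, x_hat, is_feasible):
    """Apply the trained policy; fall back to no coordination if infeasible."""
    phi = beta0 + beta1 @ x_hat            # affine policy phi(x)
    return phi if is_feasible(phi) else np.zeros_like(phi)
```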
In the remainder, we first present the base regression training, used as a reference to optimize \(\phi(x)\). Then, we present the proposed training optimization at the core of AgentCONCUR.
### _Base Regression_
The base approach to optimize policy \(\phi(x)\) is two-fold:
1. Collect a dataset \(\{(x_{i},\varphi_{i}^{\star})\}_{i=1}^{q}\) of \(q\) records, where each record \(i\) includes contextual features \(x_{i}\) and the optimal solution \(\varphi_{i}^{\star}\) to problem (BL), specific to record \(i\).
2. Solve a convex optimization problem: \[\underset{\|\beta\|_{1}\leqslant\varepsilon}{\text{minimize}} \frac{1}{q}\sum_{i=1}^{q}\lVert\beta_{0}+\beta_{1}x_{i}-\varphi_{i}^{\star}\rVert_{2}^{2}\] (5) which minimizes the regularized mean squared error over \(q\) historical records. Here, we chose \(L_{1}-\)regularization, known as _Lasso_ [19], which encourages sparsity of \(\beta\) up to a selected regularization parameter \(\varepsilon\in\mathbb{R}_{+}\). For any given value \(\varepsilon\), optimization (5) selects optimal coordination features and minimizes the prediction error simultaneously.
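A sketch of this fit in the constrained (norm-ball) form of problem (5) is shown below; a penalized scikit-learn Lasso would be an equivalent alternative up to the usual constrained-versus-penalized correspondence.

```python
import cvxpy as cp

def fit_base_policy(X, Phi, eps):
    """Base regression (5). X: q x f feature matrix, Phi: q x k optimal task shifts."""
    q, f = X.shape
    k = Phi.shape[1]
    beta0 = cp.Variable(k)
    beta1 = cp.Variable((k, f))
    loss = 0
    for i in range(q):
        loss += cp.sum_squares(beta0 + beta1 @ X[i] - Phi[i])
    ball = [cp.norm(cp.hstack([beta0, cp.vec(beta1)]), 1) <= eps]   # ||beta||_1 <= eps
    cp.Problem(cp.Minimize(loss / q), ball).solve()
    return beta0.value, beta1.value
```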
While being a _data-only_ approximation of the bilevel problem (BL), this approach suffers from at least two drawbacks that may prevent its practical implementation. First, although it minimizes a prediction error, it may result in large decision errors in terms of OPF costs, e.g., when under- and over-predictions of task shifts have asymmetric cost implications. This may result in a large regret, i.e., the average distance between the OPF costs induced by trained policy \(\phi\) and the OPF costs of the bilevel problem (BL). Second, optimization (5) is myopic to the upper- and lower-level feasible regions, thus risking violating operational limits of both power system and NetDC. These two observations motivate us to internalize the cost and feasibility criteria into model training.
### _Cost- and Feasibility-Aware Regression_
Optimizing policy \(\phi(x)\) for AgentCONCUR, we leverage the optimization structure of bilevel model (BL) to guarantee the least-cost and feasible regression-based coordination across available historical records. The proposed optimization is:
\[\underset{p,\ell,\varphi,W,\vartheta,\beta}{\text{minimize}} \frac{1}{q}\sum_{i=1}^{q}(c^{\top}p_{i}+p_{i}^{\top}Cp_{i}+s^{\top}\ell_{i})\] (6a) subject to \[\varphi_{i}=\beta_{0}+\beta_{1}x_{i},\;\lVert\beta\rVert_{1}\leqslant\varepsilon, \tag{6b}\] \[\mathbb{1}^{\top}(p_{i}+r_{i}+\ell_{i}-d_{i}-\Gamma\vartheta_{i})=0,\] (6c) \[|F(p_{i}+r_{i}+\ell_{i}-d_{i}-\Gamma\vartheta_{i})|\leqslant\overline{f},\] (6d) \[\underline{p}\leqslant p_{i}\leqslant\overline{p},\;\mathbf{0}\leqslant\ell_{i}\leqslant d_{i},\] (6e) KKT conditions of (BLL), \[\forall i=1,\ldots,q\]
which minimizes the sample average OPF cost, subject to a set of upper-level OPF constraints in (6c)-(6e) and a set of KKT conditions of the lower-level problem (BLL), both enforced on each instance \(i\) of the training dataset. Constraint (6b) couples many instances of ideal coordination via regression policy and its role is two-fold: it structures task shifts and selects the optimal coordination features using the \(L_{1}-\)regularization. This regularization also bounds the optimal solution, which is necessary when \(\varphi_{i}=\beta_{0}+\beta_{1}x_{i}\) is a rank-deficient system of equations, i.e., having more features than virtual links.
Similar to the base regression, the task shifts are restricted to the affine policy of contextual information. However, problem (6) also anticipates how the affine restriction affects the average OPF costs. Indeed, the choice of parameters \(\beta\) affects the task shift requests \(\varphi_{1},\ldots,\varphi_{q}\), which then alter electricity load of data centers \(\vartheta_{1},\ldots,\vartheta_{q}\) as they are coupled through the KKT conditions of the lower-level problem (BLL). Thus, the optimal solution of problem (6) returns regression parameters that are cost-optimal on average under the affine restriction.
Moreover, by solving problem (6), we also guarantee the feasibility of power system and NetDC operations across historical records. Indeed, \(\beta=0\) is always a feasible choice, which corresponds to the latency-optimal solution from problem (3). Hence, in the worst-case, problem (6) chooses a non-coordinated solution to ensure feasibility for both systems. We can also reason about the feasibility of \(\phi(x)\) for unseen operational scenarios in a probabilistic sense. Indeed, the theory of sample-based approximation of stochastic programs suggests that feasibility on unseen, out-of-sample scenarios improves as the sample size \(q\) increases [20]. In the numerical experiments, we investigate the relationship between the sample size and the out-of-sample feasibility of the optimized policy \(\phi(x)\).
The training optimization (6) is solved at the operational planning stage using a similar mixed-integer reformulation to that in Appendix A. Although the problem is NP-hard, modern optimization solvers, e.g., _Gurobi_, make the optimization more practical than its worst-case complexity would imply. Then, at the real-time stage, the trained regression model instantly maps contextual features into computing task shifts.
## IV New York ISO Case Study
### _Data and Settings_
We use an \(11\)-zone aggregation of the NYISO power system depicted in Fig. 2, sourcing data from [21]. This zonal layout corresponds to the granularity of the contextual data from the NYISO website [18], which is used to train coordination policies. The power system includes approximately \(30\) GW of electricity demand, supplied by approximately \(42\) GW of conventional generation (oil, gas, and hydro) and by \(1.7\) GW of renewable generation (wind and solar). We install \(n=5\) data centers in the West, Central, North, NYC, and MillWd zones, serving customers in all \(m=11\) NYISO zones. Computing loads can thus be shifted using \(k=n(n-1)/2=10\) virtual links. The task shifts outside the NY state area are not permitted. The computing demand \(\delta_{i}\) is assumed to be proportional to the maximum peak load \(d_{i}\) in the \(i^{\text{th}}\) area, and will be scaled to achieve different NetDC penetration levels in the range from 5% to 30% of the peak system load.
The operational data spans the period from January 1, 2018, to June 30, 2019, and includes 546 peak-hour records. Each record contains the following contextual features, which are readily available on the NYISO website [18]:
* Zonal real-time electricity demand \((d)\);
* Zonal electricity prices \((\lambda)\);
* Total renewable generation, then disaggregated by zones using data on existing renewable installations \((r)\);
* Power flows between aggregation zones \((f)\).
Each record includes 45 contextual features, so that the coordination policy based on these features takes the form:
\[\phi\triangleq\beta_{0}+\beta_{1}^{d}\begin{bmatrix}d_{1}\\ \vdots\\ d_{11}\end{bmatrix}+\beta_{1}^{\lambda}\begin{bmatrix}\lambda_{1}\\ \vdots\\ \lambda_{11}\end{bmatrix}+\beta_{1}^{r}\begin{bmatrix}r_{1}\\ \vdots\\ r_{11}\end{bmatrix}+\beta_{1}^{f}\begin{bmatrix}f_{1}\\ \vdots\\ f_{12}\end{bmatrix}\]
To optimize and test the policy, we randomly select \(q=250\) records for training and reserve the remaining 296 records for testing, unless stated otherwise. The performance of the trained coordination policies is discussed using the unseen, test set. The remaining settings include default regularization parameters \(\varrho=10^{-5}\) and \(\varepsilon=10\). All data, codes and default settings needed to replicate the results are available at
[https://github.com/wdvorkin/AgentCONCUR](https://github.com/wdvorkin/AgentCONCUR)
### _Efficiency Gains of Power and NetDC Coordination_
The NYISO dispatch costs are compared in four cases:
* _No coordination:_ NetDC electricity demand obeys the latency-optimal solution from problem (3);
* _Ideal coordination:_ NetDC demand obeys the solution of the ideal coordination by means of bilevel problem (BL);
* _Base regression:_ NetDC demand is shifted according to the base regression policy optimized in (5);
* _AgentCONCUR:_ NetDC demand is shifted according to the regression policy optimized in (6).
Our results reveal that the NYISO system benefits from coordinating spatial task shifts in the amount of \(\approx 1.9\) GWh from the densely populated South towards the Central, Northern, and Western parts of the state, as shown in Fig. 2. Noticeably, the ideal coordination consistently uses the same 4 out of 10 virtual links, while the AgentCONCUR coordination policy uses more active links. This difference is due to the less flexible affine policy structure, which requires more links to ensure feasibility across the entire training dataset simultaneously, as opposed to the per-scenario feasibility satisfaction provided by the ideal coordination.
Figure 3 illustrates the discrepancies in dispatch costs in all four cases. As the penetration of NetDC increases, the non-coordinated solution demonstrates rapid, quadratic growth of dispatch costs in the NYISO dominated by conventional generation. On the other hand, the ideal coordination demonstrates a rather linear growth (e.g., see the bottom plot) of dispatch costs thanks to the cost-aware allocation of computing tasks. However, the extent of cost reduction significantly depends on the maximum allowable latency loss \(\alpha,\) specified by the NetDC operator. For a small loss of 25%, users are likely to observe no difference in the quality of service. However, this enables savings of up to 24.5% of dispatch costs in the ideal coordination case, depending on the penetration level. The cost-saving potential increases to 49.0% and 56.7% in the case of double and tripled latency loss, respectively, when users experience more noticeable delays during peak-hour operation.
This cost-saving potential is exploited by both regression coordination policies. However, the base regression policy, which ignores power system and NetDC operational constraints, often results in substantially higher dispatch costs, which tend to stay closer to the non-coordinated solution than to the ideal one. On the other hand, the AgentCONCUR policy, which is aware of the constraints of both systems, efficiently approximates the ideal solution, i.e., staying relatively close to the ideal solution in many cases depicted in Fig. 3. However, it tends to show a larger approximation gap as the allowable latency loss and NetDC penetration increase.
Fig. 3: Average NYISO dispatch cost across the testing dataset under different coordination models for the varying NetDC penetration level and maximum allowable latency loss. The area between the dashed lines defines the cost-saving potential for regression-based coordination.
Fig. 2: 11-zone NYISO system with 5 data centers. The arrows show active virtual links under different coordination solutions for the 20% NetDC penetration level and 100% maximum latency loss. The change of NetDC electricity demand is given as the average across the test dataset.
### _Feasibility of Regression-Based Coordination_
The approximation gaps reported in Fig. 3 are due to infeasible task shifts, i.e., the shifts that violate power system constraints, NetDC constraints, or both. Whenever the task shift is infeasible in real-time, the two operators resort to a more expensive yet feasible non-coordinated solution. However, the feasibility of regression-based coordination improves with a larger size of the training dataset, as illustrated in Fig. 4. The AgentCONCUR policy dominates the base one and achieves zero violations of power system constraints (e.g., no load shedding) with sample size \(q\geqslant 150\). Moreover, for \(q\geqslant 150\), it keeps infeasibility of NetDC operations below 7%. The dominance of AgentCONCUR is _consistent_, which is important when the set of representative records is limited. We also observed similar results across other NetDC penetration and latency parameters.
Increasing the size of the training dataset also increases the computational burden of problem (6), as shown in Fig. 5. For a reasonable choice of \(q=150\), the CPU times are \(\approx 8\) hours. However, this time is required for training at the planning stage, i.e., well before the real-time operations.
### _Coordination Feature Selection_
While 45 contextual features are used in training, we demonstrate that the power-NetDC coordination can also be achieved with fewer features, i.e., with lower data requirements.
We perform feature selection using regularization parameter \(\varepsilon\) in problem (6). The smaller the \(\varepsilon\), the fewer features are used by the policy. Table I reports the selected features for various assignments of \(\varepsilon\). Observe that, as the feature space shrinks (\(\downarrow\varepsilon\)), the policy gives less priority to renewable power data, which is reasonable as the NYISO has a very small installed renewable capacity at present (e.g., only 1.7 GW). As the space further shrinks, less priority is given to electricity prices, which become less informative in uncongested cases. The power flows and electricity demands, on the other hand, are consistently present among the selected features for AgentCONCUR.
Figure 6 further reveals the trade-off between the dispatch cost and amount of selected features. Approximately, the same level of costs (see the dashed trend line) can be achieved in the range \(\varepsilon\in[1,10^{5}]\), selecting from 6 to \(30+\) features. Moreover, parameter \(\varepsilon\) can be optimized to achieve the optimal dispatch cost under regression-based coordination. Here, the optimal \(\varepsilon^{\star}=10\) selects 24 features for coordination.
Notably, for \(\varepsilon=0.1\), the coordination of task shifts within the entire NetDC is performed with a _single_ feature, i.e., electricity demand of the largest demand center - New York City. Although this is not the cost-optimal choice, this is the least data-intensive coordination, which still performs better than the non-coordinated solution, as also shown in Fig. 6.
Fig. 4: Infeasibility of regression-based coordination as the function of the training dataset size. Results are for 20% NetDC penetration and 25% maximum latency loss, averaged across 100 random draws of training scenarios.
Fig. 5: Average CPU times to solve the mixed-integer reformulation of the bilevel program (6). NetDC penetration level is 20%.
Fig. 6: OPF costs for varying regularization parameter \(\varepsilon\). The blue dots depict the average costs obtained on the test dataset, and the dash line is the trend. The red dot marks the optimal selection that minimizes the cost on average.
## V Conclusions
To streamline the economic coordination of power grids and data centers, this work proposed to transition from data-intensive optimization-based coordination to a lightweight regression-based coordination. Recognizing the risks of trusting a regression model with coordinating two critical infrastructure systems, we devised a new training algorithm which inherits the structure of the optimization-based coordination and enables feasible and cost-consistent computing task shifts in real time. The case study on the NYISO system with various NetDC penetration levels revealed a 24.5-56.7% cost-saving potential, most of which was shown to be delivered by regression policies at different data-intensity preferences.
There are some notable limitations that motivate several directions for future work. First, while the optimization-based coordination remunerates data center flexibility via duality theory [11], such a duality-based solution is unavailable under regression policies. It is thus relevant to study the integration of regression policies into real-time electricity markets. Moreover, while the current focus has been on spatial flexibility for peak-hour coordination, it is also relevant to explore regression policies for harnessing both spatial and temporal flexibility, as proposed before for optimization-based coordination [9]. This, in turn, may result in an increased computational burden and require decomposition. Lastly, although the proposed mechanism does not require any private data exchange at the time of coordination, it still needs sensitive data from the power system and NetDC for training at the planning stage. One potential solution to remove this practical bottleneck is the use of data obfuscation algorithms [22], yet it will require additional modifications to the training procedure to eliminate the effect of noise.
## Acknowledgements
Vladimir Dvorkin is supported by the Marie Sklodowska-Curie Actions COFUND Postdoctoral Program, Grant Agreement No. 101034297 - project Learning ORDER.
### _Mixed-Integer Reformulation of the Bilevel Problem_
The Karush-Kuhn-Tucker conditions of the lower-level problem (BLL) are derived from the following Lagrangian:
\[\underset{\mu,\nu,\kappa,\omega,\gamma}{\text{max}}\;\underset{W,\vartheta}{\text{min}} \frac{1}{2}\|\mathcal{L}(W)-\mathcal{L}(W^{\star})\|_{2}^{2}-\mu^{\top}(W^{\top}\mathbf{1}-\delta)\] \[-\nu^{\top}(W\mathbf{1}-\vartheta)-\kappa^{\top}(A\varphi-\vartheta+\vartheta^{\star})\] \[+\langle\omega,W\rangle_{\text{F}}-\gamma(\mathcal{L}(W-W^{\star})-\alpha\mathcal{L}(W^{\star})).\]
The stationarity conditions are the partial derivatives of the Lagrangian with respect to primal variables and take the form:
\[\vartheta\colon\;\nu+\kappa=\mathbf{0}, \tag{7a}\] \[w_{ij}\colon\;g_{ij}(\mathcal{L}(W-W^{\star}))-\mu_{j}-\nu_{i}- \omega_{ij}-\gamma g_{ij}=0\] \[\forall i=1,\ldots,n,\;j=1,\ldots,m \tag{7b}\]
The primal feasibility amounts to the constraints of problem (4), while the dual feasibility requires the dual variables of the problem's inequalities to be non-negative, i.e.,
\[\omega_{ij}\geqslant 0,\forall i=1,\ldots,n,\;j=1,\ldots,m,\quad\gamma \geqslant 0. \tag{7c}\]
Finally, the complementarity slackness conditions read as
\[\omega_{ij}w_{ij}=0,\;\forall i=1,\ldots,n,\;j=1,\ldots,m,\] \[\gamma(\mathcal{L}(W-W^{\star})-\alpha\mathcal{L}(W^{\star}))=0,\]
which are non-convex. These constraints are addressed using an equivalent mixed-integer SOS1 reformulation [23]:
\[\{\omega_{ij},w_{ij}\}\in\text{SOS1}, \tag{7d}\] \[\{\gamma,\mathcal{L}(W-W^{\star})-\alpha\mathcal{L}(W^{\star})\} \in\text{SOS1}, \tag{7e}\]
where formulation \(\{x,y\}\in\text{SOS1}\) means that at most one variable may be nonzero. The equivalent reformulation of problem (BL) is then obtained when the lower-level problem (BLL) is replaced with constraints (4b)-(4e) and (7).
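A minimal gurobipy fragment showing how such SOS1 pairs can be declared is given below; only the complementarity structure is sketched, the remaining constraints of problems (6) and (BL) would be added to the same model, and the auxiliary `slack` variable is introduced because Gurobi's SOS constraints act on variables rather than on expressions.

```python
import gurobipy as gp
from gurobipy import GRB

model = gp.Model("kkt_complementarity")
w = model.addVar(lb=0.0, name="w_ij")           # a primal allocation entry
omega = model.addVar(lb=0.0, name="omega_ij")   # its non-negativity multiplier
gamma = model.addVar(lb=0.0, name="gamma")      # multiplier of the latency cap (4e)
slack = model.addVar(lb=0.0, name="latency_slack")
# slack would be tied to alpha * L(W*) - L(W - W*) by a linear constraint, e.g.
# model.addConstr(slack == alpha * L_star - latency_expr)
model.addSOS(GRB.SOS_TYPE1, [w, omega])         # (7d): at most one may be nonzero
model.addSOS(GRB.SOS_TYPE1, [gamma, slack])     # (7e)
```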
|
2309.13832 | Inspiral and Plunging Orbits in Kerr-Newman Spacetimes | We present the analytical solutions for the trajectories of particles that
spiral and plunge inward the event horizon along the timelike geodesics
following general non-equatorial paths within Kerr-Newman spacetimes. Our
studies encompass both bound and unbound motions. The solutions can be written
in terms of the elliptical integrals and the Jacobian elliptic functions of
manifestly real functions of the Mino time. They can respectively reduce to the
Kerr, Reissner-Nordstr$\ddot{o}$m, and Schwarzschild black holes in certain
limits of the spin and charge of the black holes, and can be compared with the
known ones restricted in equatorial motion. These explicit solutions may have
some implications for the gravitational wave emission from extreme mass-ratio
inspirals. | Yu-Chung Ko, Da-Shin Lee, Chi-Yong Lin | 2023-09-25T02:37:39Z | http://arxiv.org/abs/2309.13832v3 | # Inspiral and Plunging Orbits in Kerr-Newman Spacetimes
###### Abstract
We present the analytical solutions for the trajectories that spiral and plunge inward the event horizon along the timelike geodesics of particles following general non-equatorial paths within Kerr-Newman spacetimes. Our studies encompass both bound and unbound motions. The solutions can be written in terms of the elliptical integrals and the Jacobian elliptic functions of manifestly real functions of the Mino time, and can respectively reduce to the Kerr, Reissner-Nordstr\(\ddot{o}\)m, and Schwarzschild black holes in certain limits of the spin and charge of the black holes. The results can be compared with some of the known ones restricted in the equatorial plane. These explicit solutions may find applications such as the black hole accretion.
pacs: 04.70.-s, 04.70.Bw, 04.80.Cc
## I Introduction
The recent detections of gravitational waves emitted by the merging of binary systems confirmed the prediction made by Einstein a century earlier as a consequence of general relativity [1; 2; 3]. The capture of the spectacular images of the supermassive black holes M87* at the center of the M87 galaxy [4] and Sgr A* at the center of our galaxy [5] provides another scientific achievement: direct evidence of the existence of black holes. The black hole is one of the most mysterious stellar objects, arising as a solution of Einstein's field equations. In astrophysics, extreme mass-ratio inspirals (EMRIs), consisting of a stellar-mass object orbiting a massive black hole, have recently gained considerable attention for analyzing the gravitational wave signal to accurately test the predictions of general relativity in the strong regime of gravity. The gravitational wave signal generated through EMRIs, a key source of low-frequency gravitational waves to be observed by the planned space-based Laser Interferometer Space Antenna (LISA), provides a chance to measure various fascinating properties of supermassive black holes [8; 9; 10; 11].
The present work is motivated by EMRIs, which can be approximated as a light body traveling along a geodesic of the background spacetime of the massive black hole. In particular, the recent studies in [12; 13] have been devoted to inspirals of a particle on the equatorial plane starting asymptotically from the innermost stable circular orbit (ISCO) of Kerr black holes. They also derive a simple expression for the equatorial radial flow from the ISCO relevant to the dynamics of the accretion disk. These exact solutions may find applications in numerical accretion, in the gravitational waveforms generated by EMRIs, as well as in extending current theories of black hole accretion [17; 18; 19]. Moreover, the work of [20] extends the motion on the equatorial plane to generic nonequatorial motion in Kerr black holes. In the Kerr family of black holes, the spacetime symmetry endows the particle geodesics with two conserved quantities, the energy \(E_{m}\) and the azimuthal angular momentum \(L_{m}\) of the particle. In addition, the existence of a third conserved quantity, discovered in the sixties and known nowadays as the Carter constant, renders the geodesic equations a set of first-order differential equations [21]. Later, the introduction of the Mino time [22] further fully decouples the equations of motion, with the solutions expressed in terms of elliptical functions. In our previous paper [15], we have studied the null and time-like geodesics of light and neutral particles,
respectively in the exterior of Kerr-Newman black holes. We then obtain the solutions of the trajectories in terms of the elliptical integrals and the Jacobi elliptic functions for the null and time-like geodesics, which are manifestly real functions of the Mino time that the initial conditions can be explicitly specified with reference to [23]. In this work, we will mainly focus on the infalling particles into the Kerr-Newman black holes in general nonequatorial motion. Theoretical considerations, together with recent observations of structures near Sgr A* by the GRAVITY experiment [24], indicate possible presence of a small electric charge of central supermassive black hole [25; 26]. Thus, it is of great interest to explore the geodesic dynamics in the Kerr-Newman black hole.
The layout of the paper is as follows. In Sec. II, a brief review of the time-like geodesic equations is provided in terms of the conserved quantities, the energy, the azimuthal angular momentum, and the Carter constant. The equations of motion can be recast in integral forms involving two effective potentials. In particular, the positions of the roots of the radial potential give rise to the inspiral and plunge trajectories of particles into the black holes. Sec. III focuses on the portion of the parameter space of the conserved quantities that satisfies the triple-root condition, corresponding to the innermost stable spherical orbits (ISSO). The analytical solutions of the inspiral orbits are derived for this case. The other two cases of interest involve a pair of complex roots. In Sec. IV, one real root lies inside the horizon and the particle motion is bounded by the turning point given by the other real root of the radial potential. In Sec. V we treat the case of unbound motion, in which the two real roots lie inside the event horizon. The exact solutions for the plunging trajectories and illustrative examples are given. In Sec. VI the conclusions are drawn. For the completeness of the paper, Appendixes A and B collect some relevant formulas derived in earlier publications [15; 28].
## II Equation of motion for time-like geodesics
We start with a summary of the equations of motion for a particle in the Kerr-Newman black hole exterior. We work with the Boyer-Lindquist coordinates \((t,r,\theta,\phi)\) on the spacetime of the exterior of the Kerr-Newman black hole with gravitational mass \(M\), charge \(Q\), angular momentum \(J\), and angular momentum per unit mass \(a=J/M\), described by the metric
\[ds^{2}=-\frac{\Delta}{\Sigma}\left(dt-a\sin^{2}\theta d\phi\right)^{2}+\frac{\sin ^{2}\theta}{\Sigma}\left[(r^{2}+a^{2})d\phi-adt\right]^{2}+\frac{\Sigma}{ \Delta}dr^{2}+\Sigma d\theta^{2}\;, \tag{1}\]
where \(\Sigma=r^{2}+a^{2}\cos^{2}\theta\) and \(\Delta=r^{2}-2Mr+a^{2}+Q^{2}\). The roots of \(\Delta(r)\) determine outer/inner event horizons \(r_{+}/r_{-}\) as
\[r_{\pm}=M\pm\sqrt{M^{2}-(Q^{2}+a^{2})}\;. \tag{2}\]
We assume that \(0<a^{2}+Q^{2}<M^{2}\) throughout the paper.
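For later numerical use, a minimal sketch of Eq. (2) is given below, assuming geometrized units \(G=c=1\); the parameter values in the example are placeholders (with \(M=1\)) chosen only to satisfy the non-extremality condition.

```python
import numpy as np

def horizons(M, a, Q):
    """Outer/inner horizons r_{+}/r_{-} of Eq. (2); requires a^2 + Q^2 < M^2."""
    disc = M**2 - (a**2 + Q**2)
    if disc <= 0:
        raise ValueError("need 0 < a^2 + Q^2 < M^2 (non-extremal Kerr-Newman)")
    return M + np.sqrt(disc), M - np.sqrt(disc)

# placeholder parameters, e.g. M = 1 and a = Q = 0.7 as in the example of Sec. IV
r_plus, r_minus = horizons(1.0, 0.7, 0.7)
```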
For the asymptotically flat, stationary, and axisymmetric black hole, the metric is independent of \(t\) and \(\phi\). Thus, the conserved quantities along a geodesic are the energy \(E_{m}\) and the azimuthal angular momentum \(L_{m}\) of the particle. These can be constructed through the four-momentum \(p^{\mu}=mu^{\mu}=m\,dx^{\mu}/d\sigma_{m}\), defined in terms of the proper time \(\sigma_{m}\) and the mass of the particle \(m\), as
\[E_{m} \equiv-p_{t}, \tag{3}\] \[L_{m} \equiv p_{\phi}\,. \tag{4}\]
Additionally, another conserved quantity is the Carter constant explicitly obtained by
\[C_{m}=\Sigma^{2}\left(u^{\theta}\right)^{2}-a^{2}\cos^{2}\theta\left(E_{m}\right)^{2}+L_{m}^{2}\cot^{2}\theta+m^{2}a^{2}\cos^{2}\theta\,. \tag{5}\]
Together with the time-like geodesics of the particle, \(u^{\mu}u_{\mu}=m^{2}\), one obtains the equations of motion
\[\frac{\Sigma}{m}\frac{dr}{d\sigma_{m}}=\pm_{r}\sqrt{R_{m}(r)}\,, \tag{6}\]
\[\frac{\Sigma}{m}\frac{d\theta}{d\sigma_{m}}=\pm_{\theta}\sqrt{\Theta_{m}( \theta)}\,, \tag{7}\]
\[\frac{\Sigma}{m}\frac{d\phi}{d\sigma_{m}}=\frac{a}{\Delta}\left[\left(r^{2}+a^ {2}\right)\gamma_{m}-a\lambda_{m}\right]-\frac{1}{\sin^{2}\theta}\left(a \gamma_{m}\sin^{2}\theta-\lambda_{m}\right)\,, \tag{8}\]
\[\frac{\Sigma}{m}\frac{dt}{d\sigma_{m}}=\frac{r^{2}+a^{2}}{\Delta}\left[\left( r^{2}+a^{2}\right)\gamma_{m}-a\lambda_{m}\right]-a\left(a\gamma_{m}\sin^{2} \theta-\lambda_{m}\right)\,, \tag{9}\]
where we have normalized \(E_{m}\), \(L_{m}\), and \(C_{m}\) by the mass of the particle \(m\)
\[\gamma_{m}\equiv\frac{E_{m}}{m},\,\,\,\lambda_{m}\equiv\frac{L_{m}}{m},\,\,\, \eta_{m}\equiv\frac{C_{m}}{m^{2}}. \tag{10}\]
The symbols \(\pm_{r}=\text{sign}\left(u^{r}\right)\) and \(\pm_{\theta}=\text{sign}\left(u^{\theta}\right)\) are defined by the 4-velocity of the particle. Moreover, the radial potential \(R_{m}(r)\) in (6) and the angular potential \(\Theta_{m}(\theta)\) in (7) for the particle are given by
\[R_{m}(r)=\left[\left(r^{2}+a^{2}\right)\gamma_{m}-a\lambda_{m}\right]^{2}- \Delta\left[\eta_{m}+\left(a\gamma_{m}-\lambda_{m}\right)^{2}+r^{2}\right]\,, \tag{11}\]
\[\Theta_{m}(\theta)=\eta_{m}+a^{2}\gamma_{m}^{2}\cos^{2}\theta-\lambda_{m}^{2} \cot^{2}\theta-a^{2}\cos^{2}\theta\,. \tag{12}\]
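For reference, a direct transcription of the two potentials (11) and (12) is sketched below; the inputs are the normalized constants of motion of Eq. (10) and the black hole parameters, and the motion is kinematically allowed only where both functions are non-negative.

```python
import numpy as np

def R_m(r, M, a, Q, gam, lam, eta):
    """Radial potential of Eq. (11)."""
    Delta = r**2 - 2*M*r + a**2 + Q**2
    return ((r**2 + a**2)*gam - a*lam)**2 - Delta*(eta + (a*gam - lam)**2 + r**2)

def Theta_m(theta, a, gam, lam, eta):
    """Angular potential of Eq. (12); cot^2(theta) written as cos^2/(1 - cos^2)."""
    c2 = np.cos(theta)**2
    return eta + a**2*gam**2*c2 - lam**2*c2/(1.0 - c2) - a**2*c2
```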
As is well known [22], the set of equations of motion (6)-(9) can be fully decoupled by introducing the so-called Mino time \(\tau_{m}\), defined as
\[\frac{dx^{\mu}}{d\tau_{m}}\equiv\frac{\Sigma}{m}\frac{dx^{\mu}}{d\sigma_{m}}. \tag{13}\]
For the source point \(x_{i}^{\mu}\) and observer point \(x^{\mu}\), the integral forms of the equations above can be rewritten as [23]
\[\tau_{m}-\tau_{mi}=I_{mr}=G_{m\theta}\,, \tag{14}\]
\[\phi_{m}-\phi_{mi}=I_{m\phi}+\lambda_{m}G_{m\phi}\,, \tag{15}\]
\[t_{m}-t_{mi}=I_{mt}+a^{2}\gamma_{m}G_{mt}\,, \tag{16}\]
where the integrals \(I_{mr}\), \(I_{m\phi}\), and \(I_{mt}\) involve the radial potential
\[I_{mr}\equiv\int_{r_{i}}^{r}\frac{1}{\pm_{r}\sqrt{R_{m}(r)}}dr\,, \tag{17}\]
\[I_{m\phi}\equiv\int_{r_{i}}^{r}\frac{a\left[\left(2Mr-Q^{2}\right)\gamma_{m}- a\lambda_{m}\right]}{\pm_{r}\Delta\sqrt{R_{m}(r)}}dr, \tag{18}\]
\[I_{mt}\equiv\int_{r_{i}}^{r}\frac{r^{2}\gamma_{m}\Delta+\left(2Mr-Q^{2} \right)\left[\left(r^{2}+a^{2}\right)\gamma_{m}-a\lambda_{m}\right]}{\pm_{r} \Delta\sqrt{R_{m}(r)}}dr\,. \tag{19}\]
The angular integrals are
\[G_{m\theta}\equiv\int_{\theta_{i}}^{\theta}\frac{1}{\pm_{\theta}\sqrt{\Theta_{m}(\theta)}}d\theta\,, \tag{20}\]
\[G_{m\phi}\equiv\int_{\theta_{i}}^{\theta}\frac{\csc^{2}\theta}{\pm_{\theta} \sqrt{\Theta_{m}(\theta)}}d\theta\,, \tag{21}\]
\[G_{mt}\equiv\int_{\theta_{i}}^{\theta}\frac{\cos^{2}\theta}{\pm_{\theta}\sqrt{\Theta_{m}(\theta)}}d\theta\,. \tag{22}\]
In previous work [15; 28], we presented the exact solutions for several cases of both null and time-like geodesics. In the present work, we mainly focus on the spiralling and plunging orbits of bound and unbound motion in the black hole exterior. There are three types of orbit of this kind, which will be considered in the subsequent sections. The radial potential \(R_{m}(r)\) is a quartic polynomial, and the positions of its roots play the essential role in the present study. The treatment of the angular potential \(\Theta_{m}(\theta)\) and of the integrals it involves, on the other hand, remains unchanged; for the sake of completeness, we provide a short summary of Ref. [15] in Appendix A.
Before ending this section, let us introduce some notation that will be used later. Related to \(R_{m}(r)\), we define the integrals
\[I_{n}\equiv\int_{r_{i}}^{r}r^{n}\sqrt{\frac{1-\gamma_{m}^{2}}{R_{m}(r)}}\,dr \equiv iI_{n}^{U}\;,\;n=1,2 \tag{23}\]
\[I_{\pm}\equiv\int_{r_{i}}^{r}\frac{1}{(r-r_{\pm})}\sqrt{\frac{1-\gamma_{m}^{2} }{R_{m}(r)}}\,dr\equiv iI_{\pm}^{U} \tag{24}\]
In terms of \(I_{1}\), \(I_{2}\), and \(I_{\pm}\) we can rewrite (18) and (19) as follows
\[I_{m\phi}(\tau_{m})=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}}\frac{2Ma}{r_{+ }-r_{-}}\left[\left(r_{+}-\frac{a\left(\frac{\lambda_{m}}{\gamma_{m}}\right)+ Q^{2}}{2M}\right)I_{+}(\tau_{m})-\left(r_{-}-\frac{a\left(\frac{\lambda_{m}}{ \gamma_{m}}\right)+Q^{2}}{2M}\right)I_{-}(\tau_{m})\right] \tag{25}\]
\[I_{mt}(\tau_{m})=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}}\left\{\frac{4M^{2 }}{r_{+}-r_{-}}\left[\left(r_{+}-\frac{Q^{2}}{2M}\right)\left(r_{+}-\frac{a \left(\frac{\lambda_{m}}{\gamma_{m}}\right)+Q^{2}}{2M}\right)I_{+}(\tau_{m}) \right.\right.\]
\[\left.\left.-\left(r_{-}-\frac{Q^{2}}{2M}\right)\left(r_{-}-\frac{a\left(\frac {\lambda_{m}}{\gamma_{m}}\right)+Q^{2}}{2M}\right)I_{-}(\tau_{m})\right]+2 MI_{1}(\tau_{m})+I_{2}(\tau_{m})\right\}\]
\[+\left(4M^{2}-Q^{2}\right)\gamma_{m}\tau_{m} \tag{26}\]
## III Inspiral orbits in bound motion
Based upon the studies in [15] of the radial potential for various ranges of the parameters \(\lambda_{m}\) and \(\eta_{m}\) in the bound case (\(\gamma_{m}<1\)), and since the particle is kinematically allowed to move only where \(R_{m}(r)>0\), there exist two types of spiral trajectories that cross the horizon into the black hole. In the first, the particle starts from \(r_{i}\leq r_{\rm isso}\), where \(r_{\rm isso}\) is the ISSO radius with the parameters located at A and B in Fig. 1, and spirals across the horizon of the black hole. In the second, the particle starts from \(r_{i}<r_{m4}\) with the parameters of C and D in Fig. 4, and travels through the horizon of the black hole. The solutions shown in [15] are particularly useful to produce the ISSO solutions in the case of the triple root, which further reduce to the inspiral trajectories in the former case.
The solution along the \(r\) direction can be obtained from the inversion of (14) with the integral \(I_{mr}\) in (17), where the radial potential (11) now has a triple root located at the ISSO radius, namely \(r_{m2}=r_{m3}=r_{m4}=r_{\rm isso}\), and the initial radius is set at \(r_{i}\leq r_{\rm isso}\). The Mino time \(\tau_{m}\) as a function of \(r\) then follows from the integral (14) in the case of the triple root,
\[\tau_{m}^{I}(r)=\frac{-2}{(r_{\rm isso}-r_{m1})\sqrt{1-\gamma_{m}^{2}}}\Bigg{[} \sqrt{\frac{r-r_{m1}}{r_{\rm isso}-r}}-\sqrt{\frac{r_{i}-r_{m1}}{r_{\rm isso} -r_{i}}}\Bigg{]} \tag{27}\]
The particle moves toward the horizon with \(\nu_{r_{i}}=-1\). Thus, the radial motion of the particle can be obtained from the inverse of (27)
\[r^{I}(\tau_{m})=\frac{r_{m1}+r_{\rm isso}\left[X^{I}(\tau_{m})\right]^{2}}{1+ [X^{I}(\tau_{m})]^{2}}\, \tag{28}\]
Figure 1: The main graphic shows the parametric plot of \(\lambda_{m}(r_{\rm isso})\) versus \(\eta_{m}(r_{\rm isso})\). The triple roots \(r_{\rm isso}\) are the solutions of the equations \(R_{m}^{\prime\prime}(r)=R_{m}^{\prime}(r)=R_{m}(r)=0\). The inset illustrates the behavior of the radial potential \(R_{m}\) with the parameters located at A. The case of B has \(\eta_{m}=0\), which is an example of equatorial motion.
where
\[X^{I}(\tau_{m})=\frac{\sqrt{1-\gamma_{m}^{2}}(r_{\rm{isso}}-r_{m1})}{2}\tau_{m}- \sqrt{\frac{r_{i}-r_{m1}}{r_{\rm{isso}}-r_{i}}}\,, \tag{29}\]
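A short numerical sketch of the inspiral solution (27)-(29) is given below, assuming the triple root \(r_{\rm isso}\) and the fourth root \(r_{m1}\) are already known (they follow from Eqs. (30)-(33) discussed next) and that \(r_{m1}<r_{i}\leq r_{\rm isso}\) with \(\gamma_{m}<1\).

```python
import numpy as np

def tau_of_r_isso(r, r_i, r_isso, r_m1, gam):
    """Mino time tau^I(r) of Eq. (27) for an inspiral starting at r_i <= r_isso."""
    pref = -2.0 / ((r_isso - r_m1) * np.sqrt(1.0 - gam**2))
    return pref * (np.sqrt((r - r_m1)/(r_isso - r))
                   - np.sqrt((r_i - r_m1)/(r_isso - r_i)))

def r_of_tau_isso(tau, r_i, r_isso, r_m1, gam):
    """Radial motion r^I(tau) of Eqs. (28)-(29)."""
    X = 0.5*np.sqrt(1.0 - gam**2)*(r_isso - r_m1)*tau \
        - np.sqrt((r_i - r_m1)/(r_isso - r_i))
    return (r_m1 + r_isso*X**2) / (1.0 + X**2)
```

At \(\tau_{m}=0\) the second function returns \(r_{i}\), and it stays at \(r_{\rm isso}\) only in the limit \(r_{i}\to r_{\rm isso}\), in line with the discussion below.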
The solution (28) of coordinate \(r\) involves the triple root \(r_{\rm{isso}}\) of radial potential, which can be determined as follows.
From the double root solutions \(R(r)=R^{\prime}(r)=0\)[15] we have the constants of motion in the case of spherical orbits,
\[\lambda_{\rm{mss}}=\frac{\left[r_{\rm{mss}}\left(Mr_{\rm{mss}}-Q^{2}\right)-a^ {2}M\right]\gamma_{m}-\Delta\left(r_{\rm{mss}}\right)\sqrt{r_{\rm{mss}}^{2} \left(\gamma_{m}^{2}-1\right)+Mr_{\rm{mss}}}}{a\left(r_{\rm{mss}}-M\right)}\,, \tag{30}\]
\[\eta_{\rm{mss}}=\frac{r_{\rm{mss}}}{a^{2}\left(r_{\rm{mss}}-M\right)^{2}} \Big{\{}r_{\rm{mss}}\left(Mr_{\rm{mss}}-Q^{2}\right)\left(a^{2}+Q^{2}-Mr_{\rm{ mss}}\right)\gamma_{m}^{2}\\ +2\left(Mr_{\rm{mss}}-Q^{2}\right)\Delta\left(r_{\rm{mss}}\right) \gamma_{m}\sqrt{r_{\rm{mss}}^{2}\left(\gamma_{m}^{2}-1\right)+Mr_{\rm{mss}}} \\ +\left[a^{2}\left(Mr_{\rm{mss}}-Q^{2}\right)-\left(\Delta\left(r_{ \rm{mss}}\right)-a^{2}\right)^{2}\right]\left[r_{\rm{mss}}\left(\gamma_{m}^{2} -1\right)+M\right]\Big{\}}\ \,. \tag{31}\]
The subscript "ss" denotes the spherical orbits with \(s=\pm\), which labels the two types of motion with respect to the relative sign between the black hole's spin and the azimuthal angular momentum of the particle (see Section III C of [15]). Together with the above equations, an additional equation from \(R^{\prime\prime}(r)=0\) determines the triple root, the radius \(r_{\rm isso}\), given by [15]
\[-Mr_{\rm{isso}}^{5}\Delta\left(r_{\rm{isso}}\right)+4\left(Mr_{\rm{isso}}^{3} -Q^{2}r_{\rm{isso}}^{2}+a^{2}\eta_{\rm{isso}}-as\sqrt{\Gamma_{\rm{ms}}}\right) ^{2}=0 \tag{32}\]
where
\[\Gamma_{\rm{ms}}=r_{\rm{isso}}^{4}\left(Mr_{\rm{isso}}-Q^{2}\right)-\eta_{\rm {isso}}\left[r_{\rm{isso}}\left(r_{\rm{isso}}-3M\right)+2Q^{2}\right]r_{\rm{ isso}}^{2}+a^{2}\eta_{\rm{isso}}^{2}. \tag{33}\]
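Alternatively to the closed-form conditions above, the ISSO can be located numerically by solving \(R_{m}=R_{m}^{\prime}=R_{m}^{\prime\prime}=0\) for \((r_{\rm isso},\lambda_{m},\eta_{m})\) at fixed \(\gamma_{m}\), using the expanded quartic coefficients of \(R_{m}\) (collected in Appendix B). The sketch below is only a cross-check: the initial guess and parameter values are placeholders, and which branch \(s=\pm\) is found depends on that guess.

```python
import numpy as np
from scipy.optimize import fsolve

def isso_conditions(x, M, a, Q, gam):
    """Triple-root conditions R_m = R_m' = R_m'' = 0, with unknowns x = (r, lam, eta)."""
    r, lam, eta = x
    coeffs = [gam**2 - 1.0,                                   # quartic coefficients of R_m
              2.0*M,
              a**2*(gam**2 - 1.0) - Q**2 - eta - lam**2,
              2.0*M*((a*gam - lam)**2 + eta),
              -a**2*eta - Q**2*((a*gam - lam)**2 + eta)]
    dR, d2R = np.polyder(coeffs), np.polyder(coeffs, 2)
    return [np.polyval(coeffs, r), np.polyval(dR, r), np.polyval(d2R, r)]

# placeholder guess and parameters; convergence and branch depend on the guess
r_isso, lam_isso, eta_isso = fsolve(isso_conditions, x0=[6.0, 2.0, 3.0],
                                    args=(1.0, 0.7, 0.7, 0.95))
```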
We proceed by evaluating the coordinates \(\phi_{m}(\tau_{m})\) and \(t_{m}(\tau_{m})\) using (15) and (16), which involve not only the angular integrals \(G_{m\phi}\) and \(G_{mt}\), but also the radial integrals (18) and (19). With the help of (25) and (26), we first rewrite (18) and (19) as
\[I_{m\phi}^{I}(\tau_{m})=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}}\frac{2Ma}{r _{+}-r_{-}}\left[\left(r_{+}-\frac{a\left(\frac{\lambda_{m}}{\gamma_{m}} \right)+Q^{2}}{2M}\right)I_{+}^{I}(\tau_{m})-\left(r_{-}-\frac{a\left(\frac{ \lambda_{m}}{\gamma_{m}}\right)+Q^{2}}{2M}\right)I_{-}^{I}(\tau_{m})\right] \tag{34}\]
\[I^{I}_{mt}(\tau_{m})=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}} \left\{\frac{4M^{2}}{r_{+}-r_{-}}\left[\left(r_{+}-\frac{Q^{2}}{2M}\right)\left( r_{+}-\frac{a\left(\frac{\lambda_{m}}{\gamma_{m}}\right)+Q^{2}}{2M}\right)I^{I}_{+}( \tau_{m})\right.\right.\] \[\left.\left.\qquad\qquad-\left(r_{-}-\frac{Q^{2}}{2M}\right) \left(r_{-}-\frac{a\left(\frac{\lambda_{m}}{\gamma_{m}}\right)+Q^{2}}{2M} \right)I^{I}_{-}(\tau_{m})\right]+2MI^{I}_{1}(\tau_{m})+I^{I}_{2}(\tau_{m})\right\}\] \[\left.\qquad\qquad+\left(4M^{2}-Q^{2}\right)\gamma_{m}\tau_{m}\right. \tag{35}\]
For the present case of the triple roots, the calculation of the integrals is straightforward and one can express \(I^{I}_{n}\) and \(I^{I}_{\pm}\) in terms of elementary functions,
\[I^{I}_{\pm}(\tau_{m})=\frac{\sqrt{1-\gamma_{m}^{2}}}{r_{\rm isso}-r_{\pm}} \tau_{m}+\frac{1}{\sqrt{\left(r_{\pm}-r_{m1}\right)\left(r_{\rm isso}-r_{\pm} \right)^{3}}}\tanh^{-1}\sqrt{\frac{(r_{\pm}-r_{m1})(r_{\rm isso}-r^{I}(\tau_{m }))}{(r_{\rm isso}-r_{\pm})(r^{I}(\tau_{m})-r_{m1})}}-\mathcal{I}^{I}_{\pm_{i}} \tag{36}\]
\[I^{I}_{1}(\tau_{m})=\sqrt{1-\gamma_{m}^{2}}r_{\rm isso}\tau_{m}+2\tan^{-1}\sqrt {\frac{r_{\rm isso}-r^{I}(\tau_{m})}{r^{I}(\tau_{m})-r_{m1}}}-\mathcal{I}^{I}_ {1_{i}} \tag{37}\]
\[I^{I}_{2}(\tau_{m})=\frac{r_{I}(\tau_{m})(r_{m1}-r_{\rm isso})+r_{\rm isso}(3r_ {\rm isso}-r_{m1})}{2}\tau_{m}+(r_{m1}+3r_{\rm isso})\tan^{-1}\sqrt{\frac{r_{ \rm isso}-r^{I}(\tau_{m})}{r^{I}(\tau_{m})-r_{m1}}}-\mathcal{I}^{I}_{2_{i}} \tag{38}\]
It is worthwhile to mention that \(\mathcal{I}^{I}_{\pm_{i}}\), \(\mathcal{I}^{I}_{1_{i}}\), \(\mathcal{I}^{I}_{2_{i}}\) are obtained by evaluating \(\mathcal{I}^{I}_{\pm}\), \(\mathcal{I}^{I}_{1}\), \(\mathcal{I}^{I}_{2}\) at the initial radius \(r=r_{i}\), so that \(I^{I}_{\pm}(0)=I^{I}_{1}(0)=I^{I}_{2}(0)=0\). The solutions \(\phi^{I}(\tau_{m})\) and \(t^{I}(\tau_{m})\) can be constructed from \(I_{m\phi}\) (18), \(G_{m\phi}\) (21), \(I_{mt}\) (19), and \(G_{mt}\) (22) through (15) and (16). Together with the solutions along the \(r\) and \(\theta\) directions in (28) and in Appendix A, they describe the spiralling motion in the general nonequatorial case in the Kerr-Newman exterior. An illustrative example is shown in Fig. 2, which corresponds to case A of Fig. 1.
For a particle initially at \(r_{i}=r_{\rm isso}\), the solution (28) gives \(r(\tau)=r_{\rm isso}\) for any \(\tau\), obtained from \(X^{I}\rightarrow-\infty\) in (29) when \(r_{i}\to r_{\rm isso}\). As anticipated, the particle then travels on the spherical ISSO orbit. However, for \(r_{i}<r_{\rm isso}\) of our interest, the particle reaches the outer horizon \(r_{+}\) in finite Mino time \(\tau_{m}\). Nevertheless, because of the \(\tanh^{-1}\) function in (36), \(I^{I}_{\pm}\rightarrow\infty\) as \(r\to r_{+}\), so that the coordinate time \(t\rightarrow\infty\) and the azimuthal angle \(\phi\rightarrow\infty\) as observed in the asymptotically flat regime. The above expressions further reduce to the Kerr black hole case by sending \(Q\to 0\).
An interesting particular case is the motion of the particle on the equatorial plane, obtained by taking the limits \(\theta=\frac{\pi}{2}\) and \(\eta_{m}\to 0\) in the results above. The spherical motion becomes circular motion, and \(r_{\rm isso}\) reduces to \(r_{\rm isco}\). In particular, \(G_{m\phi}=\tau_{m}\) and the equation of motion (15) simplifies to [15]
\[\phi_{m}^{I}\left(r\right)=I_{m\phi}^{I}\left(\tau_{m}\left(r\right)\right)+ \lambda_{m}\tau_{m}^{I}\left(r\right)+\phi_{mi}^{I}\,, \tag{39}\]
where \(I_{m\phi}^{I}\) is given by (34). In addition, one can eliminate the Mino time \(\tau_{m}\) using Eq. (27). Then the inspiral solution of \(\phi_{m}\) on the equatorial plane, from equation (8), can be expressed as a function of \(r\),
\[\phi_{m}^{I}(r)=-2\sqrt{\frac{r-r_{m1}}{\left(1-\gamma_{m}^{2} \right)\left(r_{\mathrm{isco}}-r\right)}}\frac{r_{\mathrm{isco}}^{2}\lambda_{m} +\left(2Mr_{\mathrm{isco}}-Q^{2}\right)\left(a\gamma_{m}-\lambda_{m}\right)}{ \left(r_{\mathrm{isco}}-r_{+}\right)\left(r_{\mathrm{isco}}-r_{-}\right)\left(r _{\mathrm{isco}}-r_{m1}\right)}\] \[-\frac{2}{r_{+}-r_{-}}\frac{\left(2Ma\gamma_{m}-r_{-}\lambda_{m} \right)r_{+}-Q^{2}\left(a\gamma_{m}-\lambda_{m}\right)}{\left(r_{\mathrm{isco }}-r_{+}\right)\sqrt{\left(1-\gamma_{m}^{2}\right)\left(r_{+}-r_{m1}\right) \left(r_{\mathrm{isco}}-r_{+}\right)}}\tanh^{-1}\sqrt{\frac{\left(r_{+}-r_{m1} \right)\left(r_{\mathrm{isco}}-r\right)}{\left(r_{\mathrm{isco}}-r_{+}\right) \left(r-r_{m1}\right)}}\] \[+\frac{2}{r_{+}-r_{-}}\frac{\left(2Ma\gamma_{m}-r_{+}\lambda_{m} \right)r_{-}-Q^{2}\left(a\gamma_{m}-\lambda_{m}\right)}{\left(r_{\mathrm{isco }}-r_{-}\right)\sqrt{\left(1-\gamma_{m}^{2}\right)\left(r_{-}-r_{m1}\right) \left(r_{\mathrm{isco}}-r_{-}\right)}}\tanh^{-1}\sqrt{\frac{\left(r_{-}-r_{m1} \right)\left(r_{\mathrm{isco}}-r\right)}{\left(r_{\mathrm{isco}}-r_{-}\right) \left(r-r_{m1}\right)}} \tag{40}\]
Analogously, we have \(G_{mt}=0\) from (22) for the equatorial orbits and (16) simplify to
\[t_{m}^{I}\left(r\right)=I_{mt}^{\mathrm{I}}\left(\tau_{m}\right)+t_{mi}^{I} \tag{41}\]
Figure 2: An illustrative example of a nonequatorial orbit with the parameters A of Fig. 1. In this case, the particle starts from \(r_{i}<r_{\rm isso}\) and inspirals into the black hole after many azimuthal and longitudinal revolutions. From the top view one notices the very different time scales of the spiralling phase and the plunging phase.
where \(I_{mt}^{I}\) has been calculated in (35). Substituting \(\tau_{m}^{I}\) in favor of \(r\), we find
\[t_{m}^{I}\left(r\right)=-\gamma_{m}\sqrt{\frac{\left(r-r_{m1} \right)\left(r_{\text{isco}}-r\right)}{1-\gamma_{m}^{2}}}+\frac{\gamma_{m}\left( r_{m1}+3r_{\text{isco}}+4M\right)}{\sqrt{1-\gamma_{m}^{2}}}\tan^{-1}\sqrt{\frac{r_{ \text{isco}}-r}{r-r_{m1}}}\] \[-2\sqrt{\frac{r-r_{m1}}{\left(1-\gamma_{m}^{2}\right)\left(r_{ \text{isco}}-r\right)}}\frac{r_{\text{isco}}^{2}\left(r_{\text{isco}}^{2}+a^{2} \right)\gamma_{m}+\left(2Mr_{\text{isco}}-Q^{2}\right)a\left(a\gamma_{m}- \lambda_{m}\right)}{\left(r_{\text{isco}}-r_{+}\right)\left(r_{\text{isco}}-r_{ -}\right)\left(r_{\text{isco}}-r_{m1}\right)}\] \[-\frac{2\left(2Mr_{+}-Q^{2}\right)}{r_{+}-r_{-}}\frac{2M\gamma_{ m}r_{+}-\left(a\lambda_{m}+Q^{2}\gamma_{m}\right)}{\left(r_{\text{isco}}-r_{+} \right)\sqrt{\left(1-\gamma_{m}^{2}\right)\left(r_{+}-r_{m1}\right)\left(r_{ \text{isco}}-r_{+}\right)}}\tanh^{-1}\sqrt{\frac{\left(r_{+}-r_{m1}\right)\left( r_{\text{isco}}-r\right)}{\left(r_{\text{isco}}-r_{+}\right)\left(r-r_{m1} \right)}}\] \[+\frac{2\left(2Mr_{-}-Q^{2}\right)}{r_{+}-r_{-}}\frac{2M\gamma_{ m}r_{-}-\left(a\lambda_{m}+Q^{2}\gamma_{m}\right)}{\left(r_{\text{isco}}-r_{-} \right)\sqrt{\left(1-\gamma_{m}^{2}\right)\left(r_{-}-r_{m1}\right)\left(r_{ \text{isco}}-r_{-}\right)}}\tanh^{-1}\sqrt{\frac{\left(r_{-}-r_{m1}\right)\left( r_{\text{isco}}-r\right)}{\left(r_{\text{isco}}-r_{-}\right)\left(r-r_{m1}\right)}} \tag{42}\]
As for the initial conditions, one can determine \(\phi_{mi}^{I}\) and \(t_{mi}^{I}\) by requiring that \(I_{m\phi}^{I}\left(\tau_{m}^{I}\left(r\right)\right)+\lambda_{m}\tau_{m}^{I}\left(r\right)\) and \(I_{mt}^{I}\left(\tau_{m}\right)\) vanish at the initial radius \(r_{i}\). The corresponding trajectories, with the additional parameter \(Q\) apart from \(a\) of the black hole, are shown in Fig. 3. This generalizes the solution in [12] for the Kerr black holes, where the particle starts from \(t_{m}(r)=-\infty\) as \(r\lesssim r_{\rm isco}\) and inspirals to the event horizon.
One of the limiting cases that significantly simplifies the above expressions is the extremal limit of the Kerr black hole. For \(Q\to 0\), giving \(r_{m1}=0\), and for direct orbits with \(a=M\), the ISCO radius lies on the event horizon. Therefore we focus on the extremal retrograde motion with \(r_{\rm isco}=9M\), \(\lambda_{m}=-22\sqrt{3}M/9\), and \(\gamma_{m}=5\sqrt{3}/9\). It turns out that the coefficients of the \(\tanh^{-1}\) terms in the above expression (40) all vanish. The results then simplify to the known expressions [12; 13]
\[\phi_{m}^{I}\left(r\right)=-\frac{2\sqrt{2}}{3}\frac{r^{\frac{3}{2}}}{(r-M) \sqrt{9M-r}} \tag{43}\]
\[t_{m}^{I}\left(r\right)=\sqrt{\frac{(9M-r)r}{2}}\left(\frac{4M-5r}{r-M} \right)-\frac{117\sqrt{2}}{2}M\sqrt{\frac{r}{9M-r}}\] \[\qquad\qquad+\frac{155\sqrt{2}}{2}M\tan^{-1}\sqrt{\frac{9M-r}{r} }-4M\tanh^{-1}\sqrt{\frac{9M-r}{8r}} \tag{44}\]
Another limiting case is the Reissner-Nordström (RN) black hole. Since \(a\to 0\) gives a spherically symmetric metric, the general motion can be treated by considering motion in the equatorial plane. Again, the coefficients of the \(\tanh^{-1}\) terms in the expression (40) vanish. The expressions for \(\phi_{m}^{I}\left(r\right)\) and \(t_{m}^{I}\left(r\right)\) simplify to
\[\phi_{m}^{I}\left(r\right)=-\frac{2\lambda_{m}}{r_{\text{isco}}-r_{m1}}\sqrt{ \frac{r-r_{m1}}{(1-\gamma_{m}^{2})(r_{\text{isco}}-r)}} \tag{45}\]
\[t_{m}^{I}\left(r\right)= -\gamma_{m}\sqrt{\frac{\left(r-r_{m1}\right)\left(r_{\text{isco}}-r \right)}{1-\gamma_{m}^{2}}}+\frac{\gamma_{m}\left(r_{m1}+3r_{\text{isco}}+4M \right)}{\sqrt{1-\gamma_{m}^{2}}}\tan^{-1}\sqrt{\frac{r_{\text{isco}}-r}{r-r_{ m1}}}\] \[-2\sqrt{\frac{r-r_{m1}}{\left(1-\gamma_{m}^{2}\right)\left(r_{ \text{isco}}-r\right)}}\frac{r_{\text{isco}}^{4}\gamma_{m}}{\left(r_{\text{isco }}-r_{+}\right)\left(r_{\text{isco}}-r_{-}\right)\left(r_{\text{isco}}-r_{m1} \right)}\] \[-\frac{2}{r_{+}-r_{-}}\frac{\left(2Mr_{+}-Q^{2}\right)^{2}\gamma _{m}}{\left(r_{\text{isco}}-r_{+}\right)\sqrt{\left(1-\gamma_{m}^{2}\right) \left(r_{+}-r_{m1}\right)\left(r_{\text{isco}}-r_{+}\right)}}\tanh^{-1}\sqrt{ \frac{\left(r_{+}-r_{m1}\right)\left(r_{\text{isco}}-r\right)}{\left(r_{\text{isco }}-r_{+}\right)\left(r-r_{m1}\right)}}\] \[+\frac{2}{r_{+}-r_{-}}\frac{\left(2Mr_{-}-Q^{2}\right)^{2}\gamma _{m}}{\left(r_{\text{isco}}-r_{-}\right)\sqrt{\left(1-\gamma_{m}^{2}\right) \left(r_{-}-r_{m1}\right)\left(r_{\text{isco}}-r_{-}\right)}}\tanh^{-1}\sqrt{ \frac{\left(r_{-}-r_{m1}\right)\left(r_{\text{isco}}-r\right)}{\left(r_{\text{isco }}-r_{-}\right)\left(r-r_{m1}\right)}} \tag{46}\]
Further simplification occurs in the extremal limit. For \(M=\pm Q\) in the RN black hole, \(r_{\pm}=M\), and with \(r_{\rm isco}=4M\), \(r_{m1}=4M/5\), \(\lambda_{m}=2\sqrt{2}M\), and \(\gamma_{m}=3\sqrt{6}/8\), (45) and (46) take the simple form
\[\phi_{m}^{I}\left(r\right)=-2\sqrt{\frac{5r-4M}{4M-r}} \tag{47}\]
Figure 3: Illustration of the orbit on the equatorial plane with the parameters of B in Fig. 1. The particle starts from \(r_{i}<r_{\text{isco}}\) and inspirals into the black hole horizon.
\[t_{m}^{I}\left(r\right)= -3\sqrt{\frac{3(4M-r)(r-4M/5)}{5}}+\frac{252\sqrt{15}}{25}M\tan^{-1} \sqrt{\frac{4M-r}{r-4M/5}}\] \[-32M\sqrt{\frac{5r-4M}{12M-3r}}-\frac{(2M^{2}-1)^{2}}{(M-r)M^{3}} \sqrt{\frac{(4M-r)(5r-4M)}{3}}\] \[-\frac{4(2M^{2}-1)}{M^{3}}\tanh^{-1}\sqrt{\frac{4M-r}{15r-12M}} \tag{48}\]
Finally, in the case of the Schwarzschild black hole with \(Q\to 0\) and \(a\to 0\), giving \(r_{+}\to 2M\) and \(r_{-}\to 0\), the motion can be taken to lie in the equatorial plane owing to the spherical symmetry, with \(r_{m1}=0\). Thus, with the further inputs \(r_{\rm isco}=6M\), \(\lambda_{m}=2\sqrt{3}M\), and \(\gamma_{m}=2\sqrt{2}/3\) for the Schwarzschild case, the expressions become as simple as
\[\phi_{m}^{I}\left(r\right)=-2\sqrt{3}\sqrt{\frac{r}{6M-r}}\, \tag{49}\]
\[t_{m}^{I}\left(r\right) =\frac{864\sqrt{2}M}{25}\sqrt{\frac{r}{6M-r}}-2\sqrt{2}\sqrt{(6M -r)r}\] \[+44\sqrt{2}M\tan^{-1}\sqrt{\frac{6M-r}{r}}-4M\tanh^{-1}\sqrt{ \frac{6M-r}{2r}}. \tag{50}\]
We then recover the results of two recent publications [12; 13].
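The Schwarzschild expressions (49)-(50) are simple enough to transcribe directly; a sketch is given below, valid on the equatorial plane for \(2M<r<6M\) (the \(\tanh^{-1}\) argument requires \(r>2M\)).

```python
import numpy as np

def phi_schwarzschild(r, M=1.0):
    """Equatorial ISCO inspiral phi^I(r) of Eq. (49)."""
    return -2.0*np.sqrt(3.0)*np.sqrt(r/(6.0*M - r))

def t_schwarzschild(r, M=1.0):
    """Coordinate time t^I(r) of Eq. (50)."""
    return (864.0*np.sqrt(2.0)*M/25.0*np.sqrt(r/(6.0*M - r))
            - 2.0*np.sqrt(2.0)*np.sqrt((6.0*M - r)*r)
            + 44.0*np.sqrt(2.0)*M*np.arctan(np.sqrt((6.0*M - r)/r))
            - 4.0*M*np.arctanh(np.sqrt((6.0*M - r)/(2.0*r))))
```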
## IV Plunging orbits in bound motion
Another bound orbit, in which the particle eventually falls into the black hole, is the motion with the parameters of C and D in Fig. 4. In this case, there are two real roots, \(r_{m1}\) inside the inner horizon and \(r_{m4}\) outside the outer horizon, together with a pair of complex-conjugate roots \(r_{m2}=r_{m3}^{*}\). Assuming that the particle starts from \(r_{i}\leq r_{m4}\), it plunges directly into the black hole, as there are no other real roots to be encountered before reaching the event horizon. This section is devoted to finding the analytical solution for the particle orbit in the case \(r_{m2}=r_{m3}^{*}\) and \(r_{m4}>r_{i}>r_{+}>r_{-}>r_{m1}\).
The solution in the present case basically follows the same procedure as discussed in the previous section for the ISSO orbit. The integration of (14) is straightforward, but Jacobi elliptic functions are involved in the representation of the results. After some algebra we find
\[\tau_{m}^{B}(r)=-\frac{1}{\sqrt{(1-\gamma_{m}^{2})A_{m}B_{m}}}\left(F\left( \varphi(r)|k^{B}\right)-F\left(\varphi(r_{i})|k^{B}\right)\right) \tag{51}\]
where \(F(\varphi|k)\) is the incomplete elliptic integral of the first kind [27]. The two parameters of the elliptic integrals are
\[\varphi(r)=\cos^{-1}\left(\frac{B_{m}(r_{m4}-r)-A_{m}(r-r_{m1})}{B_{m}(r_{m4}-r)+ A_{m}(r-r_{m1})}\right) \tag{52}\]
and
\[k^{B}=\frac{(r_{m4}-r_{m1})^{2}-(A_{m}-B_{m})^{2}}{4A_{m}B_{m}}\;, \tag{53}\]
where we have used the short notations
\[A_{m}=\sqrt{(r_{m4}-r_{m2})(r_{m4}-r_{m3})}\;,\;B_{m}=\sqrt{(r_{m3}-r_{m1})(r_ {m2}-r_{m1})}\,. \tag{54}\]
With the help of the Jacobi elliptic cosine function [27], one finds the inversion of (51) as
\[r^{B}(\tau_{m})=\frac{(B_{m}r_{m4}+A_{m}r_{m1})-(B_{m}r_{m4}-A_{m}r_{m1}){\rm cn }\left(X^{B}(\tau_{m})\left|k^{B}\right)}{(B_{m}+A_{m})-(B_{m}-A_{m}){\rm cn} \left(X^{B}(\tau_{m})\left|k^{B}\right)}\right.\,, \tag{55}\]
where
\[X^{B}(\tau_{m})=\sqrt{(1-\gamma_{m}^{2})\,A_{m}B_{m}}\tau_{m}-F\Bigg{(}\cos^{ -1}\left(\frac{B_{m}(r_{m4}-r_{i})-A_{m}(r_{i}-r_{m1})}{B_{m}(r_{m4}-r_{i})+A_ {m}(r_{i}-r_{m1})}\right)\left|k^{B}\right) \tag{56}\]
Figure 4: The graphic shows the portion of parameter space bounded by the double-root solution, \(r_{m2}=r_{m3}\). The equation \(R_{m}(r)=0\) with parameters in the blue zone has complex roots, \(r_{m2}=r_{m3}^{\star}\), so that a particle with parameters in this region, say C or D, starting from \(r_{i}<r_{m4}\), will plunge into the black hole horizon. The inset shows the behavior of the radial potential \(R_{m}(r)\) for the parameters located at C and D.
Notice that \(A_{m}>B_{m}>0\), that \(k^{B}\) lies in the range \(0<k^{B}<1\), and that for \(r<r_{m4}\), \(-1<\frac{B_{m}(r_{m4}-r)-A_{m}(r-r_{m1})}{B_{m}(r_{m4}-r)+A_{m}(r-r_{m1})}<1\). The Jacobi elliptic cosine function is therefore a real-valued function.
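A numerical sketch of the radial solution (51)-(56) is given below. It assumes the four roots are already known, with \(r_{m2}=r_{m3}^{*}\) supplied as complex numbers, and that the \(k\) appearing in \(F(\varphi|k)\) and \(\mathrm{cn}(u|k)\) coincides with the parameter \(m\) used by scipy.special.ellipkinc and ellipj; if the convention is instead the modulus, \(k\) should be squared before being passed.

```python
import numpy as np
from scipy.special import ellipkinc, ellipj

def r_plunge_bound(tau, r_i, roots, gam):
    """Bound plunge r^B(tau) of Eqs. (51)-(56); roots = (r1, r2, r3, r4), r2 = conj(r3)."""
    r1, r2, r3, r4 = roots
    A = np.sqrt((r4 - r2)*(r4 - r3)).real        # Eq. (54); real since r2 = r3*
    B = np.sqrt((r3 - r1)*(r2 - r1)).real
    k = ((r4 - r1)**2 - (A - B)**2) / (4.0*A*B)  # Eq. (53), used as SciPy's parameter
    phi_i = np.arccos((B*(r4 - r_i) - A*(r_i - r1)) / (B*(r4 - r_i) + A*(r_i - r1)))
    X = np.sqrt((1.0 - gam**2)*A*B)*tau - ellipkinc(phi_i, k)   # Eq. (56)
    sn, cn, dn, ph = ellipj(X, k)
    return ((B*r4 + A*r1) - (B*r4 - A*r1)*cn) / ((B + A) - (B - A)*cn)   # Eq. (55)
```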
The solutions of the coordinates \(\phi_{m}^{B}(\tau_{m})\) and \(t_{m}^{B}(\tau_{m})\) involve the integrals \(I_{m\phi}^{B}\) and \(I_{mt}^{B}\) given in (25) and (26), as in Sec. III. The integrations of \(I_{1}^{B}\), \(I_{2}^{B}\), and \(I_{\pm}^{B}\) are direct, but the results have a cumbersome representation:
\[I_{\pm}^{B}(\tau_{m})=\frac{1}{B_{m}\left(r_{m4}-r_{\pm}\right)+ A_{m}\left(r_{\pm}-r_{m1}\right)}\left[\frac{B_{m}-A_{m}}{\sqrt{A_{m}B_{m}}}X^{ B}(\tau_{m})\right.\] \[\left.+\frac{2(r_{m4}-r_{m1})\sqrt{A_{m}B_{m}}}{B_{m}\left(r_{m4} -r_{\pm}\right)-A_{m}\left(r_{\pm}-r_{m1}\right)}R_{1}(\beta_{\pm}^{B}; \Upsilon_{\tau_{m}}^{B}|k^{B})\right]-\mathcal{I}_{\pm_{i}}^{B} \tag{57}\]
\[I_{1}^{B}(\tau_{m})=\left(\frac{B_{m}r_{m4}-A_{m}r_{m1}}{B_{m}-A _{m}}\right)\frac{X^{B}(\tau_{m})}{\sqrt{A_{m}B_{m}}}+\frac{2(r_{m4}-r_{m1}) \sqrt{A_{m}B_{m}}}{A_{m}^{2}-B_{m}^{2}}R_{1}(\beta^{B};\Upsilon_{\tau_{m}}^{B} |k^{B})-\mathcal{I}_{1_{i}}^{B} \tag{58}\]
\[I_{2}^{B}(\tau_{m})=\left(\frac{B_{m}r_{m4}-A_{m}r_{m1}}{B_{m}-A _{m}}\right)^{2}\frac{X^{B}(\tau_{m})}{\sqrt{A_{m}B_{m}}}\] \[+4\left(\frac{A_{m}r_{m1}-B_{m}r_{m4}}{A_{m}-B_{m}}\right)\frac{ (r_{m4}-r_{m1})\sqrt{A_{m}B_{m}}}{A_{m}^{2}-B_{m}^{2}}R_{1}(\beta^{B};\Upsilon _{\tau_{m}}^{B}|k^{B})\] \[+\sqrt{A_{m}B_{m}}\left(\frac{2(r_{m4}-r_{m1})\sqrt{A_{m}B_{m}}}{ A_{m}^{2}-B_{m}^{2}}\right)^{2}R_{2}(\beta^{B};\Upsilon_{\tau_{m}}^{B}|k^{B})- \mathcal{I}_{2_{i}}^{B} \tag{59}\]
In the formulas above, the parameters of the functions \(R_{1}\) and \(R_{2}\) are related to the roots of \(R_{m}(r)\) as follows
\[\beta_{\pm}^{B}=-\frac{B_{m}(r_{m4}-r_{\pm})+A_{m}(r_{\pm}-r_{m1}) }{B_{m}(r_{m4}-r_{\pm})-A_{m}(r_{\pm}-r_{m1})}\,\quad\beta^{B}=\frac{A_{m}-B_{m}}{A_{m}+B_{m}} \tag{60}\]
\[\Upsilon_{r}^{B}=\cos^{-1}\left(\frac{B_{m}(r_{m4}-r)-A_{m}(r-r_{m1})}{B_{m}(r _{m4}-r)+A_{m}(r-r_{m1})}\right),\quad\Upsilon_{\tau_{m}}^{B}=\mbox{am}\left(X _{B}(\tau_{m})\left|k_{B}\right) \tag{61}\]
where am is the Jacobi amplitude function. The quantities \(\mathcal{I}_{\pm_{i}}^{B}\), \(\mathcal{I}_{1_{i}}^{B}\), \(\mathcal{I}_{2_{i}}^{B}\) are obtained by evaluating \(\mathcal{I}_{\pm}^{B}\), \(\mathcal{I}_{1}^{B}\), \(\mathcal{I}_{2}^{B}\) at the initial radius \(r=r_{i}\), so that \(I_{\pm}^{B}(0)=I_{1}^{B}(0)=I_{2}^{B}(0)=0\). Finally, \(R_{1}\) and \(R_{2}\) are integrals of the Jacobi elliptic cosine function,
\[R_{1}(\alpha;\phi|k)\equiv\int_{0}^{F(\phi|k)}\frac{du}{1+\alpha \mbox{cn}(u|k)}=\frac{1}{1-\alpha^{2}}\left[\Pi\Bigg{(}\frac{\alpha^{2}}{ \alpha^{2}-1};\phi\left|k\right.\right)-\alpha f(p_{\alpha},\phi,k)\right] \tag{62}\]
\[R_{2}(\alpha;\phi|k)\equiv\int_{0}^{F(\phi|k)}\frac{du}{[1+\alpha \mathrm{cn}(u|k)]^{2}}\] \[\qquad=\frac{1}{\alpha^{2}-1}\left[F\left(\phi|k\right)-\frac{ \alpha^{2}}{k+(1-k)\alpha^{2}}\left(E(\phi|k)-\frac{\alpha\sin(\phi)\sqrt{1-k \sin^{2}(\phi)}}{1+\alpha\cos(\phi)}\right)\right]\] \[\qquad\qquad+\frac{1}{k+(1-k)\alpha^{2}}\left(2k-\frac{\alpha^{2 }}{\alpha^{2}-1}\right)R_{1}(\alpha;\phi|k) \tag{63}\]
in which
\[f(p_{\alpha},\phi,k)=\frac{p_{\alpha}}{2}\ln\left(\frac{p_{\alpha}\sqrt{1-k \sin^{2}(\phi)}+\sin(\phi)}{p_{\alpha}\sqrt{1-k\sin^{2}(\phi)}-\sin(\phi)} \right)\,,\quad p_{\alpha}=\sqrt{\frac{\alpha^{2}-1}{k+(1-k)\alpha^{2}}} \tag{64}\]
In particular, for \(\alpha=\beta^{B},\ \beta_{\pm}^{B}\) one has \(-1<\alpha<1\), which ensures that the solutions are real-valued functions.
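SciPy does not expose the Legendre incomplete integral of the third kind directly, but \(R_{1}\) in (62) can be evaluated through the Carlson symmetric integral \(R_{J}\) (scipy.special.elliprj, SciPy \(\geq\) 1.8). The sketch below is restricted to the bound case \(-1<\alpha<1\), where \(p_{\alpha}\) is purely imaginary and the logarithm in (64) nevertheless yields a real result; the same parameter-convention caveat as above applies to \(k\).

```python
import numpy as np
from scipy.special import ellipkinc, elliprj

def legendre_pi(n, phi, m):
    """Incomplete elliptic integral of the third kind Pi(n; phi | m) via Carlson R_J."""
    s, c = np.sin(phi), np.cos(phi)
    return ellipkinc(phi, m) + (n/3.0)*s**3*elliprj(c**2, 1.0 - m*s**2, 1.0, 1.0 - n*s**2)

def R1(alpha, phi, k):
    """R_1(alpha; phi | k) of Eq. (62), assuming -1 < alpha < 1 and 0 < k < 1."""
    s = np.sin(phi)
    q = np.sqrt(1.0 - k*s**2)
    p = np.sqrt((alpha**2 - 1.0)/(k + (1.0 - k)*alpha**2) + 0j)  # purely imaginary here
    f = 0.5*p*np.log((p*q + s)/(p*q - s))                        # Eq. (64); real-valued
    n = alpha**2/(alpha**2 - 1.0)
    return (legendre_pi(n, phi, k) - alpha*f.real) / (1.0 - alpha**2)
```

With this building block, (57)-(59) and the corresponding \(R_{2}\) of (63) follow by direct transcription.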
We have applied the exact solution to the parameter set C of Fig. 4, with \(\lambda_{m}=1\), \(\eta_{m}=7\), and \(\gamma_{m}=0.98\); the result is shown in Fig. 5. The black hole parameters are \(a=0.7\) and \(Q=0.7\). The particle starts from the initial position \(r_{i}=7.4M\), \(\theta_{i}=\pi/2\), and \(\phi_{i}=0\) and falls almost directly into the black hole.
From the above general formulas one obtains the equatorial case by setting \(\theta=\frac{\pi}{2}\) and \(\eta_{m}\to 0\). The bound plunge solutions for the coordinates \(\phi_{m}^{B}\) and \(t_{m}^{B}\) can then be rewritten as functions of \(r\) as follows
\[\phi_{m}^{B}\left(r\right)=I_{m\phi}^{B}\left(\tau_{m}\left(r \right)\right)+\lambda_{m}\tau_{m}^{B}\left(r\right)+\phi_{mi}^{B}\] \[=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}}\left[\frac{2Ma}{r_{+ }-r_{-}}\left(\mathcal{J}_{m+}-\mathcal{J}_{m-}\right)-\frac{\lambda_{m}}{ \gamma_{m}}f\left(r\right)\right]\;, \tag{65}\]
\[t_{m}^{B}\left(r\right)=I_{mt}^{B}\left(\tau_{m}\right)+t_{mi}^ {B}\] \[=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}}\left\{\frac{4M^{2}}{ r_{+}-r_{-}}\left(\mathcal{T}_{m+}-\mathcal{T}_{m-}\right)+\frac{B_{m}r_{m4}-A_{m}r _{m1}}{B_{m}-A_{m}}\left(\frac{B_{m}r_{m4}-A_{m}r_{m1}}{B_{m}-A_{m}}+M\right)f \left(r\right)\right.\] \[+\frac{2(r_{m4}-r_{m1})\sqrt{A_{m}B_{m}}}{A_{m}^{2}-B_{m}^{2}} \left[2\left(\frac{B_{m}r_{m4}-A_{m}r_{m1}}{B_{m}-A_{m}}\right)+M\right]R_{1} \left(\beta^{B};\varphi(r)|k^{B}\right)\] \[\left.+4\sqrt{A_{m}B_{m}}\left[\frac{(r_{m4}-r_{m1})\sqrt{A_{m}B_{ m}}}{A_{m}^{2}-B_{m}^{2}}\right]^{2}R_{2}\left(\beta^{B};\varphi(r)|k^{B} \right)+\left(4M^{2}-Q^{2}\right)f\left(r\right)\right\}\;, \tag{66}\]
where
\[\mathcal{T}_{m\pm}=\left(r_{\pm}-\frac{Q^{2}}{2M}\right)\mathcal{J}_{m\pm} \tag{67}\]
\[f\left(r\right)=\frac{1}{\sqrt{A_{m}B_{m}}}F\left(\varphi(r)|k^{B}\right) \tag{69}\]
Fig. 6 shows an exemplary orbit of this type.
The above expressions can also be converted into the solutions for the Kerr and RN black holes by taking the limits \(Q\to 0\) and \(a\to 0\), respectively. In the Kerr black hole, straightforwardly substituting \(Q=0\) and the root \(r_{m1}=0\) into the definitions of \(k^{B}\) and \(B_{m}\) in (53) and (54), as well as into (65) and (66), gives the solutions. In the RN black hole, on the other hand, the
Figure 5: Illustration of an orbit off the equatorial plane with the parameters of C in Fig. 4. In this case the particle starts from \(r_{i}<r_{m4}\) and plunges directly into the black hole horizon. See the text for more details.
limit \(a\to 0\) with \(r_{m1}\neq 0\) greatly simplifies (65), which becomes
\[\phi_{m}^{B}\left(r\right)=-\frac{\lambda_{m}}{\sqrt{(1-\gamma_{m}^{2})A_{m}B_{m }}}F\Bigg{(}\cos^{-1}\left(\frac{B_{m}(r_{m4}-r)-A_{m}(r-r_{m1})}{B_{m}(r_{m4}- r)+A_{m}(r-r_{m1})}\right)\Bigg{|}k^{B}\Bigg{)} \tag{70}\]
whereas the solution for \(t_{m}^{B}\) retains the same form as in (66) in the corresponding limits. In the Schwarzschild black hole, where \(a,Q\to 0\), the two horizons become \(r_{+}=2M\) and \(r_{-}=0\), giving \(\mathcal{T}_{m-}\to 0\); together with \(r_{m1}=0\), this leads to further simplification of (70) and (66).
## V Plunging orbits in unbound motion
For unbound motion, the particle may start from spatial infinity, characterized by the constants of motion: the azimuthal angular momentum \(\lambda_{m}\), the energy \(\gamma_{m}\), and the Carter constant \(\eta_{m}\). In this section we consider parameters mainly in the E regime shown in Fig. 7, in which the roots of the radial potential satisfy \(r_{m3}^{*}=r_{m4}\) and \(r_{i}>r_{+}>r_{-}>r_{m2}>r_{m1}\). This means that there is no turning point in the black hole exterior and a particle starting from spatial infinity will plunge directly into the black
Figure 6: Illustration of an orbit on the equatorial plane with the parameters of D in Fig. 4. The particle initiates its journey at \(r_{i}\), moves outward, reaches the turning point at \(r_{m4}\), and then reverses its course, plunging back into the black hole.
hole.
The main purpose here is again to derive the exact solutions for the coordinates \(r_{m}^{U}(\tau_{m})\), \(\theta_{m}^{U}(\tau_{m})\), \(\phi_{m}^{U}(\tau_{m})\), and \(t_{m}^{U}(\tau_{m})\) (we add the upper index \(U\) for the unbound case). Although the procedure is identical to that of the previous two sections, special care is needed because of the different structure of the roots. The counterpart of Eq. (51) is
\[\tau_{m}^{U}=-\frac{1}{\sqrt{(\gamma_{m}^{2}-1)A_{m}^{U}B_{m}^{U}}}\left[F \left(\psi(r)|k^{U}\right)-F\left(\psi(r_{i})|k^{U}\right)\right] \tag{71}\]
where
\[\psi(r)=\cos^{-1}\left(\frac{A_{m}^{U}(r-r_{m1})-B_{m}^{U}(r-r_{m2})}{A_{m}^{U }(r-r_{m1})+B_{m}^{U}(r-r_{m2})}\right)\,, \tag{72}\]
\[k^{U}=\frac{(A_{m}^{U}+B_{m}^{U})^{2}-(r_{m2}-r_{m1})^{2}}{4A_{m}^{U}B_{m}^{U} }\,, \tag{73}\]
and
\[A_{m}^{U}=\sqrt{(r_{m3}-r_{m2})(r_{m4}-r_{m2})}\,,\;B_{m}^{U}=\sqrt{(r_{m3}-r_ {m1})(r_{m4}-r_{m1})} \tag{74}\]
Figure 7: The graphic shows the portion of parameter space limited by the double-root solution, \(r_{m3}=r_{m4}\), with \(r_{m1}<r_{m2}<r_{-}<r_{+}\). For the region of parameter space containing E and F, \(r_{m3}\) and \(r_{m4}\) are complex, \(r_{m3}=r_{m4}^{*}\), and \(r_{m1}<r_{m2}<r_{-}\). The inset shows the details of the roots for the illustrative cases E and F in the main figure. See the text for more discussion.
Notice that \(A_{m}^{U}\) and \(B_{m}^{U}\) involve different combinations of roots from the bound case (54). The evolution of the coordinate \(r^{U}(\tau_{m})\) is then
\[r^{U}(\tau_{m})=\frac{(B_{m}^{U}r_{m2}-A_{m}^{U}r_{m1})+(B_{m}^{U}r_{m2}+A_{m}^{ U}r_{m1})\text{cn}\left(X^{U}(\tau_{m})\left|k^{U}\right)}{(B_{m}^{U}-A_{m}^{U}) \text{+}(B_{m}^{U}+A_{m}^{U})\text{cn}\left(X^{U}(\tau_{m})\left|k^{U}\right)} \tag{75}\]
where
\[X^{U}(\tau_{m})=\sqrt{\left(\gamma_{m}^{2}-1\right)A_{m}^{U}B_{m}^{U}}\tau_{m} -F\Bigg{(}\cos^{-1}\left(\frac{A_{m}^{U}(r_{i}-r_{m1})-B_{m}^{U}(r_{i}-r_{m2}) }{A_{m}^{U}(r_{i}-r_{m1})+B_{m}^{U}(r_{i}-r_{m2})}\right)\Bigg{|}k^{U}\Bigg{)} \tag{76}\]
Again the properties \(B_{m}^{U}>A_{m}^{U}>0\), \(0<k^{U}<1\), and, for \(r_{m1}<r_{m2}<r\), \(-1<\frac{A_{m}^{U}(r-r_{m1})-B_{m}^{U}(r-r_{m2})}{A_{m}^{U}(r-r_{m1})+B_{m}^{U}(r-r_{m2})}<1\), guarantee that the Jacobi elliptic cosine function in Eq. (75) is a real-valued function.
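The radial solution can be evaluated numerically exactly as in the bound case; the sketch below only changes the root combinations (73)-(74) and the sign of \(\gamma_{m}^{2}-1\), under the same convention assumptions for \(F\) and \(\mathrm{cn}\) as before.

```python
import numpy as np
from scipy.special import ellipkinc, ellipj

def r_plunge_unbound(tau, r_i, roots, gam):
    """Unbound plunge r^U(tau) of Eqs. (71)-(76); roots = (r1, r2, r3, r4), r3 = conj(r4)."""
    r1, r2, r3, r4 = roots
    A = np.sqrt((r3 - r2)*(r4 - r2)).real        # Eq. (74)
    B = np.sqrt((r3 - r1)*(r4 - r1)).real
    k = ((A + B)**2 - (r2 - r1)**2) / (4.0*A*B)  # Eq. (73)
    psi_i = np.arccos((A*(r_i - r1) - B*(r_i - r2)) / (A*(r_i - r1) + B*(r_i - r2)))
    X = np.sqrt((gam**2 - 1.0)*A*B)*tau - ellipkinc(psi_i, k)   # Eq. (76)
    sn, cn, dn, ph = ellipj(X, k)
    return ((B*r2 - A*r1) + (B*r2 + A*r1)*cn) / ((B - A) + (B + A)*cn)   # Eq. (75)
```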
The missing pieces for a complete description of the motion are the unbound versions of equations (25) and (26), in which the integrals (23) and (24) have been evaluated as in Sec. IV. We express the results as follows
\[I_{\pm}^{U}(\tau_{m})=-\frac{1}{B_{m}^{U}\left(r_{\pm}-r_{m2} \right)+A_{m}^{U}\left(r_{\pm}-r_{m1}\right)}\left[\frac{B_{m}^{U}+A_{m}^{U}} {\sqrt{A_{m}^{U}B_{m}^{U}}}X^{U}(\tau_{m})\right.\] \[\left.+\frac{2(r_{m2}-r_{m1})\sqrt{A_{m}^{U}B_{m}^{U}}}{B_{m}^{U} \left(r_{\pm}-r_{m2}\right)-A_{m}^{U}\left(r_{\pm}-r_{m1}\right)}R_{1}(\beta_ {\pm}^{U};\Upsilon_{\tau_{m}}^{U}|k^{U})\right]-\mathcal{I}_{\pm_{i}}^{U}\;, \tag{77}\]
\[I_{1}^{U}(\tau_{m})=\left(\frac{B_{m}^{U}r_{m2}+A_{m}^{U}r_{m1}} {B_{m}^{U}+A_{m}^{U}}\right)\frac{X^{U}(\tau_{m})}{\sqrt{A_{m}^{U}B_{m}^{U}}} +\frac{2(r_{m2}-r_{m1})\sqrt{A_{m}^{U}B_{m}^{U}}}{(B_{m}^{U})^{2}-(A_{m}^{U}) ^{2}}R_{1}(\beta^{U};\Upsilon_{\tau_{m}}^{U}|k^{U})-\mathcal{I}_{1_{i}}^{U} \tag{78}\]
\[I_{2}^{U}(\tau_{m})=\left(\frac{B_{m}^{U}r_{m2}+A_{m}^{U}r_{m1}} {B_{m}^{U}+A_{m}^{U}}\right)^{2}\frac{X^{U}(\tau_{m})}{\sqrt{A_{m}^{U}B_{m}^{ U}}}\] \[+4\left(\frac{B_{m}^{U}r_{m2}+A_{m}^{U}r_{m1}}{B_{m}^{U}+A_{m}^{U }}\right)\frac{(r_{m2}-r_{m1})\sqrt{A_{m}^{U}B_{m}^{U}}}{(B_{m}^{U})^{2}-(A_{ m}^{U})^{2}}R_{1}(\beta^{U};\Upsilon_{\tau_{m}}^{U}|k^{U})\] \[+\sqrt{A_{m}^{U}B_{m}^{U}}\left(\frac{2(r_{m2}-r_{m1})\sqrt{A_{m }^{U}B_{m}^{U}}}{(B_{m}^{U})^{2}-(A_{m}^{U})^{2}}\right)^{2}R_{2}(\beta^{U}; \Upsilon_{\tau_{m}}^{U}|k^{U})-\mathcal{I}_{2_{i}}^{U}\;. \tag{79}\]
where the functions \(R_{1}\) and \(R_{2}\) have been defined in (62) and (63) and the unbound version of the parameters now read as
\[\beta_{\pm}^{U}=\frac{B_{m}^{U}(r_{\pm}-r_{m2})+A_{m}^{U}(r_{\pm}-r_{m1})}{B_{ m}^{U}(r_{\pm}-r_{m2})-A_{m}^{U}(r_{\pm}-r_{m1})},\quad\beta^{U}=\frac{B_{m}^{U}+A_{m }^{U}}{B_{m}^{U}-A_{m}^{U}} \tag{80}\]
\[\Upsilon_{r}^{U}=\cos^{-1}\left(\frac{A_{m}^{U}(r-r_{m1})-B_{m}^{U}(r-r_{m2})}{A_{ m}^{U}(r-r_{m1})+B_{m}^{U}(r-r_{m2})}\right),\quad\Upsilon_{\tau_{m}}^{U}=\text{am} \left(X^{U}(\tau_{m})\left|k^{U}\right.\right) \tag{81}\]
As before, the initial conditions \(\mathcal{I}_{\pm_{i}}^{U}\), \(\mathcal{I}_{1_{i}}^{U}\), \(\mathcal{I}_{2_{i}}^{U}\) are obtained by evaluating \(\mathcal{I}_{\pm}^{U}\), \(\mathcal{I}_{1}^{U}\), \(\mathcal{I}_{2}^{U}\) at the initial radius \(r=r_{i}\). Also, for \(\alpha=\beta^{U},\beta_{\pm}^{U}\) in the definition of the \(R_{1}\) and \(R_{2}\) functions (62), the solutions remain real-valued functions. Fig. 8 illustrates the orbit with the parameters of E in Fig. 7.
\[t_{m}^{U}\left(r\right)=I_{mt}^{U}\left(\tau_{m}\right)+t_{mi}^{U}\] \[=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}}\left\{\frac{4M^{2}}{r_ {+}-r_{-}}\left(\mathcal{V}_{m-}-\mathcal{V}_{m+}\right)+\frac{B_{m}^{U}r_{m2} +A_{m}^{U}r_{m1}}{B_{m}^{U}+A_{m}^{U}}\left(\frac{B_{m}^{U}r_{m2}+A_{m}^{U}r_{ m1}}{B_{m}^{U}+A_{m}^{U}}+M\right)g\left(r\right)\right.\] \[+\left.\frac{2(r_{m2}-r_{m1})\sqrt{A_{m}^{U}B_{m}^{U}}}{(B_{m}^{U })^{2}-(A_{m}^{U})^{2}}\left[2\left(\frac{B_{m}^{U}r_{m2}+A_{m}^{U}r_{m1}}{B_ {m}^{U}+A_{m}^{U}}\right)+M\right]R_{1}\left(\beta^{U};\psi(r)|k^{U}\right)\right.\] \[\left.+4\sqrt{A_{m}^{U}B_{m}^{U}}\left[\frac{(r_{m2}-r_{m1}) \sqrt{A_{m}^{U}B_{m}^{U}}}{(B_{m}^{U})^{2}-(A_{m}^{U})^{2}}\right]^{2}R_{2} \left(\beta^{U};\psi(r)|k^{U}\right)+\left(4M^{2}-Q^{2}\right)g\left(r\right)\right\} \tag{83}\]
where
\[\mathcal{V}_{m\pm}=\left(r_{\pm}-\frac{Q^{2}}{2M}\right)\mathcal{K}_{m\pm} \tag{84}\]
\[\mathcal{K}_{m\pm} =\left(r_{\pm}-\frac{a\left(\frac{\lambda_{m}}{\gamma_{m}}\right) +Q^{2}}{2M}\right)\left\{\frac{(B_{m}^{U}+A_{m}^{U})g\left(r\right)}{B_{m}^{ U}(r_{\pm}-r_{m2})+A_{m}^{U}(r_{\pm}-r_{m1})}\right.\] \[\left.+\frac{2(r_{m2}-r_{m1})\sqrt{A_{m}^{U}B_{m}^{U}}}{\left[B_{ m}^{U}(r_{\pm}-r_{m2})\right]^{2}-\left[A_{m}^{U}(r_{\pm}-r_{m1})\right]^{2}}R_{1} \left(\beta_{\pm}^{U};\psi(r)|k^{U}\right)\right\} \tag{85}\]
\[g\left(r\right)=\frac{1}{\sqrt{A_{m}^{U}B_{m}^{U}}}F\left(\psi(r)|k^{U}\right) \tag{86}\]
Fig. 9 shows an example with the parameters of F in Fig. 7.
Figure 9: Illustration of an equatorial inspiral orbit with the parameters in the F region of Fig. 7, for \(\eta_{m}=0\), where the particle starts from spatial infinity and inspirals directly into the horizon.
In the Kerr black hole, for \(Q\to 0\), the solutions are obtained by setting \(r_{m1}=0\) in (82) and (83). In the RN black hole, for \(a\to 0\) but \(r_{m1}\neq 0\), (82) can be significantly simplified to
\[\phi_{m}^{U}\left(r\right)=-\frac{\lambda_{m}}{\sqrt{(\gamma_{m}^{2}-1)A_{m}^{ U}B_{m}^{U}}}F\Bigg{(}\cos^{-1}\left(\frac{A_{m}^{U}(r-r_{m1})-B_{m}^{U}(r-r_{m2}) }{A_{m}^{U}(r-r_{m1})+B_{m}^{U}(r-r_{m2})}\right)\Bigg{|}k^{U}\Bigg{)} \tag{87}\]
In the Schwarzschild black hole with \(a,Q\to 0\), the root \(r_{m1}=0\) has to be applied to the RN result above. Nevertheless, the solution for \(t_{m}^{U}\) in the various black holes retains the same form as (83) after taking the proper limits.
## VI Conclusions
In this paper, we analytically derive the inspiral solutions for general nonequatorial orbits into Kerr-Newman black holes for both bound and unbound motion. The solutions can be written in terms of elliptic integrals and Jacobi elliptic functions that are manifestly real functions of the Mino time. Various limits have been taken to obtain the corresponding solutions in the Kerr, Reissner-Nordström, and Schwarzschild black holes. In the case of bound motion, we extend the studies of [12; 13] by considering a particle that starts from \(r\leq r_{\text{ISSO}}\) and then inspirals into the black hole, with the normalized energy \(\gamma_{m}\), azimuthal angular momentum \(\lambda_{m}\), and Carter constant \(\eta_{m}\) in Fig. 1 for which the radial potential has a triple root. In the limit \(Q\to 0\) and restricting to the equatorial plane, the obtained solution reduces to the one obtained in [12; 13]. We also consider the other type of inspiral motion, with the values of \(\gamma_{m},\lambda_{m},\eta_{m}\) in Fig. 4, where the radial potential has two real roots, one inside the horizon, \(r_{m1}\), and the other outside the horizon, \(r_{m4}\). The particle then starts from \(r\leq r_{m4}\) and plunges directly into the black hole. As for the unbound states, with the values of \(\gamma_{m},\lambda_{m},\eta_{m}\) shown in Fig. 7, the two real roots \(r_{m2},r_{m1}\) lie inside the horizon. The particle starts from spatial infinity and spirals directly into the black hole.
These exact solutions of the spiral motion into the black hole are of astrophysical interest because of their direct relevance to black hole accretion phenomena: there can be significant X-ray emission and other observational signals, such as gravitational waves, from matter flowing inward. These explicit solutions may find applications in numerical accretion studies and in the computation of the generated gravitational waveforms, as well as in extending current theories of black hole accretion [17; 18; 19].
Appendix A The angular potential \(\Theta(\theta)\) and the integrals \(G_{m\theta}\), \(G_{m\phi}\), and \(G_{mt}\)
The detailed studies related to the \(\Theta_{m}\) potential in the \(\theta\) direction can be found in [23; 15]. Here we summarize some of the relevant parts for completeness of the presentation. The angular potential (12) for the particle can be rewritten in terms of \(u=\cos^{2}\theta\), and the equation of motion requires \(\Theta_{m}\geq 0\), which restricts the parameter space of \(\lambda_{m}\), \(\eta_{m}\), and \(\gamma_{m}\) (see Fig. 9 in [15]). The roots of \(\Theta_{m}(\theta)=0\) can be written as [23],
\[u_{m,\pm}=\frac{\Delta_{m,\theta}\pm\sqrt{\Delta_{m,\theta}^{2}+\frac{4\,a^{2 }\,\eta_{m}}{\gamma_{m}^{2}-1}}}{2a^{2}}\,,\ \ \Delta_{m\theta}=a^{2}-\frac{\eta_{m}+\lambda_{m}^{2}}{\gamma_{m}^{2}-1}\,, \tag{13}\]
which give the boundaries of the parameter space. For \(\eta_{m}>0\) and nonzero \(\lambda_{m}\), corresponding to motion that starts off from the black hole exterior, \(1>u_{+}>0\) is the only positive root, which in turn gives the two turning angles \(\theta_{m+}=\cos^{-1}\left(-\sqrt{u_{+}}\right)\) and \(\theta_{m-}=\cos^{-1}\left(\sqrt{u_{+}}\right)\). The particle travels between the southern and northern hemispheres, crossing the equator at \(\theta=\frac{\pi}{2}\).
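A short sketch of these polar turning points is given below; it simply transcribes the expressions for \(u_{m,\pm}\) and the turning angles, assuming \(\eta_{m}>0\) and parameters for which both square roots are real.

```python
import numpy as np

def polar_turning_points(a, gam, lam, eta):
    """Roots u_{m,+/-} of Theta_m = 0 and the turning angles for eta_m > 0."""
    Dth = a**2 - (eta + lam**2)/(gam**2 - 1.0)
    root = np.sqrt(Dth**2 + 4.0*a**2*eta/(gam**2 - 1.0))
    u_plus, u_minus = (Dth + root)/(2.0*a**2), (Dth - root)/(2.0*a**2)
    theta_plus = np.arccos(-np.sqrt(u_plus))   # southern turning point
    theta_minus = np.arccos(np.sqrt(u_plus))   # northern turning point
    return u_plus, u_minus, theta_plus, theta_minus
```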
The solution of the coordinate \(\theta_{m}(\tau_{m})\) can be obtained by an inversion of (14) [23; 15]
\[\theta(\tau_{m})=\cos^{-1}\left(-\nu_{\theta_{i}}\sqrt{u_{m+}}\text{sn}\left( \sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1\right)}\left(\tau_{m}+\nu_{\theta_{i} }\mathcal{G}_{m\theta_{i}}\right)\left|\frac{u_{m+}}{u_{m-}}\right)\right) \tag{14}\]
where the Mino time is
\[\tau_{m}=G_{m\theta}=p(\mathcal{G}_{m\theta_{+}}-\mathcal{G}_{m\theta_{-}})+ \nu_{\theta_{i}}\left[(-1)^{p}\mathcal{G}_{m\theta}-\mathcal{G}_{m\theta_{i}}\right] \tag{15}\]
and sn is the Jacobi elliptic sine function. In (15), \(p\) counts the number of times the trajectory passes through the turning points, and \(\nu_{\theta_{i}}=\text{sign}\left(\frac{d\theta_{i}}{d\tau^{\prime}}\right)\). The function \(\mathcal{G}_{m\theta}\) is
\[\mathcal{G}_{m\theta}=-\frac{1}{\sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1 \right)}}F\left(\sin^{-1}\left(\frac{\cos\theta}{\sqrt{u_{m+}}}\right)\left| \frac{u_{m+}}{u_{m-}}\right)\right.\,. \tag{16}\]
The evolution of the coordinates \(\phi_{m}(\tau_{m})\) and \(t_{m}(\tau_{m})\) in (15) and (16) involves the integrals (21) and (22), which can be expressed as follows [15]
\[G_{m\phi}(\tau_{m})=\frac{1}{\sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1\right)} }\Pi\left(u_{m+};\text{am}\left(\sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1 \right)}\left(\tau_{m}+\nu_{\theta_{i}}\mathcal{G}_{\theta_{i}}\right)\left| \frac{u_{m+}}{u_{m-}}\right)\left|\frac{u_{m+}}{u_{m-}}\right)-\nu_{\theta_{i }}\mathcal{G}_{m\phi_{i}}\,, \tag{17}\]
\[\mathcal{G}_{\phi_{i}}=-\frac{1}{\sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1 \right)}}\Pi\left(u_{m+};\sin^{-1}\left(\frac{\cos\theta_{i}}{\sqrt{u_{m+}}} \right)\left|\frac{u_{m+}}{u_{m-}}\right)\right.\,, \tag{18}\]
\[G_{mt}(\tau_{m})=-\frac{2u_{m+}}{\sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1\right)}} E^{\prime}\left(\text{am}\left(\sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1\right)} \left(\tau_{m}+\nu_{\theta_{i}}\mathcal{G}_{m\theta_{i}}\right)\left|\frac{u_{m +}}{u_{m-}}\right)\right)\left|\frac{u_{m+}}{u_{m-}}\right)-\nu_{\theta_{i}} \mathcal{G}_{mt_{i}}\,, \tag{10}\]
\[\mathcal{G}_{mt_{i}}=\frac{2u_{+}}{\sqrt{-u_{-}\hat{a}^{2}\left(\gamma_{m}^{2} -1\right)}}E^{\prime}\left(\text{sin}^{-1}\left(\frac{\cos\theta_{i}}{\sqrt{u _{+}}}\right)\left|\frac{u_{+}}{u_{-}}\right)\, \tag{11}\]
where \(E\) and \(\Pi\) are the incomplete elliptic integrals of the second and third kinds. Also, the prime denotes the derivative with respect to the second argument,
\[E^{\prime}\left(\varphi\left|k\right.\right)=\partial_{k}E\left(\varphi\left| k\right.\right)=\frac{E\left(\varphi\left|k\right.\right)-F\left(\varphi\left|k \right.\right)}{2k}\,. \tag{12}\]
## Appendix B The radial potential \(R_{m}(r)\) and its roots
As for the radial potential (11), it is a quartic polynomial. We then rewrite \(R_{m}(r)\) as follows
\[R_{m}(r)=S_{m}r^{4}+T_{m}r^{3}+U_{m}r^{2}+V_{m}r+W_{m}\,, \tag{13}\]
where the coefficient functions are given in terms of the constants of motion and the parameters of the black hole as
\[S_{m}=\gamma_{m}^{2}-1, \tag{14}\]
\[T_{m}=2M, \tag{15}\]
\[U_{m}=a^{2}\left(\gamma_{m}^{2}-1\right)-Q^{2}-\eta_{m}-\lambda_{m}^{2}, \tag{16}\]
\[V_{m}=2M\Big{[}(a\gamma_{m}-\lambda_{m})^{2}+\eta_{m}\Big{]}, \tag{17}\]
\[W_{m}=-a^{2}\eta_{m}-Q^{2}\Big{[}(a\gamma_{m}-\lambda_{m})^{2}+\eta_{m}\Big{]}\,. \tag{18}\]
Furthermore, it is useful to represent the radial potential using its roots, namely
\[R_{m}(r)=\left(\gamma_{m}^{2}-1\right)(r-r_{m1})(r-r_{m2})(r-r_{m3})(r-r_{m4} )\,. \tag{19}\]
The different dynamical behaviors of the system are characterized by the positions of these roots; see Figs. 1, 4, and 7, and also References [15; 23]. The roots of a quartic equation are well known, but cumbersome. We write them down for the sake of unifying notation and ensuring the completeness of the work:
\[r_{m1}=-\frac{M}{2\left(\gamma_{m}^{2}-1\right)}-z_{m}-\sqrt{-\,\frac{X_{m}}{ 2}-z_{m}^{2}+\frac{Y_{m}}{4z_{m}}}\,, \tag{20}\]
\[r_{m2} = -\frac{M}{2\left(\gamma_{m}^{2}-1\right)}-z_{m}+\sqrt{-\frac{X_{m}}{ 2}-z_{m}^{2}+\frac{Y_{m}}{4z_{m}}}\,, \tag{101}\] \[r_{m3} = -\frac{M}{2\left(\gamma_{m}^{2}-1\right)}+z_{m}-\sqrt{-\frac{X_{m }}{2}-z_{m}^{2}-\frac{Y_{m}}{4z_{m}}}\,,\] (102) \[r_{m4} = -\frac{M}{2\left(\gamma_{m}^{2}-1\right)}+z_{m}+\sqrt{-\frac{X_{m }}{2}-z_{m}^{2}-\frac{Y_{m}}{4z_{m}}}\,, \tag{103}\]
where
\[z_{m}=\sqrt{\frac{\Omega_{m+}+\Omega_{m-}-\frac{X_{m}}{3}}{2}}\,, \tag{104}\]
and
\[\Omega_{m\pm}=\sqrt[3]{-\frac{\varkappa_{m}}{2}\pm\sqrt{\left(\frac{\varpi_{ m}}{3}\right)^{3}+\left(\frac{\varkappa_{m}}{2}\right)^{2}}} \tag{105}\]
with
\[\varpi_{m}=-\,\frac{X_{m}^{2}}{12}-Z_{m}\,,\qquad\varkappa_{m}=-\,\frac{X_{m }}{3}\left[\left(\frac{X_{m}}{6}\right)^{2}-Z_{m}\right]-\,\frac{Y_{m}^{2}}{ 8}\,. \tag{106}\]
\(X_{m}\), \(Y_{m}\), and \(Z_{m}\) are the short notation for
\[X_{m} = \frac{8U_{m}S_{m}-3T_{m}^{2}}{8S_{m}^{2}}\,, \tag{107}\] \[Y_{m} = \frac{T_{m}^{3}-4U_{m}T_{m}S_{m}+8V_{m}S_{m}^{2}}{8S_{m}^{3}}\,,\] (108) \[Z_{m} = \frac{-3T_{m}^{4}+256W_{m}S_{m}^{3}-64V_{m}T_{m}S_{m}^{2}+16U_{m} T_{m}^{2}S_{m}}{256S_{m}^{4}}\,. \tag{109}\]
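As a practical cross-check of these closed-form expressions, the four roots can also be obtained numerically from the coefficient functions \(S_{m},T_{m},U_{m},V_{m},W_{m}\); a sketch is given below, with the ordering of the (possibly complex) roots left to numpy.

```python
import numpy as np

def radial_roots(M, a, Q, gam, lam, eta):
    """Numerical roots r_{m1..m4} of the quartic R_m(r), from its coefficients."""
    S = gam**2 - 1.0
    T = 2.0*M
    U = a**2*(gam**2 - 1.0) - Q**2 - eta - lam**2
    V = 2.0*M*((a*gam - lam)**2 + eta)
    W = -a**2*eta - Q**2*((a*gam - lam)**2 + eta)
    return np.sort_complex(np.roots([S, T, U, V, W]))
```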
###### Acknowledgements.
This work was supported in part by the National Science and Technology council (NSTC) of Taiwan, Republic of China.
|
2309.13601 | FaceGemma: Enhancing Image Captioning with Facial Attributes for
Portrait Images | Automated image caption generation is essential for improving the
accessibility and understanding of visual content. In this study, we introduce
FaceGemma, a model that accurately describes facial attributes such as
emotions, expressions, and features. Using FaceAttdb data, we generated
descriptions for 2000 faces with the Llama 3 - 70B model and fine-tuned the
PaliGemma model with these descriptions. Based on the attributes and captions
supplied in FaceAttDB, we created a new description dataset where each
description perfectly depicts the human-annotated attributes, including key
features like attractiveness, full lips, big nose, blond hair, brown hair,
bushy eyebrows, eyeglasses, male, smile, and youth. This detailed approach
ensures that the generated descriptions are closely aligned with the nuanced
visual details present in the images. Our FaceGemma model leverages an
innovative approach to image captioning by using annotated attributes,
human-annotated captions, and prompt engineering to produce high-quality facial
descriptions. Our method significantly improved caption quality, achieving an
average BLEU-1 score of 0.364 and a METEOR score of 0.355. These metrics
demonstrate the effectiveness of incorporating facial attributes into image
captioning, providing more accurate and descriptive captions for portrait
images. | Naimul Haque, Iffat Labiba, Sadia Akter | 2023-09-24T10:30:22Z | http://arxiv.org/abs/2309.13601v2 | # Face-Att: Enhancing Image Captioning with Facial Attributes for Portrait Images
###### Abstract
Automated image caption generation is a critical area of research that enhances accessibility and understanding of visual content for diverse audiences. In this study, we propose the Face-Att model, a novel approach to attribute-focused image captioning that emphasizes the accurate depiction of facial attributes within images. Face-Att automatically detects and describes a wide range of attributes, including emotions, expressions, pointed noses, white skin tones, hair textures, attractiveness, and approximate age ranges. Leveraging deep learning techniques, we explore the impact of different image feature extraction methods on caption quality and evaluate our model's performance using metrics such as BLEU and METEOR. Our Face-Att model leverages annotated attributes of portraits as supplementary prior knowledge for our portrait images before captioning. This innovative addition yields a subtle yet discernible enhancement in the resulting scores, exemplifying the potency of incorporating additional attribute vectors during training. Furthermore, our research contributes to the broader discourse on ethical considerations in automated captioning. This study sets the stage for future research in refining attribute-focused captioning techniques, with a focus on enhancing linguistic coherence, addressing biases, and accommodating diverse user needs.
**Keywords: Image captioning, Facial attributes, Portrait images, Computer vision, Natural language processing, Deep neural networks, VGG-Face model, ResNet50 model, InceptionV3, LSTM model, Face-Att, BLEU score, METEOR score, Linguistic coherence**
## 1 Introduction
Image captioning is a challenging interdisciplinary task, merging Computer Vision [1] and Natural Language Processing (NLP) [2] techniques. Its primary goal is to generate descriptive and contextually relevant captions for images using sophisticated Deep Learning [3] models. These models must extract meaningful visual features and understand the semantic context to produce informative and coherent human-readable captions. Image captioning has practical applications, such as aiding the visually impaired [4, 5] and enhancing human-computer interactions, making it a pivotal AI research area.
Significant advancements in automatic image captioning stem from Deep Neural Networks and large captioning datasets. These networks typically produce factual image descriptions. Recent research extends this by detecting emotional and relational aspects in images and enriching captions with emotive features for more engaging descriptions. In summary, image captioning is essential for advancing AI technology and facilitating seamless human-machine interactions by bridging the gap between visual content and human language.
In the realm of modern image captioning, an encoder-decoder paradigm [6, 7, 8, 9] is commonly adopted. This top-down approach begins with a Convolutional Neural Network (CNN) [10] that encodes the image content, followed by a Long Short-Term Memory (LSTM) [39] model that generates the image caption through decoding. After reviewing the current state of the art, we decided to develop our Facial Attribute Image Captioning Model, Face-Att, based on this paradigm.
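To make the paradigm concrete, a minimal PyTorch-style sketch of a CNN-encoder / LSTM-decoder captioner is shown below. It is illustrative only: the ResNet50 backbone, the layer sizes, the 40-dimensional attribute vector, and the fusion of attributes into the initial hidden state are placeholder assumptions, not the exact Face-Att architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionDecoder(nn.Module):
    """Image features (+ facial-attribute vector) -> LSTM word decoder."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, feat_dim=2048, attr_dim=40):
        super().__init__()
        self.init_h = nn.Linear(feat_dim + attr_dim, hidden_dim)  # fuse image + attributes
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, attrs, captions):
        h0 = torch.tanh(self.init_h(torch.cat([feats, attrs], dim=1))).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        out, _ = self.lstm(self.embed(captions), (h0, c0))
        return self.fc(out)                                       # per-step word logits

# CNN encoder with the classification head removed (placeholder backbone choice)
encoder = nn.Sequential(*list(models.resnet50(weights=None).children())[:-1], nn.Flatten())
```

Training would then minimize the cross-entropy between these logits and the ground-truth caption tokens, as is standard for this paradigm.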
Several image captioning systems have addressed facial expressions and emotions, but facial attribute captioning goes further by capturing and describing specific physical characteristics beyond emotions. To the best of our knowledge, despite the presence of several facial attribute recognition models, there is no image captioning system that can generate captions based on the attributes of a subject's face in an image. This is what inspired us to build a model of Image Captioning with Facial Attributes, named Face-Att.
Our suggested method, called Face-Att, intends to automatically recognize and describe a wide variety of facial attributes in these images by utilizing the capabilities of Computer Vision and Natural Language Processing. Here are the key points of our contribution:
1. Creation of a comprehensive dataset, FaceAttDB.
2.
5. **Performance Evaluation**: Assessment of the Face-Att model's efficacy.
6. **Linguistic Analysis**: In-depth linguistic examination of generated captions.
## 2 Related work
Image captioning for facial images is a burgeoning field at the crossroads of Computer Vision and Natural Language Processing (NLP), offering opportunities for improved
human-computer interaction, emotional context interpretation, and assistive technologies. While research on image captioning with facial attributes is limited, related work exists in general image captioning, facial expression analysis, and face recognition. This review highlights key contributions in these areas.
Deep Face Recognition [17]: Parkhi et al. (2015) introduced a deep neural network architecture for face recognition, handling variations in lighting and pose. However, it requires extensive training data and computational resources. SENTI-ATTEND [18]: Nezami et al. (2018) integrated sentiment analysis into image captioning, generating emotionally meaningful captions using attention mechanisms.
GroupCap [19]: Fuhai Chen et al. (2018) proposed a framework for group-based image captioning, considering relationships among images to enhance caption coherence and diversity. Face-Cap [20]: Nezami et al. (2019) introduced Face-Cap, combining facial expression analysis with caption generation, outperforming traditional methods.
Facial Expression Sentence (FES) Generation [21]: Hong et al. (2019) associated facial action units with natural language descriptions, capturing facial expressions in image captions. Image Captioning using Facial Expression and Attention [22]: In 2020, authors presented FACE-CAP and FACE-ATTEND, integrating facial expression features into caption generation for emotionally resonant captions.
Facial Recognition for Identity Cards [23]: Usgan et al. (2020) improved facial recognition for electronic identity cards, enhancing identification accuracy. Entity-Aware News Image Captioning [24]: Tran et al. (2020) integrated named entity recognition with transformer-based caption generation for more informative descriptions.
BORNON [25]: In 2021, a Transformer-based approach improved Bengali image captioning, contributing to non-English language image captioning. Emotion-Based Caption Generation [26]: Priya S et al. (2022) captured emotions in images using CSPDenseNet and BiLSTM with self-attention, enriching captions with emotional cues.
Explaining Emotional Attitude [27]: Bisikalo et al. (2022) explored deep learning models' ability to comprehend emotional attitudes, using a novel dataset for emotion annotation. Object-Centric Unsupervised Image Captioning [28]: In 2022, an unsupervised approach focused on objects within images to generate contextually relevant captions.
General Facial Representation Learning [29]: A 2022 study integrated visual and linguistic cues for comprehensive facial representation, aiding facial attribute prediction and emotion recognition. Fair Contrastive Learning for Facial Attribute Classification [30]: Park et al. (2022) addressed biased learning in facial attribute classification, achieving fairer results. EDUVI [31]: In 2023, EDUVI combined Visual Question Answering and Image Captioning to enhance primary-level education through interactive and informative mechanisms.
While these studies contribute significantly to image captioning, a notable gap exists in the exploration of facial attributes in portrait image captioning, presenting an opportunity for future research, such as the proposed Face-Att model.
## 3 Dataset
Our dataset comprises 2,000 curated portrait images, sourced from the CelebA dataset [32], which boasts over 200,000 celebrity images with 10,177 unique identities and various facial attributes. CelebA supports numerous computer vision tasks. Each image in our dataset is paired with five English and five Bangla captions.
The primary goal behind our dataset creation is to advance the field of facial attribute captioning, specifically focusing on portrait images and enabling multilingual caption generation. For ease of use in training and evaluation, all images in the BiLingualFaceCaption dataset are stored in a single folder. Additionally, to establish a clear association between each image and its respective captions, we have prepared an accompanying Excel sheet. This sheet contains the filenames of each image along with their corresponding Bangla and English captions.
The captions in our dataset are generated based on the attribute annotations available in the CelebA dataset. CelebA provides a rich set of attribute labels covering various aspects such as age, gender, expression, hair color, nose shape, and skin complexion. Leveraging these attributes, we have crafted captions that accurately describe the visual characteristics of the portrait images.
It is important to note that while the BiLingualFaceCaption dataset currently comprises 2,000 images, each with five captions, the size of the dataset may be expanded in future work to enhance the diversity and robustness of the models trained on it, ensuring its continued usefulness for advancing research in multilingual facial attribute captioning.
## 4 Pre-Preprocessing
In the pre-processing phase of our dataset, we divide the process into two parts:
**1.** Image Preprocessing
**2.** Caption Preprocessing.
### Image Preprocessing
Image preprocessing is vital to ensure our dataset's images are properly formatted for modeling tasks. We standardize sizes, adjust resolutions, and normalize colors to achieve uniformity, aiding feature extraction. We meticulously remove noise and artifacts for clarity. This sets the foundation for reliable model training.
We convert images from BGR to RGB for compatibility and consistent processing.
Precisely resizing images to model-specific dimensions optimizes compatibility.
Reshaping resized images into 3D tensors aligns with model expectations, facilitating feature extraction and precise caption generation. This ensures our images conform to model input formats, enabling accurate feature extraction and captioning.
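The steps above can be summarized in a short script. The following is a minimal sketch using OpenCV and NumPy; the 224×224 target size and the [0, 1] normalization are illustrative assumptions, since the exact input size depends on the feature extractor used later.

```python
import cv2
import numpy as np

def preprocess_image(path, target_size=(224, 224)):
    """Load an image, convert BGR to RGB, resize, normalize, and reshape to a
    3D tensor with a leading batch dimension."""
    img = cv2.imread(path)                       # OpenCV reads images in BGR order
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # convert to RGB for consistent processing
    img = cv2.resize(img, target_size)           # model-specific input dimensions
    img = img.astype("float32") / 255.0          # normalize pixel values
    return np.expand_dims(img, axis=0)           # shape: (1, H, W, 3)
```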
### Caption preprocessing
Figure 1: Sample facial image dataset from Large-scale CelebFaces Attributes (CelebA) Dataset [32]

In this section, we detail the crucial steps for seamlessly integrating textual data into our image captioning model. Our journey starts with the creation of linguistic dictionaries, one for English and one for Bangla captions. Each word is assigned a unique integer value, forming the basis for subsequent processing. Next, we delve into Tokenization, a process where sentences are meticulously divided into words and each word is mapped to an integer value. This prepares the textual data for model input. Padding comes next, ensuring consistent input dimensions by adding padding to tokenized sequences, preventing data loss, and maintaining uniformity in sequence length. Finally, we employ One-Hot Encoding to represent words as binary vectors. This step is pivotal, as it facilitates the input of words into the captioning model, enabling coherent and contextually relevant caption generation.
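A sketch of these caption-preprocessing steps with Keras utilities is shown below; the sample captions, the `startseq`/`endseq` markers, and the use of one tokenizer per language are illustrative assumptions.

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical

captions = ["startseq a smiling young woman with black hair endseq",
            "startseq a man with a pointed nose and white skin endseq"]

# Linguistic dictionary: each word is assigned a unique integer value.
tokenizer = Tokenizer()
tokenizer.fit_on_texts(captions)
vocab_size = len(tokenizer.word_index) + 1

# Tokenization: sentences are split into words mapped to integers.
sequences = tokenizer.texts_to_sequences(captions)

# Padding: enforce a uniform sequence length across all captions.
max_len = max(len(s) for s in sequences)
padded = pad_sequences(sequences, maxlen=max_len, padding="post")

# One-hot encoding: represent each word as a binary vector for the model.
one_hot = to_categorical(padded, num_classes=vocab_size)
```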
## 5 Proposed methodology
The proposed methodology of our Image Captioning System with Facial Attributes for Portrait Images unfolds through five essential stages:
### Image preprocessing
In this section, we prepare raw images for analysis by resizing them to specific dimensions, converting color spaces for compatibility, and ensuring uniformity in format. These steps lay the foundation for subsequent feature extraction.
### Image Feature Extraction
Here, we employ advanced techniques like convolutional neural networks (CNNs) and pre-trained models to extract meaningful visual features from images. These features capture essential information required for accurate image captioning. This section elucidates the extraction process using three distinct pre-trained deep learning models: VGGFace, ResNet50, and InceptionV3.
VGGFace [17] is harnessed to capture intricate facial features with its deep convolutional layers, enabling precise recognition and description of facial attributes.
Figure 2: Sample facial image dataset from Large-scale CelebFaces Attributes (CelebA) Dataset [32]
ResNet50 [35], known for its deep and residual neural networks, extracts complex and hierarchical visual features, enhancing the model's understanding of image content.
By utilizing InceptionV3 [38], we obtain a comprehensive view of the images through its multi-scale feature extraction, ensuring a holistic representation of visual information.
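As an illustration, the sketch below extracts ResNet50 features with Keras; the same pattern applies to InceptionV3 (`keras.applications.inception_v3`) and, via the third-party `keras_vggface` package, to VGGFace. The use of global average pooling, giving a 2,048-dimensional vector per image, is an assumption rather than a detail stated in the text.

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

# Pre-trained backbone without its classification head; global average pooling
# collapses the final feature map into one vector per image.
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: array of shape (N, 224, 224, 3), RGB, pixel values in [0, 255]."""
    x = preprocess_input(np.asarray(images, dtype="float32"))
    return extractor.predict(x)   # shape: (N, 2048)
```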
### Caption preprocessing
Incorporated into our methodology, caption pre-processing bridges the worlds of language and imagery. Starting with creating dictionaries for English and Bangla captions, words gain numerical identities. Tokenization transforms words into numbers, while padding ensures a uniform sequence length. One-hot encoding then readies the data for prediction.
### Our Learning Model:Face-Att
The Face-Att model is designed to seamlessly combine image features and textual information, leading to the generation of eloquent portrait captions. Our model architecture comprises three sequential sub-models: **The Image Model** (sequential), **The Language Model** (sequential_1), **The Multi-Modal Model** (model_1). These sub-models are interconnected and collectively contribute to the overall architecture of the Face-ATT model.
Figure 3: Image Feature Extraction at a glance

The Image Model begins with a **Dense** layer that accepts the image features as input. This layer uses the **ReLU** activation function to introduce non-linearity and capture important patterns in the image data. Acting as a bottleneck with 128 output units, it compresses the feature representation and accounts for 335,744 trainable parameters. The RepeatVector layer then replicates this output once per caption time step, preparing it for fusion.
The second sub-model, the Language Model (sequential_1), has a more complex structure. It is designed to process the caption's linguistic context. It starts with an **Embedding** layer to map input sequences into a dense 128-dimensional vector space. Next, a 256-unit **LSTM** layer processes the sequence, capturing temporal dependencies. A **Time-Distributed** layer adds dense transformations to each time step, contributing 32,896 trainable parameters. Overall, the sequential_1 sub-model contains 536,704 trainable parameters.
The final sub-model, model_1, connects the previous two sub-models. It takes two inputs: the output of the Image Model (sequential) and the output of the Language Model (sequential_1). These inputs are passed through their respective layers, and the outputs are then concatenated. The concatenated output is fed into two subsequent LSTM layers, one with 128 units and the other with 512 units. The model_1 sub-model has a total of 2,821,464 trainable parameters.
The overall Face-ATT model is a combination of these interconnected sub-models, designed to capture intricate facial features and temporal dependencies in the input data. With a total of 2,821,464 trainable parameters, the model can be trained to learn complex relationships and patterns in face-related data. It demonstrates a comprehensive architecture for facial analysis tasks, showcasing the capability of deep learning models in facial feature extraction and analysis.
Figure 4: The structure of our model Face-Att
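The description above translates into the following Keras sketch of Face-Att. The final softmax layer, the default vocabulary size, caption length, and image-feature dimension are assumptions not stated explicitly in the text (though the reported 335,744 parameters of the first Dense layer are consistent with a 2,622-dimensional input feature, since 128 × (2622 + 1) = 335,744); the loss and optimizer follow Section 6.

```python
from tensorflow.keras.layers import (Input, Dense, RepeatVector, Embedding,
                                     LSTM, TimeDistributed, Concatenate)
from tensorflow.keras.models import Model

def build_face_att(feature_dim=2622, vocab_size=8000, max_len=35):
    # Image branch: Dense bottleneck (128 units, ReLU) repeated once per time step.
    img_in = Input(shape=(feature_dim,))
    img_x = Dense(128, activation="relu")(img_in)
    img_x = RepeatVector(max_len)(img_x)

    # Language branch: Embedding -> LSTM(256) -> TimeDistributed Dense(128).
    cap_in = Input(shape=(max_len,))
    cap_x = Embedding(vocab_size, 128)(cap_in)
    cap_x = LSTM(256, return_sequences=True)(cap_x)
    cap_x = TimeDistributed(Dense(128))(cap_x)

    # Multi-modal fusion: concatenate, then stacked LSTMs of 128 and 512 units.
    x = Concatenate()([img_x, cap_x])
    x = LSTM(128, return_sequences=True)(x)
    x = LSTM(512)(x)
    out = Dense(vocab_size, activation="softmax")(x)  # next-word prediction (assumed)

    model = Model(inputs=[img_in, cap_in], outputs=out)
    model.compile(loss="categorical_crossentropy", optimizer="rmsprop",
                  metrics=["accuracy"])
    return model
```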
### Caption Prediction
In the final phase, we use the prepared images and pre-processed captions to predict and generate descriptive captions. The model's output is rigorously evaluated to assess the quality and relevance of the generated captions to the input images.
## 6 Training
The Face-ATT model was trained on Google Colaboratory with GPU support from NVIDIA Tesla K80 and Tesla T4 GPUs. Python 3, along with libraries such as NumPy, Pandas, and OpenCV, was used. The dataset contained 2000 images with 10,000 English and Bangla captions, with 1500 images and 7,500 captions used for training. The model employed Categorical Cross-Entropy loss and RMSprop optimization, with accuracy as the metric. Training analysis revealed ResNet50 as the optimal image feature extraction model, achieving 90.58 percent accuracy with a loss of 0.2131 over 200 epochs, suggesting approximately 150 epochs for practical training.
The number of epochs, varying between 100 and 200, signifies the total number of times the model iterates over the entire training dataset. Table 1 presents the number of epochs used to train the Face-Att model with each image feature extraction model.
For VGGFace and ResNet50 models, English captioning was trained for both 100 and 200 epochs, and Bangla captioning was trained for only 100 epochs, while only English captioning was trained for 100 epochs using InceptionV3.
\begin{table}
\begin{tabular}{|p{85.4pt}|p{85.4pt}|p{85.4pt}|} \hline
**Language of the Captions** & **Epochs** & **Image Feature Extraction Model** \\ \hline \multirow{3}{*}{English} & 100 & VGGFace \\ & 100 & ResNet50 \\ & 100 & InceptionV3 \\ \hline Bangla & 100 & VGGFace \\ & 100 & ResNet50 \\ \hline \end{tabular}
\end{table}
Table 1: Epochs for Face-Att Model with Different Image Feature Extraction Models
In this context, we provide a summarized overview of the training outcomes in Tables 3 and 4. These tables report the training loss and accuracy for each setting, highlighting the influence of the various image feature extraction techniques. However, it is important to note that our experimentation with Bangla captioning has been relatively limited compared to our more comprehensive study of English captioning.
Interestingly, our best performance emerged when employing the ResNet50 model for image feature extraction, achieving an accuracy of 90.58% and a correspondingly low loss of 0.2131 over 200 epochs. This underscores the impact of pairing ResNet50 with our model.
However, the determination of an ideal number of epochs remains pivotal, as an excessive number may risk overfitting, while an insufficient number might result in stagnant accuracy and loss trends. To address this pivotal query, we proceed to delve into a comprehensive analysis of training loss and accuracy graphs presented in Figure 1 and Figure 2.
The graphs indicate a noticeable increase in accuracy up to approximately 150 epochs when employing VGGFace as the basis for Image Feature Extraction, and around 100 epochs when using ResNet50. Similarly, the Loss plot reflects a decreasing trend until approximately 150 and 100 epochs for VGGFace and ResNet50 image feature extraction techniques, respectively. However, beyond these points, both accuracy and loss changes become marginal. Therefore, a practical choice for the epoch size could be around 150, optimizing time and resource utilization.
\begin{table}
\begin{tabular}{l c c c} \hline
**Feature Extraction Model** & **Epoch** & **Loss** & **Accuracy** \\ \hline VGGFace & 100 & 0.9509 & 67.73\% \\ \hline ResNet50 & 100 & **0.3216** & **87.53\%** \\ \hline \end{tabular}
\end{table}
Table 4: Training Loss and Accuracy for Bangla Captioning
\begin{table}
\begin{tabular}{l c c c} \hline
**Feature Extraction Model** & **Epoch** & **Loss** & **Accuracy** \\ \hline VGGFace & 100 & 0.5947 & 78.67\% \\ & 200 & 0.2623 & 89.29\% \\ \hline ResNet50 & 100 & 0.2701 & 89.38\% \\ & 200 & **0.2131** & **90.58\%** \\ \hline InceptionV3 & 100 & 0.5646 & 80.50\% \\ \hline \end{tabular}
\end{table}
Table 3: Training Loss and Accuracy for English Captioning
## 7 Results
We performed the evaluation of our model Face-Att's predicted captions using BLEU and METEOR scores across various experimental scenarios. The obtained scores for 100 epochs are presented in Table 5, and the scores for 200 epochs are outlined in Table 6.
**BLEU** BLEU (Bilingual Evaluation Understudy) [41] is a metric for evaluating machine-generated translations by measuring similarity between generated and reference text using n-gram overlap. The BLEU-n score is calculated as:
\[\text{BLEU}=\text{BP}\times\exp\left(\frac{1}{N}\sum_{n=1}^{N}\log p_{n}\right)\]
Where: BP is the Brevity Penalty, \(N\) is the maximum n-gram order (typically 4), and \(p_{n}\) is the precision of n-grams, calculated as:
\[p_{n}=\frac{\text{Matching n-grams}}{\text{Total n-grams}}\]
BP is calculated as:
\[BP=\begin{cases}1&\text{if }c>r\\ e^{(1-\frac{r}{c})}&\text{if }c\leq r\end{cases}\]
Figure 5: Sample facial image dataset from Large-scale CelebFaces Attributes (CelebA) Dataset [32]
BLEU ranges from 0 to 1, with higher scores indicating better translations. Different BLEU variants (BLEU-1, BLEU-2, BLEU-3, BLEU-4) consider different n-gram orders.
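The following self-contained sketch mirrors the BLEU-n and brevity-penalty formulas above for a single reference; the example sentences are illustrative only.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with one reference; inputs are lists of tokens."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        matching = sum(min(c, ref[g]) for g, c in cand.items())   # clipped n-gram matches
        precisions.append(matching / max(sum(cand.values()), 1))
    if min(precisions) == 0:          # log(0) is undefined; the score collapses to 0
        return 0.0
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)                    # brevity penalty
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

hyp = "a young woman with long black hair".split()
ref = "a young woman with black hair and a pointed nose".split()
print(round(bleu(hyp, ref), 3))
```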
**METEOR** METEOR (Metric for Evaluation of Translation with Explicit Ordering) [42] is another metric for translation quality. It considers precision, recall, and \(\alpha\) to balance them:
\[\text{METEOR}=\frac{(1-\alpha)\times\text{precision}\times\text{recall}}{(1- \alpha)\times\text{precision}+\alpha\times\text{recall}}\]
Where: precision and recall involve n-gram matching, and \(\alpha\) typically equals 0.9.
METEOR provides a comprehensive evaluation of translation quality, including word order and vocabulary differences.
## 8 Conclusion
In our research, we introduced the Face-Att model, designed for attribute-centric image caption generation, with a primary focus on highlighting facial features in images. Our work involved a thorough exploration of diverse image feature extraction techniques and training scenarios, leading to significant progress in our objective.
Our experimental results showcased the Face-Att model's effectiveness in generating attribute-focused captions in both English and Bangla. We evaluated the quality of these captions using widely recognized metrics like BLEU and METEOR, revealing the potential of our approach, although there's room for improvement.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Feature Extraction Model** & **BLEU-1** & **BLEU-2** & **BLEU-3** & **BLEU-4** & **METEOR** \\ \hline VGGFace & 0.354 & **0.087** & **0.024** & **0.006** & 0.274 \\ ResNet50 & **0.365** & 0.079 & 0.018 & 0.002 & **0.290** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results of Face-Att model over **200 epochs** (only for English Captioning)
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Language** & **Feature Extraction Model** & **BLEU-1** & **BLEU-2** & **BLEU-3** & **BLEU-4** & **METEOR** \\ \hline \multirow{3}{*}{English} & VGGFace & 0.334 & **0.085** & **0.025** & **0.006** & **0.267** \\ & ResNet50 & **0.341** & 0.072 & 0.018 & 0.003 & **0.267** \\ & InceptionV3 & 0.305 & 0.060 & 0.014 & 0.001 & 0.208 \\ \hline \multirow{3}{*}{Bangla} & VGGFace & 0.280 & 0.055 & 0.010 & 0.003 & 0.172 \\ & ResNet50 & 0.291 & 0.045 & 0.006 & N/A & 0.169 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of Face-Att model over **100 epochs**
A noteworthy observation was the considerable impact of the chosen image feature extraction method on model performance. Specifically, the ResNet50 model consistently outperformed others, aligning perfectly with our model's core aim of accurately representing facial attributes in captions.
Our research contributes to the field of image captioning by addressing the specific challenge of attribute-focused caption generation, holding promise for applications in accessibility, image indexing, and educational materials. It represents a significant step toward enhancing automated image description capabilities, particularly in scenarios where highlighting specific object attributes is crucial.
|
2309.03412 | From Base to Conversational: Japanese Instruction Dataset and Tuning
Large Language Models | Instruction tuning is essential for large language models (LLMs) to become
interactive. While many instruction tuning datasets exist in English, there is
a noticeable lack in other languages. Also, their effectiveness has not been
well verified in non-English languages. We construct a Japanese instruction
dataset by expanding and filtering existing datasets and apply the dataset to a
Japanese pre-trained base model. We performed Low-Rank Adaptation (LoRA) tuning
on both Japanese and English existing models using our instruction dataset. We
evaluated these models from both quantitative and qualitative perspectives. As
a result, the effectiveness of Japanese instruction datasets is confirmed. The
results also indicate that even with relatively small LLMs, performances in
downstream tasks would be improved through instruction tuning. Our instruction
dataset, tuned models, and implementation are publicly available online. | Masahiro Suzuki, Masanori Hirano, Hiroki Sakaji | 2023-09-07T00:14:37Z | http://arxiv.org/abs/2309.03412v2 | # From Base to Conversational: Japanese Instruction Dataset and Tuning Large Language Models
###### Abstract
Instruction tuning is essential for large language models (LLMs) to become interactive. While many instruction tuning datasets exist in English, there is a noticeable lack in other languages. Also, their effectiveness has not been well verified in non-English languages. We construct a Japanese instruction dataset by expanding and filtering existing datasets and apply the dataset to a Japanese pre-trained base model. We performed Low-Rank Adaptation (LoRA) tuning on both Japanese and English existing models using our instruction dataset. We evaluated these models from both quantitative and qualitative perspectives. As a result, the effectiveness of Japanese instruction datasets is confirmed. The results also indicate that even with relatively small LLMs, performances in downstream tasks would be improved through instruction tuning. Our instruction dataset, tuned models, and implementation are publicly available online.
Large Language Model (LLM), Instruction Dataset, Instruction Tuning, Japanese
## I Introduction
Large language models (LLMs) have been making remarkable progress in performance and generalization in recent years. Various Transformer-based [1] language models, such as BERT [2], RoBERTa [3], and the GPT series [4, 5, 6], have demonstrated high performance derived from pre-training. Furthermore, since 2022, a large number of models, such as OPT [7], GPT-NeoX-20B [8], UL2 [9], PaLM [10], BLOOM [11], Pythia [12], and LLaMA series [13, 14], have emerged as models that show higher performance by scaling their size [15].
Although there is still difficulty in few-shot or zero-shot performance on unseen tasks, instruction tuning can address this issue [16]. Instruction tuning is a training method that improves the performance in unseen tasks by solving various tasks described via natural language instructions [16]. Starting with the enhancement of performance in various tasks by GPT-3 [6] under a few-shot setting given in natural language, there has been an increasing demand for responses in formats that are closer to question-answering or conversation, especially formats that are not similar to the pre-training data.
An increasing number of datasets for instruction tuning and instruct-tuned models are being made available to the public. For instance, various datasets like FLAN [16], P3 [17], databricks-dolly-15k 1, and OASST1 [18] have been proposed and made public. As publicly available models, Flan-T5 [19] was constructed using FLAN and T0 was constructed using P3 respectively. Also, Dolly [20] is a model with instruction tuning applied to Pythia [12], while Vicuna [21] and Alpaca [22] are models with instruction tuning applied to LLaMA [13].
Footnote 1: [https://huggingface.co/datasets/databricks-dolly-15k](https://huggingface.co/datasets/databricks-dolly-15k)
However, these models are not fully compatible with languages other than English. The datasets used for instruction tuning in Dolly, Alpaca, and Vicuna are only in English, making it difficult to gain the benefits of these models in languages other than English. Many instruction datasets have been constructed in English, and there are not many efforts to construct instruction datasets in languages other than English. While there are movements to construct instruction datasets in Chinese [23], most instruction datasets in non-English languages are built from outputs of models with licensing restrictions, such as translations of the Alpaca dataset [22] or the ShareGPT52K 2 constructed from ChatGPT outputs. In languages other than English, the scarcity of comprehensive instruction datasets means that the verification of instruction tuning effects is limited [24]. In Japanese, only some data from translated Alpaca [22] and OASST1 [18] exists, and there is a lack of dataset diversity, with quantitative evaluations of instruction tuning yet to be conducted. While constructing and evaluating datasets in languages other than English is a crucial step towards building language models that can interact in various languages, it is still very much in its early stages.
Footnote 2: [https://huggingface.co/datasets/RyokoAI/ShareGPT52K](https://huggingface.co/datasets/RyokoAI/ShareGPT52K)
To tackle the issue of the lack of Japanese instruction dataset, the study [25] gathers various Japanese datasets to build an instruction dataset. While this dataset seems valuable, the effect of instruction tuning is only shown qualitatively and not quantitatively. Furthermore, the majority of this dataset consists of translation tasks. While it is considered that the translation tasks are effective when adapting English-based models to Japanese, these tasks may not be optimal for Japanese-based models. To apply the instruction dataset to a Japanese-based model, it is desirable to filter out the translation data and construct an instruction dataset consisting solely of Japanese.
We construct an instruction dataset consisting solely of Japanese for instruction tuning based on a Japanese model by filtering and expanding the Japanese instruction dataset [25].
The constructed dataset contains about 2.5 million samples covering 5 task types: commonsense, summarization, reading comprehension, simplification, and correction. Using this dataset, which contains various tasks, we perform instruction tuning on both Japanese-based and English-based LLMs. For Japanese-based models, we conduct tuning using an instruction dataset without translation data, while for English-based models, we use an instruction dataset that includes translation data. Through quantitative evaluation of the tuned models, we demonstrate that instruction tuning in Japanese improves the performance in downstream tasks, thereby illustrating the effectiveness of the Japanese instruction dataset. The following materials used in this study are available as open source.
* Japanese instruction dataset: [https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset-vanilla](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset-vanilla)
* Tuned model (Stormy 10 epochs): [https://huggingface.co/izumi-lab/stormy-7b-10ep](https://huggingface.co/izumi-lab/stormy-7b-10ep)
* Tuned model (LLaMA 7B 5 epochs): [https://huggingface.co/izumi-lab/llama-7b-japanese-lora-v0-5ep](https://huggingface.co/izumi-lab/llama-7b-japanese-lora-v0-5ep)
* Implementation for training and evaluation: [https://github.com/retarfi/jallm](https://github.com/retarfi/jallm)
Here are our main contributions: (1) We construct a Japanese instruction dataset, llm-japanese-dataset-vanilla, for Japanese-based models; (2) We clarify the benefits of instruction tuning for Japanese and English models by evaluating on Japanese downstream tasks; (3) Unlike previous research [16], we show that even with smaller model sizes, instruction tuning can lead to performance gains in downstream tasks.
## II Instruction Dataset
We construct a Japanese instruction dataset without translation tasks. We use the llm-japanese-dataset v0.1.0 [25] as the main data source for the Japanese instruction dataset and expand it with additional Japanese datasets. The llm-japanese-dataset v0.1.0 contains about 8.4 million instruction examples, of which more than 75 % (6,581,044) are constructed based on translation data. This dataset is intended to link English and Japanese and extract the knowledge learned in English for use in Japanese as well, considering that many LLMs like LLaMA show good performance in English. However, Japanese-based models are usually pre-trained with Japanese corpora, so the need for the English part of this dataset is relatively low, because that part is aimed at linking English and Japanese. Therefore, we extract 1,811,964 entries, excluding translation tasks, from the llm-japanese-dataset v0.1.0. Furthermore, to expand the variety of datasets, we incorporated the Japanese Wikipedia Typo Dataset (Wikipedia Typo) [26] and the Japanese Question-Answering Corpus (JQAC) [27]. From the Wikipedia Typo and JQAC, we newly created 697,316 and 906 instruction entries respectively. Additionally, we addressed licensing issues present in version v0.1.0, and ultimately constructed a total of 2,463,624 instruction data entries, releasing it as llm-japanese-dataset-vanilla v1.0.1 3. Figure 1 shows the datasets and task classifications included in llm-japanese-dataset-vanilla v1.0.1.
Footnote 3: [https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset-vanilla](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset-vanilla)
Footnote 2: Originally written in Japanese.
We use the instruction, input, and response fields included in llm-japanese-dataset-vanilla v0.1.0, following the format below; a small formatting helper is sketched after the two templates.
* Prompt format with input: Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
* ### Instruction: {Instruction}
* ### Input: {Input}
* ### Response: {Response} 2
* Prompt format with no input: Below is an instruction that describes a task. Write a response that appropriately completes the request.
* ### Instruction: {Instruction}
* ### Response: {Response} 2
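A minimal helper that renders these two templates is sketched below; the field names (`instruction`, `input`, `response`) and the handling of empty inputs are illustrative assumptions.

```python
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n{response}"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{response}"
)

def format_example(example):
    """Render one dataset record into the training prompt."""
    if example.get("input"):
        return PROMPT_WITH_INPUT.format(**example)
    return PROMPT_NO_INPUT.format(instruction=example["instruction"],
                                  response=example["response"])
```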
## III Instruction LoRA Tuning
We perform Low-Rank Adaptation (LoRA) tuning [28] on two publicly available LLMs. In this section, we describe the base model and the process of LoRA tuning.
### _Models_
We use two models: the Japanese-based OpenCALM-7B (hereafter CALM) and the English-based LLaMA 7B. CALM is a model with 7 billion parameters released by CyberAgent 3. It is pre-trained on Japanese Wikipedia and Common Crawl using the GPT-NeoX architecture [8]. For the English-based model, we use the 7B model of LLaMA [13] (hereafter LLaMA 7B), which is released by Meta 4. Although LLaMA is trained in English and is not specialized for Japanese, it is capable of Japanese input and output. Even for LLaMA, we attempt to obtain Japanese outputs by conducting instruction tuning and evaluation experiments using Japanese contexts.

Fig. 1: Datasets and task clusters used in llm-japanese-dataset-vanilla v1.0.1.
Footnote 3: [https://huggingface.co/cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b)
Footnote 4: Strictly speaking, although it was not initially open-source, it has become available under certain licenses
Due to the differences in characteristics between the Japanese-based CALM and the English-based LLaMA 7B, we use llm-japanese-dataset-vanilla, which we constructed above, for CALM and llm-japanese-dataset for LLaMA 7B as training data. For tuning CALM, we use version v0.1.0 as the training data, which excludes the JQAC and Wikipedia Typo datasets. This is done to align with the model constructed in the literature [25], ensuring dataset consistency with the exception of not including English.
We train LLaMA 7B on the entire llm-japanese-dataset v0.1.0, following the methods outlined in [25]. We adopt the same input format as described in [25]. From this point forward, the tuned CALM will be referred to as "Stormy," and the LLaMA 7B as "Instruct LLaMA 7B."
### _LoRA Tuning_
LLMs, having a large number of parameters, require substantial GPU resources not only for pre-training but also for fine-tuning. In this study, we use LoRA [28] as a method for tuning LLMs without significantly reducing accuracy. In LoRA, only the difference between the initial and updated LLM parameters, represented with a small number of parameters, is trained. Consider the parameter matrix \(W_{0}\in\mathbb{R}^{d\times k}\) of a linear layer of the LLM. Instead of training \(W_{0}\) directly, the difference \(\Delta W\in\mathbb{R}^{d\times k}\) is initialized as a zero matrix, only \(\Delta W\) is updated during training, and the effective parameters become \(W_{0}+\Delta W\). Here, we set \(\Delta W=BA\), where \(B\in\mathbb{R}^{d\times r}\) and \(A\in\mathbb{R}^{r\times k}\) are matrices of rank \(r\ll\min(d,k)\). This reduces the number of learnable parameters from \(dk\) to \((d+k)r\).
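A minimal PyTorch sketch of a LoRA-augmented linear layer following this decomposition is given below. The \(\alpha/r\) scaling and the Gaussian/zero initialization follow the original LoRA paper rather than this text, and the hyperparameter values are placeholders. For \(d=k=4096\) and \(r=8\), the trainable parameters drop from \(dk\approx 16.8\)M to \((d+k)r\approx 65\)K.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """y = x W0^T + (alpha/r) * x (BA)^T, with W0 frozen and only A, B trained."""
    def __init__(self, d_in, d_out, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad = False              # W0 is kept frozen
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # A in R^{r x k}
        self.B = nn.Parameter(torch.zeros(d_out, r))        # B in R^{d x r}; Delta W = BA = 0 at init
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)
```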
The primary parameters utilized in the experiment are shown in Table I. For comparison, we also include Instruct LLaMA 13B [25], which was LoRA-tuned with llm-japanese-dataset v0.1.0.
We used PEFT [29] and DeepSpeed ZeRO [30] for implementation. The code is available at [https://github.com/retarfi/jallm](https://github.com/retarfi/jallm).
## IV Evaluating Constructed Models
We evaluate the tuned models both quantitatively and qualitatively. Quantitatively, we evaluate from two angles. The first is accuracy derived from the likelihood of choices in the text classification tasks JNLI and MARC-ja, both taken from JGLUE [31]; further details are given in Section IV-A. The second is perplexity on question-answering data that is not included in the dataset constructed in this study. Qualitatively, we examine the outputs for several prompts. The temperature parameter for generation is 0.0, and the repetition penalty [32] is 1.05 for CALM and Stormy and 1.0 for Instruct LLaMA 7B and LLaMA 7B. We use 5 prompts as input to the models, the same as those used in the literature [25]. We also conduct evaluation experiments on LLaMA 13B and on Instruct LLaMA 13B, the instruction-tuned LLaMA 13B constructed in the study [25].
### _Accuracy_
Another evaluation is performed by JNLI and MARC-ja included in JGLUE [31]. JNLI is a task to choose the relationship that the premise sentence shows to the sentence pair of the hypothesis from three options: entailment, contradiction, and neutral. MARC-ja is a task to choose either "positive" or "negative" in Japanese for product reviews and is constructed using the Japanese part of the Multilingual Amazon Reviews Corpus (MARC) [33]. In addition to these, JGLUE includes JCommonsenseQA, which questions common sense, and JSQuAD, which is an extraction task. However, these data are included in the llm-japanese-dataset v0.1.0, which is used
for instruction LoRA tuning. Therefore, they were considered inappropriate as evaluation tasks and excluded.
For the implementation of the experiment, we use the Japanese evaluation branch 5 of Stability-AI/lm-evaluation-harness [34]. Aligning with lm-evaluation-harness, we use the prompt version that achieves the best performance. We adopt v0.2 for Stormy and v0.3 for the others, such as CALM, Instruct LLaMA 7B, LLaMA 7B, Instruct LLaMA 13B, and LLaMA 13B. Detailed prompts are described in the Appendix.
Footnote 5: [https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable)
For the input prompt, we compare the likelihood of the strings of each task's choices and take the highest one as the model's output. In JNLI, the three choices are entailment, contradiction, and neutral; in MARC-ja, the two choices are "positive" and "negative" in Japanese; the model outputs the choice with the highest likelihood. Therefore, outputs other than the choices are not considered. We evaluate in 1-shot, 2-shot, and 3-shot settings, which show one, two, or three examples in the input, respectively.
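A sketch of this likelihood comparison with HuggingFace Transformers follows; the model name is taken from the CALM footnote above, the prompt handling is simplified, and it assumes the tokenization of the prompt is a prefix of the tokenization of the prompt plus choice.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("cyberagent/open-calm-7b")
model = AutoModelForCausalLM.from_pretrained("cyberagent/open-calm-7b").eval()

def choice_logprob(prompt, choice):
    """Sum of log-probabilities of the choice tokens given the prompt."""
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    score = 0.0
    for pos in range(prompt_len, full_ids.shape[1]):
        score += log_probs[0, pos - 1, full_ids[0, pos]].item()   # token at pos is predicted at pos-1
    return score

def predict(prompt, choices):
    return max(choices, key=lambda c: choice_logprob(prompt, c))
```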
### _Perplexity_
Perplexity, as defined by [35], is the exponential of the average negative log-likelihood. The lower the value, the higher the probability that the words in the dataset are correctly output. Given a tokenized sequence \(X=(x_{0},x_{1},\cdots,x_{t})\), the perplexity of \(X\) is represented by Equation (1).
\[\mathrm{Perplexity}(X)=\exp\left\{-\frac{1}{t}\sum_{i}^{t}\log p_{\theta}(x_{ i}|x_{<i})\right\} \tag{1}\]
Here, \(\log p_{\theta}(x_{i}|x_{<i})\) is the log-likelihood of the \(i\)-th token given the preceding tokens \(x_{<i}\).
In this study, we measure perplexity using the Japanese Visual Question Answering (VQA) dataset [36], which is not included in the llm-japanese-dataset v0.1.0 used for tuning the language models. Although the VQA dataset is a question-answering task performed over presented images, we conjecture that a model assigning higher probability to the correct response sentences produces more natural output. We convert 793,664 question-answer pairs extracted from the VQA dataset into prompt format and input them. An example of the input is shown below.
* Write a response to answer the following question.
### Question:
What color is the airplane's body?
### Response:
White 2
It should be noted that the LLaMA-based model uses English for system messages and Japanese for the contexts of questions and responses. Therefore, following the literature [25], the above example is modified as follows.
* Write a response to answer the following question.
* ### Question:
* What color is the airplane's body? 2
### Response:
* White 2
The calculation of perplexity is not performed on the input to the model and is only applied to the response. In other words, in the above example, perplexity is calculated only for the token corresponding to the output "white."
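This response-only restriction can be implemented by masking the prompt tokens out of the loss. A sketch with HuggingFace Transformers is below; labels set to -100 are ignored, so the mean negative log-likelihood (and hence Equation (1)) is computed over the response tokens only. The model name is a placeholder, and the prefix-tokenization assumption from the previous sketch applies here as well.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("cyberagent/open-calm-7b")
model = AutoModelForCausalLM.from_pretrained("cyberagent/open-calm-7b").eval()

def response_perplexity(prompt, response):
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + response, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[0, :prompt_len] = -100            # ignore prompt tokens in the loss
    with torch.no_grad():
        loss = model(full_ids, labels=labels).loss   # mean NLL over response tokens
    return math.exp(loss.item())
```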
## V Results and Discussion
### _Quantitative Evaluation_
Table II shows the results of the evaluation experiments.
In the evaluation by JNLI, the accuracy of Stormy was the highest across the 1-shot, 2-shot, and 3-shot settings. Even though the llm-japanese-dataset v0.1.0 does not include a dataset equivalent to implication relation recognition (natural language inference), the performance seems to have been improved by solving various tasks, as in [16]. The improvement on tasks not present in the dataset through instruction tuning across various tasks aligns with the findings in the literature [16, 17], and confirms the value of constructing similar instruction datasets for languages other than English. The performance of Stormy and Instruct LLaMA 7B, obtained by instruction tuning CALM and LLaMA 7B respectively, improved, showing the effect of instruction tuning. However, the effect of instruction tuning in LLaMA 13B was relatively small. This is likely because instruction tuning for Instruct LLaMA 13B was performed for only one epoch. When comparing the two Instruct LLaMA models with different numbers of parameters, even though there was a difference in the number of training epochs, Instruct LLaMA 7B showed a stronger effect from instruction tuning. This is considered to be because the smaller model size facilitates more effective training. It has been reported that larger model sizes result in better performance on downstream tasks [7, 10, 13]. The performance of Instruct LLaMA 13B might improve with more training epochs.
In the evaluation by MARC-ja, there was no performance improvement from instruction tuning in any of the 1-shot, 2-shot, and 3-shot settings, or the performance became worse. This phenomenon has also been reported in [16, 37]. The performance might be improved by adopting a wider variety of tasks as instruction data, as in [16]. Besides MARC-ja, there are also sentiment-related datasets that can be incorporated in Japanese, such as the chABSA-dataset6 (ABSA stands for Aspect-Based Sentiment Analysis). The decrease in accuracy could be suppressed by additionally training on these datasets. Another possible reason why the performance did not improve in the LLaMA-based models is the input length of instruction
tuning in this study. While the LLaMA-based model itself can input up to 2,048 tokens and pre-training is performed at this length, in this study, the input length is limited to 256 tokens. Therefore, in data where long tokens are input, the effect of instruction tuning may not have been demonstrated. Extending the input length of instruction tuning is a future issue.
In the evaluation of perplexity using VQA, all the instruction-tuned models showed improved performance, with perplexity reduced by tuning on instruction data. Language models adopting the decoder architecture are trained to increase the probability of correctly predicting the next token in the input, so training drives perplexity down. However, the reduction in perplexity from instruction tuning might be attributed to differences in the input data. While a language model predicts the next token of consecutive sequences in pre-training, it predicts tokens sequentially in response to a given question in instruction tuning. Since the input-output format of instruction tuning matches the question-answering format of the VQA data used in this experiment, it can be inferred that the model became more accustomed to producing answers through instruction tuning, leading to a reduction (performance improvement) in perplexity.
The improvement in perplexity was particularly noticeable in the LLaMA-based models. Training on instruction data that includes translation data appears to create a link to Japanese and improve performance even for models based on other languages, such as English. Among the six models, the one with the highest perplexity and the worst performance was LLaMA 7B. This is thought to be because it is an English-based model and has fewer parameters than LLaMA 13B. On the other hand, the model that showed the best performance with the lowest perplexity was Stormy. The performance was improved by further instruction tuning of CALM, which is a Japanese model. Comparing CALM, LLaMA 7B, and LLaMA 13B, which were the base models for tuning, the Japanese-based CALM showed the highest performance.
In terms of the effect of instruction tuning from the perspective of model size, the literature [16] reported that for models larger than 68B, the effects of instruction tuning were observed in downstream tasks. However, they also reported that for models smaller than 8B, instruction tuning paradoxically degraded performance in downstream tasks. In the results of the MARC-ja experiments in this study, no effect of instruction tuning was observed for any of the 7B and 13B models, while for JNLI, positive effects of instruction tuning were observed in all models. This effect was observed in both the Japanese-based CALM and the English-based LLaMA models. This suggests that, in non-English languages or when tuning English models to them, instruction tuning does not necessarily have negative effects for smaller models, and could even contribute to performance enhancement.
In comparison with prior research [16, 38], our dataset covers fewer types of tasks, which might have constrained the performance improvement. For instance, compared to FLAN [16], tasks like simplification and correction have been newly added, while tasks like natural language inference, sentiment, and paraphrase are lacking. In this study, although the experiments were conducted using the 5 task types shown in Figure 1, consistent results were observed even in a non-English language like Japanese. Expanding the variety of tasks will be a challenge for future research.
Regarding the base language of the model, there was no difference in terms of performance trends between the Japanese-based model and the English-based model; performance improved in VQA and JNLI, but not in MARC-ja. The results show that using a Japanese instruction dataset can enhance performance in Japanese tasks for both English-based and Japanese-based models. Except for the fact that translation tasks were used in the instruction tuning data of the LLaMA-derived model, the other tasks used in Stormy and Instruct LLaMA are the same. How the translation task impacts both English-based and Japanese-based models remains a future research topic.
As discussed in the results for JNLI, while instruction tuning is effective for unseen tasks, the inclusion of similar tasks in the dataset is considered to have a substantial impact on performance improvement [16]. Based on this, it can be inferred that adding tasks related to natural language inference (for JNLI) and sentiment (for MARC-ja) would further improve the performance on these tasks. The tasks evaluated in this study measure the likelihood the model assigns to a given output (consistent with the evaluation method in [6]) and do not consider the model's actual generated output. For a more detailed comparison, evaluating with actual outputs is a task for future work. In addition, it will be necessary to evaluate from the perspective of human preference [39, 40], not just the task's correctness.
### _Qualitative Evaluation_
The following shows the responses of the models used in this study, such as Stormy, CALM, Instruct LLaMA 7B, and LLaMA 7B. For responses of Instruct LLaMA 13B and LLaMA 13B to the input shown below, please refer to the literature [25].
-- Comparison of Responses (1) --
### **Input:**
Who are you? 2
### **Output (Stormy):**
I am my wife, and I love her. 2
### **Output (CALM):**
I am Nick M. Kelemel. I have degrees in computer science and mathematics, and I am currently working as a researcher. 2
### **Output (Instruct LLaMA 7B):**
What are you doing? 2
### **Output (LLaMA 7B):**
Who are you? 2
-- Comparison of Responses (2) --
### **Input:**
Please tell me how to get to Osaka from Tokyo. 2
### **Output (Stormy):**
Depart from Tokyo by plane. When you arrive in Osaka, transfer to the Shinkansen and go to Nagoya. From Nagoya, take the Kinetsu Limited Express. 2
### **Output (CALM):**
Which is better, train or Shinkansen?
* "Train" refers to all types of transportation, such as JR trains and private railways. * "Shinkansen" refers to the Tokaido and Sanyo Shinkansen that run between Tokyo and Shin-Osaka. * "Airplane" typically refers to the route between Haneda and Itami. * "Bus" refers to the Toei buses that operate in Tokyo. * "Car" includes not only private cars but also taxis and rental cars. * "Bicycle" includes electric-assisted bicycles. * "Walking," "jogging," and "strolling" are excluded. 2
### **Output (Instruct LLaMA 7B):**
Please tell me how to get to Tokyo from Osaka. 2
### **Output (LLaMA 7B):**
Please tell me how to get to Osaka from Tokyo. 2
-- Comparison of Responses (3) --
### **Input:**
Where is the University of Tokyo? 2
### **Output (Stormy):**
The University of Tokyo is a national university located in Bunkyo-ku, Tokyo. 2
### **Output (CALM):**
The University of Tokyo is located at 7-3-1 Hongo, Bunkyo-ku, Tokyo. The latitude and longitude of the University of Tokyo are 35.676044, 135.869422.
--------------------------------
##Answer: What do you use to get to that address? (Maps or Google Map, etc.)
--------------------------------
##Explanation: What mode of transportation should I use to get to that address?
##Answer: Which is better to use, train or bus?
##Explanation: If you use the JR line, please get off at the "Yamanote Line." The JR line goes to "Tokyo" station.
##Answer: Where should I go from Tokyo station?
##Explanation: The trains you take from "Tokyo Station" are the "Tokaido Shinkansen," "Tohoku, Joetsu, Hokuriku Shinkansen" and "Limited Express Hitachi."
##Answer: Which is faster and cheaper, the Johan Line or the Shinkansen?
##Explanation: The Shinkansen is faster. However, the price is different.
##Answer: I don't have money, what should I do?
##Explanation: First, please prepare money. If you have a credit card, you can use it.
##Answer: How should I pay?
##Explanation: You can pay by card. VISA card 2
### **Output (Instruct LLaMA 7B):**
The University of Tokyo is in Tokyo. 2
### **Output (LLaMA 7B):**
Where is the University of Tokyo? 2
(Authors' note: The correct address is 7-3-1 Hongo, Bunkyo-ku, Tokyo.)
-- Comparison of Responses (4) --
### **Input:**
The phone is ringing. What should I do? 2
### **Output (Stormy):**
Should you answer the phone? Or should you put it on hold? 2
### **Output (CALM):**
Press the hold button, answer the phone, or call the answering service. When you hear the ringtone, reply within 10 seconds.
If there is no response, try calling again. If that still doesn't work, proceed to the next step.
* Press '0' to end the call.
* Press '+' after the ringtone. This will enable the call. 2
### **Output (Instruct LLaMA 7B):**
The phone is ringing. What should I do? 2
### **Output (LLaMA 7B):**
The phone is ringing. What should I do? 2
-- Comparison of Responses (5) --
### **Input:**
What are the three major festivals in Kyoto? 2
### **Output (Stormy):**
Aoi Festival, Gion Festival, Jidai Festival 2
### **Output (CALM):**
The three are Gion Festival, Aoi Festival, and Jidai Festival. They are also referred to as the three major festivals of Kyoto. 2
### **Output (LLaMA 7B):**
What are the three major festivals in Kyoto? 2
## VI Related Work

A common approach converts existing labeled datasets into tasks in natural language format for instruction tuning datasets. Specifically, labeled datasets are combined with instructions written by humans explaining the task, describing the direction of the output, and instructing the LLM to understand the task from the input [16, 38, 17]. Other construction methods include constructing datasets using the output of ChatGPT or GPT-4 [22, 39, 49, 7] 8, 9, and there are few examples of constructing datasets manually [20].
Footnote 8: [https://huggingface.co/datasets/RyokoAI/ShareGPT52K](https://huggingface.co/datasets/RyokoAI/ShareGPT52K)
Footnote 9: [https://github.com/tekurium1/GPTeacher](https://github.com/tekurium1/GPTeacher)
### _Tuning of LLMs_
Efficient tuning methods for LLMs with many parameters are attracting attention as a way to adapt LLMs to various downstream tasks. In particular, LoRA [28] is widely applied to open-source LLMs. For example, Alpaca-LoRA [50] uses LoRA to tune LLaMA 7B as a lightweight version of Alpaca [22]. AdaLoRA [51] adaptively changes the LoRA rank according to the layer it is applied to.
Other efficient tuning methods include adding an Adapter layer to the existing layers [52, 53, 54] and prompt tuning [55, 56], which fixes the weights of the pre-trained model and adds trainable parameters to the prompt instructions.
## VII Conclusion
We constructed an instruction dataset for Japanese-based LLMs (Large Language Models). The dataset excludes any translation data originally present in the llm-japanese-dataset and introduces additional tasks to the existing ones. We performed LoRA tuning on LLMs pre-trained in Japanese and in English, using Japanese instruction data. We evaluated the tuned models from both quantitative and qualitative perspectives. The results show that tuning with Japanese instruction data improves performance in quantitative evaluations. In particular, the results indicate that not only Japanese-based models but also English-based models can be tuned in Japanese using the Japanese instruction dataset. Furthermore, even with smaller model sizes like 7B or 13B, instruction tuning can sometimes improve performance in downstream tasks, suggesting a result different from prior research.
Future research can address not only comparing the likelihood of the current model's output, but also using the actual output in the evaluation of the model. Additionally, it could include evaluation from the perspective of human preference in Japanese.
## Acknowledgment
This work was supported in part by JSPS KAKENHI Grant Number JP21K12010 and JST PRESTO Grant Number JPMIPR2267.
|
2309.15270 | Consistent Query Answering for Primary Keys on Path Queries | We study the data complexity of consistent query answering (CQA) on databases
that may violate the primary key constraints. A repair is a maximal consistent
subset of the database. For a Boolean query $q$, the problem
$\mathsf{CERTAINTY}(q)$ takes a database as input, and asks whether or not each
repair satisfies $q$. It is known that for any self-join-free Boolean
conjunctive query $q$, $\mathsf{CERTAINTY}(q)$ is in $\mathbf{FO}$,
$\mathbf{LSPACE}$-complete, or $\mathbf{coNP}$-complete. In particular,
$\mathsf{CERTAINTY}(q)$ is in $\mathbf{FO}$ for any self-join-free Boolean path
query $q$. In this paper, we show that if self-joins are allowed, the
complexity of $\mathsf{CERTAINTY}(q)$ for Boolean path queries $q$ exhibits a
tetrachotomy between $\mathbf{FO}$, $\mathbf{NL}$-complete,
$\mathbf{PTIME}$-complete, and $\mathbf{coNP}$-complete. Moreover, it is
decidable, in polynomial time in the size of the query~$q$, which of the four
cases applies. | Paraschos Koutris, Xiating Ouyang, Jef Wijsen | 2023-09-26T21:05:59Z | http://arxiv.org/abs/2309.15270v1 | # Consistent Query Answering for Primary Keys on Path Queries+
###### Abstract
We study the data complexity of consistent query answering (CQA) on databases that may violate the primary key constraints. A repair is a maximal consistent subset of the database. For a Boolean query \(q\), the problem \(\mathsf{CERTAINTY}(q)\) takes a database as input, and asks whether or not each repair satisfies \(q\). It is known that for any self-join-free Boolean conjunctive query \(q\), \(\mathsf{CERTAINTY}(q)\) is in **FO**, \(\mathsf{L}\)-complete, or \(\mathsf{coNP}\)-complete. In particular, \(\mathsf{CERTAINTY}(q)\) is in **FO** for any self-join-free Boolean path query \(q\). In this paper, we show that if self-joins are allowed, the complexity of \(\mathsf{CERTAINTY}(q)\) for Boolean path queries \(q\) exhibits a tetrachtotomy between **FO**, **NL**-complete, **PTIME**-complete, and \(\mathsf{coNP}\)-complete. Moreover, it is decidable, in polynomial time in the size of the query \(q\), which of the four cases applies.
## 1 Introduction
Primary keys are probably the most common integrity constraints in relational database systems. Although databases should ideally satisfy their integrity constraints, data integration is today frequently cited as a cause for primary key violations, for example, when a same client is stored with different birthdays in two data sources. A _repair_ of such an inconsistent database instance is then naturally defined as a maximal consistent subinstance. Two approaches are then possible. In _data cleaning_, the objective is to single out the "best" repair, which however may not be practically possible. In _consistent query answering_ (CQA) [3], instead of cleaning the inconsistent database instance, we change the notion of query answer: the _consistent_ (or _certain_) _answer_ is defined as the intersection of the query answers over all (exponentially many) repairs. In computational complexity studies, consistent query answering is commonly defined as the data complexity of the following decision problem, for a fixed Boolean query \(q\):
**Problem: \(\mathsf{CERTAINTY}(q)\)**
**Input:** A database instance **db**.
**Question:** Does \(q\) evaluate to true on every repair of **db**?
For every first-order query \(q\), the problem \(\mathsf{CERTAINTY}(q)\) is obviously in \(\mathsf{coNP}\). However, despite significant research efforts (see Section 9), a fine-grained complexity classification is still largely open. A notorious open conjecture is the following.
**Conjecture 1**.: _For each Boolean conjunctive query \(q\), \(\mathsf{CERTAINTY}(q)\) is either in **PTIME** or \(\mathsf{coNP}\)-complete._
On the other hand, for the smaller class of self-join-free Boolean conjunctive queries, the complexity landscape is by now well understood, as summarized by the following theorem.
**Theorem 1** ([32]).: _For each self-join-free Boolean conjunctive query \(q\), \(\mathsf{CERTAINTY}(q)\) is in **FO**, \(\mathbf{L}\)-complete, or \(\mathsf{coNP}\)-complete, and it is decidable which of the three cases applies._
Abandoning the restriction of self-join-freeness turns out to be a major challenge. The difficulty of self-joins is caused by the obvious observation that a single database fact can be used to satisfy more than one atom of a conjunctive query, as illustrated by Example 1. Self-joins happen to significantly change the complexity landscape laid down in Theorem 1; this is illustrated by Example 2. Self-join-freeness is a simplifying assumption that is also used outside CQA (e.g., [15, 4, 16]).
**Example 1**.: Take the self-join \(q_{1}=\exists x\exists y(R(\underline{x},y)\wedge R(\underline{y},x))\) and its self-join-free counterpart \(q_{2}=\exists x\exists y(R(\underline{x},y)\wedge S(\underline{y},x))\), where the primary key positions are underlined. Consider the inconsistent database instance \(\mathbf{db}\) in Figure 1. We have that \(\mathbf{db}\) is a "no"-instance of \(\mathsf{CERTAINTY}(q_{2})\), because \(q_{2}\) is not satisfied by the repair \(\{R(\underline{a},a)\), \(R(\underline{b},b)\), \(S(\underline{a},b)\), \(S(\underline{b},a)\}\). However, \(\mathbf{db}\) is a "yes"-instance of \(\mathsf{CERTAINTY}(q_{1})\). This is because every repair that contains \(R(\underline{a},a)\) or \(R(\underline{b},b)\) will satisfy \(q_{1}\), while a repair that contains neither of these facts must contain \(R(\underline{a},b)\) and \(R(\underline{b},a)\), which together also satisfy \(q_{1}\).
**Example 2**.: Take the self-join \(q_{1}=\exists x\exists y\exists z(R(\underline{x},z)\wedge R(\underline{y},z))\) and its self-join-free counterpart \(q_{2}=\exists x\exists y\exists z(R(\underline{x},z)\wedge S(\underline{y},z))\). \(\mathsf{CERTAINTY}(q_{2})\) is known to be \(\mathbf{coNP}\)-complete, whereas it is easily verified that \(\mathsf{CERTAINTY}(q_{1})\) is in \(\mathbf{FO}\), by observing that a database instance is a "yes"-instance of \(\mathsf{CERTAINTY}(q_{1})\) if and only if it satisfies \(\exists x\exists y(R(\underline{x},y))\).
This paper makes a contribution to the complexity classification of \(\mathsf{CERTAINTY}(q)\) for conjunctive queries, possibly with self-joins, of the form
\[q=\exists x_{1}\cdots\exists x_{k+1}(R_{1}(\underline{x_{1}},x_{2})\wedge R_{2 }(\underline{x_{2}},x_{3})\wedge\cdots\wedge R_{k}(\underline{x_{k}},x_{k+1})),\]
which we call _path queries_. The primary key positions are underlined. As will become apparent in our technical treatment, the classification of path queries is already very challenging, even though it is only a first step towards Conjecture 1, which is currently beyond reach. If all \(R_{i}\)'s are distinct (i.e., if there are no self-joins), then \(\mathsf{CERTAINTY}(q)\) is known to be in \(\mathbf{FO}\) for path queries \(q\). However, when self-joins are allowed, the complexity landscape of \(\mathsf{CERTAINTY}(q)\) for path queries exhibits a tetrachotomy, as stated by the following main result of our paper.
**Theorem 2**.: _For each Boolean path query \(q\), \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\), \(\mathbf{NL}\)-complete, \(\mathbf{PTIME}\)-complete, or \(\mathbf{coNP}\)-complete, and it is decidable in polynomial time in the size of \(q\) which of the four cases applies._
Comparing Theorem 1 and Theorem 2, it is striking that there are path queries \(q\) for which \(\mathsf{CERTAINTY}(q)\) is \(\mathbf{NL}\)-complete or \(\mathbf{PTIME}\)-complete, whereas these complexity classes do not occur for self-join-free queries (under standard complexity assumptions). So even for the restricted class of path queries, allowing self-joins immediately results in a more varied complexity landscape.
Let us provide some intuitions behind Theorem 2 by means of examples. Path queries use only binary relation names. A database instance \(\mathbf{db}\) with binary facts can be viewed as a directed edge-colored graph: a fact \(R(\underline{a},b)\) is a directed edge from \(a\) to \(b\) with color \(R\). A repair of \(\mathbf{db}\) is obtained by choosing, for each vertex, precisely one outgoing edge among all outgoing edges of the same color. We will use the shorthand \(q=RR\) to denote the path query \(q=\exists x\exists y\exists z(R(\underline{x},y)\wedge R(\underline{y},z))\).
In general, path queries can be represented by words over the alphabet of relation names. Throughout this paper, relation names are in uppercase letters \(R\), \(S\), \(X\), \(Y\) etc., while lowercase letters \(u\), \(v\), \(w\) stand for (possibly empty) words. An important operation on words is dubbed _rewinding_: if a word has a factor of the form \(RvR\), then rewinding refers to the operation that replaces this factor with \(RvRvR\). That is, rewinding the factor \(RvR\) in the word \(uRvRw\) yields the longer word \(uRvRvRw\). For short, we also say that \(uRvRw\)_rewinds to_ the word \(u\cdot Rv\cdot\underline{Rv}\cdot Rw\), where we used concatenation \((\cdot)\) and underlining for clarity. For example, \(TWITTER\) rewinds to \(TWI\cdot\underline{TWI}\cdot TTER\), but also to \(TWIT\cdot\underline{TWIT}\cdot TER\) and to \(TWIT\cdot T\cdot\underline{T}\cdot TER\).

Figure 1: An inconsistent database instance \(\mathbf{db}\).
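Rewinding is mechanical enough to enumerate exhaustively. The following sketch (our own helper, assuming single-letter relation names) lists all words obtained from a given word by one rewinding step; on \(TWITTER\) it returns exactly the three words spelled out above.

```python
def rewinds(q):
    """All words obtained from q by one rewinding step: pick positions i < j
    carrying the same letter R, write q = u . Rv . Rw, and repeat the block Rv."""
    out = set()
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            if q[i] == q[j]:
                u, Rv, Rw = q[:i], q[i:j], q[j:]
                out.add(u + Rv + Rv + Rw)
    return out

print(sorted(rewinds("TWITTER")))
# ['TWITTTER', 'TWITTWITTER', 'TWITWITTER']
```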
Let \(q_{1}=RR\). It is easily verified that a database instance is a "yes"-instance of \(\mathsf{CERTAINTY}(q_{1})\) if and only if it satisfies the following first-order formula:
\[\varphi=\exists x(\exists yR(\underline{x},y)\wedge\forall y(R(\underline{x},y)\rightarrow\exists zR(\underline{y},z))).\]
Informally, every repair contains an \(R\)-path of length \(2\) if and only if there exists some vertex \(x\) such that every repair contains a path of length \(2\) starting in \(x\).
Let \(q_{2}=RRX\), and consider the database instance in Figure 2. Since the only conflicting facts are \(R(\underline{1},2)\) and \(R(\underline{1},3)\), this database instance has two repairs. Both repairs satisfy \(RRX\), but unlike the previous example, there is no vertex \(x\) such that every repair has a path colored \(RRX\) that starts in \(x\). Indeed, in one repair, such path starts in \(0\); in the other repair it starts in \(1\). For reasons that will become apparent in our theoretical development, it is significant that both repairs have paths that start in \(0\) and are colored by a word in the regular language defined by \(RR\left(R\right)^{*}X\). This is exactly the language that contains \(RRX\) and is closed under the rewinding operation. In general, it can be verified with some effort that a database instance is a "yes"-instance of \(\mathsf{CERTAINTY}(q_{2})\) if and only if it contains some vertex \(x\) such that every repair has a path that starts in \(x\) and is colored by a word in the regular language defined by \(RR\left(R\right)^{*}X\). The latter condition can be tested in **PTIME** (and even in **NL**).
The situation is still different for \(q_{3}=ARRX\), for which it will be shown that \(\mathsf{CERTAINTY}(q_{3})\) is **coNP**-complete. Unlike our previous example, repeated rewinding of \(ARRX\) into words of the language \(ARR\left(R\right)^{*}X\) is not helpful. For example, in the database instance of Figure 3, every repair has a path that starts in \(0\) and is colored with a word in the language defined by \(ARR\left(R\right)^{*}X\). However, the repair that contains \(R(\underline{a},c)\) does not satisfy \(q_{3}\). Unlike Figure 2, the "bifurcation" in Figure 3 can be used as a gadget for showing **coNP**-completeness in Section 7.
**Organization**. Section 2 introduces the preliminaries. In Section 3, the statement of Theorem 3 gives the syntactic conditions for deciding the complexity of \(\mathsf{CERTAINTY}(q)\) for path queries \(q\). To prove this theorem, we view the rewinding operator from the perspectives of regular expressions and automata, which are presented in Sections 4 and 5 respectively. Sections 6 and 7 present, respectively, complexity upper bounds and lower bounds of our classification. In Section 8, we extend our classification result to path queries with constants. Section 9 discusses related work, and Section 10 concludes this paper.
## 2 Preliminaries
We assume disjoint sets of _variables_ and _constants_. A _valuation_ over a set \(U\) of variables is a total mapping \(\theta\) from \(U\) to the set of constants.
**Atoms and key-equal facts**. We consider only \(2\)-ary _relation names_, where the first position is called the _primary key_. If \(R\) is a relation name, and \(s,t\) are variables or constants, then \(R(\underline{s},t)\) is an _atom_. An atom without variables is a _fact_. Two facts are _key-equal_ if they use the same relation name and agree on the primary key.
**Database instances, blocks, and repairs**. A _database schema_ is a finite set of relation names. All constructs that follow are defined relative to a fixed database schema.
Figure 2: An example database instance \(\mathbf{db}\) for \(q_{2}=RRX\).
Figure 3: An example database instance \(\mathbf{db}\) for \(q_{3}=ARRX\).
A _database instance_ is a finite set \(\mathbf{db}\) of facts using only the relation names of the schema. We write \(\mathsf{adom}(\mathbf{db})\) for the active domain of \(\mathbf{db}\) (i.e., the set of constants that occur in \(\mathbf{db}\)). A _block_ of \(\mathbf{db}\) is a maximal set of key-equal facts of \(\mathbf{db}\). Whenever a database instance \(\mathbf{db}\) is understood, we write \(R(\underline{c},*)\) for the block that contains all facts with relation name \(R\) and primary-key value \(c\). A database instance \(\mathbf{db}\) is _consistent_ if it contains no two distinct facts that are key-equal (i.e., if no block of \(\mathbf{db}\) contains more than one fact). A _repair_ of \(\mathbf{db}\) is an inclusion-maximal consistent subset of \(\mathbf{db}\).
**Boolean conjunctive queries**. A _Boolean conjunctive query_ is a finite set \(q=\{R_{1}(\underline{x_{1}},y_{1}),\,\ldots,\,R_{n}(\underline{x_{n}},y_{n})\}\) of atoms. We denote by \(\mathsf{vars}(q)\) the set of variables that occur in \(q\). The set \(q\) represents the first-order sentence
\[\exists u_{1}\cdots\exists u_{k}(R_{1}(\underline{x_{1}},y_{1})\wedge\cdots \wedge R_{n}(\underline{x_{n}},y_{n})),\]
where \(\{u_{1},\ldots,u_{k}\}=\mathsf{vars}(q)\).
We say that a Boolean conjunctive query \(q\) has a _self-join_ if some relation name occurs more than once in \(q\). A conjunctive query without self-joins is called _self-join-free._
**Consistent query answering**. For every Boolean conjunctive query \(q\), the decision problem \(\mathsf{CERTAINTY}(q)\) takes as input a database instance \(\mathbf{db}\), and asks whether \(q\) is satisfied by every repair of \(\mathbf{db}\). It is straightforward that for every Boolean conjunctive query \(q\), \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{coNP}\).
**Path queries**. A _path query_ is a Boolean conjunctive query without constants of the following form:
\[q=\{R_{1}(\underline{x_{1}},x_{2}),R_{2}(\underline{x_{2}},x_{3}),\ldots,R_{k} (\underline{x_{k}},x_{k+1})\},\]
where \(x_{1}\), \(x_{2}\),..., \(x_{k+1}\) are distinct variables, and \(R_{1}\), \(R_{2}\),..., \(R_{k}\) are (not necessarily distinct) relation names. It will often be convenient to denote this query as a _word_\(R_{1}R_{2}\cdots R_{k}\) over the alphabet of relation names. This "word" representation is obviously lossless up to a variable renaming. Importantly, path queries may have self-joins, i.e., a relation name may occur multiple times. Path queries containing constants will be discussed in Section 8. The treatment of constants is significant, because it allows moving from Boolean to non-Boolean queries, by using that free variables behave like constants.
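For intuition only, the definitions above can be executed literally: a repair is a choice of one fact per block, and \(\mathsf{CERTAINTY}(q)\) asks whether every such choice contains a path whose trace is the word \(q\). The sketch below uses our own toy encoding of facts as triples \((R,a,b)\) with single-letter relation names; it is exponential in the number of blocks and serves only to make the objects concrete.

```python
from itertools import product

def blocks(db):
    """Group the facts by (relation name, primary key); each group is a block."""
    grouped = {}
    for fact in db:
        rel, a, _ = fact
        grouped.setdefault((rel, a), []).append(fact)
    return list(grouped.values())

def repairs(db):
    """Every repair: pick exactly one fact from each block."""
    for choice in product(*blocks(db)):
        yield set(choice)

def has_path(instance, q):
    """Does the instance contain a path whose trace is the word q?"""
    frontier = {a for (_, a, _) in instance} | {b for (_, _, b) in instance}
    for rel in q:
        frontier = {b for (r, a, b) in instance if r == rel and a in frontier}
        if not frontier:
            return False
    return True

def certainty(db, q):
    """Is the path query q true in every repair of db?"""
    return all(has_path(r, q) for r in repairs(db))

# A made-up instance with a single conflicting block R(0, *):
db = {("R", 0, 1), ("R", 0, 2), ("R", 1, 3), ("R", 2, 3), ("X", 3, 4)}
assert certainty(db, "RRX") and not certainty(db, "RRRX")
```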
## 3 The Complexity Classification
We define syntactic conditions \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), and \(\mathcal{C}_{3}\) for path queries \(q\). Let \(R\) be any relation name in \(q\), and let \(u,v,w\) be (possibly empty) words over the alphabet of relation names of \(q\).
\(\mathcal{C}_{1}\)**:**: Whenever \(q=uRvRw\), \(q\) is a prefix of \(uRvRvRw\).
\(\mathcal{C}_{2}\)**:**: Whenever \(q=uRvRw\), \(q\) is a factor of \(uRvRvRw\); and whenever \(q=uRv_{1}Rv_{2}Rw\) for consecutive occurrences of \(R\), \(v_{1}=v_{2}\) or \(Rw\) is a prefix of \(Rv_{1}\).
\(\mathcal{C}_{3}\)**:**: Whenever \(q=uRvRw\), \(q\) is a factor of \(uRvRvRw\).
It is instructive to think of these conditions in terms of the rewinding operator introduced in Section 1: \(\mathcal{C}_{1}\) is tantamount to saying that \(q\) is a prefix of every word to which \(q\) rewinds; and \(\mathcal{C}_{3}\) says that \(q\) is a factor of every word to which \(q\) rewinds. These conditions can be checked in polynomial time in the length of the word \(q\). The following result has an easy proof.
**Proposition 1**.: _Let \(q\) be a path query. If \(q\) satisfies \(\mathcal{C}_{1}\), then \(q\) satisfies \(\mathcal{C}_{2}\); and if \(q\) satisfies \(\mathcal{C}_{2}\), then \(q\) satisfies \(\mathcal{C}_{3}\)._
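The conditions can be tested by brute force over all factorizations \(q=uRvRw\) (and, for the second requirement of \(\mathcal{C}_{2}\), over all triples of consecutive occurrences of a letter). A minimal sketch, again assuming single-letter relation names and using our own function names; the assertions re-check the classifications of Example 3 below.

```python
def _rewound(q, i, j):
    """The word obtained from q by rewinding at equal letters q[i] == q[j], i < j."""
    return q[:j] + q[i:j] + q[j:]          # u Rv Rv Rw

def _pairs(q):
    return [(i, j) for i in range(len(q)) for j in range(i + 1, len(q)) if q[i] == q[j]]

def satisfies_C1(q):
    """Whenever q = uRvRw, q is a prefix of uRvRvRw."""
    return all(_rewound(q, i, j).startswith(q) for i, j in _pairs(q))

def satisfies_C3(q):
    """Whenever q = uRvRw, q is a factor of uRvRvRw."""
    return all(q in _rewound(q, i, j) for i, j in _pairs(q))

def satisfies_C2(q):
    """C3 plus: for consecutive occurrences q = u R v1 R v2 R w,
    either v1 = v2 or Rw is a prefix of Rv1."""
    if not satisfies_C3(q):
        return False
    for R in set(q):
        occ = [i for i, letter in enumerate(q) if letter == R]
        for i, j, k in zip(occ, occ[1:], occ[2:]):
            v1, v2 = q[i + 1:j], q[j + 1:k]
            if v1 != v2 and not q[i:j].startswith(q[k:]):
                return False
    return True

assert satisfies_C1("RXRX")                                     # FO
assert satisfies_C2("RXRY") and not satisfies_C1("RXRY")        # NL-complete
assert satisfies_C3("RXRYRY") and not satisfies_C2("RXRYRY")    # PTIME-complete
assert not satisfies_C3("RXRXRYRY")                             # coNP-complete
```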
The main part of this paper comprises a proof of the following theorem, which refines the statement of Theorem 2 by adding syntactic conditions. The theorem is illustrated by Example 3.
**Theorem 3**.: _For every path query \(q\), the following complexity upper bounds obtain:_
* _if_ \(q\) _satisfies_ \(\mathcal{C}_{1}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{FO}\)_;_
* _if_ \(q\) _satisfies_ \(\mathcal{C}_{2}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{NL}\)_; and_
* _if_ \(q\) _satisfies_ \(\mathcal{C}_{3}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{PTIME}\)_._
_Moreover, for every path query \(q\), the following complexity lower bounds obtain:_
* _if_ \(q\) _violates_ \(\mathcal{C}_{1}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{NL}\)_-hard;_
* _if_ \(q\) _violates_ \(\mathcal{C}_{2}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{PTIME}\)_-hard; and_
* _if_ \(q\) _violates_ \(\mathcal{C}_{3}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{coNP}\)_-complete._
**Example 3**.: The query \(q_{1}=RXRX\) rewinds to (and only to) \(RX\!\cdot\!\underline{RX}\!\cdot\!RX\) and \(R\!\cdot\!XR\!\cdot\!\underline{XR}\!\cdot\!X\), which both contain \(q_{1}\) as a prefix. It is correct to conclude that \(\mathsf{CERTAINTY}(q_{1})\) is in \(\mathbf{FO}\).
The query \(q_{2}=RXRY\) rewinds only to \(RX\!\cdot\!\underline{RX}\!\cdot\!RY\), which contains \(q_{2}\) as a factor, but not as a prefix. Therefore, \(q_{2}\) satisfies \(\mathcal{C}_{3}\), but violates \(\mathcal{C}_{1}\). Since \(q_{2}\) moreover satisfies the second requirement of \(\mathcal{C}_{2}\) vacuously (because no relation name occurs three times in \(q_{2}\)), it is correct to conclude that \(\mathsf{CERTAINTY}(q_{2})\) is \(\mathbf{NL}\)-complete.
The query \(q_{3}=RXRYRY\) rewinds to \(RX\!\cdot\!\underline{RX}\!\cdot\!RYRY\), to \(RXRY\!\cdot\!\underline{RXRY}\!\cdot\!RY\), and to \(RX\!\cdot\!RY\!\cdot\!\underline{RY}\!\cdot\!RY=RXR\!\cdot\!YR\!\cdot\!\underline{YR}\!\cdot\!Y\). Since these words contain \(q_{3}\) as a factor, but not always as a prefix, we have that \(q_{3}\) satisfies \(\mathcal{C}_{3}\) but violates \(\mathcal{C}_{1}\). It can be verified that \(q_{3}\) violates \(\mathcal{C}_{2}\) by writing it as follows:
\[q_{3}=\underbrace{\varepsilon}_{u}\underbrace{\underline{RX}}_{Rv_{1}} \underbrace{\underline{RY}}_{Rv_{2}}\underbrace{\underline{RY}}_{Rw}\]
We have \(X=v_{1}\neq v_{2}=Y\), but \(Rw=RY\) is not a prefix of \(Rv_{1}=RX\). Thus, \(\mathsf{CERTAINTY}(q_{3})\) is \(\mathbf{PTIME}\)-complete.
Finally, the path query \(q_{4}=RXRXRYRY\) rewinds, among others, to \(RX\!\cdot\!RXRY\!\cdot\!\underline{RXRY}\!\cdot\!RY\), which does not contain \(q_{4}\) as a factor. It is correct to conclude that \(\mathsf{CERTAINTY}(q_{4})\) is \(\mathbf{coNP}\)-complete.
## 4 Regexes for \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), and \(\mathcal{C}_{3}\)
In this section, we show that the conditions \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), and \(\mathcal{C}_{3}\) can be captured by regular expressions (or regexes) on path queries, which will be used in the proof of Theorem 3. Since these results are within the field of _combinatorics of words_, we will use the term _word_ rather than _path query_.
**Definition 1**.: We define four properties \(\mathcal{B}_{1}\), \(\mathcal{B}_{2a}\), \(\mathcal{B}_{2b}\), \(\mathcal{B}_{3}\) that a word \(q\) can possess:
\(\mathcal{B}_{1}\)**:**: For some integer \(k\geq 0\), there are words \(v\), \(w\) such that \(vw\) is self-join-free and \(q\) is a prefix of \(w\left(v\right)^{k}\).
\(\mathcal{B}_{2a}\)**:**: For some integers \(j,k\geq 0\), there are words \(u\), \(v\), \(w\) such that \(uvw\) is self-join-free and \(q\) is a factor of \(\left(u\right)^{j}w\left(v\right)^{k}\).
\(\mathcal{B}_{2b}\)**:**: For some integer \(k\geq 0\), there are words \(u\), \(v\), \(w\) such that \(uvw\) is self-join-free and \(q\) is a factor of \(\left(uv\right)^{k}wv\).
\(\mathcal{B}_{3}\)**:**: For some integer \(k\geq 0\), there are words \(u\), \(v\), \(w\) such that \(uvw\) is self-join-free and \(q\) is a factor of \(uw\left(uv\right)^{k}\).
We can identify each condition among \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), \(\mathcal{C}_{3}\), \(\mathcal{B}_{1}\), \(\mathcal{B}_{2a}\), \(\mathcal{B}_{2b}\), \(\mathcal{B}_{3}\) with the set of all words satisfying this condition. Note then that \(\mathcal{B}_{1}\subseteq\mathcal{B}_{2a}\cap\mathcal{B}_{3}\). The results in the remainder of this section can be summarized as follows:
* \(\mathcal{C}_{1}=\mathcal{B}_{1}\) (Lemma 1)
* \(\mathcal{C}_{2}=\mathcal{B}_{2a}\cup\mathcal{B}_{2b}\) (Lemma 3)
* \(\mathcal{C}_{3}=\mathcal{B}_{2a}\cup\mathcal{B}_{2b}\cup\mathcal{B}_{3}\) (Lemma 2)
Moreover, Lemma 3 characterizes \(\mathcal{C}_{3}\setminus\mathcal{C}_{2}\).
**Lemma 1**.: _For every word \(q\), the following are equivalent:_
1. \(q\) _satisfies_ \(\mathcal{C}_{1}\)_; and_
2. \(q\) _satisfies_ \(\mathcal{B}_{1}\)_._
**Lemma 2**.: _For every word \(q\), the following are equivalent:_
1. \(q\) _satisfies_ \(\mathcal{C}_{3}\)_; and_
2. \(q\) _satisfies_ \(\mathcal{B}_{2a}\)_,_ \(\mathcal{B}_{2b}\)_, or_ \(\mathcal{B}_{3}\)_._
**Definition 2** (First and last symbol).: For a nonempty word \(u\), we write \(\mathsf{first}(u)\) and \(\mathsf{last}(u)\) for, respectively, the first and the last symbol of \(u\)
**Lemma 3**.: _Let \(q\) be a word that satisfies \(\mathcal{C}_{3}\). Then, the following three statements are equivalent:_
1. \(q\) _violates_ \(\mathcal{C}_{2}\)_;_
2. \(q\) _violates both_ \(\mathcal{B}_{2a}\) _and_ \(\mathcal{B}_{2b}\)_; and_
3. _there are words_ \(u\)_,_ \(v\)_,_ \(w\) _with_ \(u\neq\varepsilon\) _and_ \(uvw\) _self-join-free such that either_ 1. \(v\neq\varepsilon\) _and_ \(\mathsf{last}(u)\cdot wuvu\cdot\mathsf{first}(v)\) _is a factor of_ \(q\)_; or_ 2. \(v=\varepsilon\)_,_ \(w\neq\varepsilon\)_, and_ \(\mathsf{last}(u)\cdot w\left(u\right)^{2}\cdot\mathsf{first}(u)\) _is a factor of_ \(q\)_._
The shortest word of the form (3a) in the preceding lemma is \(RRSRS\) (let \(u=R\), \(v=S\), and \(w=\varepsilon\)), and the shortest word of the form (3b) is \(RSRRR\) (let \(u=R\), \(v=\varepsilon\), and \(w=S\)). Note that since each of \(\mathcal{C}_{2}\), \(\mathcal{B}_{2a}\), and \(\mathcal{B}_{2b}\) implies \(\mathcal{C}_{3}\), it is correct to conclude that the equivalence between the first two items in Lemma 3 does not need the hypothesis that \(q\) must satisfy \(\mathcal{C}_{3}\).
## 5 Automaton-Based Perspective
In this section, we prove an important lemma, Lemma 7, which will be used for proving the complexity upper bounds in Theorem 3.
### From Path Queries to Finite Automata
We can view a path query \(q\) as a word where the alphabet is the set of relation names. We now associate each path query \(q\) with a nondeterministic finite automaton (NFA), denoted \(\mathsf{NFA}(q)\).
**Definition 3** (\(\mathsf{NFA}(q)\)).: Every word \(q\) gives rise to a nondeterministic finite automaton (NFA) with \(\varepsilon\)-moves, denoted \(\mathsf{NFA}(q)\), as follows.
**States:**: The set of states is the set of prefixes of \(q\). The empty word \(\varepsilon\) is a prefix of \(q\).
**Forward transitions:**: If \(u\) and \(uR\) are states, then there is a transition with label \(R\) from state \(u\) to state \(uR\). These transitions are said to be _forward_.
**Backward transitions:**: If \(uR\) and \(wR\) are states such that \(|u|<|w|\) (and therefore \(uR\) is a prefix of \(w\)), then there is a transition with label \(\varepsilon\) from state \(wR\) to state \(uR\). These transitions are said to be _backward_, and capture the operation dubbed rewinding.
**Initial and accepting states:**: The initial state is \(\varepsilon\) and the only accepting state is \(q\).
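Definition 3 can be implemented in a few lines: states are the prefixes of \(q\), forward transitions append one letter, and backward \(\varepsilon\)-transitions move from a longer prefix to a shorter prefix ending in the same letter. The sketch below (our own helper, single-letter relation names) tests whether \(\mathsf{NFA}(q)\) accepts a given word, optionally starting from another prefix state.

```python
def nfa_accepts(q, word, start=""):
    """Does NFA(q) accept the word when started in the state 'start'
    (a prefix of q)?  The single accepting state is q itself."""
    prefixes = {q[:i] for i in range(len(q) + 1)}

    def eps_closure(states):
        # Backward transitions: from wR to uR when both are prefixes of q,
        # they end with the same letter, and uR is shorter.
        closed = set(states)
        frontier = set(states)
        while frontier:
            nxt = {t for s in frontier for t in prefixes
                   if t and s and len(t) < len(s) and t[-1] == s[-1]} - closed
            closed |= nxt
            frontier = nxt
        return closed

    current = eps_closure({start})
    for letter in word:
        current = eps_closure({s + letter for s in current if s + letter in prefixes})
        if not current:
            return False
    return q in current

# The language accepted by NFA(RRX) is RR(R)*X (cf. the discussion of q2 = RRX):
assert nfa_accepts("RRX", "RRX") and nfa_accepts("RRX", "RRRRX")
assert not nfa_accepts("RRX", "RX")
```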
Figure 4 shows the automaton \(\mathsf{NFA}(RXRRR)\). Informally, the forward transitions capture the automaton that would accept the word \(RXRRR\), while the backward transitions capture the existence of self-joins that allow an application of the rewind operator. We now take an alternative route for defining the language accepted by \(\mathsf{NFA}(q)\), which straightforwardly results in Lemma 4. Then, Lemma 5 gives alternative ways for expressing \(\mathcal{C}_{1}\) and \(\mathcal{C}_{3}\).
**Definition 4**.: Let \(q\) be a path query, represented as a word over the alphabet of relation names. We define the language \(\mathcal{L}^{\circlearrowleft}(q)\) as the smallest set of words such that
1. \(q\) belongs to \(\mathcal{L}^{\circlearrowleft}(q)\); and
2. _Rewinding:_ if \(uRvRw\) is in \(\mathcal{L}^{\circlearrowleft}(q)\) for some relation name \(R\) and (possibly empty) words \(u,v,w\), then \(uRvRvRw\) is also in \(\mathcal{L}^{\circlearrowleft}(q)\).
That is, \(\mathcal{L}^{\circlearrowleft}(q)\) is the smallest language that contains \(q\) and is closed under rewinding.
**Lemma 4**.: _For every path query \(q\), the automaton \(\mathsf{NFA}(q)\) accepts the language \(\mathcal{L}^{\circlearrowleft}(q)\)._
**Lemma 5**.: _Let \(q\) be a path query. Then,_
1. \(q\) _satisfies_ \(\mathcal{C}_{1}\) _if and only if_ \(q\) _is a prefix of each_ \(p\in\mathcal{L}^{\circlearrowleft}(q)\)_;_
2. \(q\) _satisfies_ \(\mathcal{C}_{3}\) _if and only if_ \(q\) _is a factor of each_ \(p\in\mathcal{L}^{\circlearrowleft}(q)\)_._
Proof.: [\(\Longleftarrow\) in (1) and (2)] This direction is trivial, because whenever \(q=uRvRw\), we have that \(uRvRvRw\in\mathcal{L}^{\circlearrowleft}(q)\).
We now show the \(\implies\) direction in both items. To this end, we call an application of the rewinding rule in Definition 4 a _rewind_. By construction, each word in \(\mathcal{L}^{\circlearrowleft}(q)\) can be obtained from \(q\) by using \(k\) rewinds, for some nonnegative integer \(k\). Let \(q_{k}\) be a word in \(\mathcal{L}^{\circlearrowleft}(q)\) that can be obtained from \(q\) by using \(k\) rewinds.
[\(\Longrightarrow\) in (1)] We use induction on \(k\) to show that \(q\) is a prefix of \(q_{k}\). For the induction basis, \(k=0\), we have that \(q\) is a prefix of \(q_{0}=q\). We next show the induction step \(k\to k+1\). Let \(q_{k+1}=uRvRvRw\) where \(q_{k}=uRvRw\) is a word in \(\mathcal{L}^{\circlearrowleft}(q)\) obtained with \(k\) rewinds. By the induction hypothesis, we can assume a word \(s\) such that \(q_{k}=q\cdot s\).
* If \(q\) is a prefix of \(uRvR\), then \(q_{k+1}=uRvRvRw\) trivially contains \(q\) as a prefix;
* otherwise \(uRvR\) is a proper prefix of \(q\). Let \(q=uRvRt\) where \(t\) is nonempty; since \(q\) is a prefix of \(q_{k}=uRvRw\), \(t\) is a prefix of \(w\). Since \(q\) satisfies \(\mathcal{C}_{1}\), \(q\) is a prefix of \(uRvRvRt\), that is, \(t\) is a prefix of \(vRt\); combined with the fact that \(t\) is a prefix of \(w\), this yields that \(t\) is a prefix of \(vRw\). Then \(q_{k+1}=uRvRvRw\) contains \(q=u\cdot Rv\cdot Rt\) as a prefix.
[\(\Longrightarrow\) in (2)] We use induction on \(k\) to show that \(q\) is a factor of \(q_{k}\). For the induction basis, \(k=0\), we have that \(q\) is a factor of \(q_{0}=q\). For the induction step, \(k\to k+1\), let \(q_{k+1}=uRvRvRw\) where \(q_{k}=uRvRw\) is a word in \(\mathcal{L}^{\circlearrowleft}(q)\) obtained with \(k\) rewinds. By the induction hypothesis, \(q_{k}=uRvRw\) contains \(q\) as a factor. If \(q\) is a factor of either \(uRvR\) or \(RvRw\), then \(q_{k+1}=uRvRvRw\) contains \(q\) as a factor. Otherwise, we can decompose \(q_{k}=u^{-}q^{-}RvRq^{+}w^{+}\) where \(q=q^{-}RvRq^{+}\), \(u=u^{-}q^{-}\) and \(w=q^{+}w^{+}\). Since \(q\) satisfies \(\mathcal{C}_{3}\), the word \(q^{-}RvRvRq^{+}\), which is a factor of \(q_{k+1}\), contains \(q\) as a factor.
In the technical treatment, it will be convenient to consider the automaton obtained from \(\mathsf{NFA}(q)\) by changing its start state, as defined next.
**Definition 5**.: If \(u\) is a prefix of \(q\) (and thus \(u\) is a state in \(\mathsf{NFA}(q)\)), then \(\mathsf{S}\)-\(\mathsf{NFA}(q,u)\) is the automaton obtained from \(\mathsf{NFA}(q)\) by letting the initial state be \(u\) instead of the empty word. Note that \(\mathsf{S}\)-\(\mathsf{NFA}(q,\varepsilon)=\mathsf{NFA}(q)\). It may be helpful to think of the first \(\mathsf{S}\) in \(\mathsf{S}\)-\(\mathsf{NFA}(q,u)\) as "Start at \(u\)."
### Reification Lemma
In this subsection, we first define how an automaton executes on a database instance. We then state a helping lemma that will be used in the proof of Lemma 7, which constitutes the main result of Section 5. To improve the readability and logical flow of our presentation, we postpone the proof of the helping lemma to Section 5.3.
**Definition 6** (Automata on database instances).: Let \(\mathbf{db}\) be a database instance. A _path (in \(\mathbf{db}\))_ is defined as a sequence \(R_{1}(\underline{c_{1}},c_{2})\), \(R_{2}(\underline{c_{2}},c_{3})\),..., \(R_{n}(\underline{c_{n}},c_{n+1})\) of facts in \(\mathbf{db}\). Such a path is said to _start in \(c_{1}\)_. We call \(R_{1}R_{2}\cdots R_{n}\) the _trace_ of this path. A path is said to be _accepted_ by an automaton if its trace is accepted by the automaton.
Let \(q\) be a path query and \(\mathbf{r}\) be a consistent database instance. We define \(\mathsf{start}(q,\mathbf{r})\) as the set containing all (and only) constants \(c\in\mathsf{adom}(\mathbf{r})\) such that there is a path in \(\mathbf{r}\) that starts in \(c\) and is accepted by \(\mathsf{NFA}(q)\).
**Example 4**.: Consider the query \(q_{2}=RRX\) and the database instance of Figure 2. Let \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) be the repairs containing, respectively, \(R(\underline{1},2)\) and \(R(\underline{1},3)\). The only path with trace \(RRX\) in \(\mathbf{r}_{1}\) starts in \(1\); and the only path with trace \(RRX\) in \(\mathbf{r}_{2}\) starts in \(0\). The regular expression for \(\mathcal{L}^{\circlearrowleft}(q)\) is \(RR\left(R\right)^{*}X\). We have \(\mathsf{start}(q,\mathbf{r}_{1})=\{0,1\}\) and \(\mathsf{start}(q,\mathbf{r}_{2})=\{0\}\).
The following lemma tells us that, among all repairs, there is one that is inclusion-minimal with respect to \(\mathsf{start}(q,\cdot)\). In the preceding example, the repair \(\mathbf{r}_{2}\) minimizes \(\mathsf{start}(q,\cdot)\).
**Lemma 6**.: _Let \(q\) be a path query, and \(\mathbf{db}\) a database instance. There exists a repair \(\mathbf{r}^{*}\) of \(\mathbf{db}\) such that for every repair \(\mathbf{r}\) of \(\mathbf{db}\), \(\mathsf{start}(q,\mathbf{r}^{*})\subseteq\mathsf{start}(q,\mathbf{r})\)._
Informally, we think of the next Lemma 7 as a _reification lemma_. The notion of _reifiable variable_ was coined in [40, Definition 8.5], to refer to a variable \(x\) in a query \(\exists x\,(\varphi(x))\) such that whenever that query is true in every repair of a database instance, then there is a constant \(c\) such that \(\varphi(c)\) is true in every repair. The following lemma captures a very similar concept.
**Lemma 7** (Reification Lemma for \(\mathcal{C}_{3}\)).: _Let \(q\) be a path query that satisfies \(\mathcal{C}_{3}\). Then, for every database instance \(\mathbf{db}\), the following are equivalent:_
1. \(\mathbf{db}\) _is a "yes"-instance of_ \(\mathsf{CERTAINTY}(q)\)_; and_
2. _there exists a constant_ \(c\) _(which depends on_ \(\mathbf{db}\)_) such that for every repair_ \(\mathbf{r}\) _of_ \(\mathbf{db}\)_,_ \(c\in\mathsf{start}(q,\mathbf{r})\)_._
It is to be noted here that whenever \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) are repairs containing \(f\), then by Lemma 8, \(\mathsf{ST}_{q}(f,\mathbf{r}_{1})\) and \(\mathsf{ST}_{q}(f,\mathbf{r}_{2})\) are comparable by set inclusion. Therefore, informally, \(\mathsf{cqaST}_{q}(f,\mathbf{db})\) is the \(\subseteq\)-minimal states set of \(f\) over all repairs that contain \(f\).
**Definition 9** (Preorder \(\preceq_{q}\) on repairs).: Let \(\mathbf{db}\) be a database instance. For all repairs \(\mathbf{r},\mathbf{s}\) of \(\mathbf{db}\), we define \(\mathbf{r}\preceq_{q}\mathbf{s}\) if for every \(f\in\mathbf{r}\) and \(g\in\mathbf{s}\) such that \(f\) and \(g\) are key-equal, we have \(\mathsf{ST}_{q}(f,\mathbf{r})\subseteq\mathsf{ST}_{q}(g,\mathbf{s})\).
Clearly, \(\preceq_{q}\) is a reflexive and transitive binary relation on the set of repairs of \(\mathbf{db}\). We write \(\mathbf{r}\prec_{q}\mathbf{s}\) if both \(\mathbf{r}\preceq_{q}\mathbf{s}\) and for some \(f\in\mathbf{r}\) and \(g\in\mathbf{s}\) such that \(f\) and \(g\) are key-equal, \(\mathsf{ST}_{q}(f,\mathbf{r})\subsetneq\mathsf{ST}_{q}(g,\mathbf{s})\).
**Lemma 9**.: _Let \(q\) be a path query. For every database instance \(\mathbf{db}\), there is a repair \(\mathbf{r}^{*}\) of \(\mathbf{db}\) such that for every repair \(\mathbf{r}\) of \(\mathbf{db}\), \(\mathbf{r}^{*}\preceq_{q}\mathbf{r}\)._
Proof.: Construct a repair \(\mathbf{r}^{*}\) as follows. For every block \(\mathbf{blk}\) of \(\mathbf{db}\), insert into \(\mathbf{r}^{*}\) a fact \(f\) of \(\mathbf{blk}\) such that \(\mathsf{cqaST}_{q}(f,\mathbf{db})=\bigcap\{\mathsf{cqaST}_{q}(g,\mathbf{db}) \mid g\in\mathbf{blk}\}\). More informally, we insert into \(\mathbf{r}^{*}\) a fact \(f\) from \(\mathbf{blk}\) with a states set that is \(\subseteq\)-minimal over all repairs and all facts of \(\mathbf{blk}\). We first show the following claim.
**Claim 1**.: For every fact \(f\) in \(\mathbf{r}^{*}\), we have \(\mathsf{ST}_{q}(f,\mathbf{r}^{*})=\mathsf{cqaST}_{q}(f,\mathbf{db})\).
Proof.: Let \(f_{1}\) be an arbitrary fact in \(\mathbf{r}^{*}\). We show \(\mathsf{ST}_{q}(f_{1},\mathbf{r}^{*})=\mathsf{cqaST}_{q}(f_{1},\mathbf{db})\).
\(\boxed{\supseteq}\): Obvious, because \(\mathbf{r}^{*}\) is itself a repair of \(\mathbf{db}\) that contains \(f_{1}\).
\(\boxed{\subseteq}\): Let \(f_{1}=R_{1}(\underline{c}_{0},c_{1})\). Assume by way of a contradiction that there is \(p_{1}\in\mathsf{ST}_{q}(f_{1},\mathbf{r}^{*})\) such that \(p_{1}\notin\mathsf{cqaST}_{q}(f_{1},\mathbf{db})\). Then, for some (possibly empty) prefix \(p_{0}\) of \(q\), there is a sequence:
\[p_{0}\stackrel{{\varepsilon}}{{\longrightarrow}}p_{0}^{\prime} \xrightarrow{f_{1}=R_{1}(\underline{c}_{0},c_{1})}p_{1}\xrightarrow{\varepsilon }p_{1}^{\prime}\xrightarrow{f_{2}=R_{2}(\underline{c}_{1},c_{2})}p_{2}\quad \cdots\quad p_{n-1}\stackrel{{\varepsilon}}{{\longrightarrow}}p_{n -1}^{\prime}\xrightarrow{f_{n}=R_{n}(c_{n-1},c_{n})}p_{n}=q, \tag{1}\]
where \(f_{1},f_{2},\ldots,f_{n}\in\mathbf{r}^{*}\), for each \(i\in\{1,\ldots,n\}\), \(p_{i}=p_{i-1}^{\prime}R_{i}\), and for each \(i\in\{0,\ldots,n-1\}\), either \(p_{i}^{\prime}=p_{i}\) or \(p_{i}^{\prime}\) is a strict prefix of \(p_{i}\) such that \(p_{i}^{\prime}\) and \(p_{i}\) agree on their rightmost relation name. Informally, the sequence (1) represents an accepting run of \(\mathsf{S}\)-\(\mathsf{NFA}(q,p_{0})\) in \(\mathbf{r}^{*}\). Since \(p_{1}\in\mathsf{ST}_{q}(f_{1},\mathbf{r}^{*})\setminus\mathsf{cqaST}_{q}(f_{1}, \mathbf{db})\), we can assume a largest index \(\ell\in\{1,\ldots,n\}\) such that \(p_{\ell}\in\mathsf{ST}_{q}(f_{\ell},\mathbf{r}^{*})\setminus\mathsf{cqaST}_{q}(f _{\ell},\mathbf{db})\). By construction of \(\mathbf{r}^{*}\), there is a repair \(\mathbf{s}\) such that \(f_{\ell}\in\mathbf{s}\) and \(\mathsf{ST}_{q}(f_{\ell},\mathbf{s})=\mathsf{cqaST}_{q}(f_{\ell},\mathbf{db})\). Consequently, \(p_{\ell}\notin\mathsf{ST}_{q}(f_{\ell},\mathbf{s})\). We distinguish two cases:
**Case that \(\ell=n\).**: Thus, the run (1) ends with
\[\cdots\quad p_{\ell-1}\stackrel{{\varepsilon}}{{\longrightarrow}}p_{ \ell-1}^{\prime}\xrightarrow{f_{\ell}=R_{\ell}(c_{\ell-1},c_{\ell})}p_{\ell}=q.\]
Thus, the rightmost relation name in \(q\) is \(R_{\ell}\). Since \(f_{\ell}\in\mathbf{s}\), it is clear that \(p_{\ell}\in\mathsf{ST}_{q}(f_{\ell},\mathbf{s})\), a contradiction.
**Case that \(\ell<n\).**: Thus, the run (1) includes
\[\cdots\quad p_{\ell-1}\stackrel{{\varepsilon}}{{\longrightarrow}}p_{ \ell-1}^{\prime}\xrightarrow{f_{\ell}=R_{\ell}(c_{\ell-1},c_{\ell})}p_{\ell} \xrightarrow{\varepsilon}p_{\ell}^{\prime}\xrightarrow{f_{\ell+1}=R_{\ell+1}( \underline{c}_{\ell},c_{\ell+1})}p_{\ell+1}\quad\cdots,\]
where \(\ell+1\) can be equal to \(n\). Clearly, \(p_{\ell+1}\in\mathsf{ST}_{q}(f_{\ell+1},\mathbf{r}^{*})\). Assume without loss of generality that \(\mathbf{s}\) contains \(f_{\ell+1}^{\prime}:=R_{\ell+1}(\underline{c_{\ell}},c_{\ell+1}^{\prime})\), which is key-equal to \(f_{\ell+1}\) (possibly \(c_{\ell+1}^{\prime}=c_{\ell+1}\)). From \(p_{\ell}\notin\mathsf{ST}_{q}(f_{\ell},\mathbf{s})\), it follows that \(p_{\ell+1}\notin\mathsf{ST}_{q}(f_{\ell+1}^{\prime},\mathbf{s})\). Consequently, \(p_{\ell+1}\notin\mathsf{cqaST}_{q}(f_{\ell+1}^{\prime},\mathbf{db})\). By our construction of \(\mathbf{r}^{*}\), we have \(p_{\ell+1}\notin\mathsf{cqaST}_{q}(f_{\ell+1},\mathbf{db})\). Consequently, \(p_{\ell+1}\in\mathsf{ST}_{q}(f_{\ell+1},\mathbf{r}^{*})\setminus\mathsf{cqaST}_{q}(f_{\ell+1},\mathbf{db})\), which contradicts that \(\ell\) was chosen as the largest such index.
The proof of Claim 1 is now concluded.
To conclude the proof of the lemma, let \(\mathbf{r}\) be any repair of \(\mathbf{db}\), and let \(f\in\mathbf{r}^{*}\) and \(f^{\prime}\in\mathbf{r}\) be two key-equal facts in \(\mathbf{db}\). By Claim 1 and the construction of \(\mathbf{r}^{*}\), we have that \(\mathsf{ST}_{q}(f,\mathbf{r}^{*})=\mathsf{cqaST}_{q}(f,\mathbf{db})\subseteq \mathsf{cqaST}_{q}(f^{\prime},\mathbf{db})\subseteq\mathsf{ST}_{q}(f^{\prime}, \mathbf{r})\), as desired.
We can now give the proof of Lemma 6.
Proof of Lemma 6.: Let \(\mathbf{db}\) be a database instance. Then by Lemma 9, there is a repair \(\mathbf{r}^{*}\) of \(\mathbf{db}\) such that for every repair \(\mathbf{r}\) of \(\mathbf{db}\), \(\mathbf{r}^{*}\preceq_{q}\mathbf{r}\). It suffices to show that for every repair \(\mathbf{r}\) of \(\mathbf{db}\), \(\mathsf{start}(q,\mathbf{r}^{*})\subseteq\mathsf{start}(q,\mathbf{r})\). To this end, consider any repair \(\mathbf{r}\) and \(c\in\mathsf{start}(q,\mathbf{r}^{*})\). Let \(R\) be the first relation name of \(q\). Since \(c\in\mathsf{start}(q,\mathbf{r}^{*})\), there is \(d\in\mathsf{adom}(\mathbf{r}^{*})\) such that \(R\in\mathsf{ST}_{q}(R(\underline{c},d),\mathbf{r}^{*})\). Then, there is a unique \(d^{\prime}\) such that \(R(\underline{c},d^{\prime})\in\mathbf{r}\). Since \(\mathbf{r}^{*}\preceq_{q}\mathbf{r}\), it follows that \(\mathsf{ST}_{q}(R(\underline{c},d),\mathbf{r}^{*})\subseteq\mathsf{ST}_{q}(R(\underline{c},d^{\prime}),\mathbf{r})\), and hence \(R\in\mathsf{ST}_{q}(R(\underline{c},d^{\prime}),\mathbf{r})\). Consequently, \(c\in\mathsf{start}(q,\mathbf{r})\), as desired.
## 6 Complexity Upper Bounds
We now show the complexity upper bounds of Theorem 3.
### A PTIME Algorithm for \(\mathcal{C}_{3}\)
We now specify a polynomial-time algorithm for \(\mathsf{CERTAINTY}(q)\), for path queries \(q\) that satisfy condition \(\mathcal{C}_{3}\). The algorithm is based on the automata defined in Definition 5, and uses the concept defined next.
**Definition 10** (Relation \(\vdash_{q}\)).: Let \(q\) be a path query and \(\mathbf{db}\) a database instance. For every \(c\in\mathsf{adom}(\mathbf{db})\) and every prefix \(u\) of \(q\), we write \(\mathbf{db}\vdash_{q}\langle c,u\rangle\) if every repair of \(\mathbf{db}\) has a path that starts in \(c\) and is accepted by \(\mathsf{S\mbox{-}NFA}(q,u)\).
An algorithm that decides the relation \(\vdash_{q}\) can be used to solve \(\mathsf{CERTAINTY}(q)\) for path queries satisfying \(\mathcal{C}_{3}\). Indeed, by Lemma 7, for path queries satisfying \(\mathcal{C}_{3}\), \(\mathbf{db}\) is a "yes"-instance for the problem \(\mathsf{CERTAINTY}(q)\) if and only if there is a constant \(c\in\mathsf{adom}(\mathbf{db})\) such that \(\mathbf{db}\vdash_{q}\langle c,u\rangle\) with \(u=\varepsilon\).
Figure 5 shows an algorithm that computes \(\{\langle c,u\rangle\mid\mathbf{db}\vdash_{q}\langle c,u\rangle\}\) as the fixed point of a binary relation \(N\). The _Initialization Step_ inserts into \(N\) all pairs \(\langle c,q\rangle\), which is correct because \(\mathbf{db}\vdash_{q}\langle c,q\rangle\) holds vacuously, as \(q\) is the accepting state of \(\mathsf{S\mbox{-}NFA}(q,q)\). Then, the _Iterative Rule_ is executed until \(N\) remains unchanged; it intrinsically reflects the constructive proof of Lemma 9: \(\mathbf{db}\vdash_{q}\langle c,u\rangle\) if and only if for every fact \(f=R(\underline{c},d)\in\mathbf{db}\), we have \(uR\in\mathsf{cqaST}_{q}(f,\mathbf{db})\). Figure 6 shows an example run of the algorithm in Figure 5. The next lemma states the correctness of the algorithm.
**Lemma 10**.: _Let \(q\) be a path query. Let \(\mathbf{db}\) be a database instance. Let \(N\) be the output relation returned by the algorithm in Figure 5 on input \(\mathbf{db}\). Then, for every \(c\in\mathsf{adom}(\mathbf{db})\) and every prefix \(u\) of \(q\),_
\[\langle c,u\rangle\in N\text{ if and only if }\mathbf{db}\vdash_{q}\langle c,u\rangle.\]
Proof.: [\(\Longleftarrow\)] Proof by contraposition. Assume \(\langle c,u\rangle\notin N\). The proof shows the construction of a repair \(\mathbf{r}\) of \(\mathbf{db}\) such that \(\mathbf{r}\) has no path that starts in \(c\) and is accepted by \(\mathsf{S\mbox{-}NFA}(q,u)\). Such a repair shows \(\mathbf{db}\nvdash_{q}\langle c,u\rangle\).
We explain which fact of an arbitrary block \(R(\underline{a},*)\) of \(\mathbf{db}\) will be inserted in \(\mathbf{r}\). Among all prefixes of \(q\) that end with \(R\), let \(u_{0}R\) be the longest prefix such that \(\langle a,u_{0}\rangle\notin N\). If such \(u_{0}R\) does not exist, then an arbitrarily picked fact of the block \(R(\underline{a},*)\) is inserted in \(\mathbf{r}\). Otherwise, the _Iterative Rule_ in Figure 5 entails the existence of a fact \(R(\underline{a},b)\) such that \(\langle b,u_{0}R\rangle\notin N\). Then, \(R(\underline{a},b)\) is inserted in \(\mathbf{r}\). We remark that this repair \(\mathbf{r}\) is constructed in exactly the same way as the repair \(\mathbf{r}^{*}\) built in the proof of Lemma 9.
Assume for the sake of contradiction that there is a path \(\pi\) in \(\mathbf{r}\) that starts in \(c\) and is accepted by \(\mathsf{S\mbox{-}NFA}(q,u)\). Let \(\pi:=R_{1}(\underline{c}_{0},c_{1})\), \(R_{2}(\underline{c}_{1},c_{2})\),..., \(R_{n}(c_{n-1},c_{n})\) where \(c_{0}=c\). Since \(\langle c_{0},u\rangle\not\in N\) and \(\langle c_{n},q\rangle\in N\), there is a longest prefix \(u_{0}\) of \(q\), where \(|u_{0}|\geq|u|\), and \(i\in\{1,\ldots,n\}\) such that \(\langle c_{i-1},u_{0}\rangle\not\in N\) and \(\langle c_{i},u_{0}R_{i}\rangle\in N\). From \(\langle c_{i-1},u_{0}\rangle\not\in N\), it follows that \(\mathbf{db}\) contains a fact \(R_{i}(\underline{c_{i-1}},d)\) such that \(\langle d,u_{0}R_{i}\rangle\not\in N\). Then \(R_{i}(\underline{c_{i-1}},c_{i})\) would not be chosen in a repair, contradicting \(R_{i}(\underline{c_{i-1}},c_{i})\in\mathbf{r}\).
Figure 5: Polynomial-time algorithm for computing \(\{\langle c,u\rangle\mid\mathbf{db}\vdash_{q}\langle c,u\rangle\}\), for a fixed path query \(q\) satisfying \(\mathcal{C}_{3}\).
Figure 6: Example run of our algorithm for \(q=RRX\), on the database instance \(\mathbf{db}\) shown at the right.
Assume that \(\langle c,u\rangle\in N\). Let \(\ell\) be the number of executions of the _Iterative Rule_ that were used to insert \(\langle c,u\rangle\) in \(N\). We show \(\mathbf{db}\vdash_{q}\langle c,u\rangle\) by induction on \(\ell\).
The basis of the induction, \(\ell=0\), holds because the _Initialization Step_ is obviously correct. Indeed, since \(q\) is an accepting state of \(\mathsf{S\text{-}NFA}(q,q)\), we have \(\mathbf{db}\vdash_{q}\langle c,q\rangle\). For the inductive step, \(\ell\to\ell+1\), we distinguish two cases.
Case that \(\langle c,u\rangle\) is added to \(N\) by the _forward part_ of the _Iterative Rule_.That is, \(\langle c,u\rangle\) is added because \(\mathbf{db}\) has a block \(\{R(\underline{c},d_{1}),\,\dots,\,R(\underline{c},d_{k})\}\) with \(k\geq 1\) and for every \(i\in\{1,\dots,k\}\), we have that \(\langle d_{i},uR\rangle\) was added to \(N\) by a previous execution of the _Iterative Rule_. Let \(\mathbf{r}\) be an arbitrary repair of \(\mathbf{db}\). Since every repair contains exactly one fact from each block, we can assume \(i\in\{1,\dots,k\}\) such that \(R(\underline{c},d_{i})\in\mathbf{r}\). By the induction hypothesis, \(\mathbf{db}\vdash_{q}\langle d_{i},uR\rangle\) and thus \(\mathbf{r}\) has a path that starts in \(d_{i}\) and is accepted by \(\mathsf{S\text{-}NFA}(q,uR)\). Clearly, this path can be left extended with \(R(\underline{c},d_{i})\), and this left extended path is accepted by \(\mathsf{S\text{-}NFA}(q,u)\). Note incidentally that the path in \(\mathbf{r}\) may already use \(R(\underline{c},d_{i})\), in which case the path is cyclic. Since \(\mathbf{r}\) is an arbitrary repair, it is correct to conclude \(\mathbf{db}\vdash_{q}\langle c,u\rangle\).
Case that \(\langle c,u\rangle\) is added to \(N\) by the _backward part_ of the _Iterative Rule_.Then, there exists a relation name \(S\) and words \(v,w\) such that \(u=vSwS\), and \(\langle c,u\rangle\) is added because \(\langle c,vS\rangle\) was added in the same iteration. Then, \(\mathsf{S\text{-}NFA}(q,u)\) has an \(\varepsilon\)-transition from state \(u\) to \(vS\). Let \(\mathbf{r}\) be an arbitrary repair of \(\mathbf{db}\). By the reasoning in the previous case, \(\mathbf{r}\) has a path that starts in \(c\) and is accepted by \(\mathsf{S\text{-}NFA}(q,vS)\). We claim that \(\mathbf{r}\) has a path that starts in \(c\) and is accepted by \(\mathsf{S\text{-}NFA}(q,u)\). Indeed, \(\mathsf{S\text{-}NFA}(q,u)\) can use the \(\varepsilon\)-transition to reach the state \(vS\), and then behave like \(\mathsf{S\text{-}NFA}(q,vS)\). This concludes the proof.
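The fixpoint of Figure 5 is equally short to write down. In the sketch below (our own naming, facts encoded as triples \((R,a,b)\) with single-letter relation names), `certain_pairs` computes the relation \(N\), and, for a path query satisfying \(\mathcal{C}_{3}\), `certainty_C3` answers \(\mathsf{CERTAINTY}(q)\) by testing whether \(\langle c,\varepsilon\rangle\in N\) for some constant \(c\), as licensed by Lemma 7.

```python
def certain_pairs(db, q):
    """Fixpoint N of the algorithm in Figure 5: (c, u) is in N iff every
    repair of db has a path starting in c accepted by S-NFA(q, u)."""
    adom = {a for (_, a, _) in db} | {b for (_, _, b) in db}
    prefixes = [q[:i] for i in range(len(q) + 1)]
    N = {(c, q) for c in adom}              # Initialization Step
    changed = True
    while changed:
        changed = False
        for u in prefixes:
            for c in adom:
                if (c, u) in N:
                    continue
                add = False
                # Forward part: the block R(c, *) is nonempty and every
                # R-successor d of c satisfies (d, uR) in N, where uR <= q.
                if len(u) < len(q):
                    R = q[len(u)]
                    succ = [b for (rel, a, b) in db if rel == R and a == c]
                    add = bool(succ) and all((d, u + R) in N for d in succ)
                # Backward part: u = vSwS and (c, vS) is already in N.
                if not add and u:
                    add = any(u[i] == u[-1] and (c, u[:i + 1]) in N
                              for i in range(len(u) - 1))
                if add:
                    N.add((c, u))
                    changed = True
    return N

def certainty_C3(db, q):
    """CERTAINTY(q) for a path query q satisfying C3 (via Lemmas 7 and 10)."""
    return any(u == "" for (_, u) in certain_pairs(db, q))

# A made-up instance with one conflicting block R(0, *); q = RRX satisfies C3.
db = {("R", 0, 1), ("R", 0, 2), ("R", 1, 3), ("R", 2, 3), ("X", 3, 4)}
assert certainty_C3(db, "RRX")
```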
The following corollary is now immediate.
**Corollary 1**.: _Let \(q\) be a path query. Let \(\mathbf{db}\) be a database instance, and \(c\in\mathsf{adom}(\mathbf{db})\). Then, the following are equivalent:_
1. \(c\in\mathsf{start}(q,\mathbf{r})\) _for every repair_ \(\mathbf{r}\) _of_ \(\mathbf{db}\)_; and_
2. \(\langle c,\epsilon\rangle\in N\)_, where_ \(N\) _is the output of the algorithm in Figure_ 5_._
Finally, we obtain the following tractability result.
**Lemma 11**.: _For each path query \(q\) satisfying \(\mathcal{C}_{3}\), \(\mathsf{CERTAINTY}(q)\) is expressible in Least Fixpoint Logic, and hence is in \(\mathbf{PTIME}\)._
Proof.: For a path query \(q\), define the following formula in LFP [33]:
\[\psi_{q}(s,t):=\left[\mathbf{lfp}_{N,x,z}\,\varphi_{q}(N,x,z)\right](s,t), \tag{2}\]
where \(\varphi_{q}(N,x,z)\) is given in Figure 7. Herein, \(\alpha(x)\) denotes a first-order query that computes the active domain. That is, for every database instance \(\mathbf{db}\) and constant \(c\), \(\mathbf{db}\models\alpha(c)\) if and only if \(c\in\mathsf{adom}(\mathbf{db})\). Further, \(u\leq v\) means that \(u\) is a prefix of \(v\); and \(u<v\) means that \(u\) is a proper prefix of \(v\). Thus, \(u<v\) if and only if \(u\leq v\) and \(u\neq v\). The formula \(\varphi_{q}(N,x,z)\) is positive in \(N\), which is a \(2\)-ary predicate symbol. It is understood that the middle disjunction ranges over all nonempty prefixes \(uR\) of \(q\) (possibly \(u=\varepsilon\)). The last disjunction ranges over all pairs \((u,uv)\) of distinct nonempty prefixes of \(q\) that agree on their last symbol. We used a different typesetting to distinguish the constant words \(\mathsf{q}\), \(\mathsf{uR}\), \(\mathsf{uv}\) from first-order variables \(x\), \(z\). It is easily verified that the LFP query (2) expresses the algorithm of Figure 5.
Since the formula (2) in the proof of Lemma 11 uses universal quantification, it is not in Existential Least Fixpoint Logic, which is equal to \(\mathsf{DATALOG}_{\sim}\)[33, Theorem 10.18].
Figure 7: Definition of \(\varphi_{q}(N,x,z)\). The predicate \(\alpha(x)\) states that \(x\) is in the active domain, and \(<\) is shorthand for “_is a strict prefix of”_.
### FO-Rewritability for \(\mathcal{C}_{1}\)
We now show that if a path query \(q\) satisfies \(\mathcal{C}_{1}\), then \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\), and a first-order rewriting for \(q\) can be effectively constructed.
**Definition 11** (First-order rewriting).: If \(q\) is a Boolean query such that \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\), then a _(consistent) first-order rewriting_ for \(q\) is a first-order sentence \(\psi\) such that for every database instance \(\mathbf{db}\), the following are equivalent:
1. \(\mathbf{db}\) is a "yes"-instance of \(\mathsf{CERTAINTY}(q)\); and
2. \(\mathbf{db}\) satisfies \(\psi\).
**Definition 12**.: If \(q=\{R_{1}(\underline{x}_{1},x_{2}),\,R_{2}(\underline{x}_{2},x_{3}),\,\ldots, \,R_{k}(\underline{x}_{k},x_{k+1})\}\), \(k\geq 1\), and \(c\) is a constant, then \(q_{[c]}\) is the Boolean conjunctive query \(q_{[c]}:=\{R_{1}(\underline{c},x_{2}),R_{2}(\underline{x}_{2},x_{3}),\ldots,R_{k}(\underline{x}_{k},x_{k+1})\}\).
**Lemma 12**.: _For every nonempty path query \(q\) and constant \(c\), the problem \(\mathsf{CERTAINTY}(q_{[c]})\) is in \(\mathbf{FO}\). Moreover, it is possible to construct a first-order formula \(\psi(x)\), with free variable \(x\), such that for every constant \(c\), the sentence \(\exists x\left(\psi(x)\wedge x=c\right)\) is a first-order rewriting for \(q_{[c]}\)._
Proof.: The proof inductively constructs a first-order rewriting for \(q_{[c]}\), where the induction is on the number \(n\) of atoms in \(q\). For the basis of the induction, \(n=1\), we have \(q_{[c]}=R(\underline{c},y)\). Then, the first-order formula \(\psi(x)=\exists yR(\underline{x},y)\) obviously satisfies the statement of the lemma.
We next show the induction step, \(n\to n+1\). Let \(R(\underline{x}_{1},x_{2})\) be the left-most atom of \(q\), and assume that \(p:=q\setminus\{R(\underline{x}_{1},x_{2})\}\) is a path query with \(n\geq 1\) atoms. By the induction hypothesis, it is possible to construct a first-order formula \(\varphi(z)\), with free variable \(z\), such that for every constant \(d\),
\[\exists z\left(\varphi(z)\wedge z=d\right)\text{ is a first-order rewriting for }p_{[d]}. \tag{3}\]
We now define \(\psi(x)\) as follows:
\[\psi(x)=\exists y\left(R(\underline{x},y)\right)\wedge\forall z\left(R( \underline{x},z)\rightarrow\varphi(z)\right). \tag{4}\]
We will show that for every constant \(c\), \(\exists x\left(\psi(x)\wedge x=c\right)\) is a first-order rewriting for \(q_{[c]}\). To this end, let \(\mathbf{db}\) be a database instance. It remains to be shown that \(\mathbf{db}\) is a "yes"-instance of \(\mathsf{CERTAINTY}(q_{[c]})\) if and only if \(\mathbf{db}\) satisfies \(\exists x\left(\psi(x)\wedge x=c\right)\).
Assume \(\mathbf{db}\) satisfies \(\exists x\left(\psi(x)\wedge x=c\right)\). Because of the conjunct \(\exists y\left(R(\underline{x},y)\right)\) in (4), we have that \(\mathbf{db}\) includes a block \(R(\underline{c},*)\). Let \(\mathbf{r}\) be a repair of \(\mathbf{db}\). We need to show that \(\mathbf{r}\) satisfies \(q_{[c]}\). Clearly, \(\mathbf{r}\) contains \(R(\underline{c},d)\) for some constant \(d\). Since \(\mathbf{db}\) satisfies \(\exists z\left(\varphi(z)\wedge z=d\right)\), the induction hypothesis (3) tells us that \(\mathbf{r}\) satisfies \(p_{[d]}\). It is then obvious that \(\mathbf{r}\) satisfies \(q_{[c]}\).
Assume \(\mathbf{db}\) is a "yes"-instance for \(\mathsf{CERTAINTY}(q_{[c]})\). Then \(\mathbf{db}\) must obviously satisfy \(\exists y\left(R(\underline{c},y)\right)\). Therefore, \(\mathbf{db}\) includes a block \(R(\underline{c},*)\). Let \(\mathbf{r}\) be an arbitrary repair of \(\mathbf{db}\). There exists \(d\) such that \(R(\underline{c},d)\in\mathbf{r}\). Since \(\mathbf{r}\) satisfies \(q_{[c]}\), it follows that \(\mathbf{r}\) satisfies \(p_{[d]}\). Since \(\mathbf{r}\) is an arbitrary repair, the induction hypothesis (3) tells us that \(\mathbf{db}\) satisfies \(\exists z\left(\varphi(z)\wedge z=d\right)\). It is then clear that \(\mathbf{db}\) satisfies \(\exists x\left(\psi(x)\wedge x=c\right)\).
**Lemma 13**.: _For every path query \(q\) that satisfies \(\mathcal{C}_{1}\), the problem \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\), and a first-order rewriting for \(q\) can be effectively constructed._
Proof.: By Lemmas 5 and 7, a database instance \(\mathbf{db}\) is a "yes"-instance for \(\mathsf{CERTAINTY}(q)\) if and only if there is a constant \(c\) (which depends on \(\mathbf{db}\)) such that \(\mathbf{db}\) is a "yes"-instance for \(\mathsf{CERTAINTY}(q_{[c]})\). By Lemma 12, it is possible to construct a first-order rewriting \(\exists x\left(\psi(x)\wedge x=c\right)\) for \(q_{[c]}\). It is then clear that \(\exists x\left(\psi(x)\right)\) is a first-order rewriting for \(q\).
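The construction in the proofs of Lemmas 12 and 13 is essentially a recursive evaluation: \(\psi(x)\) holds at a constant \(c\) when \(c\) has at least one outgoing \(R\)-edge and every \(R\)-successor of \(c\) certainly satisfies the remainder of the query. A sketch with our own function names (facts as triples, single-letter relation names); correctness for \(\mathsf{CERTAINTY}(q)\) itself requires \(q\) to satisfy \(\mathcal{C}_{1}\).

```python
def psi(db, c, q):
    """Evaluation of the rewriting from Lemma 12: True iff db is a
    yes-instance of CERTAINTY(q_[c])."""
    if not q:
        return True
    R, rest = q[0], q[1:]
    successors = [b for (rel, a, b) in db if rel == R and a == c]
    # exists y R(c, y)   and   forall y ( R(c, y) -> psi(y, rest) )
    return bool(successors) and all(psi(db, d, rest) for d in successors)

def certainty_C1(db, q):
    """CERTAINTY(q) for a path query q satisfying C1 (Lemma 13)."""
    adom = {a for (_, a, _) in db} | {b for (_, _, b) in db}
    return any(psi(db, c, q) for c in adom)

# q = RR satisfies C1; the second instance has a repair without any RR-path.
assert certainty_C1({("R", 0, 1), ("R", 1, 2), ("R", 1, 3), ("R", 2, 4)}, "RR")
assert not certainty_C1({("R", 0, 1), ("R", 0, 2), ("R", 2, 3)}, "RR")
```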
### An NL Algorithm for \(\mathcal{C}_{2}\)
We show that \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{NL}\) if \(q\) satisfies \(\mathcal{C}_{2}\) by expressing it in linear Datalog with stratified negation. The proof will use the syntactic characterization of \(\mathcal{C}_{2}\) established in Lemma 3.
**Lemma 14**.: _For every path query \(q\) that satisfies \(\mathcal{C}_{2}\), the problem \(\mathsf{CERTAINTY}(q)\) is in linear Datalog with stratified negation (and hence in \(\mathbf{NL}\))._
In the remainder of this section, we develop the proof of Lemma 14.
**Definition 13**.: Let \(q\) be a path query. We define \(\mathsf{NFA}^{\min}(q)\) as the automaton that accepts \(w\) if \(w\) is accepted by \(\mathsf{NFA}(q)\) and no proper prefix of \(w\) is accepted by \(\mathsf{NFA}(q)\).
It is well-known that such an automaton \(\mathsf{NFA}^{\min}(q)\) exists.
**Example 6**.: Let \(q=RXRYR\). Then, \(RXRYRYR\) is accepted by \(\mathsf{NFA}(q)\), but not by \(\mathsf{NFA}^{\min}(q)\), because the proper prefix \(RXRYR\) is also accepted by \(\mathsf{NFA}(q)\).
**Definition 14**.: Let \(q\) be a path query and \(\mathbf{r}\) be a consistent database instance. We define \(\mathsf{start}^{\min}(q,\mathbf{r})\) as the set containing all (and only) constants \(c\in\mathsf{adom}(\mathbf{r})\) such that there is a path in \(\mathbf{r}\) that starts in \(c\) and is accepted by \(\mathsf{NFA}^{\min}(q)\).
**Lemma 15**.: _Let \(q\) be a path query. For every consistent database instance \(\mathbf{r}\), we have that \(\mathsf{start}(q,\mathbf{r})=\mathsf{start}^{\min}(q,\mathbf{r})\)._
Proof.: By construction, \(\mathsf{start}^{\min}(q,\mathbf{r})\subseteq\mathsf{start}(q,\mathbf{r})\). Next assume that \(c\in\mathsf{start}(q,\mathbf{r})\) and let \(\pi\) be the path that starts in \(c\) and is accepted by \(\mathsf{NFA}(q)\). Let \(\pi^{-}\) be the shortest prefix of \(\pi\) that is accepted by \(\mathsf{NFA}(q)\). Since \(\pi^{-}\) starts in \(c\) and is accepted by \(\mathsf{NFA}^{\min}(q)\), it follows \(c\in\mathsf{start}^{\min}(q,\mathbf{r})\).
**Lemma 16**.: _Let \(u\cdot v\cdot w\) be a self-join-free word over the alphabet of relation names. Let \(s\) be a suffix of \(uv\) that is distinct from \(uv\). For every integer \(k\geq 0\), \(\mathsf{NFA}^{\min}(s\left(uv\right)^{k}\,wv)\) accepts the language of the regular expression \(s\left(uv\right)^{k}\left(uv\right)^{*}wv\)._
Proof.: Let \(q=s\left(uv\right)^{k}\,wv\). Since \(u\cdot v\cdot w\) is self-join-free, applying the rewinding operation, zero, one, or more times, in the part of \(q\) that precedes \(w\) will repeat the factor \(uv\). This gives words of the form \(s\left(uv\right)^{\ell}wv\) with \(\ell\geq k\). The difficult case is where we rewind a factor of \(q\) that itself contains \(w\) as a factor. In this case, the rewinding operation will repeat a factor of the form \(v_{2}\left(uv\right)^{\ell}wv_{1}\) such that \(v=v_{1}v_{2}\) and \(v_{2}\neq\varepsilon\), which results in words of one of the following forms (\(s=s_{1}\cdot v_{2}\)):
\[\left(s\left(uv\right)^{\ell_{1}}wv_{1}\right)\cdot v_{2}\left(uv \right)^{\ell_{2}}wv_{1}\cdot v_{2}\left(uv\right)^{\ell_{2}}wv_{1}\cdot(v_{ 2});\text{ or}\] \[\left(s_{1}\right)\cdot v_{2}\left(uv\right)^{\ell}wv_{1}\cdot v_{ 2}\left(uv\right)^{\ell}wv_{1}\cdot(v_{2}).\]
These words have a prefix belonging to the language of the regular expression \(s\left(uv\right)^{k}\left(uv\right)^{*}wv\).
**Definition 15**.: Let \(\mathbf{db}\) be a database instance, and \(q\) a path query.
For \(a,b\in\mathsf{adom}(\mathbf{db})\), we write \(\mathbf{db}\models a\overset{q}{\longrightarrow}b\) if there exists a path in \(\mathbf{db}\) from \(a\) to \(b\) with trace \(q\). Even more formally, \(\mathbf{db}\models a\overset{q}{\longrightarrow}b\) if \(\mathbf{db}\) contains facts \(R_{1}(\underline{a_{1}},a_{2}),R_{2}(\underline{a_{2}},a_{3}),\ldots,R_{|q|}( a_{|q|},a_{|q|+1})\) such that \(R_{1}R_{2}\cdots R_{|q|}=q\). We write \(\mathbf{db}\models a\overset{q_{1}}{\longrightarrow}b\overset{q_{2}}{ \longrightarrow}c\) as a shorthand for \(\mathbf{db}\models a\overset{q_{1}}{\longrightarrow}b\) and \(\mathbf{db}\models b\overset{q_{2}}{\longrightarrow}c\).
We write \(\mathbf{db}\models a\overset{q}{\Longrightarrow}b\) if there exists a _consistent path_ in \(\mathbf{db}\) from \(a\) to \(b\) with trace \(q\), where a path is called consistent if it does not contain two distinct key-equal facts.
A constant \(c\in\mathsf{adom}(\mathbf{db})\) is called _terminal for \(q\) in \(\mathbf{db}\)_ if for some (possibly empty) proper prefix \(p\) of \(q\), there is a consistent path in \(\mathbf{db}\) that starts in \(c\), has trace \(p\), and cannot be right extended to a consistent path in \(\mathbf{db}\) with trace \(q\).
Note that for every \(c\in\mathsf{adom}(\mathbf{db})\), we have \(c\overset{\varepsilon}{\Longrightarrow}c\). Clearly, if \(q\) is self-join-free, then \(c\overset{q}{\longrightarrow}d\) implies \(c\overset{q}{\Longrightarrow}d\) (the converse implication always holds).
**Example 7**.: Let \(\mathbf{db}=\{R(\underline{c},d),S(\underline{d},c),R(\underline{c},e),T(\underline{e},f)\}\). Then, \(c\) is terminal for \(RSRT\) in \(\mathbf{db}\) because the path \(R(\underline{c},d),S(\underline{d},c)\) cannot be right extended to a consistent path with trace \(RSRT\), because \(d\) has no outgoing \(T\)-edge. Note incidentally that \(\mathbf{db}\models c\overset{RS}{\longrightarrow}c\overset{RT}{\longrightarrow}f\), but \(\mathbf{db}\not\models c\overset{RSRT}{\Longrightarrow}f\).
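Consistent paths can be found by a depth-first search that records, for every key encountered, the unique fact chosen for it. A small sketch with our own helper name; the assertions replay Example 7.

```python
def consistent_path(db, a, b, q):
    """Is there a consistent path in db from a to b with trace q,
    i.e., a path never using two distinct key-equal facts?"""
    def search(current, rest, chosen):
        if not rest:
            return current == b
        R = rest[0]
        for fact in db:
            rel, c, d = fact
            # A fact may be reused, but a key may not be bound to two facts.
            if rel == R and c == current and chosen.get((R, c), fact) == fact:
                if search(d, rest[1:], {**chosen, (R, c): fact}):
                    return True
        return False
    return search(a, q, {})

db = {("R", "c", "d"), ("S", "d", "c"), ("R", "c", "e"), ("T", "e", "f")}
assert consistent_path(db, "c", "c", "RS")
assert consistent_path(db, "c", "f", "RT")
assert not consistent_path(db, "c", "f", "RSRT")   # c is terminal for RSRT
```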
**Lemma 17**.: _Let \(\mathbf{db}\) be a database instance, and \(c\in\mathsf{adom}(\mathbf{db})\). Let \(q\) be a path query. Then, \(c\) is terminal for \(q\) in \(\mathbf{db}\) if and only if \(\mathbf{db}\) is a "no"-instance of \(\mathsf{CERTAINTY}(q_{[c]})\), with \(q_{[c]}\) as defined by Definition 12._
Proof.: The "only if" direction is straightforward. For the "if" direction, assume that \(\mathbf{db}\) is a "no"-instance of \(\mathsf{CERTAINTY}(q_{[c]})\). Then, there is a repair \(\mathbf{r}\) of \(\mathbf{db}\) such that \(\mathbf{r}\not\models q_{[c]}\). The empty path is a path in \(\mathbf{r}\) that starts in \(c\) and has trace \(\varepsilon\), which is a prefix of \(q\). We can therefore assume a longest prefix \(p\) of \(q\) such that there exists a path \(\pi\) in \(\mathbf{r}\) that starts in \(c\) and has trace \(p\). Since \(\mathbf{r}\) is consistent, \(\pi\) is consistent. From \(\mathbf{r}\not\models q_{[c]}\), it follows that \(p\) is a proper prefix of \(q\). By Definition 15, \(c\) is terminal for \(q\) in \(\mathbf{db}\).
We can now give the proof of Lemma 14.
Proof of Lemma 14.: Assume \(q\) satisfies \(\mathcal{C}_{2}\). By Lemma 3, \(q\) satisfies \(\mathcal{B}_{2a}\) or \(\mathcal{B}_{2b}\). We treat the case that \(q\) satisfies \(\mathcal{B}_{2b}\) (the case that \(q\) satisfies \(\mathcal{B}_{2a}\) is even easier). We have that \(q\) is a factor of \(\left(uv\right)^{k}wv\), where \(k\) is chosen as small as possible, and \(uvw\) is self-join-free. The proof is straightforward if \(k=0\); we assume \(k\geq 1\) from here on. To simplify notation, we will show the case where \(q\) is a suffix of \(\left(uv\right)^{k}wv\); our proof can be easily extended to the case where \(q\) is not a suffix, at the price of some extra notation. There is a suffix \(s\) of \(uv\) such that \(q=s\left(uv\right)^{k-1}wv\).
We first define a unary predicate \(P\) (which depends on \(q\)) such that \(\mathbf{db}\models P(d)\) if for some \(\ell\geq 0\), there are constants \(d_{0},d_{1},\ldots,d_{\ell}\in\mathsf{adom}(\mathbf{db})\) with \(d_{0}=d\) such that:
1. \(\mathbf{db}\models d_{0}\xrightarrow{\mathit{uv}}d_{1}\xrightarrow{\mathit{ uv}}d_{2}\xrightarrow{\mathit{uv}}\cdots\xrightarrow{\mathit{uv}}d_{\ell}\);
2. for every \(i\in\{0,1,\ldots,\ell\}\), \(d_{i}\) is terminal for \(wv\) in \(\mathbf{db}\); and
3. either \(d_{\ell}\) is terminal for \(uv\) in \(\mathbf{db}\), or \(d_{\ell}\in\{d_{0},\ldots,d_{\ell-1}\}\).
**Claim 2**.: The definition of the predicate \(P\) does not change if we replace item 1 by the stronger requirement that for every \(i\in\{0,1,\ldots,\ell-1\}\), there exists a path \(\pi_{i}\) from \(d_{i}\) to \(d_{i+1}\) with trace \(uv\) such that the composed path \(\pi_{0}\cdot\pi_{1}\cdots\pi_{\ell-1}\) is consistent.
Proof.: It suffices to show the following statement by induction on increasing \(l\):
whenever there exist an integer \(l\geq 1\) and constants \(d_{0},d_{1},\ldots,d_{l}\) with \(d_{0}=d\) such that conditions 1, 2, and 3 hold, there exist an integer \(k\geq 1\) and constants \(c_{0},c_{1},\ldots,c_{k}\) with \(c_{0}=d\) such that conditions 1, 2, and 3 hold, and, moreover, for each \(i\in\{0,1,\ldots,k-1\}\), there exists a path \(\pi_{i}\) with trace \(uv\) from \(c_{i}\) to \(c_{i+1}\) such that the composed path \(\pi_{0}\cdot\pi_{1}\cdots\pi_{k-1}\) is consistent.
**Basis \(l=1\).**: Then we have \(\mathbf{db}\models d_{0}\xrightarrow{\mathit{uv}}d_{1}\), witnessed by a path \(\pi_{0}\). Since \(uv\) is self-join-free, the path \(\pi_{0}\) is consistent. The claim thus follows with \(k=l=1\), \(c_{0}=d_{0}\) and \(c_{1}=d_{1}\).
**Inductive step \(l\to l+1\).**: Assume that the statement holds for every integer in \(\{1,2,\ldots,l\}\). Suppose that there exist constants \(d_{0},d_{1},\ldots,d_{l+1}\) with \(d_{0}=d\) such that conditions 1, 2, and 3 hold. For \(i\in\{0,\ldots,l\}\), let \(\pi_{i}\) be a path with trace \(uv\) from \(d_{i}\) to \(d_{i+1}\) in \(\mathbf{db}\). The claim holds if the composed path \(\pi_{0}\cdot\pi_{1}\cdots\pi_{l}\) is consistent, with \(k=l+1\) and \(c_{i}=d_{i}\) for \(i\in\{0,1,\ldots,l+1\}\). Now, assume that for some \(i<j\), the paths \(\pi_{i}\) and \(\pi_{j}\) contain, respectively, \(R(\underline{a},b_{1})\) and \(R(\underline{a},b_{2})\) with \(b_{1}\neq b_{2}\). It is easily verified that
\[\mathbf{db}\models d_{0}\xrightarrow{\mathit{uv}}d_{1}\xrightarrow{\mathit{ uv}}d_{2}\xrightarrow{\mathit{uv}}\cdots\xrightarrow{\mathit{uv}}d_{i}\xrightarrow{ \mathit{uv}}d_{j+1}\xrightarrow{\mathit{uv}}\cdots\xrightarrow{\mathit{uv}}d _{l+1},\]
where the number of \(uv\)-steps is strictly less than \(l+1\). Informally, we follow the original path until we reach \(R(\underline{a},b_{1})\), but then follow \(R(\underline{a},b_{2})\) instead of \(R(\underline{a},b_{1})\), and continue on the path that proves \(\mathbf{db}\models d_{j}\xrightarrow{\mathit{uv}}d_{j+1}\). Then the claim holds by applying the inductive hypothesis on constants \(d_{0},d_{1},\ldots,d_{i},d_{j+1},\ldots,d_{l+1}\).
The proof is now complete.
Since we care about the expressibility of the predicate \(P\) in Datalog, Claim 2 is not cooked into the definition of \(P\). The idea is the same as in an **NL**-algorithm for reachability: if there exists a directed path from \(s\) to \(t\), then there is such a path without repeated vertices; but we do not care for repeated vertices when computing reachability.
**Claim 3**.: The definition of predicate \(P\) does not change if we require that for \(i\in\{0,1,\ldots,\ell-1\}\), \(d_{i}\) is not terminal for \(uv\) in \(\mathbf{db}\).
Proof.: Assume that for some \(0\leq i<\ell\), \(d_{i}\) is terminal for \(uv\) in \(\mathbf{db}\). Then, all conditions in the definition are satisfied by choosing \(\ell\) equal to \(i\).
Claim 3 is not cooked into the definition of \(P\) to simplify the encoding of \(P\) in Datalog.
Next, we define a unary predicate \(O\) such that \(\mathbf{db}\models O(c)\) for a constant \(c\) if \(c\in\mathsf{adom}(\mathbf{db})\) and one of the following holds true:
1. \(c\) is terminal for \(s\left(uv\right)^{k-1}\) in \(\mathbf{db}\); or
2. there is a constant \(d\in\mathsf{adom}(\mathbf{db})\) such that both \(\mathbf{db}\models c\overset{s\left(uv\right)^{k-1}}{\rightsquigarrow}d\) and \(\mathbf{db}\models P(d)\).
**Claim 4**.: Let \(c\in\mathsf{adom}(\mathbf{db})\). The following are equivalent:
1. there is a repair \(\mathbf{r}\) of \(\mathbf{db}\) that contains no path that starts in \(c\) and whose trace is in the language of the regular expression \(s\left(uv\right)^{k-1}\left(uv\right)^{*}wv\); and
2. \(\mathbf{db}\models O(c)\).
Proof.: Let \(wv=S_{0}S_{1}\cdots S_{m-1}\) and \(uv=R_{0}R_{1}\cdots R_{n-1}\).
(I)\(\implies\)(II) Assume that item (I) holds true. Let the first relation name of \(s\) be \(R_{i}\). Starting from \(c\), let \(\pi\) be a maximal (possibly infinite) path in \(\mathbf{r}\) that starts in \(c\) and has trace \(R_{i}R_{i+1}R_{i+2}\cdots\), where addition is modulo \(n\). Since \(\mathbf{r}\) is consistent, \(\pi\) is deterministic. Since \(\mathbf{r}\) is finite, \(\pi\) contains only finitely many distinct edges. Therefore, \(\pi\) ends either in a loop or in an edge \(R_{j}(\underline{d},e)\) such that \(\mathbf{db}\models\neg\exists yR_{j+1}(\underline{e},y)\) (recall that \(\mathbf{r}\) contains a fact from every block of \(\mathbf{db}\)). Assume that \(\pi\) has a prefix \(\pi^{\prime}\) with trace \(s\left(uv\right)^{k-1}\); if \(e\) occurs at the non-primary key position of the last \(R_{n-1}\)-fact of \(\pi^{\prime}\) or of any \(R_{n-1}\)-fact occurring afterwards in \(\pi\), then it follows from item (I) that there exist a (possibly empty) prefix \(pS_{j}\) of \(wv\) and a constant \(f\in\mathsf{adom}(\mathbf{r})\) such that \(\mathbf{r}\models e\stackrel{{ p}}{{\longrightarrow}}f\) and \(\mathbf{db}\models\neg\exists yS_{j}(\underline{f},y)\). It is now easily verified that \(\mathbf{db}\models O(c)\).
(II)\(\implies\)(I) Assume \(\mathbf{db}\models O(c)\). It is easily verified that the desired result holds true if \(c\) is terminal for \(s\left(uv\right)^{k-1}\) in \(\mathbf{db}\). Assume from here on that \(c\) is not terminal for \(s\left(uv\right)^{k-1}\) in \(\mathbf{db}\). That is, for every repair \(\mathbf{r}\) of \(\mathbf{db}\), there is a constant \(d\) such that \(\mathbf{r}\models c\stackrel{s\left(uv\right)^{k-1}}{\longrightarrow}d\). Then, there is a consistent path \(\alpha\) with trace \(s\left(uv\right)^{k-1}\) from \(c\) to some constant \(d\in\mathsf{adom}(\mathbf{db})\) such that \(\mathbf{db}\models P(d)\), using the stronger definition of \(P\) implied by Claims 2 and 3. Let \(d_{0},\ldots,d_{\ell}\) be as in our (stronger) definition of \(P(d)\), that is, first, \(d_{0},\ldots,d_{\ell-1}\) are not terminal for \(uv\) in \(\mathbf{db}\) (cf. Claim 3), and second, there is a \(\subseteq\)-minimal consistent subset \(\pi\) of \(\mathbf{db}\) such that \(\pi\models d_{0}\stackrel{uv}{\longrightarrow}d_{1}\stackrel{uv}{\longrightarrow}d_{2}\stackrel{uv}{\longrightarrow}\cdots\stackrel{uv}{\longrightarrow}d_{\ell}\) (cf. Claim 2). We construct a repair \(\mathbf{r}\) as follows:
1. insert into \(\mathbf{r}\) all facts of \(\pi\);
2. for every \(i\in\{0,\ldots,\ell\}\), \(d_{i}\) is terminal for \(wv\) in \(\mathbf{db}\). We ensure that \(\mathbf{r}\models d_{i}\stackrel{S_{0}S_{1}\cdots S_{j_{i}}}{\longrightarrow}e_{i}\) for some \(j_{i}\in\{0,\ldots,m-2\}\) and some constant \(e_{i}\) such that \(\mathbf{db}\models\neg\exists yS_{j_{i}+1}(\underline{e_{i}},y)\);
3. if \(d_{\ell}\) is terminal for \(uv\) in \(\mathbf{db}\), then we ensure that \(\mathbf{r}\models d_{\ell}\stackrel{R_{0}R_{1}\cdots R_{j}}{\longrightarrow}e\) for some \(j\in\{0,\ldots,n-2\}\) and some constant \(e\) such that \(\mathbf{db}\models\neg\exists yR_{j+1}(\underline{e},y)\);
4. insert into \(\mathbf{r}\) the facts of \(\alpha\) that are not key-equal to a fact already in \(\mathbf{r}\); and
5. complete \(\mathbf{r}\) into a \(\subseteq\)-maximal consistent subset of \(\mathbf{db}\).
Since \(\mathbf{r}\) is a repair of \(\mathbf{db}\), there exists a path \(\delta\) with trace \(s\left(uv\right)^{k-1}\) in \(\mathbf{r}\) that starts from \(c\). If \(\delta\neq\alpha\), then \(\delta\) must contain a fact of \(\pi\) that was inserted in step 1. Consequently, no matter whether \(\delta=\alpha\) or \(\delta\neq\alpha\), the endpoint of \(\delta\) belongs to \(\{d_{0},\ldots,d_{\ell}\}\). It follows that there is a (possibly empty) path from \(\delta\)'s endpoint to \(d_{\ell}\) whose trace is of the form \(\left(uv\right)^{*}\). Two cases can occur:
* \(d_{\ell}\) is terminal for \(uv\) in \(\mathbf{db}\). Then, by step 3 of the construction, the \(\left(uv\right)^{*}\)-path from \(\delta\)'s endpoint cannot be extended beyond \(d_{\ell}\) with another full \(uv\)-step in \(\mathbf{r}\).
* \(d_{\ell}\) is not terminal for \(uv\) in \(\mathbf{db}\). Then there is \(j\in\{0,\ldots,\ell-1\}\) such that \(d_{j}=d_{\ell}\). Then, there is a path of the form \(\left(uv\right)^{*}\) that starts from \(\delta\)'s endpoint and eventually loops.
Since, by construction, each \(d_{i}\) is terminal for \(wv\) in \(\mathbf{r}\), it will be the case that \(\delta\) cannot be extended to a path in \(\mathbf{r}\) whose trace is in the language of the regular expression \(s\left(uv\right)^{k-1}\left(uv\right)^{*}wv\). This establishes item (I).
**Claim 5**.: The unary predicate \(O\) is expressible in linear Datalog with stratified negation.
Proof.: The construction of the linear Datalog program is straightforward. Concerning the computation of predicates \(P\) and \(O\), note that it can be checked in \(\mathbf{FO}\) whether or not a constant \(c\) is terminal for some path query \(q\), by Lemmas 12 and 17. The only need for recursion comes from condition 1 in the definition of the predicate \(P\), which searches for a directed path of a particular form. We give a program for \(q=UVUVWV\), where \(\mathtt{c}(\mathtt{X})\) states that \(\mathtt{X}\) is a constant, and \(\mathtt{ukey}(\mathtt{X})\) states that \(\mathtt{X}\) is the primary key of some \(U\)-fact (\(\mathtt{vkey}\) and \(\mathtt{wkey}\) are defined analogously). \(\mathtt{consistent}(\mathtt{X1},\mathtt{X2},\mathtt{X3},\mathtt{X4})\) is true if either \(\mathtt{X1}\neq\mathtt{X3}\) or \(\mathtt{X2}=\mathtt{X4}\) (or both).
uvterminal(X) :- c(X), not ukey(X).
uvterminal(X) :- u(X,Y), not vkey(Y).
wvterminal(X) :- c(X), not wkey(X).
wvterminal(X) :- w(X,Y), not vkey(Y).

uv2terminal(X) :- uvterminal(X).
uv2terminal(X1) :- u(X1,X2), v(X2,X3), uvterminal(X3).

uvpath(X1,X3) :- u(X1,X2), v(X2,X3), wvterminal(X1), wvterminal(X3).
uvpath(X1,X4) :- uvpath(X1,X2), u(X2,X3), v(X3,X4), wvterminal(X4).

p(X) :- wvterminal(X), uvterminal(X). %%%% the empty path.
p(X) :- uvpath(X,Y), uvterminal(Y).
p(X) :- uvpath(X,Y), uvpath(Y,Y). %%%% p and uvpath are not mutually recursive.

o(X) :- uv2terminal(X).
o(X1) :- u(X1,X2), v(X2,X3), u(X3,X4), v(X4,X5), consistent(X1,X2,X3,X4), consistent(X2,X3,X4,X5), p(X5).
The above program is in linear Datalog with stratified negation. It is easily seen that any path query satisfying \(\mathcal{B}_{2b}\) admits such a program for the predicate \(O\).
By Lemmas 7, 15, and 16, the following are equivalent:
1. **db** is a "no"-instance of \(\mathsf{CERTAINTY}(q)\); and
2. for every constant \(c\in\mathsf{adom}(\mathbf{db})\), there is a repair \(\mathbf{r}\) of **db** that contains no path that starts in \(c\) and whose trace is in the language of the regular expression \(s\left(uv\right)^{k-1}\left(uv\right)^{*}wv\).
By Claim 4, item (b) holds true if and only if for every \(c\in\mathsf{adom}(\mathbf{db})\), \(\mathbf{db}\models\neg O(c)\). It follows from Claim 5 that the latter test is in linear Datalog with stratified negation, which concludes the proof of Lemma 14.
## 7 Complexity Lower Bounds
In this section, we show the complexity lower bounds of Theorem 3. For a path query \(q=\{R_{1}(\underline{x_{1}},x_{2})\),..., \(R_{k}(\underline{x_{k}},x_{k+1})\}\) and constants \(a,b\), we define the following database instances:
\[\phi_{a}^{b}[q] := \{R_{1}(\underline{a},\Box_{2}),R_{2}(\underline{\Box_{2}},\Box_{3}),\ldots,R_{k}(\underline{\Box_{k}},b)\}\] \[\phi_{a}^{\perp}[q] := \{R_{1}(\underline{a},\Box_{2}),R_{2}(\underline{\Box_{2}},\Box_{3}),\ldots,R_{k}(\underline{\Box_{k}},\Box_{k+1})\}\] \[\phi_{\perp}^{b}[q] := \{R_{1}(\underline{\Box_{1}},\Box_{2}),R_{2}(\underline{\Box_{2}},\Box_{3}),\ldots,R_{k}(\underline{\Box_{k}},b)\}\]
where the symbols \(\Box_{i}\) denote fresh constants not occurring elsewhere. Significantly, two occurrences of \(\Box_{i}\) in different instantiations of these gadgets represent different constants.
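The gadgets \(\phi_{a}^{b}[q]\), \(\phi_{a}^{\perp}[q]\), and \(\phi_{\perp}^{b}[q]\) are easy to materialize programmatically. Below is a small Python sketch; the representation of facts as (relation, key, value) triples, the helper names, and the use of `None` for \(\perp\) are our own illustrative choices.

```python
import itertools

_fresh = itertools.count()

def fresh():
    # every call returns a brand-new constant, mimicking a fresh box symbol
    return f"box{next(_fresh)}"

def phi(start, end, word):
    """Facts of phi_start^end[word], where word is a list of relation names
    and None stands for a fresh endpoint (the bottom case)."""
    nodes = [start if start is not None else fresh()]
    nodes += [fresh() for _ in range(len(word) - 1)]
    nodes.append(end if end is not None else fresh())
    return {(word[i], nodes[i], nodes[i + 1]) for i in range(len(word))}
```

Each call generates its own interior constants, matching the convention that two occurrences of \(\Box_{i}\) in different gadgets denote different constants.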
### NL-Hardness
We first show that if a path query \(q\) violates \(\mathcal{C}_{1}\), then \(\mathsf{CERTAINTY}(q)\) is **NL**-hard, and therefore not in **FO**.
**Lemma 18**.: _If a path query \(q\) violates \(\mathcal{C}_{1}\), then \(\mathsf{CERTAINTY}(q)\) is **NL**-hard._
Proof.: Assume that \(q\) does not satisfy \(\mathcal{C}_{1}\). Then, there exists a relation name \(R\) such that \(q=uRvRw\) and \(q\) is not a prefix of \(uRvRvRw\). It follows that \(Rw\) is not a prefix of \(RvRw\). Since \(Rv\neq\varepsilon\), there exists no (conjunctive query) homomorphism from \(q\) to \(uRw\).
The problem \(\mathsf{REACHABILITY}\) takes as input a directed graph \(G(V,E)\) and two vertices \(s,t\in V\), and asks whether \(G\) has a directed path from \(s\) to \(t\). This problem is **NL**-complete and remains **NL**-complete when the inputs are acyclic graphs. Recall that **NL** is closed under complement. We present a first-order reduction from \(\mathsf{REACHABILITY}\) to the complement of \(\mathsf{CERTAINTY}(q)\), for acyclic directed graphs.
Let \(G=(V,E)\) be an acyclic directed graph and \(s,t\in V\). Let \(G^{\prime}=(V\cup\{s^{\prime},t^{\prime}\},E\cup\{(s^{\prime},s),(t,t^{\prime})\})\), where \(s^{\prime},t^{\prime}\) are fresh vertices. We construct an input instance \(\mathbf{db}\) for \(\mathsf{CERTAINTY}(q)\) as follows:
* for each vertex \(x\in V\cup\{s^{\prime}\}\), we add \(\phi_{\bot}^{x}[u]\);
* for each edge \((x,y)\in E\cup\{(s^{\prime},s),(t,t^{\prime})\}\), we add \(\phi_{x}^{y}[Rv]\); and
* for each vertex \(x\in V\), we add \(\phi_{x}^{\perp}[Rw]\).
This construction can be executed in **FO**. Figure 8 shows an example of the above construction. Observe that the only conflicts in **db** occur in \(R\)-facts outgoing from the same vertex.
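For concreteness, the instance of this reduction can be assembled with the hypothetical `phi` helper sketched above; the decomposition \(q=u\cdot Rv\cdot Rw\) is passed in as three lists of relation names, and the example decomposition below is our own illustrative choice.

```python
def reachability_reduction(vertices, edges, s, t, u, Rv, Rw):
    """Database instance for the NL-hardness reduction of Lemma 18 (a sketch)."""
    s_prime, t_prime = "s_prime", "t_prime"
    db = set()
    for x in list(vertices) + [s_prime]:
        db |= phi(None, x, u)                          # phi_bottom^x[u]
    for (x, y) in list(edges) + [(s_prime, s), (t, t_prime)]:
        db |= phi(x, y, Rv)                            # phi_x^y[Rv]
    for x in vertices:
        db |= phi(x, None, Rw)                         # phi_x^bottom[Rw]
    return db

# Figure 8's graph, with an illustrative decomposition u = U, Rv = R, Rw = RW:
db = reachability_reduction({"s", "a", "t"}, {("s", "a"), ("a", "t")},
                            "s", "t", u=["U"], Rv=["R"], Rw=["R", "W"])
```

The only conflicting facts produced this way are the \(R\)-facts whose primary key is the same vertex, as observed above.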
We now show that there exists a directed path from \(s\) to \(t\) in \(G\) if and only if there exists a repair of **db** that does not satisfy \(q\).
Suppose that there is a directed path from \(s\) to \(t\) in \(G\). Then, \(G^{\prime}\) has a directed path \(P=s,x_{0},x_{1},\ldots,t,t^{\prime}\). Then, consider the repair \(\mathbf{r}\) that chooses the first \(R\)-fact from \(\phi_{x}^{y}[Rv]\) for each edge \((x,y)\) on the path \(P\), and the first \(R\)-fact from \(\phi_{y}^{\perp}[Rw]\) for each \(y\) not on the path \(P\). We show that \(\mathbf{r}\) falsifies \(q\). Assume for the sake of contradiction that \(\mathbf{r}\) satisfies \(q\). Then, there exists a valuation \(\theta\) for the variables in \(q\) such that \(\theta(q)\subseteq\mathbf{r}\). Since, as argued in the beginning of this proof, there exists no (conjunctive query) homomorphism from \(q\) to \(uRw\), it must be that all facts in \(\theta(q)\) belong to a path in \(\mathbf{r}\) with trace \(u\left(Rv\right)^{k}\), for some \(k\geq 0\). Since, by construction, no constants are repeated on such paths, there exists a (conjunctive query) homomorphism from \(q\) to \(u\left(Rv\right)^{k}\), which implies that \(Rw\) is a prefix of \(RvRw\), a contradiction. We conclude by contradiction that \(\mathbf{r}\) falsifies \(q\).
Proof by contraposition. Suppose that there is no directed path from \(s\) to \(t\) in \(G\). Let \(\mathbf{r}\) be any repair of **db**; we will show that \(\mathbf{r}\) satisfies \(q\). Indeed, there exists a maximal path \(P=x_{0},x_{1},\ldots,x_{n}\) such that \(x_{0}=s^{\prime}\), \(x_{1}=s\), and \(\phi_{x_{i}}^{x_{i+1}}[Rv]\subseteq\mathbf{r}\) for every \(i\in\{0,\ldots,n-1\}\). By construction, \(s^{\prime}\) cannot reach \(t^{\prime}\) in \(G^{\prime}\), and thus \(x_{n}\neq t^{\prime}\). Since \(P\) is maximal, we must have \(\phi_{x_{n}}^{\perp}[Rw]\subseteq\mathbf{r}\). Then \(\phi_{\perp}^{x_{n-1}}[u]\cup\phi_{x_{n-1}}^{x_{n}}[Rv]\cup\phi_{x_{n}}^{\perp}[Rw]\) satisfies \(q\).
### coNP-Hardness
Next, we show the **coNP**-hard lower bound.
**Lemma 19**.: _If a path query \(q\) violates \(\mathcal{C}_{3}\), then \(\mathsf{CERTAINTY}(q)\) is_ **coNP**_-hard._
Proof.: If \(q\) does not satisfy \(\mathcal{C}_{3}\), then there exists a relation \(R\) such that \(q=uRvRw\) and \(q\) is not a factor of \(uRvRvRw\). Note that this means that there is no homomorphism from \(q\) to \(uRvRvRw\). Also, \(u\) must be nonempty (otherwise, \(q=RvRw\) is trivially a suffix of \(RvRvRw\)). Let \(S\) be the first relation of \(u\).
The proof is a first-order reduction from \(\mathsf{SAT}\) to the complement of \(\mathsf{CERTAINTY}(q)\). The problem \(\mathsf{SAT}\) asks whether a given propositional formula in CNF has a satisfying truth assignment.
Given any formula \(\psi\) for \(\mathsf{SAT}\), we construct an input instance **db** for \(\mathsf{CERTAINTY}(q)\) as follows:
* for each variable \(z\), we add \(\phi_{z}^{\perp}[Rw]\) and \(\phi_{z}^{\perp}[RvRw]\);
* for each clause \(C\) and positive literal \(z\) of \(C\), we add \(\phi_{C}^{z}[u]\);
* for each clause \(C\) and variable \(z\) that occurs in a negative literal of \(C\), we add \(\phi_{C}^{z}[uRv]\).
This construction can be executed in **FO**. Figure 9 depicts an example of the above construction. Intuitively, \(\phi_{z}^{\perp}[Rw]\) corresponds to setting the variable \(z\) to true, and \(\phi_{z}^{\perp}[RvRw]\) to false. There are two types of conflicts that occur in **db**. First, we have conflicting facts of the form \(S(\underline{C},*)\); resolving this conflict corresponds to the clause \(C\) choosing one of its literals. Moreover, for each variable \(z\), we have conflicting facts of the form \(R(\underline{z},*)\); resolving this conflict corresponds to the variable \(z\) choosing a truth assignment.
We show now that \(\psi\) has a satisfying truth assignment if and only if there exists a repair of **db** that does not satisfy \(q\).
Assume that there exists a satisfying truth assignment \(\sigma\) for \(\psi\). Then for any clause \(C\), there exists a variable \(z_{C}\in C\) whose corresponding literal is true in \(C\) under \(\sigma\). Consider the repair \(\mathbf{r}\) that:
* for each variable \(z\), it chooses the first \(R\)-fact of \(\phi_{z}^{\perp}[Rw]\) if \(\sigma(z)\) is true, otherwise the first \(R\)-fact of \(\phi_{z}^{\perp}[RvRw]\);
Figure 8: Database instance for the **NL**-hardness reduction from the graph \(G\) with \(V=\{s,a,t\}\) and \(E=\{(s,a),(a,t)\}\).
* for each clause \(C\), it chooses the first \(S\)-fact of \(\phi^{z}_{C}[u]\) if \(z_{C}\) is positive in \(C\), or the first \(S\)-fact of \(\phi^{z}_{C}[uRv]\) if \(z_{C}\) is negative in \(C\).
Assume for the sake of contradiction that \(\mathbf{r}\) satisfies \(q\). Then we must have a homomorphism from \(q\) to either \(uRw\) or \(uRvRvRw\). But the former is not possible, while the latter contradicts the fact that \(q\) is not a factor of \(uRvRvRw\). We conclude by contradiction that \(\mathbf{r}\) falsifies \(q\).
Suppose that there exists a repair \(\mathbf{r}\) of \(\mathbf{db}\) that falsifies \(q\). Consider the assignment \(\sigma\):
\[\sigma(z)=\begin{cases}\text{true}&\text{if }\phi_{z}^{\perp}[Rw]\subseteq\mathbf{r}\\ \text{false}&\text{if }\phi_{z}^{\perp}[RvRw]\subseteq\mathbf{r}\end{cases}\]
We claim that \(\sigma\) is a satisfying truth assignment for \(\psi\). Indeed, for each clause \(C\), the repair must have chosen a variable \(z\) in \(C\). If \(z\) appears as a positive literal in \(C\), then \(\phi_{C}^{z}[u]\subseteq\mathbf{r}\). Since \(\mathbf{r}\) falsifies \(q\), we must have \(\phi_{z}^{\perp}[Rw]\subseteq\mathbf{r}\). Thus, \(\sigma(z)\) is true and \(C\) is satisfied. If \(z\) appears in a negative literal, then \(\phi_{C}^{z}[uRv]\subseteq\mathbf{r}\). Since \(\mathbf{r}\) falsifies \(q\), we must have \(\phi_{z}^{\perp}[RvRw]\subseteq\mathbf{r}\). Thus, \(\sigma(z)\) is false and \(C\) is again satisfied.
### PTIME-Hardness
Finally, we show the **PTIME**-hard lower bound.
**Lemma 20**.: _If a path query \(q\) violates \(\mathcal{C}_{2}\), then \(\mathsf{CERTAINTY}(q)\) is_ **PTIME**_-hard._
Proof.: Suppose \(q\) violates \(\mathcal{C}_{2}\). If \(q\) also violates \(\mathcal{C}_{3}\), then the problem \(\mathsf{CERTAINTY}(q)\) is **PTIME**-hard since it is **coNP**-hard by Lemma 19. Otherwise, it is possible to write \(q=uRv_{1}Rv_{2}Rw\), with three consecutive occurrences of \(R\) such that \(v_{1}\neq v_{2}\) and \(Rw\) is not a prefix of \(Rv_{1}\). Let \(v\) be the maximal path query such that \(v_{1}=vv_{1}^{+}\) and \(v_{2}=vv_{2}^{+}\). Thus \(v_{1}^{+}\neq v_{2}^{+}\) and the first relation names of \(v_{1}^{+}\) and \(v_{2}^{+}\) are different.
Our proof is a reduction from the Monotone Circuit Value Problem (MCVP) known to be **PTIME**-complete [18]:
**Problem:**: MCVP
**Input:**: A monotone Boolean circuit \(C\) on inputs \(x_{1}\), \(x_{2}\),..., \(x_{n}\) and output gate \(o\); an assignment \(\sigma:\{x_{i}\mid 1\leq i\leq n\}\rightarrow\{0,1\}\).
**Question:**: What is the value of the output \(o\) under \(\sigma\)?
We construct an instance \(\mathbf{db}\) for \(\mathsf{CERTAINTY}(q)\) as follows:
* for the output gate \(o\), we add \(\phi_{\perp}^{o}[uRv_{1}]\);
* for each input variable \(x\) with \(\sigma(x)=1\), we add \(\phi^{\perp}_{x}[Rv_{2}Rw]\);
* for each gate \(g\), we add \(\phi^{g}_{\perp}[u]\) and \(\phi^{\perp}_{g}[Rv_{2}Rw]\);
* for each AND gate \(g=g_{1}\wedge g_{2}\), we add \[\phi^{g_{1}}_{g}[Rv_{1}]\cup\phi^{g_{2}}_{g}[Rv_{1}].\] Here, \(g_{1}\) and \(g_{2}\) can be gates or input variables; and
* for each OR gate \(g=g_{1}\lor g_{2}\), we add \[\begin{array}{ccc}\phi_{g}^{c_{1}}[Rv]&\cup&\phi_{c_{1}}^{g_{1}}[v_{1}^{+}]&\cup&\phi_{c_{1}}^{c_{2}}[v_{2}^{+}]\\ \cup&\phi_{\perp}^{c_{2}}[u]&\cup&\phi_{c_{2}}^{g_{2}}[Rv_{1}]&\cup&\phi_{c_{2}}^{\perp}[Rw]\end{array}\] where \(c_{1},c_{2}\) are fresh constants.
This construction can be executed in **FO**. An example of the gadget constructions is shown in Figure 10. We next show that the output gate \(o\) is evaluated to \(1\) under \(\sigma\) if and only if each repair of **db** satisfies \(q\).
Suppose the output gate \(o\) is evaluated to \(1\) under \(\sigma\). Consider any repair \(\mathbf{r}\). We construct a sequence of gates starting from \(o\), with the invariant that every gate \(g\) evaluates to \(1\), and there is a path of the form \(uRv_{1}\) in \(\mathbf{r}\) that ends in \(g\). The output gate \(o\) evaluates to \(1\), and also we have that \(\phi_{\perp}^{o}[uRv_{1}]\subseteq\mathbf{r}\) by construction. Suppose that we are at gate \(g\). If there is a \(Rv_{2}Rw\) path in \(\mathbf{r}\) that starts in \(g\), the sequence ends and the query \(q\) is satisfied. Otherwise, we distinguish two cases:
1. \(g=g_{1}\wedge g_{2}\). Then, we choose a gate \(g_{i}\), \(i\in\{1,2\}\), with \(\phi_{g}^{g_{i}}[Rv_{1}]\subseteq\mathbf{r}\). Since both gates evaluate to \(1\) and \(\phi_{\perp}^{g}[u]\subseteq\mathbf{r}\), the invariant holds for the chosen gate.
2. \(g=g_{1}\lor g_{2}\). If \(g_{1}\) evaluates to \(1\), we choose \(g_{1}\). Observe that \(\phi_{\perp}^{g}[u]\cup\phi_{g}^{c_{1}}[Rv]\cup\phi_{c_{1}}^{g_{1}}[v_{1}^{+}]\) creates the desired \(uRv_{1}\) path. Otherwise \(g_{2}\) evaluates to \(1\). If \(\phi_{c_{2}}^{\perp}[Rw]\subseteq\mathbf{r}\), then there is a path with trace \(uRv_{1}\) ending in \(g\), and a path with trace \(Rv_{2}Rw\) starting in \(g\), and therefore \(\mathbf{r}\) satisfies \(q\). If \(\phi_{c_{2}}^{\perp}[Rw]\nsubseteq\mathbf{r}\), we choose \(g_{2}\) and the invariant holds.
If the query is not satisfied at any point in the sequence, we will reach an input variable \(x\) evaluated to \(1\). But then there is an outgoing \(Rv_{2}Rw\) path from \(x\), which means that \(q\) must be satisfied.
Proof by contraposition. Assume that \(o\) is evaluated to \(0\) under \(\sigma\). We construct a repair \(\mathbf{r}\) as follows, for each gate \(g\):
* if \(g\) is evaluated to \(1\), we choose the first \(R\)-fact in \(\phi_{g}^{\perp}[Rv_{2}Rw]\);
* if \(g=g_{1}\wedge g_{2}\) and \(g\) is evaluated to \(0\), let \(g_{i}\) be the gate or input variable evaluated to \(0\). We then choose \(\phi_{g}^{g_{i}}[Rv_{1}]\);
* if \(g=g_{1}\lor g_{2}\) and \(g\) is evaluated to \(0\), we choose \(\phi_{g}^{c_{1}}[Rv]\); and
* if \(g=g_{1}\lor g_{2}\), we choose \(\phi_{c_{2}}^{g_{2}}[Rv_{1}]\).
For a path query \(p\), we write \(\mathtt{head}(p)\) for the variable at the key-position of the first atom, and \(\mathtt{rear}(p)\) for the variable at the non-key position of the last atom.
Assume for the sake of contradiction that \(\mathbf{r}\) satisfies \(q\). Then, there exists some valuation \(\theta\) such that \(\theta(uRv_{1}Rv_{2}Rw)\subseteq\mathbf{r}\). Then the gate \(g^{*}:=\theta(\mathtt{head}(Rv_{1}))\) is evaluated to \(0\) by construction. Let \(g_{1}:=\theta(\mathtt{rear}(Rv_{1}))\). By construction, for \(g^{*}=g_{1}\wedge g_{2}\) or \(g^{*}=g_{1}\lor g_{2}\), we must have \(\phi_{g}^{g_{1}}[Rv_{1}]\subseteq\mathbf{r}\) and \(g_{1}\) is a gate or an input variable also evaluated to \(0\). By our construction of \(\mathbf{r}\), there is no path with trace \(Rv_{2}Rw\) outgoing from \(g_{1}\). However, \(\theta(Rv_{2}Rw)\subseteq\mathbf{r}\), this can only happen when \(g_{1}\) is an OR gate, and one of the following occurs:
* Case that \(|Rw|\leq|Rv_{1}|\), and the trace of \(\theta(Rv_{2}Rw)\) is a prefix of \(Rv_{2}Rv_{1}\). Then \(Rw\) is a prefix of \(Rv_{1}\), a contradiction.
Figure 10: Gadgets for the **PTIME**-hardness reduction.
* Case that \(\left|Rw\right|>\left|Rv_{1}\right|\), and \(Rv_{2}Rv_{1}\) is a prefix of the trace of \(\theta(Rv_{2}Rw)\). Consequently, \(Rv_{1}\) is a prefix of \(Rw\). Then, for every \(k\geq 1\), \(\mathcal{L}^{\texttt{t}*}(q)\) contains \(uRv_{1}\left(Rv_{2}\right)^{k}Rw\). It is now easily verified that for large enough values of \(k\), \(uRv_{1}Rv_{2}Rw\) is not a factor of \(uRv_{1}\left(Rv_{2}\right)^{k}Rw\). By Lemmas 5 and 19, \(\mathsf{CERTAINTY}(q)\) is \(\mathsf{coNP}\)-hard.
## 8 Path Queries with Constants
We now extend our complexity classification of \(\mathsf{CERTAINTY}(q)\) to path queries in which constants can occur.
**Definition 16** (Generalized path queries).: A _generalized path query_ is a Boolean conjunctive query of the following form:
\[q=\{R_{1}(\underline{s_{1}},s_{2}),R_{2}(\underline{s_{2}},s_{3}),\ldots,R_{ k}(\underline{s_{k}},s_{k+1})\}, \tag{5}\]
where \(s_{1}\), \(s_{2}\),..., \(s_{k+1}\) are constants or variables, all distinct, and \(R_{1}\), \(R_{2}\),..., \(R_{k}\) are (not necessarily distinct) relation names. Significantly, every constant can occur at most twice: at a non-primary-key position and the next primary-key-position.
The _characteristic prefix_ of \(q\), denoted by \(\mathsf{char}(q)\), is the longest prefix
\[\{R_{1}(\underline{s_{1}},s_{2}),R_{2}(\underline{s_{2}},s_{3}),\ldots,R_{ \ell}(\underline{s_{\ell}},s_{\ell+1})\},0\leq\ell\leq k\]
such that no constant occurs among \(s_{1}\), \(s_{2}\),..., \(s_{\ell}\) (but \(s_{\ell+1}\) can be a constant). Clearly, if \(q\) is constant-free, then \(\mathsf{char}(q)=q\).
**Example 8**.: If \(q=\{R(\underline{x},y)\), \(S(\underline{y},0)\), \(T(\underline{0},1)\), \(R(\underline{1},w)\}\), where \(0\) and \(1\) are constants, then \(\mathsf{char}(q)=\{R(\underline{x},y)\), \(S(\underline{y},0)\}\).
The following lemma implies that if a generalized path query \(q\) starts with a constant, then \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\). This explains why the complexity classification in the remainder of this section will only depend on \(\mathsf{char}(q)\).
**Lemma 21**.: _For any generalized path query \(q\), \(\mathsf{CERTAINTY}(p)\) is in \(\mathbf{FO}\), where \(p:=q\setminus\mathsf{char}(q)\)._
We now introduce some definitions and notations used in our complexity classification. The following definition introduces a convenient syntactic shorthand for characteristic prefixes previously defined in Definition 16.
**Definition 17**.: Let \(q=\{R_{1}(\underline{x_{1}},x_{2})\), \(R_{2}(\underline{x_{2}},x_{3})\),..., \(R_{k}(\underline{x_{k}},x_{k+1})\}\) be a path query. We write \(\llbracket q,c\rrbracket\) for the generalized path query obtained from \(q\) by replacing \(x_{k+1}\) with the constant \(c\). The constant-free path query \(q\) will be denoted by \(\llbracket q,\top\rrbracket\), where \(\top\) is a distinguished special symbol.
**Definition 18** (Prefix homomorphism).: Let
\[q = \{R_{1}(\underline{s_{1}},s_{2}),R_{2}(\underline{s_{2}},s_{3}),\ldots,R_{k}(\underline{s_{k}},s_{k+1})\}\] \[p = \{S_{1}(\underline{t_{1}},t_{2}),S_{2}(\underline{t_{2}},t_{3}),\ldots,S_{\ell}(\underline{t_{\ell}},t_{\ell+1})\}\]
be generalized path queries. A _homomorphism from \(q\) to \(p\)_ is a substitution \(\theta\) for the variables in \(q\), extended to be the identity on constants, such that for every \(i\in\{1,\ldots,k\}\), \(R_{i}(\underline{\theta(s_{i})},\theta(s_{i+1}))\in p\). Such a homomorphism is a _prefix homomorphism_ if \(\theta(s_{1})=t_{1}\).
**Example 9**.: Let \(q=\{R(\underline{x},y)\), \(R(\underline{y},1)\), \(S(\underline{1},z)\}\), and \(p=\{R(\underline{x},y)\), \(R(\underline{y},z)\), \(R(\underline{z},1)\}\). Then \(\mathsf{char}(q)=\{R(\underline{x},y),R(\underline{y},1)\}=\llbracket RR,1\rrbracket\) and \(p=\llbracket RRR,1\rrbracket\). There is a homomorphism from \(\mathsf{char}(q)\) to \(p\), but there is no prefix homomorphism from \(\mathsf{char}(q)\) to \(p\).
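As a concrete illustration of Definition 18, the following Python sketch enumerates the possible alignments. It relies on the fact that all terms of a generalized path query are pairwise distinct, so any homomorphism must map consecutive atoms of \(q\) to consecutive atoms of \(p\); the term encoding (variables prefixed with "?") and the helper name are our own conventions.

```python
def homomorphism_offsets(q_rels, q_terms, p_rels, p_terms):
    """Return every offset j such that mapping atom i of q to atom i+j of p is a
    homomorphism in the sense of Definition 18; offset 0 is a prefix homomorphism.
    Variables are strings starting with '?'; anything else is a constant."""
    offsets = []
    for j in range(len(p_rels) - len(q_rels) + 1):
        relations_match = all(q_rels[i] == p_rels[i + j] for i in range(len(q_rels)))
        constants_kept = all(s.startswith("?") or s == p_terms[i + j]
                             for i, s in enumerate(q_terms))
        if relations_match and constants_kept:
            offsets.append(j)
    return offsets

# Example 9: char(q) = [[RR, 1]] and p = [[RRR, 1]].
print(homomorphism_offsets(["R", "R"], ["?x", "?y", "1"],
                           ["R", "R", "R"], ["?x", "?y", "?z", "1"]))   # [1]
```

On Example 9 the only offset found is 1, so a homomorphism exists but a prefix homomorphism (offset 0) does not.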
The following conditions generalize \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), and \(\mathcal{C}_{3}\) from constant-free path queries to generalized path queries. Let \(\gamma\) be either a constant or the distinguished symbol \(\top\).
\(\mathcal{D}_{1}\)**:**: Whenever \(\mathsf{char}(q)=\llbracket uRvRw,\gamma\rrbracket\), there is a prefix homomorphism from \(\mathsf{char}(q)\) to \(\llbracket uRvRvRw,\gamma\rrbracket\). \(\mathcal{D}_{2}\)**:**: Whenever \(\mathsf{char}(q)=\llbracket uRvRw,\gamma\rrbracket\), there is a homomorphism from \(\mathsf{char}(q)\) to \(\llbracket uRvRvRw,\gamma\rrbracket\); and whenever \(\mathsf{char}(q)=\llbracket uRv_{1}Rv_{2}Rw,\gamma\rrbracket\) for consecutive occurrences of \(R\), \(v_{1}=v_{2}\) or there is a prefix homomorphism from \(\llbracket Rw,\gamma\rrbracket\) to \(\llbracket Rv_{1},\gamma\rrbracket\). \(\mathcal{D}_{3}\)**:**: Whenever \(\mathsf{char}(q)=\llbracket uRvRw,\gamma\rrbracket\), there is a homomorphism from \(\mathsf{char}(q)\) to \(\llbracket uRvRvRw,\gamma\rrbracket\).
It is easily verified that if \(\gamma=\top\), then \(\mathcal{D}_{1}\), \(\mathcal{D}_{2}\), and \(\mathcal{D}_{3}\) are equivalent to, respectively, \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), and \(\mathcal{C}_{3}\). Likewise, the following theorem degenerates to Theorem 3 for path queries without constants.
**Theorem 4**.: _For every generalized path query \(q\), the following complexity upper bounds obtain:_
* _if_ \(q\) _satisfies_ \(\mathcal{D}_{1}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{FO}\)_;_
* _if_ \(q\) _satisfies_ \(\mathcal{D}_{2}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{NL}\)_; and_
* _if_ \(q\) _satisfies_ \(\mathcal{D}_{3}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{PTIME}\)_._
_The following complexity lower bounds obtain:_
* _if_ \(q\) _violates_ \(\mathcal{D}_{1}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{NL}\)_-hard;_
* _if_ \(q\) _violates_ \(\mathcal{D}_{2}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{PTIME}\)_-hard; and_
* _if_ \(q\) _violates_ \(\mathcal{D}_{3}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{coNP}\)_-complete._
Finally, the proof of Theorem 4 reveals that for generalized path queries \(q\) containing at least one constant, the complexity of \(\mathsf{CERTAINTY}(q)\) exhibits a trichotomy (instead of a tetrachotomy as in Theorem 4).
**Theorem 5**.: _For any generalized path query \(q\) containing at least one constant, the problem \(\mathsf{CERTAINTY}(q)\) is either in \(\mathbf{FO}\), \(\mathbf{NL}\)-complete, or \(\mathbf{coNP}\)-complete._
## 9 Related Work
Inconsistencies in databases have been studied in different contexts [8, 21, 22]. Consistent query answering (CQA) was initiated by the seminal work by Arenas, Bertossi, and Chomicki [3]. After twenty years, their contribution was acknowledged in a _Gems of PODS session_[5]. An overview of complexity classification results in CQA appeared recently in the _Database Principles_ column of SIGMOD Record [41].
The term \(\mathsf{CERTAINTY}(q)\) was coined in [39] to refer to CQA for Boolean queries \(q\) on databases that violate primary keys, one per relation, which are fixed by \(q\)'s schema. The complexity classification of \(\mathsf{CERTAINTY}(q)\) for the class of self-join-free Boolean conjunctive queries started with the work by Fuxman and Miller [17], and was further pursued in [23, 26, 27, 28, 30, 32], which eventually revealed that the complexity of \(\mathsf{CERTAINTY}(q)\) for self-join-free conjunctive queries displays a trichotomy between \(\mathbf{FO}\), \(\mathbf{L}\)-complete, and \(\mathbf{coNP}\)-complete. A few extensions beyond this trichotomy result are known. It remains decidable whether or not \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\) for self-join-free Boolean conjunctive queries with negated atoms [29], with respect to multiple keys [31], and with unary foreign keys [20], all assuming that \(q\) is self-join-free.
Little is known about \(\mathsf{CERTAINTY}(q)\) beyond self-join-free conjunctive queries. Fontaine [14] showed that if we strengthen Conjecture 1 from conjunctive queries to unions of conjunctive queries, then it implies Bulatov's dichotomy theorem for conservative CSP [6]. This relationship between CQA and CSP was further explored in [34]. In [1], the authors show the \(\mathbf{FO}\) boundary for \(\mathsf{CERTAINTY}(q)\) for constant-free Boolean conjunctive queries \(q\) using a single binary relation name with a singleton primary key. Figueira et al. [13] have recently discovered a simple fixpoint algorithm that solves \(\mathsf{CERTAINTY}(q)\) when \(q\) is a self-join free conjunctive query or a path query such that \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{PTIME}\).
The counting variant of the problem \(\mathsf{CERTAINTY}(q)\), denoted \(\sharp\mathsf{CERTAINTY}(q)\), asks to count the number of repairs that satisfy some Boolean query \(q\). For self-join-free Boolean conjunctive queries, \(\sharp\mathsf{CERTAINTY}(q)\) exhibits a dichotomy between \(\mathbf{FP}\) and \(\sharp\mathbf{PTIME}\)-complete [37]. This dichotomy has been shown to extend to self-joins if primary keys are singletons [38], and to functional dependencies [7].
In practice, systems supporting CQA have often used efficient solvers for Disjunctive Logic Programming, Answer Set Programming (ASP) or Binary Integer Programming (BIP), regardless of whether the CQA problem admits a first-order rewriting [2, 9, 10, 11, 12, 19, 24, 35, 36].
## 10 Conclusion
We established a complexity classification in consistent query answering relative to primary keys, for path queries that can have self-joins: for every path query \(q\), the problem \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\), \(\mathbf{NL}\)-complete, \(\mathbf{PTIME}\)-complete, or \(\mathbf{coNP}\)-complete, and it is decidable in polynomial time in the size of \(q\) which of the four cases applies.
If \(\mathsf{CERTAINTY}(q)\) is in **FO** or in **PTIME**, rewritings of \(q\) can be effectively constructed in, respectively, first-order logic and Least Fixpoint Logic.
For binary relation names and singleton primary keys, an intriguing open problem is to generalize the form of the queries, from paths to directed rooted trees, DAGs, or general digraphs. The ultimate open problem is Conjecture 1, which conjectures that for every Boolean conjunctive query \(q\), \(\mathsf{CERTAINTY}(q)\) is either in **PTIME** or **coNP**-complete.
**Acknowledgements.** This work is supported by the National Science Foundation under grant IIS-1910014.
|
2305.19947 | A Geometric Perspective on Diffusion Models | Recent years have witnessed significant progress in developing effective
training and fast sampling techniques for diffusion models. A remarkable
advancement is the use of stochastic differential equations (SDEs) and their
marginal-preserving ordinary differential equations (ODEs) to describe data
perturbation and generative modeling in a unified framework. In this paper, we
carefully inspect the ODE-based sampling of a popular variance-exploding SDE
and reveal several intriguing structures of its sampling dynamics. We discover
that the data distribution and the noise distribution are smoothly connected
with a quasi-linear sampling trajectory and another implicit denoising
trajectory that even converges faster. Meanwhile, the denoising trajectory
governs the curvature of the corresponding sampling trajectory and its finite
differences yield various second-order samplers used in practice. Furthermore,
we establish a theoretical relationship between the optimal ODE-based sampling
and the classic mean-shift (mode-seeking) algorithm, with which we can
characterize the asymptotic behavior of diffusion models and identify the
empirical score deviation. Code is available at
\url{https://github.com/zju-pi/diff-sampler}. | Defang Chen, Zhenyu Zhou, Jian-Ping Mei, Chunhua Shen, Chun Chen, Can Wang | 2023-05-31T15:33:16Z | http://arxiv.org/abs/2305.19947v3 | # A Geometric Perspective on Diffusion Models
###### Abstract
Recent years have witnessed significant progress in developing efficient training and fast sampling approaches for diffusion models. A recent remarkable advancement is the use of stochastic differential equations (SDEs) to describe data perturbation and generative modeling in a unified mathematical framework. In this paper, we reveal several intriguing geometric structures of diffusion models and contribute a simple yet powerful interpretation to their sampling dynamics. Through carefully inspecting a popular variance-exploding SDE and its marginal-preserving ordinary differential equation (ODE) for sampling, we discover that the data distribution and the noise distribution are smoothly connected with an explicit, quasi-linear _sampling trajectory_, and another implicit _denoising trajectory_, which even converges faster in terms of visual quality. We also establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm, with which we can characterize the asymptotic behavior of diffusion models and identify the score deviation. These new geometric observations enable us to improve previous sampling algorithms, re-examine latent interpolation, as well as re-explain the working principles of distillation-based fast sampling techniques.
## 1 Introduction
Diffusion models, or score-based generative models [22, 23, 24, 25] have attracted growing attention and seen impressive success in various domains, including image [16, 21, 28, 29], video [20, 21, 23], audio [14, 25], and especially text-to-image generation [15, 26, 27]. Such models are essentially governed by a certain kind of stochastic differential equations (SDEs) that smooth data into noise in a forward process and then generate data from noise in a backward process [25].
Generally, the forward SDE is formulated as a spectrum of Gaussian _kernel density estimation_ of the original data distribution with a specifically designed scaling factor and bandwidth [25]. As such, one can couple (theoretically infinite) data-noise pairs and train a noise-dependent neural network (_i.e._, the diffusion model) to minimize the least square error for data reconstruction [11]. Once such a denoising model with sufficient capacity is well optimized, it will faithfully capture the score (gradient of the log-density _w.r.t._ the input) of the data density smoothed with various levels of noise [25, 26, 27]. The generative ability is then emerged by simulating the (score-based) backward SDE with any numerical solvers [25]. Alternatively, we can simulate the corresponding ordinary differential equation (ODE) that preserves the same marginal distribution as the SDE [25, 26, 27, 28]. The deterministic ODE-based sampling gets rid of the stochasticity apart from the randomness of drawing initial samples, and thus makes the whole generative procedure more comprehensible and controllable [11]. However, more details about how diffusion models behave under this dense mathematical framework are still largely unknown.
In this paper, we provide a geometric perspective to deepen our understanding of diffusion models, especially the sampling dynamics. The state-of-the-art variance-exploding SDE [11] is taken as an example to reveal the underlying intriguing structures. Our empirical observations are illustrated
in Figure 1. Intuitively, given an initial sample from the noise distribution, the difference between its denoising output and its current position forms the scaled score for simulating the sampling trajectory. This explicit trajectory is almost straight such that the ODE simulation can be greatly accelerated at a modest cost of truncation error. Furthermore, the denoising output itself forms another implicit trajectory that starts near the final sample and quickly appears decent visual quality. These two simple and smooth trajectories depict the characters of ODE-based sampling and we further establish a theoretical relationship between the optimal ODE-based sampling and annealed mean shift to understand the asymptotic behavior of diffusion models. Additionally, we provide several applications to demonstrate the potential of our geometric perspective to reform existing practices of diffusion models, such as speeding up previous samplers, re-examining latent interpolation and re-interpreting distillation-based fast sampling techniques.
## 2 Score-Based Generative Models
We begin with a brief overview of the basic concepts in developing score-based generative models. To enable effective generative modeling, we are required to bridge the data distribution \(p_{d}(\mathbf{x})\) with a non-informative tractable distribution \(p_{n}(\mathbf{x})\). Nowadays, a prevailing and promising approach is score-based generative modeling [13, 14, 15], which can be formulated into a concise framework from the lens of _stochastic differential equations_ (SDEs) [13, 14]. With this powerful tool, the data perturbation is modeled as a continuous stochastic process \(\{\mathbf{x}_{t}\}_{t=0}^{T}\):
\[\mathrm{d}\mathbf{x}=\mathbf{f}(\mathbf{x},t)\mathrm{d}t+g(t)\mathrm{d} \mathbf{w}_{t},\qquad\mathbf{f}(\cdot,t):\mathbb{R}^{d}\to\mathbb{R}^{d}, \quad g(\cdot):\mathbb{R}\to\mathbb{R}, \tag{1}\]
where \(\mathbf{w}_{t}\) is the standard Wiener process; \(\mathbf{f}(\cdot,t)\) and \(g(t)\) are drift and diffusion coefficients, respectively [16]. We denote the distribution of \(\mathbf{x}_{t}\) as \(p_{t}(\mathbf{x})\) and such an Ito SDE can smoothly transform the data distribution \(p_{0}(\mathbf{x})=p_{d}(\mathbf{x})\) to the (approximate) noise distribution \(p_{T}(\mathbf{x})\approx p_{n}(\mathbf{x})\) in a forward manner. By properly setting the coefficients, some established models referred to as variance-preserving (VP) and variance-exploding (VE) SDEs can be recovered [13, 15, 16].
The reversal of Eq. (1) is another SDE that allows to synthesize data from noise in a backward manner [1]. Remarkably, there exists a _probability flow ordinary differential equation_ (PF-ODE) sharing the same marginal distribution \(\{p_{t}(\mathbf{x})\}_{t=0}^{T}\) at each time step of the diffusion process:
\[\mathrm{d}\mathbf{x}=\left[\mathbf{f}(\mathbf{x},t)-\frac{1}{2}g(t)^{2}\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\right]\mathrm{d}t. \tag{2}\]
The deterministic nature of ODE enjoys several benefits such as efficient sampling, unique encoding, and meaningful latent manipulations [13, 14]. We thus choose Eq. (2) to analyze model behaviors throughout this paper. Simulating the above ODE requests having the _score function_\(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\) in hand, which is typically estimated with the _denoising score matching_ (DSM) criterion [17, 18, 19]. From the perspective of _empirical Bayes_[1, 19, 18], there exists a
Figure 1: A geometric perspective of ODE-based sampling in diffusion models. An initial sample (from the noise distribution) starts from a big sphere and converges to its final sample (in the data manifold) along a smooth, quasi-linear sampling trajectory. Meanwhile, its denoising output lays in an implicit, smooth denoising trajectory starting from the approximate dataset mean. The denoising output is very close to the final sample and enjoys much faster convergence in terms of visual quality.
profound connection between DSM and _denoising autoencoder_ (DAE) [20, 14, 13] (see Appendix B.1). Therefore, we can equivalently obtain the score function _at each noise level_ by solving the corresponding least squares estimation:
\[\mathbb{E}_{\mathbf{x}\sim p_{d}}\mathbb{E}_{\mathbf{z}\sim\mathcal{N}( \mathbf{0},\sigma^{2}\mathbf{I})}\|r_{\mathbf{\theta}}\left(\hat{\mathbf{x}}; \sigma_{t}\right)-\mathbf{x}\|_{2}^{2},\qquad\hat{\mathbf{x}}=\mathbf{x}+ \mathbf{z},\quad\sigma_{t}=\sqrt{\int_{0}^{t}g(\xi)^{2}\mathrm{d}\xi}. \tag{3}\]
The optimal estimator \(r_{\mathbf{\theta}}^{\star}\left(\hat{\mathbf{x}};\sigma_{t}\right)\) equals \(\hat{\mathbf{x}}+\sigma_{t}^{2}\nabla_{\hat{\mathbf{x}}}\log p_{t}(\hat{\mathbf{x}})\) as revealed in the literature [11, 14]. Unless otherwise specified, we follow the configurations of VE SDEs with \(\mathbf{f}(\mathbf{x},t)=\mathbf{0}\) and \(g(t)=\sqrt{2t}\)[13, 12]. In this case, \(\sigma_{t}=t\), the perturbation kernel \(p_{t}(\hat{\mathbf{x}}|\mathbf{x})=\mathcal{N}(\hat{\mathbf{x}};\mathbf{x},t^{2}\mathbf{I})\), and the Parzen window density \(p_{t}(\hat{\mathbf{x}})=\int p_{\delta}(\mathbf{x})p_{t}(\hat{\mathbf{x}}|\mathbf{x})\mathrm{d}\mathbf{x}\) with \(p_{\delta}(\mathbf{x})\) as the empirical data distribution. After training, we can leverage the empirical PF-ODE for sampling:
\[\mathrm{d}\mathbf{x}=-\frac{r_{\mathbf{\theta}}\left(\mathbf{x};t\right)-\mathbf{x }}{t}\mathrm{d}t. \tag{4}\]
Specifically, we first draw \(\hat{\mathbf{x}}_{T}\sim p_{n}(\mathbf{x})=\mathcal{N}(\mathbf{0},T^{2} \mathbf{I})\) and then numerically solve the ODE backwards with \(N\) steps to obtain a discrete sequence \(\{\hat{\mathbf{x}}_{s}\}\) with \(s\in\{s_{0}=0,s_{1},\cdots,s_{N}=T\}\). The final sample \(\hat{\mathbf{x}}_{s_{0}}\) is considered to approximately follow the data distribution \(p_{d}(\mathbf{x})\).
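As an illustration of how Eq. (4) is simulated in practice, here is a minimal PyTorch sketch of an Euler solver. The `denoiser` argument stands for a trained network \(r_{\boldsymbol{\theta}}(\mathbf{x};t)\), the time discretization follows the Karras-style formula recalled in Section 3.2, and the snippet is only a sketch rather than a reference implementation.

```python
import torch

def euler_sample(denoiser, steps, shape, T=80.0, s1=0.002, rho=7.0):
    """Euler simulation of the empirical PF-ODE dx = -(r_theta(x;t) - x)/t dt,
    integrated backwards from t = T to t ~ 0 (illustrative sketch only)."""
    # time discretization s_n = (s1^(1/rho) + (n-1)/(N-1) * (T^(1/rho) - s1^(1/rho)))^rho
    n = torch.arange(steps, dtype=torch.float32)
    ts = (s1 ** (1 / rho) + n / (steps - 1) * (T ** (1 / rho) - s1 ** (1 / rho))) ** rho
    ts = torch.flip(ts, dims=[0])               # from s_N = T down to s_1
    x = T * torch.randn(shape)                  # draw x_T ~ N(0, T^2 I)
    for i in range(steps - 1):
        t_cur, t_next = ts[i], ts[i + 1]
        d = (x - denoiser(x, t_cur)) / t_cur    # dx/dt of Eq. (4) at t_cur
        x = x + (t_next - t_cur) * d            # Euler step (t decreases)
    # the final step towards t = 0 is a single Euler step, i.e., a denoising output
    return denoiser(x, ts[-1])
```

Higher-order solvers only change the inner update; the quasi-linearity of the sampling trajectory discussed in Section 3 is what makes such coarse discretizations viable.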
## 3 Visualization of High Dimensional Trajectory
In this section, we present several viewpoints to inspect the trajectory of probability flow ODE in high-dimensional space. We follow the experimental settings of a recent and influential framework called EDMs [13]. Specifically, we focus on a forward VE SDE with \(\mathrm{d}\mathbf{x}=\sqrt{2t}\,\mathrm{d}\mathbf{w}_{t}\) and its empirical ODE as Eq. (4) for sampling. We mostly take unconditional generation on the CIFAR-10 dataset as an example to demonstrate our observations. The conclusions also hold on other datasets (such as LSUN Cat, LSUN Bedroom) and other model settings (such as conditional generation, various network architectures). More results and implementation details are provided in Appendix A.
### Magnitude Expansion/Shrinkage
As discussed in Section 2, the forward diffusion process is generally interpreted as a progressive smoothing from the data distribution to the noise distribution with a Gaussian kernel \(p_{t}(\hat{\mathbf{x}}|\mathbf{x})\). In contrast, we further paraphrase it as the expansion of magnitude and manifold, which means that samples escape from the original _small-magnitude low-rank_ manifold and settle into a _large-magnitude high-rank_ manifold. The following proposition gives us a glimpse of the geometric structure in high dimensions:
**Proposition 1**.: _Given a high-dimensional vector \(\mathbf{x}\in\mathbb{R}^{d}\) and an isotropic Gaussian noise \(\mathbf{z}\sim\mathcal{N}\left(\mathbf{0};\sigma^{2}\mathbf{I}_{d}\right)\), \(\sigma>0\), we have \(\mathbb{E}\left\|\mathbf{z}\right\|^{2}=\sigma^{2}d\), and with high probability, \(\mathbf{z}\) stays within a "thin shell": \(\|\mathbf{z}\|=\sigma\sqrt{d}\pm O(1)\). Additionally, \(\mathbb{E}\left[\|\mathbf{x}+\mathbf{z}\|^{2}-\|\mathbf{x}\|^{2}\right]= \sigma^{2}d\), \(\lim_{d\to\infty}\mathbb{P}\left(\|\mathbf{x}+\mathbf{z}\|>\|\mathbf{x}\| \right)=1\)._
The proofs are provided in Appendix B.2. Proposition 1 implies that in the forward process, the squared magnitude of the noisy sample \(\mathbf{x}+\mathbf{z}\) is expected to be larger than that of the original sample \(\mathbf{x}\), and their magnitude gap becomes especially huge for the high-dimensional case \(d\gg 1\) and severe noise case \(\sigma\gg 0\). We can further conclude that asymptotically (\(d\to\infty\)), the sample magnitude will expand with probability one and the isotropic Gaussian noise will distribute as a uniform distribution on the sphere, _i.e._, \(\mathbf{z}\sim\mathcal{N}(\mathbf{0};\sigma^{2}\mathbf{I}_{d})=\mathrm{Unif}( \sigma\sqrt{d}\,\mathcal{S}^{d-1})\), due to the _concentration of measure_[15, 16]. In practical generation, \(d\) is sufficiently large to make the above claim approximately correct. The low-rank data manifold is thus lifted to about \(d-1\) rank sphere of radius \(\sigma\sqrt{d}\) with a thin spherical shell of width \(O(1)\). Due to the marginal preserving property of PF-ODE [12], the backward process behaves in a magnitude shrinking fashion and the analysis is similar.
In Figure 2(a), we track the magnitude (\(\ell_{2}\) norm) of original data (the pixel values are re-scaled to \([-1,1]\)) in the forward process and the magnitude of synthetic samples in the backward process. A clear trend is that the sample magnitude expands in the forward diffusion process and shrinks in the backward sampling process, and the two are well-matched thanks to the marginal preserving property. Furthermore, the isotropic Gaussian noise is distributed around a sphere of radius about \(4433\pm 57\), which is significantly larger than the original data in magnitude (about \(27\pm 7\)).
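The thin-shell behavior of Proposition 1 and the radius quoted above can be checked numerically with a few lines of NumPy (an illustrative sketch; \(d=3\times 32\times 32\) is the CIFAR-10 dimensionality and \(\sigma=T=80\) the largest noise level):

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 3 * 32 * 32, 80.0
z = rng.normal(scale=sigma, size=(10000, d))
norms = np.linalg.norm(z, axis=1)
print(norms.mean(), norms.std())   # concentrates near sigma * sqrt(d) with an O(1) spread
print(sigma * np.sqrt(d))          # about 4434, matching the measured radius above
```

The empirical spread of the norms is close to \(\sigma/\sqrt{2}\approx 57\), in line with the \(\pm 57\) reported above.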
### Geometric Shape of Sampling/Denoising Trajectory
Given an ODE in Eq. (4) linking the data distribution \(p_{d}(\mathbf{x})\) and the noise distribution \(p_{n}(\mathbf{x})\), we denote _sampling trajectory_ as the discrete sequence \(\{\hat{\mathbf{x}}_{s}\}_{s_{N}}^{s_{0}}\) with \(s\in\{s_{0}=0,s_{1},s_{2},\cdots,s_{N}=T\}\)1, starting from \(\hat{\mathbf{x}}_{s_{N}}\sim\mathcal{N}(\mathbf{0},T^{2}\mathbf{I})\). We adopt the symbol \(d(\cdot,\cdot)\) to denote the \(\ell_{2}\) distance between two points, such as \(d(\hat{\mathbf{x}}_{s},\hat{\mathbf{x}}_{s_{0}})\), and the _trajectory deviation_ from a point to the straight line passing through the initial and final points in the trajectory, such as \(d(\hat{\mathbf{x}}_{s},[\hat{\mathbf{x}}_{s_{0}}\hat{\mathbf{x}}_{s_{N}}])\). Additionally, we denote another important yet easily overlooked sequence as \(\{r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s},s)\}_{s_{N}}^{s_{1}}\), simplified to \(\{r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s})\}_{s_{N}}^{s_{1}}\) if there is no ambiguity, and designate it as _denoising trajectory_. A sampling trajectory and its associated denoising trajectory are provided in Figure 1(b) for illustration.
Footnote 1: The time horizon is divided with the formula \(s_{n}=(s_{1}^{1/\rho}+\frac{n-1}{N-1}(s_{N}^{1/\rho}-s_{1}^{1/\rho}))^{\rho}\), where \(s_{1}=0.002\), \(s_{N}=80\), \(n\in[1,N]\) and \(\rho=7\)[1].
**Proposition 2**.: _The denoising output \(r_{\boldsymbol{\theta}}\left(\mathbf{x};t\right)\) reflects the prediction made by a single Euler step from any sample \(\mathbf{x}\) at any time towards \(t=0\) with Eq. (4)._
Proof.: The prediction of such an Euler step equals \(\mathbf{x}-\left(0-t\right)\left(r_{\boldsymbol{\theta}}\left(\mathbf{x};t\right)-\mathbf{x}\right)/t=r_{\boldsymbol{\theta}}\left(\mathbf{x};t\right)\).
This property was previously stated as an intuitive evidence to advocate the use of Eq. (4) for sampling [1]. There, Karras _et al._ suspected that this ODE trajectory is approximately linear across most noise levels due to the slow change of denoising output, and verified it in the 1-dimensional situation. In contrast, we provide an in-depth analysis of the high-dimensional trajectory with real data, and reveal its connection to the classic mean-shift (mode seeking) algorithm [15, 14, 16].
**Visualization.** It is very challenging to visualize the whole sampling trajectory and denoising trajectory lying in high-dimensional space. In this paper, we are particularly interested in their geometric properties, and find that the trajectory structure exhibits a surprisingly simple form. Our observations, which have been confirmed by empirical evidence, are summarized and elaborated in the following paragraphs. The expectation quantities (such as distance, magnitude) in each discrete time step are estimated by averaging 50k generated samples.
**Observation 1**.: _The sampling trajectory is almost straight while the denoising trajectory is bent._
We develop an efficient visualization technique based on _trajectory deviation_ to assess the linearity of trajectories. From Figure 1(b), we can see that the deviation of sampling trajectory and denoising trajectory (red curve) gradually increases from \(t=80\) to around \(t=10\) or \(t=5\), respectively, and then quickly decreases until reaching the final point. This implies that the initial point may be affected by all possible modes with a large influence at first, and become intensely attracted by its unique mode after a turning point. This phenomenon also supports the strategy of placing time intervals densely near the minimum timestamp yet sparsely near the maximum one [1, 1]. However, based on the ratio of maximum deviation (such as \(\max d(\hat{\mathbf{x}}_{s},[\hat{\mathbf{x}}_{s_{0}}\hat{\mathbf{x}}_{s_{N}}])\)) to the endpoint distance (such as \(d(\hat{\mathbf{x}}_{s_{0}},\hat{\mathbf{x}}_{s_{N}})\)), the curvature of sampling trajectory is incredibly small (about \(16/4428\approx 0.0036\)), while the curvature of denoising trajectory is relatively significant (about \(7/26\approx 0.27\)).
Figure 2: (a) The magnitude of samples in the forward process (blue curve), the backward process (black circle) and the denoising outputs (red curve). (b) The _trajectory deviation_ (red curve) and the \(\ell_{2}\) distance between intermediate samples and the final sample in the trajectory (blue curve).
Further evidence for the quasi-linearity of the sampling trajectory comes from the _angle deviation_, which is calculated as the cosine between the backward ODE direction \(-\frac{\mathrm{d}\mathbf{x}_{t}}{\mathrm{d}t}\big{|}_{s}\) and the direction pointing to the final point \((\hat{\mathbf{x}}_{s_{0}}-\hat{\mathbf{x}}_{s})\) at discrete time \(s\). We find that \(\cos\big{(}-\frac{\mathrm{d}\mathbf{x}_{t}}{\mathrm{d}t}\big{|}_{s},(\hat{\mathbf{x}}_{s_{0}}-\hat{\mathbf{x}}_{s})\big{)}\) always stays in a narrow range from 0.98 to 1.00 (see Appendix A), which indicates that the angle deviation is extremely small and all backward ODE directions almost exactly point to the final point.
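Both diagnostics used here, the trajectory deviation and the angle deviation, are straightforward to compute from a stored trajectory; the following PyTorch helper is our own sketch with assumed tensor shapes.

```python
import torch

def trajectory_metrics(xs):
    """Given a sampling trajectory as a tensor of shape (N+1, d), ordered from
    x_{s_N} (noise) to x_{s_0} (sample), return (i) the deviation of every point
    from the straight line through the two endpoints and (ii) the cosine between
    each discrete step and the direction to the final sample. Sketch only."""
    start, end = xs[0], xs[-1]
    line = (end - start) / torch.norm(end - start)
    rel = xs - start
    # distance from each point to the line through the endpoints (trajectory deviation)
    proj = (rel @ line)[:, None] * line
    deviation = torch.norm(rel - proj, dim=1)
    # discrete steps approximate the backward ODE direction; compare with the
    # direction pointing to the final sample (angle deviation)
    steps = xs[1:] - xs[:-1]
    to_end = end - xs[:-1]
    cos = torch.nn.functional.cosine_similarity(steps, to_end, dim=1)
    return deviation, cos
```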
**Observation 2**.: _The generated samples on the sampling trajectory and denoising trajectory both move monotonically from the initial points toward their converged points in expectation, i.e., \(\{\mathbb{E}\left[d(\hat{\mathbf{x}}_{s},\hat{\mathbf{x}}_{s_{0}})\right]\}_{s_{N}}^{s_{0}}\) and \(\{\mathbb{E}\left[d\left(r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s}),r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s_{1}})\right)\right]\}_{s_{N}}^{s_{1}}\) are monotone decreasing sequences._
This is inferred from the blue curves in Figure 2(b). In fact, such behavior is expected for the sampling trajectory given its slight angle deviation. Since \(\forall s,\;-\frac{\mathrm{d}\,d(\hat{\mathbf{x}}_{t},\hat{\mathbf{x}}_{s_{0}})}{\mathrm{d}t}\big|_{s}\propto\cos\big((\hat{\mathbf{x}}_{s_{0}}-\hat{\mathbf{x}}_{s}),\frac{\mathrm{d}\mathbf{x}_{t}}{\mathrm{d}t}\big|_{s}\big)\approx-1\), the initial point will converge monotonically and rapidly by moving along the backward ODE direction, similar to the behavior of the gradient descent algorithm on a well-behaved convex function.
The above two observations enable us to safely adopt large numerical Euler steps or higher-order ODE solvers without incurring much truncation error in most cases [1, 1, 2].
**Observation 3**.: _The sampling trajectory converges to the data distribution with monotonically shrinking magnitude. Conversely, the denoising trajectory converges to the data distribution with monotonically expanding magnitude. Formally, we have \(\{\mathbb{E}\|\hat{\mathbf{x}}_{s}\|\}_{s_{N}}^{s_{0}}\downarrow\) and \(\{\mathbb{E}\|r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s})\|\}_{s_{N}}^{s_{1}}\uparrow\)._
Although generated samples converge monotonically, whether along the sampling trajectory or the denoising trajectory (Observation 2), their magnitude behaves differently (Figure 2(a)). Geometrically, the initial noise distribution \(p(\hat{\mathbf{x}}_{s_{N}})\) starts from a large sphere of radius \(T\sqrt{d}\) and then anisotropically squashes its "radius" and twists the sample range into the exact data manifold. Meanwhile, the initial denoising output is an approximate _Dirac delta function_ centered at the dataset mean vector \(\mathbf{x}_{m}\), _i.e._, \(p(r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s_{N}}))=\delta(\mathbf{x}_{m})\)[1]. This Dirac delta function anisotropically expands its range until exactly matching the data manifold. The overall picture is illustrated in Figure 1.
## 4 Theoretical Connection to Mean Shift
Given a parametric diffusion model with the denoising output \(r_{\boldsymbol{\theta}}\), the sampling trajectory is simulated by numerically solving Eq. (4), and meanwhile, an implicitly coupled denoising trajectory is formed as a by-product. Once such a model has converged, it will hopefully capture the underlying score at different levels of noise, _i.e._, \(\forall\sigma_{t},\;r_{\boldsymbol{\theta}}\left(\hat{\mathbf{x}};\sigma_{t}\right)\rightarrow r_{\boldsymbol{\theta}}^{\star}\left(\hat{\mathbf{x}};\sigma_{t}\right)\)[2, 1, 1]. We next derive the formula for the _optimal denoising output_ in order to analyze the asymptotic behavior of diffusion models.
**Proposition 3**.: _The optimal denoising output of Eq. (3) is a convex combination of the original data, where each weight is calculated based on the time-scaled and normalized \(\ell_{2}\) distance between \(\hat{\mathbf{x}}\) and \(\mathbf{x}_{i}\) belonging to the dataset \(\mathcal{D}\):_
\[r_{\boldsymbol{\theta}}^{\star}\left(\hat{\mathbf{x}};\sigma_{t} \right)=\sum_{i}u_{i}\mathbf{x}_{i}=\sum_{i}\frac{\exp\left(-\|\hat{\mathbf{x}} -\mathbf{x}_{i}\|^{2}/2\sigma_{t}^{2}\right)}{\sum_{j}\exp\left(-\|\hat{ \mathbf{x}}-\mathbf{x}_{j}\|^{2}/2\sigma_{t}^{2}\right)}\mathbf{x}_{i},\quad \sum_{i}u_{i}=1. \tag{5}\]
The proof is provided in Appendix B.3. This equation appears to be highly similar to the classic non-parametric mean shift [14, 1, 15], and we provide a brief overview of it as follows.
**Proposition 4**.: _The mean-shift algorithm with a Gaussian kernel and bandwidth \(h\) iteratively moves a point \(\mathbf{x}\) along the mean-shift vector \(m(\mathbf{x})-\mathbf{x}\), i.e., \(\mathbf{x}\leftarrow[m(\mathbf{x})-\mathbf{x}]+\mathbf{x}\), towards the maximum increase of the Parzen-window density \(p(\hat{\mathbf{x}})=\int p_{\delta}(\mathbf{x})\mathcal{N}(\hat{\mathbf{x}};\mathbf{x},h^{2}\mathbf{I})\mathrm{d}\mathbf{x}\), where the mean vector is_
\[\mathbf{m}\left(\mathbf{x},h\right)=\sum_{i}v_{i}\mathbf{x}_{i} =\sum_{i}\frac{\exp\left(-\|\mathbf{x}-\mathbf{x}_{i}\|^{2}/2h^{2}\right)}{ \sum_{j}\exp\left(-\|\mathbf{x}-\mathbf{x}_{j}\|^{2}/2h^{2}\right)} \mathbf{x}_{i},\quad\mathbf{x}_{i}\in\mathcal{D},\quad\sum_{i}v_{i}=1. \tag{6}\]
_From the interpretation of expectation-maximization (EM) algorithm, the above mean-shift iteration converges from almost any initial point with a generally linear convergence rate [1]._
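Equations (5) and (6) share the same softmax-weighted form, the only difference being whether the bandwidth is the noise level \(\sigma_{t}\) or a fixed \(h\). The following NumPy sketch, written purely for illustration (the function names and toy data are our own assumptions), evaluates this weighted mean and runs the mean-shift iteration until it settles on a mode.

```python
import numpy as np

def mean_vector(x, data, h):
    """Softmax-weighted data average shared by Eq. (5) (with h = sigma_t) and Eq. (6)."""
    logits = -np.sum((data - x) ** 2, axis=1) / (2.0 * h ** 2)
    w = np.exp(logits - logits.max())      # subtract the max for numerical stability
    return (w @ data) / w.sum()

def mean_shift_mode(x0, data, h, n_iter=200, tol=1e-8):
    """Iterate x <- m(x, h) until the update stalls: a mode of the Parzen density."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        m = mean_vector(x, data, h)
        if np.linalg.norm(m - x) < tol:
            break
        x = m
    return x

# toy bimodal dataset: the iteration climbs to the mode closest to the starting point
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(-3, 0.3, size=(200, 2)), rng.normal(+3, 0.3, size=(200, 2))])
print(mean_vector(np.zeros(2), data, h=80.0))     # large bandwidth: close to the dataset mean
print(mean_shift_mode([2.0, 2.0], data, h=0.5))   # small bandwidth: converges near (+3, +3)
```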
As a mode-seeking algorithm, mean shift has proven particularly successful in clustering [15], image segmentation [1] and video tracking [11]. In fact, ODE-based sampling is closely connected with _annealed mean shift_, or _multi-bandwidth mean shift_[11], which was developed as a metaheuristic algorithm for global mode-seeking. By treating the optimal denoising output in Eq. (5) as the mean vector in annealed mean-shift iterations, we have the following theorem:
**Theorem 1**.: _Given an ODE \(\mathrm{d}\mathbf{x}=-\frac{r_{\boldsymbol{\theta}}^{*}(\mathbf{x};t)-\mathbf{x}}{t}\mathrm{d}t\), one Euler step equals a convex combination of the annealed mean-shift iteration and the current position._
Proof.: Given a current sample \(\hat{\mathbf{x}}_{s_{n+1}}\), \(n\in[0,N-1]\), the prediction of a one-step Euler method equals
\[\begin{split}\hat{\mathbf{x}}_{s_{n}}&=\hat{ \mathbf{x}}_{s_{n+1}}-\frac{s_{n}-s_{n+1}}{s_{n+1}}\left(r_{\boldsymbol{ \theta}}^{*}\left(\hat{\mathbf{x}}_{s_{n+1}};s_{n+1}\right)-\hat{\mathbf{x}}_{ s_{n+1}}\right)\\ &=\frac{s_{n}}{s_{n+1}}\hat{\mathbf{x}}_{s_{n+1}}+\frac{s_{n+1}- s_{n}}{s_{n+1}}\mathbf{m}\left(\hat{\mathbf{x}}_{s_{n+1}};s_{n+1}\right), \end{split} \tag{7}\]
where we treat timestamp \(s_{n+1}\) in \(r_{\boldsymbol{\theta}}^{*}\left(\cdot\right)\) as the annealing-like bandwidth of Gaussian kernel in Eq. (6) and then replace it as one iteration of annealed mean shift \(\mathbf{m}\left(\cdot\right)\)[1].
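The identity in Eq. (7) can also be checked numerically: with the optimal denoiser standing in for \(r_{\boldsymbol{\theta}}^{\star}\), a single Euler step and the convex combination of the current sample with one annealed mean-shift iteration agree to machine precision. The sketch below is our own illustration and reuses the softmax-weighted average of Eq. (5); the variable names and toy data are assumptions.

```python
import numpy as np

def mean_vector(x, data, h):
    """Softmax-weighted data average, i.e., Eq. (5) with bandwidth h."""
    logits = -np.sum((data - x) ** 2, axis=1) / (2.0 * h ** 2)
    w = np.exp(logits - logits.max())
    return (w @ data) / w.sum()

rng = np.random.default_rng(3)
data = rng.normal(size=(50, 4))
x = rng.normal(size=4) * 10.0
s_np1, s_n = 10.0, 7.0                        # consecutive time steps, s_{n+1} > s_n

r_star = mean_vector(x, data, s_np1)          # optimal denoiser at sigma = s_{n+1}
euler = x - (s_n - s_np1) / s_np1 * (r_star - x)
convex = (s_n / s_np1) * x + (1.0 - s_n / s_np1) * mean_vector(x, data, s_np1)

print(np.allclose(euler, convex))             # True: the two sides of Eq. (7) coincide
```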
The ratio of timestep \(w(n)=(s_{n+1}-s_{n})/s_{n+1}\) in Eq. (7) actually reflects our preference for annealed mean shift over sticking to the current position at \(s_{n+1}\). Since the optimal denoising output, or annealed mean shift, starts with a spurious mode (dataset mean) and gradually converges towards a true mode over time, a reasonable choice is to progressively increase our emphasis on them, _i.e._, \(\{w(n)\}_{s_{N-1}}^{s_{0}}\uparrow\). In this sense, various time-schedule functions (such as uniform, quadratic, polynomial [1, 13, 14]) essentially boil down to different weighting functions (see Appendix B.4). This interpretation inspires us to train a parametric neural network to adaptively select proper weights in sampling for better visual quality [13]. We leave it for future work.
Theorem 1 also implies that once diffusion models have converged to the optimum, all ODE trajectories will be uniquely determined and governed by a bandwidth-varying mean shift [13, 1]. In this case, the (forward) encoding process and (backward) decoding process only depend on the data distribution itself and a given noise distribution, regardless of model architectures or training algorithms. Such property was previously referred to as _uniquely identifiable encoding_ and was empirically verified in [12], while we clearly characterize the optimum by drawing a connection with annealed mean shift, and thus reveal the asymptotic behavior of diffusion models.
## 5 Applications
### Diagnosis of Score Deviation
We then simulate four new trajectories based on the optimal denoising output \(r_{\boldsymbol{\theta}}^{\star}\) to monitor the score deviation from the optimum.
Figure 3: _Top:_ We visualize a forward diffusion process of a randomly-selected image to obtain its encoding \(\hat{\mathbf{x}}_{s_{N}}\) and simulate multiple trajectories starting from this encoding. _Bottom:_ The k-nearest neighbors (k=5) of \(r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s_{1}})\) and \(r_{\boldsymbol{\theta}}^{*}(\hat{\mathbf{x}}_{s_{1}}^{*})\).
One is the _optimal sampling trajectory_ \(\{\hat{\mathbf{x}}_{s}^{\star}\}_{s_{N}}^{s_{0}}\), where we generate samples as \(\{\hat{\mathbf{x}}_{s}\}_{s_{N}}^{s_{0}}\) but adopt \(r_{\boldsymbol{\theta}}^{\star}\) for score estimation; the other three are formed with the (optimal) denoising output and are designated as \(\{r_{\boldsymbol{\theta}}^{\star}(\hat{\mathbf{x}}_{s})\}_{s_{N}}^{s_{1}}\), \(\{r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s}^{\star})\}_{s_{N}}^{s_{1}}\), and \(\{r_{\boldsymbol{\theta}}^{\star}(\hat{\mathbf{x}}_{s}^{\star})\}_{s_{N}}^{s_{1}}\). We calculate the deviation of the denoising output to quantify the score deviation at all time steps in terms of the \(\ell_{2}\) distance.
**Observation 4**.: _The learned score is well matched to the optimal score in the large-noise region (from \(t=80\) to around \(t=10\)); in the other regions, the two may diverge or almost coincide, depending on the region._
In fact, our learned score has to moderately diverge from the optimum to guarantee the generative ability. Otherwise, the sampling reduces to annealed mean shift for mode-seeking, or simply replays the dataset. Empirically, score deviation in a small region is sufficient to bring forth such ability.
From \(\{r_{\mathbf{\theta}}(\hat{\mathbf{x}}_{s}^{\star})\}\), \(\{r_{\mathbf{\theta}}^{\star}(\hat{\mathbf{x}}_{s}^{\star})\}\) sequences in Figure 3 and the score deviation in Figure 4, we can clearly see that _along the optimal sampling trajectory_\(\{\hat{\mathbf{x}}_{s}^{\star}\}\), the deviation between the learned score \(r_{\mathbf{\theta}}\) and the optimal score \(r_{\mathbf{\theta}}^{\star}\) behaves differently in three successive regions: the deviation starts off as almost negligible (about \(10<t\leq 80\)), gradually increases (about \(3<t\leq 10\)), and then drops down to a low level once again (about \(0\leq t\leq 3\)). This phenomenon was also validated by a contemporaneous work [23] with a different viewpoint and measurement. We further observe that _along the sampling trajectory_\(\{\hat{\mathbf{x}}_{s}\}\), this phenomenon disappears and the score deviation keeps increasing (see \(\{r_{\mathbf{\theta}}(\hat{\mathbf{x}}_{s})\}\), \(\{r_{\mathbf{\theta}}^{\star}(\hat{\mathbf{x}}_{s})\}\) sequences in Figure 3 and Figure 4). This indicates that our score-based model strives to explore novel regions. Additionally, the generated samples in trajectory are attracted to a real-data mode but do not fall into it, as supported by their k-nearest-neighbor samples in Figure 3.
### Sampling with ODE-Jump
**Observation 5**.: _The (optimal) denoising trajectory converges faster than the (optimal) sampling trajectory in terms of visual quality._
This observation is inferred from the comparison of the \(\{\hat{\mathbf{x}}_{s}\}\) and \(\{r_{\mathbf{\theta}}(\hat{\mathbf{x}}_{s})\}\), \(\{\hat{\mathbf{x}}_{s}^{\star}\}\) and \(\{r_{\mathbf{\theta}}^{\star}(\hat{\mathbf{x}}_{s})\}\) sequences in Figure 3, and it inspires us to develop a new sampling algorithm named _ODE-Jump_ that directly jumps from _any_ sample at _any_ time in the sampling trajectory simulated by _any_ ODE solver to the associated denoising trajectory, and returns the denoising output as the final synthesized image. This simple algorithm is highly flexible, extremely easy to implement, and converges considerably faster than the original ODE solver along the sampling trajectory. The quantitative comparison of Fréchet Inception Distance (FID [17]) _w.r.t._ the number of score function evaluations (NFEs) is provided in Figure 4 (right) and the visual comparison is provided in Figure 5.
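Algorithmically, ODE-Jump only changes the point at which sampling stops. The pseudocode-like sketch below uses the empirical optimal denoiser as a stand-in for a trained \(r_{\boldsymbol{\theta}}\) so that it runs as written; in practice one would plug in the trained network and its ODE solver, and the names `denoiser`, `timesteps`, and `jump_at` are our own assumptions.

```python
import numpy as np

def denoiser(x, sigma, data):
    """Stand-in for r_theta(x; sigma): the optimal denoising output of Eq. (5)."""
    logits = -np.sum((data - x) ** 2, axis=1) / (2.0 * sigma ** 2)
    w = np.exp(logits - logits.max()); w /= w.sum()
    return w @ data

def ode_jump_sample(data, timesteps, jump_at, rng):
    """Euler-simulate dx = (x - r(x; t)) / t dt from t = timesteps[0], then 'jump':
    return the denoising output instead of integrating all the way to t ~ 0."""
    x = rng.normal(size=data.shape[1]) * timesteps[0]            # x ~ N(0, T^2 I)
    for s_next, s in zip(timesteps[1:jump_at + 1], timesteps[:jump_at]):
        x = x - (s_next - s) / s * (denoiser(x, s, data) - x)    # one Euler step
    return denoiser(x, timesteps[jump_at], data)                 # jump to the denoising trajectory

rng = np.random.default_rng(4)
data = rng.normal(size=(200, 8))
timesteps = np.geomspace(80.0, 0.002, 18)                        # decreasing noise levels
print(ode_jump_sample(data, timesteps, jump_at=6, rng=rng))
```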
Figure 4: The score deviation in expectation (left and middle) and FID with different NFEs (right).
Figure 5: The synthesized images of our proposed ODE-Jump sampling (bottom) converge much faster than that of EDMs [1] (top) in terms of visual quality.
### In-Distribution Latent Interpolation
An attractive application of diffusion models is to achieve semantic image editing by manipulating latent representations [11, 12, 13]. We then take _latent interpolation_ as an example to reveal its working principle and the potential pitfalls in practice from a geometric viewpoint.
The training objective Eq. (3) for score estimation tells us that, given a noise level \(\sigma_{t}^{2}\), the denoiser is _only_ trained with samples _belonging to_ the distribution \(p_{t}(\hat{\mathbf{x}})\). This important fact implies that for the latent encoding \(\hat{\mathbf{x}}_{s_{N}}\sim\mathcal{N}(\mathbf{0},T^{2}\mathbf{I})\), the denoiser performance is only guaranteed for inputs approximately distributed on a sphere of radius \(T\sqrt{d}\) (see Section 3.1). This geometric picture helps in understanding the conditions under which latent interpolation may fail.
**Proposition 5**.: _In high dimensions, linear interpolation [11] shifts the latent distribution while spherical linear interpolation [13] asymptotically (\(d\to\infty\)) maintains the latent distribution._
Given two independent latent encodings, \(\hat{\mathbf{x}}_{s_{N}}^{(1)}\), \(\hat{\mathbf{x}}_{s_{N}}^{(2)}\sim\mathcal{N}(\mathbf{0},T^{2}\mathbf{I})\), they are almost orthogonal, with an angle of \(\frac{1}{2}\pi+O_{p}(d^{-0.5})\) in high dimensions [10, 14]. In this case, _linear interpolation_ \(\hat{\mathbf{x}}_{s_{N}}^{(\alpha)}=(1-\alpha)\hat{\mathbf{x}}_{s_{N}}^{(1)}+\alpha\hat{\mathbf{x}}_{s_{N}}^{(2)}\) quickly pushes the resulting encoding \(\hat{\mathbf{x}}_{s_{N}}^{(\alpha)}\) away from the original distribution into a squashed sphere of radius \(T\sqrt{d((1-\alpha)^{2}+\alpha^{2})}\), which has almost no intersection with the original sphere. Our trained denoiser thus cannot provide a reliable estimate of \(r_{\theta}(\hat{\mathbf{x}}_{s_{N}}^{(\alpha)},s_{N})\) to derive the score direction, as shown in Figure 6. Another strategy, named _spherical linear interpolation_ (slerp) [13, 13, 14], greatly alleviates (but is not free from) the squashing effect in high dimensions and thus stabilizes the synthesis quality of interpolated encodings. But it still suffers from distribution shift in low-dimensional cases (see Appendix B.5).
**Proposition 6**.: _In-distribution interpolation preserves the latent distribution under interpolation._
In particular, for the Gaussian encoding \(\hat{\mathbf{x}}_{s_{N}}\), there exists a variance-preserving interpolation \(\hat{\mathbf{x}}_{s_{N}}^{(\lambda)}=\sqrt{(1-\lambda^{2})}\hat{\mathbf{x}}_{s_{N}}^{(1)}+\lambda\hat{\mathbf{x}}_{s_{N}}^{(2)}\sim\mathcal{N}\left(\mathbf{0},T^{2}\mathbf{I}\right)\) that prevents distribution shift. Since a uniform \(\lambda\) makes \(\hat{\mathbf{x}}_{s_{N}}^{(\lambda)}\) largely biased towards \(\hat{\mathbf{x}}_{s_{N}}^{(1)}\), we derive \(\lambda\) by re-scaling other heuristic strategies to scatter the coefficient more evenly, such as the _normalized linear_ (n-linear) interpolation (\(\lambda=\alpha/\sqrt{\alpha^{2}+(1-\alpha)^{2}}\)) with a uniformly sampled coefficient \(\alpha\). As shown in Figure 6, this simple re-scaling trick significantly boosts the visual quality compared with the original counterpart. Additionally, slerp behaves as \(\lambda=\sin\big(\alpha\frac{\pi}{2}\big)\) in high dimensions due to \(\psi\approx\frac{\pi}{2}\), and this coefficient was used in [14] for interpolation.
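The squashing effect and the re-scaling trick are easy to see numerically. The snippet below is our own illustration: it draws two high-dimensional Gaussian encodings and compares the norms of the linearly interpolated, variance-preserving, and slerp-style encodings with the typical radius \(T\sqrt{d}\); the dimensions and seeds are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
d, T = 32 * 32 * 3, 80.0
x1, x2 = rng.normal(scale=T, size=d), rng.normal(scale=T, size=d)
alpha = 0.5

linear = (1 - alpha) * x1 + alpha * x2                      # shifts the latent distribution
lam = alpha / np.sqrt(alpha ** 2 + (1 - alpha) ** 2)        # normalized-linear coefficient
in_dist = np.sqrt(1 - lam ** 2) * x1 + lam * x2             # variance-preserving interpolation
lam_slerp = np.sin(alpha * np.pi / 2)                       # slerp coefficient in the d -> inf limit
slerp_like = np.sqrt(1 - lam_slerp ** 2) * x1 + lam_slerp * x2

print("target radius  :", T * np.sqrt(d))
print("linear         :", np.linalg.norm(linear))           # ~ T sqrt(d) * sqrt(0.5): squashed
print("in-distribution:", np.linalg.norm(in_dist))          # ~ T sqrt(d)
print("slerp-like     :", np.linalg.norm(slerp_like))       # ~ T sqrt(d)
```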
With the help of such an in-distribution interpolation, all interpolated encodings faithfully move along our trained ODE trajectory with a reliable denoising estimation for \(r_{\theta}(\hat{\mathbf{x}}_{s_{N}}^{(\lambda)},s_{N})\). We further calculate the k-nearest neighbors of our generated images to the real data (see Appendix A), to demonstrate how different modes are smoothly traversed in this process \(\hat{\mathbf{x}}_{s_{N}}\to\hat{\mathbf{x}}_{s_{N}}^{\lambda}\to\hat{\mathbf{ x}}_{s_{0}}^{\lambda}\).
### Rethinking Distillation-Based Fast Sampling Techniques
A learned score-based model with a specified ODE solver fully determines the sampling trajectory and the denoising trajectory. From our geometric observations, many distillation-based fast sampling techniques can be re-interpreted as different ways to _linearize_ the original sampling trajectory at several discrete time steps. Recently, _consistency distillation_ (CD) [15] and TRACT [1] have begun to rely on the denoising trajectory to guide the score fine-tuning and thus enable fast sampling.
Figure 6: Linear latent interpolation results in blurry images, while a simple re-scaling trick greatly preserves the fine-grained image details and enables a smooth traversal among different modes.
The slow sampling speed is a major bottleneck of score-based generative models. One possible solution is to explicitly straighten the ODE trajectory with knowledge distillation [11, 12, 20, 21]: a new student score model (\(r_{\mathbf{\theta}}\)) is optimized to align with the prediction of a pre-trained and fixed teacher score model (\(r_{\mathbf{\phi}}\)). Empirically, these learning-based approaches can achieve decent synthesis quality with significantly fewer NFEs (\(\approx 1\)) compared to ODE solver-based approaches [11, 12, 23, 23]. In Figure 7, we draw a sketch to highlight the differences among typical examples. The rationale behind these approaches is that, with the pre-defined teacher sampling trajectory and its backward ODE simulation to obtain \(\mathcal{T}_{\mathbf{\phi}}\left(\cdot\right)\), we can adjust the student sampling direction by making the initial point directly point to the final point. To achieve this goal, new noise-target pairs are built in an online [13, 23, 23] or offline fashion [11, 20]. We present the training objective of CD as follows
\[\mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}\mathbb{E}_{\mathbf{z}\sim\mathcal{N}(\mathbf{0},s_{n+1}^{2}\mathbf{I})}\left\|r_{\mathbf{\theta}}\left(\hat{\mathbf{x}};s_{n+1}\right)-r_{\mathbf{\theta}^{-}}\left(\mathcal{T}_{\mathbf{\phi}}\left(\hat{\mathbf{x}}\right);s_{n}\right)\right\|_{2}^{2},\quad\hat{\mathbf{x}}=\mathbf{x}+\mathbf{z}, \tag{8}\]
where \(\mathcal{T}_{\mathbf{\phi}}\left(\hat{\mathbf{x}}\right)\) is implemented as a one-step Euler step: \(\hat{\mathbf{x}}-\frac{\left(s_{n}-s_{n+1}\right)}{s_{n+1}}\left(r_{\mathbf{\phi}}\left(\hat{\mathbf{x}};s_{n+1}\right)-\hat{\mathbf{x}}\right)\). CD follows the time schedule of EDMs (see Section 3.2) [1], aside from removing \(s_{0}\), and adopts pre-trained EDMs to initialize the student model. \(\mathbf{\theta}^{-}\) is the exponential moving average (EMA) of \(\mathbf{\theta}\).
Based on our geometric observations, we then provide an intuitive interpretation of the training objective Eq. (8). (1) The role of \(\mathcal{T}_{\mathbf{\phi}}\left(\hat{\mathbf{x}}\right)\) is to locate the sampling trajectory passing a given noisy sample \(\hat{\mathbf{x}}\) and make one numerical step along the trajectory to move \(\hat{\mathbf{x}}\) towards its converged point. (2) \(r_{\mathbf{\theta}^{-}}(\cdot)\) further projects \(\mathcal{T}_{\mathbf{\phi}}\left(\hat{\mathbf{x}}\right)\) into the corresponding denoising trajectory with the step size \(s_{n}\). It is closer to the converged point, compared with the denoising output of \(\hat{\mathbf{x}}\) with the step size \(s_{n+1}\) (see Observation 2 and Figure 7). (3) The student denoising output \(r_{\mathbf{\theta}}\left(\cdot\right)\) is then shifted to match its underlying target \(r_{\mathbf{\theta}^{-}}\left(\cdot\right)\) in the denoising trajectory. By iteratively fine-tuning denoising outputs until convergence, the student model is hopefully endowed with the ability to perform few-step sampling from those trained discrete time steps, and thus achieves excellent performance in practice [23].
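To make the three steps above concrete, the sketch below builds one input-target pair following Eq. (8), using the empirical optimal denoiser as a stand-in for both the frozen teacher \(r_{\mathbf{\phi}}\) and the EMA target \(r_{\mathbf{\theta}^{-}}\). This is our own schematic illustration: in an actual implementation these would be neural networks, the squared error would be minimized by gradient descent, and \(\mathbf{\theta}^{-}\) would be updated as an exponential moving average of \(\mathbf{\theta}\).

```python
import numpy as np

def denoiser(x, sigma, data):
    """Stand-in for a trained denoiser: the optimal denoising output of Eq. (5)."""
    logits = -np.sum((data - x) ** 2, axis=1) / (2.0 * sigma ** 2)
    w = np.exp(logits - logits.max()); w /= w.sum()
    return w @ data

def cd_pair(x, s_n, s_np1, data, rng):
    """One (input, target) pair of Eq. (8) for a data point x and times s_n < s_{n+1}."""
    x_hat = x + rng.normal(scale=s_np1, size=x.shape)                   # x_hat = x + z
    teacher_step = x_hat - (s_n - s_np1) / s_np1 * (denoiser(x_hat, s_np1, data) - x_hat)
    target = denoiser(teacher_step, s_n, data)                          # plays the role of r_{theta^-}(T_phi(x_hat); s_n)
    return x_hat, target  # the student should map (x_hat, s_{n+1}) to this target

rng = np.random.default_rng(6)
data = rng.normal(size=(100, 8))
x_hat, target = cd_pair(data[0], s_n=7.0, s_np1=10.0, data=data, rng=rng)
print(np.linalg.norm(x_hat - target))
```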
## 6 Conclusion and Future Work
In this paper, we present a geometric perspective on (variance-exploding) diffusion models, aiming for a fundamental grasp of their sampling dynamics in a simple and intuitive way. We find that, intriguingly, the data distribution and the noise distribution are smoothly bridged by a quasi-linear sampling trajectory, and by another implicit denoising trajectory that allows faster convergence in terms of visual quality. We further characterize the asymptotic behavior of diffusion models by formulating a theoretical relationship between optimal ODE-based sampling and the annealed mean-shift (global mode-seeking) algorithm. Additionally, some preliminary applications implied by our new geometric perspective are provided. We hope our theoretical understanding and empirical observations help to better harness the power of score/diffusion-based generative models and facilitate more rapid development of efficient training and fast sampling techniques.
One limitation is that our current observations are not fully underpinned by theoretical results, and thus require further investigation. In fact, the intensively used ODE-based sampling behaves as a typical non-autonomous non-linear system [1], which offers a potential approach to analyze the (asymptotic) stability of diffusion models with tools from control theory.
Figure 7: The comparison of distillation-based techniques. The _offline_ techniques first simulate a long ODE trajectory with the teacher score and then make the student score points to the final point (KD [11]) or also include intermediate points on the trajectory (DFNO [20]). The _online_ techniques iteratively fine-tune the student prediction to align with the target simulated by a few-step teacher model along the sampling trajectory (PD [13]) or the denoising trajectory (CD [23]). |
2301.13745 | Stick-slip in a stack: how slip dissonance reveals aging | We perform physical and numerical experiments to study the stick-slip
response of a stack of slabs in contact through dry frictional interfaces
driven in quasistatic shear. The ratio between the drive's stiffness and the
slab's shear stiffness controls the presence or absence of slip
synchronization. A sufficiently high stiffness ratio leads to synchronization,
comprising periodic slip events in which all interfaces slip simultaneously. A
lower stiffness ratio leads to asynchronous slips and, experimentally, to the
stick-slip amplitude becoming broadly distributed as the number of layers in
the stack increases. We interpret this broadening in light of the combined
effect of complex loading paths due to the asynchronous slips and creep.
Consequently, the aging rate of the interfaces can be readily extracted from
the stick-slip cycles, and it is found to be of the same order of magnitude as
existing experimental results on a similar material. Finally, we discuss the
emergence of slow slips and an increase in aging-rate variations when more
slabs are added to the stack. | Samuel Poincloux, Pedro M. Reis, Tom W. J. de Geus | 2023-01-31T16:26:25Z | http://arxiv.org/abs/2301.13745v3 | # Stick-slip synchronization in stack of elastically coupled frictional interfaces
###### Abstract
We perform physical and numerical experiments to study the stick-slip response of a stack of slabs in contact through dry frictional interfaces driven in quasistatic shear. The ratio between the drive's stiffness and the slab's shear stiffness controls the presence or absence of slip synchronization. A sufficiently high stiffness ratio leads to synchronization, comprising periodic slip events in which all interfaces slip simultaneously. A lower stiffness ratio leads to asynchronous slips and, experimentally, to the stick-slip amplitude being broadly distributed as the number of layers in the stack increases. We interpret this broadening in light of the combined effect of surface disorder, complex loading paths of the asynchronous slips, and creep. Consequently, the ageing rate can be readily extracted from the stick-slip cycle. The extracted aging rate is found to be of the same order of magnitude as existing experimental results on a similar material. Finally, we discuss the emergence of slow slips and an increase in creep-rate variations when more slabs are added to the stack.
## I Introduction
Multiple frictional interfaces coupled by elasticity are ubiquitous in everyday objects including books [1; 2], textiles [3; 4; 5], and multilayer composites [6; 7]. In geology, systems comprising multiple frictional interfaces are the norm rather than an exception. For example, layered rocks such as shale can show multiple sliding interfaces under shear [8; 9]. At the larger scales relevant for terrestrial faults, earthquakes produced when slipping are usually not isolated but embedded into complex fault networks [10]. The mechanical response of such assemblies of frictional interfaces involves the coupling between the elastic deformation of the layers and the barriers to sliding of the interfaces.
Predicting the onset of slipping is a long-standing problem even for a single frictional interface [11]. Physical insight and understanding of this class of problems have been driven primarily by high-precision experiments of sliding PMMA blocks whose optical transparency enabled the space-temporal tracking of the local contact area [12]. These pioneering experiments have elucidated that the onset of slip involves a rupture front that 'unzips' the interface. A correlation with strain measurements close to the interface showed that the stress field and dynamics of the front are well described by fracture mechanics with the fracture energy as the sole fitting parameter [13]. However, the mechanism underlying the nucleation of the rupture front remains elusive, primarily due to experimental limitations, for which novel protocols are being proposed [14].
From a theoretical perspective, the most common models for the onset of frictional slippage [15; 16; 17; 18] capture the phenomenology that sliding starts, and then continues in a steady state, when the shear force \(F\) balances the friction forces \(\mu N\), where \(N\) is the normal force and \(\mu\) the 'friction coefficient'. In these 'rate-and-state' models, \(\mu\) depends nonlinearly on the slip rate \(v\) and history \(\theta\): \(\mu=\mu(v,\theta)\). At intermediate values of \(v\), the friction coefficient is usually assumed to display slip weakening (\(\mu\) is a decreasing function of \(v\)) such that the interface is unstable. During slip nucleation, the elasticity and inertia of the bulk have a stabilizing effect [19; 20; 21; 22], such that there exists an effective flow curve \(\tilde{\mu}=\mu+G/(2c_{s})v\) (where \(G\) is the shear modulus of the material and \(c_{s}\) the shear-wave speed) whose steady-state displays a minimum at \(\mu_{c}=\tilde{\mu}(v_{c})\)[19]. Consequently, any perturbation decays and vanishes if the applied load is \(F/N<\mu_{c}\). At higher applied loads, a slip patch of linear size \(L\) becomes unstable if \(L>L_{c}\sim(F/N-\mu_{c}^{*})^{-\nu}\) (\(\mu_{c}^{*}\geq\mu_{c}\)[21]). Once the event grows to be larger than \(L_{c}\), its dynamics are well described by a sharp rupture front [19; 21; 23] that can be modeled by linear elastic fracture mechanics [13; 24], as discussed above. If the loading is performed under displacement control, the reaction force \(\mu N\) drops macroscopically only once the interface unzips fully.
A direct consequence of the phenomenology described above is that the interface can display _stick-slip_ behavior when driven quasistatically (at a rate \(V\ll v_{c}\)). Under the framework of the rate-and-state models, the stick-slip amplitude thus depends on the exponent \(\nu\), \(\mu_{c}^{*}\), and the mechanism leading to the slip patch at scales \(L<L_{c}\). All of these parameters are currently under debate; e.g. [25; 26; 27; 21; 22; 28; 14]. However, one of the authors of the present study recently proposed an encompassing theory that avalanches nucleate the instability such that \(\mu_{c}^{*}=\mu_{c}\) and \(\nu=1/(1-\zeta)\). Here, \(\zeta\) is the roughness exponent resulting from a competition between microscale disorder and elasticity [22] (not to be confused with a roughness exponent extracted from height-height correlations of the interface profile). Furthermore, it is a well-known experimental fact that the initial onset to sliding is history-dependent and increases with the time that the interfaces were at rest [29; 30; 17; 31]. This behavior
is associated with creep [32] and described by rate-and-state models where the variable \(\theta\) introduced above is regarded as time. Creeping of the interfaces must affect the stick-slip amplitude, but disentangling its contribution is challenging because slip events occur at narrowly distributed intervals.
Beyond the case of a single frictional interface, when multiple frictional interfaces are present, the elasticity of the bulk may lead to a non-trivial coupling between them. For example, elastic interactions between faults may strongly affect their slip dynamics [33]. In addition, acoustic waves transmitted through the elastic bulk may lead to remote triggering of earthquakes [34; 35], though the large temporal separation suggests a complex coupling. Predicting the mechanical response of an assembly of elastic frictional interfaces is then a formidable but important challenge. In particular, identifying the key parameters coupling the layers together and elucidating the role of the number of interfaces are open questions.
Here, we report results from a combined experimental and numerical investigation on the quasistatic stick-slip response of a stack of elastic slabs in contact through frictional interfaces. Based on the ratio between the stiffness of the drive and the shear stiffness of the slab, we distinguish two regimes:'stiff' and 'compliant' driving. In the stiff-driving regime, our numerical results exhibit periodic slips involving all the layers leading to narrowly distributed force drops; this regime is not accessible in our model experiments. By contrast, in the compliant-driving regime, we observe both numerically and experimentally a decoupling of the slip events along the different layers, with interfaces sliding one by one. In the experiments, we find that this loss of periodicity is accompanied by a broadening of the distribution of the stick-slip amplitudes with the number of layers. The changes in the measured distributions are interpreted by a coupling between interface disorder and a broad distribution of waiting times between slips, exposing the role of creep. The remaining unsolved broadening of the distributions with the number of slabs, including an observed increasing fraction of'slow slip', raises the question of the interaction between mechanical noise and creep. Depending on the drive's stiffness, the stack of frictional interfaces then displays drastically different responses to shear. Overall, we find that, in the stiff-driving case, the stack acts as one layer with periodic slips, while in the compliant-driving case, a rich coupling between the layers makes slips much more unpredictable.
## II Definition of the problem
We assess the shear response of a model system comprising a stack of \(n\in[1,5]\) identical slabs of thickness \(h\) resting on a surface whose position is fixed. In Fig. 1, we present a schematic diagram of the system and a photograph of our experimental setup. Each slab, and its lowermost frictional interface, are numbered as \(i=1,2,\ldots,n\) from below. We impose homogeneous shear, with a set rate, between all the slabs by connecting each slab through identical springs of stiffness \(K\) to a lever that is driven to rotate around a fixed axis (Fig. 1a). The spring connecting to the \(i\)-th layer is attached to the lever at a distance \(ih\) from the rotation axis.
For the drive, we impose the lever's top horizontal displacement \(U(t)=Vt\), where \(t\) represents time, and \(V\) is the imposed velocity, taken to be small enough for the interfaces to display stick-slip (i.e., \(V<v_{c}\)[22; 30]). Thus, our drive imposes a shear rate \(\dot{\gamma}\equiv V/H\), where \(H\) is the height of the lever, driving each spring \(i\) at a velocity \(v_{i}=ih\dot{\gamma}\). During the periods in which the interfaces are 'stuck', this drive causes a monotonically increasing shear stress at each of the interfaces.
Our study seeks to address the following questions: (1) What are the relevant parameters controlling the slip synchronization of the interfaces? (2) How does the shear response evolve with an increasing number of layers \(n\)?
## III Rigid vs. compliant driving
For a system with a single frictional interface, the stick-slip instability occurs only if the driving stiffness \(K\) is sufficiently low and depends on the flow properties of the interface and the applied driving rate [36; 15; 30].
Figure 1: (a) Schematic of our model system, shown here with \(n=4\) active (driven) layers. The color code of the layers is used throughout the figures. (b) Photograph of the corresponding experimental apparatus.
A summary of the calculation for a rate-and-state model was provided in Ref. [30]. With multiple frictional interfaces, the drive also controls the degree of synchronization, as we argue later in the manuscript.
To gain insight into the effect of the drive on the shear response of a stack, we regard the driving 'spring' as a parabolic potential energy imposing the mean position of each slab, such that the slab is free to build up shear. We now discuss what happens in the limits of rigid and compliant driving, defined next.
_Rigidity ratio \(\Phi\)._ We define the rigidity ratio \(\Phi\equiv K/K_{s}\), where \(K\) is the rigidity of the driving spring, and \(K_{s}=AG/h\) is the shear rigidity of the slabs, with \(G\) the shear modulus, \(A\) the surface area of the frictional interface, and \(h\) the slab's height. This ratio \(\Phi\) then quantifies the relative deformation of the driving springs in comparison to the shear deformation of the slabs. Below, we investigate and discuss two limit regimes: rigid (high-\(\Phi\)) and compliant (low-\(\Phi\)) driving. In the first, significant deformations occur within the layers (and the springs are rigid), whereas, in the second, the deformations occur in the springs (and the layers are rigid).
_Rigid driving (high-\(\Phi\))._ Suppose that the first interface (\(i=1\)) starts slipping. As the mean position of slab \(i=1\) is fixed, the slab can only react with a negative shear deformation. Consequently, the shear stress on the interface above, \(i=2\), is increased. This can trigger a slip at interface \(i=2\), which can cause a slip on the interface above for the same reason, which propagates to the slipping of other interfaces. The cascade results in a multi-slip event that erases all memory of the system. In this case, the multi-layered system thus acts as a system with a single interface with effective properties [37], showing periodic stick-slip cycles.
_Compliant driving (low-\(\Phi\))._ The system can respond to a slip at interface \(i=1\) by advancing the mean position of slabs \(i>1\). Therefore, the stress on the interfaces \(i>2\) relaxes, making a macroscopic multi-slip event unlikely. A sequence of single-slip events is thus to be expected. With an increasing number of layers, the slip sequence of the multiple interfaces may lose its periodicity.
## IV Numerical simulations
### Numerical model
We implement the model system shown schematically in (Fig. 1a) into numerical simulations. The numerical model consists of \(n+1\) identical elastic layers separated by frictional interfaces. Following [26], we idealize the frictional contact problem in order to focus on the disorder in the shear response along the frictional interface. In particular, we consider a mesoscopic scale on which an effective 'block' of a finite width resists elastically to shear up to a threshold, after which it yields. The local slip then propagates until a new 'contact' is formed (i.e., it is again elastic but with a new threshold). In this framework, each block represents a frictional contact (or a patch of contacts that are so strongly coupled by elasticity that they act as an effective contact) that, upon yielding, forms a new contact with a new yielding threshold.
The details of the numerical model are as follows: each frictional interface consists of \(n_{x}\) equal-sized square blocks of linear size \(l_{0}\) that are completely elastic under volumetric deformation but yield under shear (deviatoric deformation) when a set yield stress is reached. Assuming that the yield threshold is isotropic in principal deviatoric strain space, this model now corresponds to a deviatoric potential energy that consists of a sequence of parabolic potentials in equivalent deviatoric strain space. The disorder arises from independently randomly drawing the yield strain sequence of each block. We assume that the blocks and the bulk have the same elastic moduli. Finally, differently from [26], we add a parabolic potential (with curvature \(K\)) to the mean position of each of the elastic slabs \(i>0\), thereby prescribing a homogeneous force density. The bottom layer is not driven through its mean position; instead, the position of the bottom edge is fixed. A key feature of the model is that shear can be applied according to the quasistatic protocol. In particular, because the elastic response is linear, we rotate the lever by a finite amount if no microscopic yielding takes place while infinitesimally rotating it to trigger a microscopic event, after which we minimize energy before loading again. Thus, we run an event-driven protocol, allowing us to separate events.
Geometrically, we do not seek to model Fig. 1a precisely, as its numerical treatment, together with the disorder, requires an intractably large number of blocks. Therefore, we consider periodic boundary conditions in the horizontal direction. Furthermore, we choose \(n_{x}=2\times 3^{6}\), which is still tractable to simulate, but of the minimal order not to be dominated by finite-size effects, as we checked for a single frictional interface [22; 26]. Finally, we take \(h/\ell_{0}\approx n_{x}/4\), balancing two requirements: \(h/(\ell_{0}n_{x})\) must be small enough to have acoustic interactions, while \(h\ll n_{x}\ell_{0}\) would effectively drive the blocks at fixed displacement and suppress collective effects (e.g. [38]).
The above model predicts stick-slip behavior [26] when full inertial dynamics are considered (using overdamped dynamics, this model predicts the abundantly studied depinning transition [39]). We consider such inertial dynamics by applying the finite element method to discretize space. Along the frictional interface(s), elements coincide with the mesoscopic blocks. In the elastic slabs away from the frictional interface, the elements are coarsened to gain numerical efficiency (such that the height \(h\) is only approximated as we fix the aspect ratio of elements to one, see SI, "Numerical model"). We use the velocity Verlet algorithm to integrate discrete time (with a time step significantly smaller than the time of a single oscillation in a well in one block). We remark that assuming periodicity requires us to add a small damping term to the inertial dynamics such that waves with a wavelength equal to the horizontal size of the system (equal to \(n_{x}l_{0}\)) are critically damped. Consequently, we must take \(h/\ell_{0}<n_{x}\) to have acoustic coupling between the interfaces.
### Numerical results
Our numerical model allows us to first illustrate the simple reflection above on the role of driving. We consider a driving rigidity such that \(\Phi\simeq 10^{-3}\) (rigid driving) and \(\Phi\simeq 10^{-6}\) (compliant driving). In the two-dimensional model, \(A=n_{x}\ell_{0}\), such that \(K_{s}=4G\) for our geometry; we use \(K=10^{-3}\) and \(K=10^{-6}\) and \(G=1/2\). In Fig. 2a and Fig. 2b, for rigid and compliant driving, respectively, we plot a typical macroscopic stress \(\Sigma\) (volume-averaged stress) as a function of applied lever rotation \(\gamma\). Note that the stress is shown in units of the typical yield stress of one block, and the rotation in units of the rotation needed to yield a typical block at \(i=1\).
Macroscopic slip events are defined when all blocks along one or more layers yield at least once. Below, we will refer to sliding interfaces and associated quantities by an index \(a\), while \(i\) will be kept as the running index for the layers. Slip events correspond to macroscopic stress drops in Figs. 2a and 2b, and we distinguish between 'single-slip' events (all blocks on a single layer yield at least once) and 'multi-slip' events (all blocks on more than one layer yield at least once). Stress drops produced by single-slip events are labeled following the layer color code introduced in Fig. 1, while multi-slip events are kept black. These slip events are separated by 'stick' intervals during which only microscopic events are observed, where one or several blocks yield at least once, as indicated with markers (black dots).
The results confirm that rigid driving causes a periodic stick-slip sequence with slip events corresponding to multi-slip events (Fig. 2a), while compliant driving results in a seemingly less periodic sequence of single-slip events in Fig. 2b. Although not reported in Fig. 2a, for \(\Phi\simeq 10^{-3}\), we also observed periodic sequences of single-slip events for a finite fraction of the loading history. This finding is supported by plotting the fraction of slip events involving \(s=1,\dots,n\) interfaces in Fig. 2c for different \(n\) (see legend in Fig. 2d). On the one hand, rigid driving results in single- and multi-slip events for a comparable fraction of the loading history (we discuss in the SI "Slip sequences - numerics" that sequences of single and multi-slip events alternate). On the other hand, compliant driving shows single slips in the large majority of slip events.
A direct measurement of the stress drop along the slipping interface \(a\), \(\Delta\mu_{a}\), displays no \(n\) dependence in Fig. 2d. The quantity \(\mu_{a}\) is defined as the volume average stress on the blocks corresponding to weak layer \(a\), also shown in units of the typical yield stress of one block. Given that, by construction, normal stress plays no role in our model, here \(\mu_{a}\) is akin to a friction coefficient.
Since we have shown that, for high-\(\Phi\) (rigid driving), multilayer stick-slip is apparently similar to that of a single interface, we next concentrate on the low-\(\Phi\) regime to explore a potential influence of the number of plates \(n\).
## V Experiments
We proceed by proposing an experimental realization of the sheared-multilayer model system of Fig. 1a, adapted to measure the effect of the number of sliding layers \(n\) on the slip synchronicity and amplitude; see Fig. 1b.
Figure 2: Numerical results: (a, b) Typical steady-state global stress \(\Sigma\) response as a function of lever rotation \(\gamma\) for \(n=3\) for (a) rigid driving and (b) compliant driving. We indicate all microscopic yielding events with a black dot marker. Slip events on a single layer are indicated in color (see legend), while slip events in black involve more than one interface. (c) The fraction \(\rho(s)\) of macroscopic slip events involving \(s=1,\dots,n\) layers, for rigid (dashed) and compliant (solid) driving; see legend in (d) for color-map and markers. (d) Distribution of stress drops at the slipping interface for different \(n\) (for slip events on a single layer, for which \(s=1\) in (c)). See the main text for definitions and units.
Similarly to the numerical model, the position of each slab is driven by connecting it to the driving lever through linear springs (see schematic in Fig. 1a). Naturally, connecting the spring to the edges of the slabs might introduce boundary effects, but our experimental system is effectively much larger than our numerical model (given that it presumably contains many more local contact patches).
### Experimental apparatus
The experimental setup shown in Fig. 1b comprises a stack of frictional plates (color-coded from purple to orange), an actuating lever (green), and driving springs (pink), as detailed below.
The stack is made of a set of rectangular PMMA slabs (Snow WH10 DC by Rohm), each of dimensions \(h=10\,\mathrm{mm}\), \(L=150\,\mathrm{mm}\) and an out-of-plane width of \(80\,\mathrm{mm}\). A normal force \(N\) is applied on the topmost slab by a dead weight of \(5\,\mathrm{kg}\) (\(N=49\,\mathrm{N}\)). To ensure a spatially homogeneous contacting surface at this relatively low normal force (compared to other PMMA-PMMA friction experiments [13; 14; 40]), we use acrylic plates whose surface was pre-roughened with asperities of size \(\sim 25\,\mathrm{\mu m}\) that are larger than potential natural height variations of PMMA. We assume that the normal force is uniformly distributed and that it is the same for each layer (the weight of each slab is less than \(3\%\) of that of the dead weight).
The stack is sheared by imposing the displacement at the top of the lever (\(H=100\,\mathrm{mm}\)) at a constant speed \(V=10\,\mathrm{\mu m}\)/s (i.e. \(\dot{\gamma}=10^{-4}\,\mathrm{s}^{-1}\)), using a DC linear actuator (L-220.70DG, Physiks Instruments) that is attached via a steel link assumed rigid. The PMMA lever is sufficiently wide not to bend while pulling the slabs and rotates smoothly on ball bearings around its rotation axis.
The springs connecting the slabs to the lever are curved beams laser-cut from PMMA (colored in pink in Fig. 1b), with an equivalent stiffness of \(K=55\,\mathrm{N/mm}\) when pulled or compressed along the horizontal axis. The ends of the springs are attached to both the slabs and lever via ball bearings to ensure a free rotation and, thus, horizontal driving forces.
The experimental set-up, with its springs and PMMA slabs, corresponds to the compliant driving limit with \(\Phi\simeq 6\times 10^{-5}\), of the same order as for the compliant regime in the numerics. Indeed, \(\Phi\equiv K/K_{s}\), with \(K_{s}=AG/h\) the shear stiffness of the slabs and \(G\equiv E/(2(1+\nu))\), with, for PMMA, Young's modulus \(E=2\,\mathrm{GPa}\) and Poisson's ratio \(\nu=0.3\).
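For reference, the quoted value of \(\Phi\) follows directly from the numbers listed above; the short computation below, written by us in SI units purely as a check, reproduces \(\Phi\simeq 6\times 10^{-5}\).

```python
# Rigidity ratio of the experimental stack, Phi = K / K_s with K_s = A G / h.
E, nu = 2e9, 0.3                 # PMMA Young's modulus [Pa] and Poisson's ratio
G = E / (2 * (1 + nu))           # shear modulus [Pa]
A = 0.150 * 0.080                # frictional interface area [m^2] (150 mm x 80 mm)
h = 0.010                        # slab thickness [m]
K = 55e3                         # driving spring stiffness [N/m] (55 N/mm)

K_s = A * G / h                  # shear stiffness of one slab [N/m]
print(K / K_s)                   # ~ 6e-5: the compliant-driving regime
```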
The total horizontal force \(F\) needed to rotate the lever (Fig. 1) is measured using a uniaxial force sensor (LRM200 25 lb, Futek) placed between the steel link and the actuator. The link is also attached via small ball bearings to remain horizontal at all times. We directly verify that we are imposing the expected kinematics by measuring the position of the lever and base slab.
In addition to the global force measurement \(F\), we also measure the local average horizontal position of the slabs \(x_{i}\), by tracking a red marker placed on their side (Fig. 1b), from photographs taken at a rate of \(5\,\mathrm{fps}\) using a digital camera (Flea3 FL3-U3-20E4C, Flir, linear pixel size: \(70\,\mathrm{\mu m}\)). After color filtering, \(x_{i}\) is measured with an accuracy of \(5\,\mathrm{\mu m}\). The relative displacement between slabs is \(R_{i}\equiv x_{i}-x_{i-1}\) (see Fig. 1a), which serves as a proxy for the total slip at the interface \(i\) (neglecting the shear deformation of the slabs).
To vary the number of sliding interfaces \(n\), we keep the same number of slabs (5) but remove \(5-n\) springs, starting from the top (see Fig. 1b where \(n=4\)). This procedure ensures robust image detection and reduces external contamination of the interfaces by keeping them in contact. Each time the slabs are disassembled to vary \(n\), the interfaces are cleaned with isopropanol and quickly dried using compressed air. For each value of \(n\), we perform 10 runs during which we drive over a range \(\Delta\gamma=0.6\,\mathrm{rad}\), starting at \(\gamma=-0.30\,\mathrm{rad}\), each time excluding \(\gamma\) between \(-0.3\) and \(-0.27\) (\(300\,\mathrm{s}\)) to ensure measuring in a steady state. After each run, the lever is reset back to \(\gamma=-0.30\). On average, each connected layer is forced to move by a total relative distance of \(\Delta R=h\Delta\gamma=6\,\mathrm{mm}\) during a run.
Further details on the apparatus and the experiments are given in the SI ("Experimental set-up").
### Experimental measurements
In Fig. 3a, for the slab system with \(n=4\), we present a typical time series extract of the force \(F(t)\) required to actuate the lever (top left plot), together with the corresponding relative position of the slabs \(R_{i}\)(t) (bottom left plot). The experiments exhibit stick-slip, with stick periods when the slabs are immobile (\(R_{i}\approx\) constant) and \(F\) increases monotonically, punctuated by macroscopic slip events. These slip events are identified by a sudden position jump, \(\Delta R_{a}\) (with \(a\) denoting the sliding interface), accompanied by an abrupt force drop \(\Delta F>0\), cf. Fig. 3a. On all occasions, we find that only one layer slips at a time, recovering similar dynamics as in the numerical model in the compliant driving regime with a similar value for \(\Phi\) (Fig. 2b). However, we note that during the stick periods, we observe what seem to be'slow slip' events where an interface moves gradually, leading to a non-linear force response. These are out of our primary focus but are discussed at the end of the section.
For each value of \(n\), we acquire an ensemble of at least 100 slip events per layer, such that the associated slip quantities can be represented as probability distributions. For example, in Fig. 3b, we show the probability distribution of force drops, \(P(\Delta F)\), occurring on the interface \(a=1\), for all cases of \(n\) considered. Starting from the peaked distribution for \(n=1\), as \(n\) increases, the distributions broaden and take higher average values.
In contrast with more classic stick-slip experiments with a single interface [29], the global measure \(\Delta F\) is not a direct quantification of the frictional properties of the interface but couples with the specific kinematic of the lever. Still, the fact that only one interface slips allows us to extract a jump in a friction-like quantity \(\Delta\mu_{a}\) (or stick-slip amplitude) from \(\Delta F\). We define the friction coefficient \(\mu_{a}\) of a slipping interface \(a\) as the horizontal force acting on this interface divided by the normal force. Considering the horizontal force balance on an interface \(a\), the interface has to resist the combined forces of the pulling springs of the slabs \(i\geq a\), such that
\[\mu_{a}=\sum_{i=a}^{n}\frac{f_{i}}{N}, \tag{1}\]
where \(f_{i}\) is the force due to the driving spring on slab \(i\) (see Fig. 1a for a visual representation of \(\mu_{a}\) and \(f_{i}\)). When the interface \(a\) slides by \(\Delta R_{a}\), the relative positions of the other interfaces remain unchanged:
\[\Delta R_{i}=\begin{cases}\Delta R_{a}>0&\text{if}\quad i=a\\ 0&\text{if}\quad i\neq a\end{cases}. \tag{2}\]
This sliding induces a drop in the spring forces: \(\Delta f_{i}=0\) for \(i<a\), and \(\Delta f_{i}=K\Delta R_{a}>0\) for \(i\geq a\). Indeed, even if no slip occurs for the interfaces \(i>a\), the absolute position of the slabs still moves by \(\Delta R_{a}\), reducing the extension of the corresponding springs by the same amount. Note that for consistency, we define \(\Delta f_{i}\) to be positive. From Eq. (1), we can then express \(\Delta\mu_{a}\) as a function of the slip distance \(\Delta R_{a}\):
\[\Delta\mu_{a}=\frac{K}{N}(n-a+1)\Delta R_{a}. \tag{3}\]
We proceed by linking \(\Delta R_{a}\) to the global force drop \(\Delta F\), using the fact that only one interface slips at a time. Through moment balance on the lever, we obtain:
\[F=\sum_{i=1}^{n}f_{i}\frac{ih}{H}. \tag{4}\]
Combining Eqs. (2) and (4), we obtain a relation between the global quantity \(\Delta F\) and the local one \(\Delta R_{a}\):
\[\Delta F=\sum_{i=a}^{n}K\Delta R_{a}\frac{ih}{H}=K\Delta R_{a}\frac{h}{H} \frac{(n+a)(n-a+1)}{2}. \tag{5}\]
(Note that \(i\) is the only varying term in the sum and \(\sum_{i=a}^{n}i=(n(n+1)-a(a+1))/2=(n+a)(n-a+1)/2\).) This result is verified in Fig. 3c. Indeed, a direct measurement of \(\Delta R_{a}\) is very close to the inversion of Eq. (5), in which \(\Delta R_{a}\) follows from the measured \(\Delta F\) without any fitting parameter.
Finally, we combine Eqs. (3) and (5) to obtain the sought relation between \(\Delta\mu_{a}\) and \(\Delta F\):
\[\Delta\mu_{a}=\frac{H}{h}\frac{2}{(n+a)}\frac{\Delta F}{N}. \tag{6}\]
We have thereby disentangled the friction properties of the interface from the kinematics of the lever. Using Eq. (6), we can now obtain a measure of the stick-slip amplitude of the interfaces \(\Delta\mu_{a}\), extracted directly from the global force \(\Delta F\). The measurement of \(\Delta R_{i}\) is used only to identify the slipping interface \(a\).
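In practice, Eqs. (5) and (6) amount to two scalar conversions applied to each detected force drop. The helper below is our own sketch using the apparatus constants quoted in the previous subsection; `dF` denotes the measured force drop in newtons, and the example values are illustrative only.

```python
K = 55e3        # spring stiffness [N/m]
h = 0.010       # slab height [m]
H = 0.100       # lever height [m]
N = 49.0        # normal force [N]

def slip_from_force_drop(dF, n, a):
    """Invert Eq. (5) for the slip distance and apply Eq. (6) for the
    stick-slip amplitude of the slipping interface a in a stack of n layers."""
    dR_a = dF * H / (K * h) * 2.0 / ((n + a) * (n - a + 1))   # inverse of Eq. (5)
    dmu_a = (H / h) * 2.0 / (n + a) * dF / N                  # Eq. (6)
    return dR_a, dmu_a

# example: a 1 N force drop with n = 4 layers, slip detected on interface a = 2
print(slip_from_force_drop(1.0, n=4, a=2))
```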
_The central experimental result._ Next, we assess the effect of having multiple sheared interfaces on their frictional properties. Fig. 4 shows the probability distributions \(P(\Delta\mu_{a})\) associated with the different sliding interfaces \(a\) (different panels) and the increasing number of total active interfaces \(n\) (different colors). Each interface is compared to its response when sliding individually (\(n=1\) in black, see SI "Individual sliding" for the experimental protocol). For all the interfaces, the stack of slabs exhibits significantly enriched statistics when compared to a single sliding layer (\(n=1\)).
Figure 3: (a) Top plot: extract of a time series of the macroscopic force \(F(t)\) for a system of \(n=4\) frictional interfaces. The color of the force drops \(\Delta F\) follows the color code in Fig. 1 and indicates the index \(a\) of the slipping interface. Bottom plot: corresponding relative displacement (total slip) \(R_{i}\) of each interface \(i\). Each slip event is characterized by \(\Delta R_{a}\). We denote \(T_{a}\), the time between subsequent slip events on the same interface. Note that we show \(i=5\) only for completeness, by definition, \(R_{5}=0\) if \(n=4\). (b) Probability distribution function \(P(\Delta F)\) for slip at interface \(a=1\), and increasing number of layers \(n\). (c) Comparison between a direct measurement of \(\Delta R_{a}\), and the computed \(\Delta R_{a}(\Delta F)\), obtained through Eq. (5), for each detected slip event.
For stacks with increasing \(n\), the location of the peak of \(P(\Delta\mu_{a})\) shifts to lower values of \(\Delta\mu_{a}\). Moreover, the respective distributions become broader and secondary peaks emerge. These experimental results contrast with the numerical predictions reported above (Fig. 2d), where \(\Delta\mu_{a}\) was independent of \(n\).
### Interpretation
We seek to interpret the above experimental findings evidencing a variation of the frictional properties with \(n\) (as more layers are added to the stack, Fig. 4), whereas they are independent of \(n\) in the numerics (Fig. 2d). First, we will attribute the finite width of the peaks in the \(P(\Delta\mu_{a})\) distributions to disorder (i.e., statistical fluctuations of the contacting interfaces). Then, we will argue that the shift of the main peaks and emergence of the secondary peaks in \(P(\Delta\mu_{a})\) for \(n>1\) are related to the presence of creep in the system. Finally, we speculate on the increase of an effective temperature with \(n\) and suggest a rationalization of the appearance of slow slip.
_Statistical fluctuations._ Even when sliding individually (\(n=1\)), the frictional properties of the interfaces are distributed: \(P(\Delta\mu_{a})\) has a finite width, see black curves with circles in Fig. 4. These underlying statistical fluctuations, also present in the numerical model (Fig. 2d), are considered to be related to the disorder of the contacting interfaces. The rough interface induces a broad distribution of barriers, such that there are collective events whose sizes are non-trivially distributed. These collective events nucleate the macroscopic slip [22, 26], such that the stress at which slip is nucleated is distributed.
Let us now verify experimentally that, in the case of individually sliding layers (\(n=1\)), the measured fluctuations of \(\Delta\mu_{a}\) correspond to distinct slipping loads. In the individual configuration, the spring drives the layer at a constant rate \(\dot{f}_{a}=Kh\dot{\gamma}\) (\(\dot{\gamma}\) is adapted to account for the difference in height, see SI "Individual sliding"). The shear applied to the interface then grows at a rate \(\dot{\mu}_{a}=\dot{f}_{a}/N=Kh\dot{\gamma}/N\). As such, we expect the stick-slip amplitude to be proportional to the time between slips \(T_{a}\), following \(\Delta\mu_{a}=T_{a}Kh\dot{\gamma}/N\). This expectation is consistent with our data in Fig. 5a, thus confirming that statistical fluctuations broaden \(P(\Delta\mu_{a})\).
However, these fluctuations do not account for the shifts of the peaks and the appearance of secondary peaks in the \(P(\Delta\mu_{a})\) distributions. In our stack of slabs (\(n>1\)), we anticipate that creep plays that role: with increasing \(n\), (I) interfaces experience a complex loading path such that (II) the creep of the frictional interface becomes significant.
_(I) Complex loading path._ First, without any events on other interfaces, the loading rate at a given interface increases with \(n\). In particular, using Eq. (1) and \(\dot{f}_{i}=Kih\dot{\gamma}\) while no interfaces are sliding, the interface \(a\) undergoes a loading rate of \(\dot{\mu}_{a}=K\dot{\gamma}h(n+a)(n-a+1)/(2N)\), which is an increasing function of \(n\). With this increased loading rate with \(n\), we thus expect the time between slips to typically decrease with \(n\) following \(T_{a}^{-1}\sim(n+a)(n-a+1)\). Second, for a given interface, a sliding event can occur on a layer below it. In that case, all the layers above the sliding one will undergo the same position shift, leading to a relaxation of their corresponding springs. Then, even without actual slip on this given interface, slips occurring below will induce drops in the shear force of this layer. In Fig. 5b, we plot the probability distribution function of \(T_{a}\) for \(a=1\) and increasing \(n\). Indeed, we find that the peaked distribution for \(n=1\) shifts to lower values of \(T_{a}\) with increasing \(n\) as the overall loading rate increases. Moreover, secondary peaks in \(P(T_{a})\) start to appear, which we interpret to be due to slip events on the other layers. These observations are robust for the other interfaces. The change in loading rate with \(n\), together with the complex loading path, allows our experimental system to probe a broad distribution of \(T_{a}\) on all its interfaces.
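As a quick consistency check of this scaling, the following minimal sketch evaluates the loading-rate formula; the parameter values are illustrative placeholders, not the experimental ones.

```python
import numpy as np

# Illustrative placeholder values (not the experimental parameters).
K = 1.0e4            # driving spring stiffness
h = 1.0e-2           # slab height
gamma_dot = 1.0e-4   # imposed shear rate
N = 1.0e3            # normal load normalizing forces into friction coefficients

def loading_rate(a: int, n: int) -> float:
    """Loading rate mu_dot_a of interface a in a stack of n active interfaces,
    while no interface slides: K*gamma_dot*h*(n+a)*(n-a+1)/(2N)."""
    return K * gamma_dot * h * (n + a) * (n - a + 1) / (2 * N)

for n in (1, 2, 4):
    print(f"n={n}:", np.round([loading_rate(a, n) for a in range(1, n + 1)], 8))

# For n=1 this reduces to K*gamma_dot*h/N, the individually driven case, so the
# expected stick-slip amplitude after a waiting time T is loading_rate(1, 1) * T.
```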
_(II) Creep._ It is a known experimental fact that the macroscopic stress required for the onset of sliding, characterized by \(\mu_{s}\) (the 'static friction coefficient' in Amontons-Coulomb's terminology [11, 30]), depends on the duration \(T\) that the interface was static: \(\mu_{s}=B\ln T\) [17, 29, 30, 31], where the aging rate of the interface \(B\) is a constitutive parameter. Let us now consider \(\Delta\mu_{a}\) as a proxy for \(\mu_{s}\), assuming that a slip event unloads the interface to a well-defined and constant quantity (\(\mu_{d}\), the 'dynamic friction coefficient' in Amontons-Coulomb's terminology [11, 30]), as is supported by [29]. We expect to find, for each sliding interface \(a\) and over a wide range of \(T_{a}\), that the stick-slip amplitude follows the creep trend: \(\Delta\mu_{a}=B\ln T_{a}\). In Fig. 5c, we assess this expectation
Figure 4: Probability distribution functions of the stick slip amplitude \(P(\Delta\mu_{a})\) as a function of \(n\) for the different interfaces: (a) \(a=1\), (b) \(a=2\), (c) \(a=3\) and (d) \(a=4\).
experimentally by plotting \(\Delta\mu_{a}\) versus \(T_{a}\) on a semi-logarithmic scale. To capture the general trend, we bin \(T_{a}\) logarithmically, corresponding to the black markers with error bars in Fig. 5c. These averaged values of \(\Delta\mu_{a}\) indeed follow a linear trend in the semi-log plot, and we extract the slope \(B=0.053\pm 0.005\), which is of the same order of magnitude as the values of order \(10^{-2}\) measured in classical stop-and-go experiments [30], and as the value \(B=0.009\pm 0.001\) reported from a direct surface observation on PMMA at room temperature [40]. Creep then translates the large peak shifts and the emergence of new ones in the \(T_{a}\) distributions (Fig. 5b) into similar changes in the \(\Delta\mu_{a}\) distributions. The large fluctuations of \(\Delta\mu_{a}\) in Fig. 5c are then the combination of two effects: creep drives the long-time trend, while disorder dominates in the narrow time range.
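As an illustration of this binning-and-fit procedure, the sketch below runs it on synthetic events generated under the assumed creep law; in the actual analysis the arrays would be the measured \((T_{a},\Delta\mu_{a})\) pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the measured events: waiting times spanning roughly two
# decades and amplitudes following delta_mu = B*ln(T) plus disorder-induced scatter.
B_true = 0.05
T_a = 10 ** rng.uniform(0.5, 2.5, size=2000)             # waiting times (s)
dmu_a = B_true * np.log(T_a) + rng.normal(0.0, 0.02, T_a.size)

# Logarithmic binning of T_a, then a linear fit of delta_mu versus ln(T).
bins = np.logspace(np.log10(T_a.min()), np.log10(T_a.max()), 12)
idx = np.digitize(T_a, bins)
T_mean = np.array([T_a[idx == k].mean() for k in range(1, len(bins)) if np.any(idx == k)])
mu_mean = np.array([dmu_a[idx == k].mean() for k in range(1, len(bins)) if np.any(idx == k)])

B_fit, intercept = np.polyfit(np.log(T_mean), mu_mean, 1)
print(f"extracted aging rate B = {B_fit:.3f}")   # close to B_true by construction
```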
In Fig. 5d, we schematically represent the coupled role of (I), (II), and disorder, following the evolution of the interfacial stress with time, characterized by \(\mu_{a}\), starting from the last slip event. The layer slips when it reaches \(\mu_{a}=\mu_{s}(T_{a})\), whereby \(\mu_{s}\) is distributed in some way for fixed \(T_{a}\) because of disorder (illustrated as a red-shaded area, where for simplicity, we lump all fluctuations in the threshold to sliding) and increases logarithmically with time because of creep. For \(n=1\) (black line), \(\mu_{a}\) increases linearly at the same rate for all the events, thus exploring only a narrow region of \(T_{a}\) (the shaded red region due to disorder). In the case of multiple active interfaces (\(n>1\), green lines), \(\mu_{a}\) increases faster given that \(\dot{\mu}_{a}\) is an increasing function of \(n\); see point (I) above. In some cases, \(\mu_{a}\) directly reaches \(\mu_{s}\), resulting in a lower value of \(T_{a}\) and \(\Delta\mu_{a}\). However, if sliding events occur in one of the underneath layers, \(\mu_{a}\) will drop before linearly increasing again, delaying slip and thus increasing \(T_{a}\), and consequently \(\Delta\mu_{a}\), because of creep.
_Effective temperature._ During stick intervals, microscopic events occur on the interfaces, propagating elastic waves across the system [26]. As we increase the number of interfaces in the system, we can expect that the overall mechanical noise created by the microscopic events also increases. If we speculatively interpret this mechanical noise as an effective temperature, we would expect a change of aging rate \(B\) with \(n\). Let us define the aging rate for a single event as \(B_{a}\equiv\Delta\mu_{a}/\ln T_{a}\). Although the mean of \(P(B_{a})\) does not change with \(n\) (see SI "Aging rate distributions"), we do find that the width of the distribution \(P(B_{a})\) is an increasing function of \(n\) mainly for the lowermost interfaces (\(i\leq 2\)), as shown in Fig. 6a.
For the same interfaces (\(i\leq 2\)), we also observe distinctly different slip dynamics when \(n>1\). In particular, as \(n\) increases, we find that interfaces \(i=1\) and \(i=2\) are increasingly more subject to slow slip, defined as sliding significantly slower than the slip events (see SI "Smooth sliding" for precise definition). These events are not accompanied by a macroscopic stress drop but rather just lead to a lower \(\dot{F}>0\), see Fig. 3a. In Fig. 6b, we measure the proportion of slow slip compared to the total sliding distance in an experiment, \(R_{\text{slow}}\). It is computed by comparing the total sliding distance to the total slip accumulated during individual slip events (\(\Delta R_{a}\)), such that \(R_{\text{slow}}\equiv 1-\int\Delta R_{a}/\Delta R\). Once a slow slip event starts, it appears to be stopped only by slip events occurring either on the same interface or on any other interface. An
Figure 5: (a) For an interface sliding individually (\(n=1\)), stick-slip amplitude \(\Delta\mu_{a}\) as a function of the waiting time since the last slip event \(T_{a}\). The black line corresponds to the prediction that the stress of the interface increases at a constant rate in between slip events. For multiple sliding interfaces (\(n\geq 1\)): (b) Probability distribution of the waiting time \(T_{a}\) between two consecutive slip events at interface \(a=1\), for increasing \(n\). (c) For each detected slip event, correlation between \(\Delta\mu_{a}\) and its corresponding waiting time \(T_{a}\) (semi-logarithmic scale). The black markers correspond to the mean values for a logarithmic binning of \(T_{a}\) (error bars indicate the standard deviation for that bin), and the dotted line a linear fit of \(\Delta\mu_{a}=B\ln T_{a}\), with \(B=0.053\pm 0.005\). (d) Schematic of the proposed mechanism leading to multimodal and wider distributions of \(T_{a}\) (and thus \(\Delta\mu_{a}\)) as \(n\) increases.
Figure 6: As a function of \(n\), for each interface \(a\) (different color and marker, see legend): (a) Standard deviation of the distribution of the aging rate \(B_{a}\equiv\Delta\mu_{a}/\ln T_{a}\) of individual events, normalized by that quantity for \(n=1\). (b) Fraction of slip that corresponds to slow slip.
increase in the effective temperature of the interface with \(n\) could also act as a potential destabilization factor of the contacts at the interfaces, increasing the occurrence of slow slips with \(n\).
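For reference, the slow-slip fraction \(R_{\text{slow}}\) defined above amounts to simple bookkeeping per interface; a minimal sketch with hypothetical variable names:

```python
def slow_slip_fraction(delta_R_events, total_slip):
    """R_slow = 1 - (slip accumulated during detected slip events) / (total sliding distance).

    delta_R_events: displacements Delta R_a of the detected slip events on interface a;
    total_slip: total relative displacement of that interface over the experiment.
    """
    return 1.0 - sum(delta_R_events) / total_slip

print(slow_slip_fraction([0.10, 0.20, 0.15], 0.60))  # -> 0.25
```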
## Discussion and conclusion
### Summary
We have explored the stick-slip response of a system with multiple interfaces by proposing a model system comprising \(n\) vertically stacked slabs, each connected to a lever whose rotation is imposed. The interfaces were driven in quasistatic (homogeneous) shear. We proposed a dimensionless quantity \(\Phi\) as the ratio between the driving stiffness and the elastic shear stiffness of the slabs. We have argued and demonstrated numerically that the system displays synchronization if \(\Phi\) is sufficiently large (\(\Phi\gtrsim 10^{-3}\)). In that case, the system acts close to a single frictional interface with effective properties. If \(\Phi\) is small (\(\Phi\sim 10^{-6}\)), interfaces slip one by one, as also confirmed experimentally.
We expect non-trivial collective effects with increasing \(n\) only in the low-\(\Phi\) limit, which we addressed through experiments. In the numerics, the stick-slip amplitudes of the interfaces \(\Delta\mu_{a}\) display a distribution with finite width because of statistical fluctuations of the interfaces, but no measurable changes with \(n\). By contrast, we measured experimentally that the probability distribution of stick-slip amplitude \(\Delta\mu_{a}\) shows a general broadening with \(n\), with peaks shifting to lower values and secondary peaks appearing. The interfaces are coupled via the lever, exposing them to a complex loading path, and leading to a broad distribution of the waiting times \(T_{a}\) between two slip events on an interface. We find that \(T_{a}\) now spans two decades, such that the creep of the interfaces plays a crucial role in the broadening of \(\Delta\mu_{a}\). The complex distributions of \(\Delta\mu_{a}\) can then be interpreted as the combined effect of interface disorder and creep. For narrow waiting times \(T_{a}\), multiple slips explore the statistical fluctuations of the contacting interfaces, giving a distribution with finite width. In addition, this distribution follows a creep-induced general trend over widely distributed \(T_{a}\).
Furthermore, we observe that the aging rate variations and the fraction of slow slip on the bottom layers are increasing functions of \(n\). We suggest that these additional consequences of adding more layers to the stack might be evidence of an increase in an effective interface temperature due to the mechanical noise of microscopic events.
In conclusion, the relative rigidity of the drive against the layers dictates whether a stack of interfaces responds synchronously or not. When layers slide one by one, increasing their number leads to complex responses, making the prediction of the next slip more challenging.
### Limitations and outlook
It is pertinent to discuss some limitations of our model system and provide suggestions for future work.
_Stiffness ratio._ We have defined the rigid and compliant driving regimes, as characterized by the relative order-of-magnitude estimation of the respective stiffness ratio \(\Phi\). Identifying an equivalent of \(\Phi\) in systems with more intricate geometries, such as fault networks, might contribute to clarifying their dynamics and help slip predictions. Hence, a more systematic exploration of the response of stacks with varying \(\Phi\) would be of great interest.
However, we are currently restricted to a limited range of \(\Phi\). For our numerics, low values of \(\Phi\) are challenging due to a combination of the assumption of finite rotations and finite machine precision. To be able to continue sliding indefinitely, our model should be extended with the possibility to reset the local deformation along the frictional interface to a zero average while keeping an identical stress state. In contrast, high values of \(\Phi\) are challenging experimentally. For too high a driving stiffness, the motor/lever system can no longer be considered rigid, invalidating the relation between global (\(\Delta F\)) and local (\(\Delta\mu_{a}\)) force drops (Eq. (5)). Alternatively, slab materials with low shear stiffness tend to be adhesive [41], corresponding to a different class of frictional properties.
_Creep._ Our proposed model system allowed us to measure the aging rate \(B\) of the interfaces thanks to complex stick-slip sequences, without the need for stop-and-go experiments. However, while our measured value of \(B\) is comparable to previous experiments on PMMA [30; 40], it differs by a factor of five. Possible sources of differences are roughness, inter-realisation variations, and stress inhomogeneities. First, the PMMA plates used here have a much higher surface roughness than in [30; 40]. If our model is extended with thermal fluctuations, it will likely display creep, such that the relationship between creep and the distribution of barriers, linked to surface roughness, could be investigated. Second, we measure \(B\) on an ensemble of \(n=5\) interfaces. Between interfaces, \(B\) is estimated to differ by a factor of about two. Third, recent experimental observations find a relationship between the aging rate \(B\) and the applied shear load [42]. Our setup naturally imposes a broad range of shear loads on the interface. However, to measure the empirical law of [42], our setup would need to be augmented to also provide stress measurements per interface, by individually measuring the internal forces \(f_{i}\) of the driving springs. To study this effect numerically, on top of adding temperature to capture creep, the model would likely have to be made sensitive to pressure inhomogeneities that may arise from the normal load or
partial slip events.
_Slow slip._ The origin of the experimentally observed slow slip is unknown. A tempting hypothesis is that smooth sliding is the result of activated yielding events due to increasing mechanical noise on other interfaces. However, it is not clear why this interpretation would lead to slow slip occurring predominantly on the lowermost interfaces (which is qualitatively robust to changing the order of the slabs, see SI "Smooth sliding"). Furthermore, slow slip is not observed numerically. This could, however, in part be due to our small homogeneous background damping term (currently chosen to avoid non-physical periodic wave propagation). An alternative hypothesis is that, by increasing \(n\), the loading rate becomes sufficiently high to drive the interfaces away from the stick-slip regime. This second interpretation is consistent with slow slip being predominantly observed on the lowermost interfaces. Note that, unlike the experiments, our numerical model is driven infinitely slowly.
_After-shocks._ After-shocks appear if creep is added to the drive in a simple spring-block model [43, 44]. A creeping drive is often associated with the high temperatures in Earth's core [43]. Our experimental system displays slow sliding of 'deep' layers already at room temperature. A key question is whether after-shocks appear in the top layers of our system as well. Answering this question experimentally would require exposing microscopic events, which would likely involve studying acoustic emissions (for which PMMA may not be the optimal choice).
###### Acknowledgements.
The authors thank Mathias Lebihain and Federica Paglialunga for fruitful discussions, and Lebo Molefe for providing the microscope images of the surface of the slabs. S.P. acknowledges financial support from the Japanese Society for the Promotion of Science as a JSPS International Research Fellow. T.G. acknowledges support from the Swiss National Science Foundation (SNSF) by the SNSF Ambizione Grant PZ00P2_185843.
|
2309.08221 | Exploring the Potential of ChatGPT in Automated Code Refinement: An
Empirical Study | Code review is an essential activity for ensuring the quality and
maintainability of software projects. However, it is a time-consuming and often
error-prone task that can significantly impact the development process.
Recently, ChatGPT, a cutting-edge language model, has demonstrated impressive
performance in various natural language processing tasks, suggesting its
potential to automate code review processes. However, it is still unclear how
well ChatGPT performs in code review tasks. To fill this gap, in this paper, we
conduct the first empirical study to understand the capabilities of ChatGPT in
code review tasks, specifically focusing on automated code refinement based on
given code reviews. To conduct the study, we select the existing benchmark
CodeReview and construct a new code review dataset with high quality. We use
CodeReviewer, a state-of-the-art code review tool, as a baseline for comparison
with ChatGPT. Our results show that ChatGPT outperforms CodeReviewer in code
refinement tasks. Specifically, our results show that ChatGPT achieves higher
EM and BLEU scores of 22.78 and 76.44 respectively, while the state-of-the-art
method achieves only 15.50 and 62.88 on a high-quality code review dataset. We
further identify the root causes for ChatGPT's underperformance and propose
several strategies to mitigate these challenges. Our study provides insights
into the potential of ChatGPT in automating the code review process, and
highlights the potential research directions. | Qi Guo, Junming Cao, Xiaofei Xie, Shangqing Liu, Xiaohong Li, Bihuan Chen, Xin Peng | 2023-09-15T07:41:33Z | http://arxiv.org/abs/2309.08221v1 | # Exploring the Potential of ChatGPT in Automated Code Refinement: An Empirical Study
###### Abstract.
Code review is an essential activity for ensuring the quality and maintainability of software projects. However, it is a time-consuming and often error-prone task that can significantly impact the development process. Recently, ChatGPT, a cutting-edge language model, has demonstrated impressive performance in various natural language processing tasks, suggesting its potential to automate code review processes. However, it is still unclear how well ChatGPT performs in code review tasks. To fill this gap, in this paper, we conduct the first empirical study to understand the capabilities of ChatGPT in code review tasks, specifically focusing on automated code refinement based on given code reviews. To conduct the study, we select the existing benchmark CodeReview and construct a new code review dataset with high quality. We use CodeReviewer, a state-of-the-art code review tool, as a baseline for comparison with ChatGPT. Our results show that ChatGPT outperforms CodeReviewer in code refinement tasks. Specifically, our results show that ChatGPT achieves higher EM and BLEU scores of 22.78 and 76.44 respectively, while the state-of-the-art method achieves only 15.50 and 62.88 on a high-quality code review dataset. We further identify the root causes for ChatGPT's underperformance and propose several strategies to mitigate these challenges. Our study provides insights into the potential of ChatGPT in automating the code review process, and highlights the potential research directions.
Footnote †: \({}^{\dagger}\)Corresponding author
## 1. Introduction
Code review is a software quality assurance activity in software development and maintenance, which involves the systematic examination of source code to identify and rectify errors, improve code quality, and ensure compliance with coding standards. The code review process typically consists of writing code reviews and refining code based on the review comments received, with the ultimate goal of enhancing software quality. Code review has become an integral part of many software development projects, as it has been widely recognized for its effectiveness in improving the overall reliability and maintainability of software systems.
However, code review can be a time-consuming and resource-intensive process, requiring significant manual effort to review and refine code, especially in popular projects with numerous contributions. For example, Bosu et al. (Bosu et al., 2018) discovered that, on average, developers allocate approximately six hours per week preparing code for review or reviewing others' code. Moreover, the increasing complexity of modern software systems and the need for more frequent releases have made code review even more challenging. To address this issue, recent research (Zhou et al., 2019; Wang et al., 2019) has been conducted to automate various aspects of code review, such as generating review comments and refining code. In particular, the learning-based approaches (Zhou et al., 2019; Wang et al., 2019) that rely on Large Language Models (LLMs) such as CodeT5 (Wang et al., 2019) and CodeBERT (Chen et al., 2019) have demonstrated promising results in automating code review, reducing the manual effort required for code reviews.
Recently, OpenAI introduced ChatGPT (Chen et al., 2019), a revolutionary technology capable of transforming various sectors, including software engineering tasks. ChatGPT, an advanced version of GPT-3.5 (Zhou et al., 2019), is a fine-tuned model that excels at understanding and executing instructions. This capability distinguishes it from other pre-trained models and makes it a promising candidate for tasks that require prompts or instructions. The code refinement process, which is contingent upon code review and previous code versions, aligns well with the strengths of ChatGPT. Since human reviews can serve as prompts for code refinement, it is natural to investigate the potential of using ChatGPT for this task.
In this paper, we take the first step towards investigating the potential of ChatGPT for code refinement based on the given review comments. Note that although code-to-code refinement (i.e., ChatGPT directly generates refined code from original code) is also a research problem, there are still major concerns regarding the quality of the refined code (Wang et al., 2019). Therefore, we focus on refinement based on a given review in this paper, which is different from code-to-code refinement. Specifically, we focus on three main problems: 1) How does ChatGPT perform compared to the state-of-the-art methods? 2) In which cases does ChatGPT underperform, and what are the underlying reasons? 3) How can these challenges be mitigated? By answering these questions, we can gain a deeper understanding of the potential and challenges of ChatGPT for automated code refinement tasks.
To answer the above questions, we conduct comprehensive experiments to evaluate ChatGPT's performance in code refinement tasks. Considering the sensitivity of ChatGPT to different settings, we first design the experiment to evaluate its performance on two main factors, i.e., different prompts and temperatures. Then we select the optimal configuration and compare ChatGPT with state-of-the-art techniques (Zhu et al., 2017) on standard benchmarks (Kang et al., 2017). To evaluate the generalizability of different techniques, we create a new dataset by collecting code reviews from repositories not included in the standard benchmarks and recent code reviews from the same repositories included in the standard benchmarks. Based on the evaluation results, we perform an in-depth analysis of the root causes and devise preliminary strategies for mitigating different challenges.
Overall, the results provide valuable insights into the performance of ChatGPT in code refinement tasks. Our findings demonstrate that different prompts and temperature settings can have a significant impact of up to 5% and 15% on ChatGPT's Exact Match (EM) scores in code refinement tasks. Lower temperature settings yield better and more stable results, and describing the code review scenario in the prompt helps enhance ChatGPT's performance. Compared to the state-of-the-art model CodeReviewer, ChatGPT demonstrates better generalization capabilities in our newly generated dataset. Specifically, ChatGPT achieves EM and BLEU scores of 22.78 and 76.44, respectively, on the new dataset, while CodeReviewer only reaches 15.50 and 62.88 for EM and BLEU scores, respectively. However, we also found that ChatGPT struggles on tasks involving refining documentation and functionalities, mainly due to a lack of domain knowledge, unclear location, and unclear changes in the review comments. These limitations could potentially be resolved by improving review quality and using more advanced large language models such as GPT-4. Our study highlights the potential of ChatGPT in code refinement tasks and identifies important directions for future research.
In summary, this paper makes the following contributions:
* We conduct the first empirical study on evaluating ChatGPT's potential in code refinement tasks based on review comments.
* We analyze the challenges of ChatGPT in code refinement tasks and propose potential mitigation strategies, laying the groundwork for future research on better incorporating ChatGPT.
* We release a new dataset that contains high-quality code reviews, which could be useful for future research in this area.
## 2. Background
### Code Review Process
During the code review process, a contributor submits code changes to implement new features, refactor code, or fix bugs. When the contributor believes the code changes are ready for review and to be merged into the main branch, he or she initiates a pull request and invites reviewers to examine the changes. After reviewing the code changes, a reviewer may provide review comments in natural language, represented as \(R\). Based on these review comments, the contributor makes modifications to the original code \(C_{1}\) and submits the revised code \(C_{2}\). The code difference between \(C_{1}\) and \(C_{2}\) is denoted as \(D:C_{1}\to C_{2}\). It is worth noting that the above process represents only one review cycle, while a complete pull request may involve multiple rounds of review cycles. In this work, without loss of generality, we focus solely on the single-round scenario, where the goal is to automatically generate the revised code \(C_{2}\) with models, based on a given review comment \(R\) and the original submitted code \(C_{1}\) within each pull request.
### ChatGPT
ChatGPT (Kang et al., 2017) is a well-known example of large language models (LLMs), unveiled by OpenAI. ChatGPT was developed by employing a GPT-3.5 series model and training it using reinforcement learning from human feedback (RLHF) (Srivastava et al., 2014; Wang et al., 2015). Owing to the RLHF training process, ChatGPT has exhibited remarkable proficiency across multiple dimensions, encompassing the generation of high-quality responses to human inputs, the refusal of inappropriate queries, and the capacity for self-correction of prior errors based on subsequent dialogues.
Considering the characteristics of ChatGPT usage (Wang et al., 2015), it is natural to explore its potential in automating code reviews (Kang et al., 2017). Specifically, we propose a conversational approach to delegate the code refinement task to ChatGPT, where the original code and review comment are provided as a task input in a coherent linguistic structure. ChatGPT will return the revised code along with the reasoning behind the modifications, precisely aligning with the desired output of the task. The performance of ChatGPT in this approach depends significantly on two parameters: prompt and temperature. The prompt serves as a cue for ChatGPT to understand the intended task, while temperature can be used to control the level of creativity and diversity in responses of ChatGPT.
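As an illustration of this conversational usage, the sketch below issues a single refinement query through the OpenAI Python client available at the time of the study (the legacy `openai.ChatCompletion` interface); the prompt text is passed in as a parameter, and the exact wording used in the paper is not reproduced here.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def refine_code(prompt: str, temperature: float = 0.0) -> str:
    """Send one code-refinement request (old code and review embedded in `prompt`)
    to gpt-3.5-turbo and return the generated revision."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response["choices"][0]["message"]["content"]
```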
## 3. Study Design
### Overview and Research Questions
The main focus of this paper is to evaluate and understand the capabilities of ChatGPT in code refinement tasks. Fig. 1 shows the overview of this paper. To conduct our study, we collect existing benchmarks, including the CodeReview dataset, and state-of-the-art code refinement tools such as CodeReviewer (Kang et al., 2017), for comparisons. However, given the potential risk that this dataset could have been used to train ChatGPT and CodeReviewer, we create a new code review dataset (named CodeReview-New) consisting of two parts: new code reviews from the same repositories as the CodeReview dataset but collected more recently (i.e., CodeReview-NewTime), and code reviews from repositories using different languages that are not included in the CodeReview dataset (i.e., CodeReview-NewLanguage). We next introduce the research questions we aim to investigate and their relationships.
**RQ1 Impact of ChatGPT Settings: How do different prompt and temperature settings affect ChatGPT's performance in the code refinement task?** As the effectiveness of ChatGPT highly depends on the prompts and temperatures used, we first evaluate the impact of different settings of ChatGPT on code refinement. We designed five prompts based on whether a concrete scene is provided and whether detailed requirements are given. We also selected five temperature settings ranging from 0 to 2, with intervals of 0.5 (i.e., 0, 0.5, 1, 1.5 and 2.0). We evaluated and compared the effects of 25 combinations of these five prompts and five temperature settings based on the CodeReview dataset. Our evaluation
of ChatGPT in the subsequent research questions is based on the optimal prompt and temperature settings obtained from RQ1.
**RQ2 Effectiveness of ChatGPT on Code Refinement: How does ChatGPT's performance compare to state-of-the-art methods?** We aim to investigate the effectiveness of ChatGPT in code refinement tasks compared to state-of-the-art methods. To answer this question, we compare ChatGPT's performance with that of the state-of-the-art code refinement tool, CodeReviewer (Kang et al., 2019). We replicated and fine-tuned the CodeReviewer model and evaluated its performance alongside ChatGPT on both the existing CodeReview dataset and the new dataset CodeReview-New we created.
**RQ3 Strengths and Weaknesses of ChatGPT: In which cases does ChatGPT perform well or not?** To address this question, we conduct a qualitative study based on the results obtained from RQ2. Specifically, we manually annotate 200 samples each from the CodeReview and CodeReview-New datasets, labeling the quality of reviews (i.e., relevance and information levels) and code change types. We then evaluate the performance of ChatGPT on data with various review qualities and code change categories.
**RQ4 Root Causes and Potential Mitigation Strategies for Underperforming Cases: What are the underlying causes for the underperformance of ChatGPT, and how can we mitigate these challenges?** Based on the analysis of RQ3, we aim to further understand the root causes of ChatGPT's underperforming cases and how to address these limitations. We investigated the 206 cases from the 400 annotated samples in RQ3 where ChatGPT failed to make accurate predictions and summarized the categories of root causes. Based on the root causes, we attempt to study the impact of improving review quality and enhancing models in mitigating the issues of ChatGPT.
### Experiment Settings
#### 3.2.1. Dataset
To conduct the study, we utilize two datasets: the CodeReview dataset (Kang et al., 2019) and a new dataset created by us, denoted as CodeReview-New.
**CodeReview (CR)**: We first select CodeReview (Kang et al., 2019), which is a widely used dataset for code review tasks. This dataset was crawled from the top 10,000 repositories on GitHub based on their star ranking, and includes nine programming languages, namely C, C++, C#, Go, Java, JavaScript, PHP, Python, and Ruby. Repositories that do not have an explicit data redistribution license or have fewer than 1,500 pull requests (PRs) are filtered out. The dataset consists of review comments \(R\) associated with their corresponding code diff \(D:C_{1}\to C_{2}\). To ensure a high-quality dataset, samples with the same review comment associated with multiple code diffs or a single code diff associated with multiple comments are filtered out. Additionally, the dataset is divided into a pre-training dataset and multiple downstream task datasets, and we used the code refinement downstream task dataset in our study. This dataset comprises 829 repositories and 125,653 PRs. We follow the same partition method as CodeReviewer (Kang et al., 2019) for a fair comparison, and divide the dataset into training, validation, and test sets, with proportions of 85%, 7.5%, and 7.5%, respectively.
**CodeReview-New (CRN)**: Additionally, we create a new code review dataset, CodeReview-New, for two reasons: 1) we observe that there are some low-quality code review data in CodeReview, which could affect the comparisons between ChatGPT and the baseline CodeReviewer; 2) the data distribution in the CodeReview test data could be very similar to that in the pre-training and fine-tuning datasets, and may even have been used by the selected models (i.e., ChatGPT (Kang et al., 2019) and CodeT5 (Kang et al., 2019)). The new dataset is constructed to better evaluate the generalization capabilities of models. To address these two concerns, we design stricter filtering rules to remove low-quality reviews, and select code reviews that are unlikely to have been used in the pre-training process.
To ensure the quality of the CodeReview-New dataset, we implemented several strict rules based on our analysis of the quality issues present in CodeReview. Only code reviews that met these rules were retained in our dataset. Firstly, we ensured that the code changes are only about a single code hunk, which is necessary because the baseline CodeReviewer we select only accepts a single piece of code as input. Secondly, we filtered out changes that were unrelated to code, such as changes to README files. Finally, we ensured the relevance between the review comment \(R\) and the code changes \(D\) by collecting the original code piece \(C_{1}\) that contains the review comment \(R\).
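The sketch below illustrates how such rules could be applied to candidate records; the field names (`diff`, `file_path`, `old_code`, `commented_lines`) and the hunk-counting heuristic are assumptions for illustration, not the exact pipeline used to build CodeReview-New.

```python
NON_CODE_SUFFIXES = (".md", ".rst", ".txt")  # e.g., README and documentation files

def keep_sample(sample: dict) -> bool:
    """Return True if a candidate (code change, review) record passes the three rules."""
    # Rule 1: the change must consist of exactly one code hunk
    # (one "@@ -l,s +l,s @@" header in the unified diff).
    if sample["diff"].count("@@ -") != 1:
        return False
    # Rule 2: discard changes unrelated to code (e.g., README files).
    if sample["file_path"].lower().endswith(NON_CODE_SUFFIXES):
        return False
    # Rule 3: the original code piece C1 must contain the lines the review comments on.
    return sample["commented_lines"] in sample["old_code"]
```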
To prevent ChatGPT from using CodeReview-New during the pre-training process, we only collected data from January 1, 2022, onwards, as ChatGPT's training data only extends up to 2021 (Kang et al., 2022). Furthermore, CodeReview dataset also does not contain data after January 1, 2022, which makes it fair to compare CodeReviewer model and ChatGPT. In addition to the repositories included in CodeReview, we crawled code reviews from additional 1,400 repositories (top 200 repositories for each language based on their star ranking) using seven programming languages: Swift, Objective-C,
Figure 1. Overview of our study
Kotlin, SQL, Perl, Scala, and R, which are not included in CodeReview. In total, we selected 2,029 repositories, with 829 from the CodeReview repository and 1,200 new repositories with different programming languages.
After applying the filtering rules and selecting pull requests based on time, we only have 467 repositories out of the initial 2,029 repositories. The exclusion of the other 1,562 repositories can be attributed to two main reasons: first, we used stricter filtering rules compared to the construction of the CodeReview dataset, and second, we only selected pull requests created on or after January 1, 2022, which resulted in the exclusion of some projects that had few PRs during this period. As shown in Table 1, the dataset consists of samples from two types of repositories: 9,117 samples from 232 repositories that are also included in the CodeReview dataset, denoted as CodeReview-NewTime (CRNT); and 5,451 samples from 240 new repositories that use different programming languages from the repositories in the CodeReview dataset, denoted as CodeReview-NewLanguage (CRNL). Some languages, such as SQL and Perl, have a smaller amount of data due to fewer pull requests or a smaller number of reviews.
#### 3.2.2. Evaluation Models
To compare the performance of ChatGPT with the state-of-the-art tool, we chose CodeReviewer (Kotlin, 2017), which is a recent state-of-the-art method for code refinement. In this paper, we apply ChatGPT in a similar way to CodeReviewer, by generating revised code \(C_{2}\) based on reviews \(R\) and original code \(C_{1}\). We chose CodeReviewer over other methods as it is demonstrated to be more effective than other methods such as AutoTransform (Zhu et al., 2017) and Trans-Review (Zhu et al., 2017). Based on our evaluation results, we believe that ChatGPT can also surpass other models. Furthermore, our main focus is to understand the strengths and weaknesses of ChatGPT and identify potential improvement directions for future research on the code review process.
**CodeReviewer:** It utilizes a T5 model architecture comprising 12 Transformer encoder layers and 12 decoder layers, amounting to 223 million parameters (Zhu et al., 2017). The model is initialized using the weight parameters of CodeT5 (Zhu et al., 2017). Subsequently, the pre-training is carried out with three objectives: Diff Tag Prediction, Denoising Objective, and Review Comment Generation. In this study, we employed the same pre-trained CodeReviewer model and fine-tuned it using the \(CodeReview_{train}\) and \(CodeReview_{valid}\) datasets.
**ChatGPT:** We accessed and evaluated ChatGPT with the default GPT-3.5-Turbo model using the OpenAI API (Zhu et al., 2017). Unlike CodeReviewer, we did not fine-tune ChatGPT and only performed a zero-shot style evaluation. The ChatGPT API was accessed in March 2023, at a total cost of 150 USD. When comparing T5 and GPT-3.5, both models are large language models, but they have some differences. T5 is a general-purpose language model that uses a denoising autoencoder objective, which involves predicting masked or corrupted tokens in the input text. In contrast, ChatGPT is trained on a large dataset of conversational text, making it better at generating responses appropriate for use in a chatbot context. One key difference between the two models is that ChatGPT is fine-tuned with Reinforcement Learning from Human Feedback (RLHF), which uses human feedback in the training loop to make it more effective in generating appropriate and coherent responses in various contexts. During the evaluation, we designed different prompts based on the original code and code review to obtain outputs from ChatGPT. In RQ4, we also employed GPT-4 to mitigate the cases where GPT-3.5 produced incorrect answers. GPT-4 (Kotlin, 2017) is the latest multi-modal model designed to process both textual and visual inputs, generating textual outputs.
#### 3.2.3. Evaluation Metrics
Exact Match (EM) and BLEU are the two widely adopted metrics in previous literature (Kotlin, 2017; Zhu et al., 2017; Zhu et al., 2017). However, we found that ChatGPT tends to generate more content, including additional code or more explanations, which could largely affect the EM results and make the measurement less accurate. In the real world, a contributor can easily trim this additional information to obtain the correct code. Hence, we propose two new variants of EM and BLEU, called EM-trim and BLEU-trim, which measure the results more accurately.
**Exact Match (EM).** A prediction is considered correct by EM only if the predicted revised code is identical to the ground truth revised code. The EM value is computed based on the percentage of generated outputs that exactly match the ground truth.
**Exact Match Trim (EM-trim)** is a variant of the EM metric that is more lenient in its measurement. EM-trim first trims the generated output \(C_{2}\) (the result is denoted as \(C_{2}^{\prime}\)) before calculating the EM score. Specifically, if the first line of the ground truth text can be found in the generated output \(C_{2}\), we trim the generated content that precedes this line in \(C_{2}\). Similarly, if the last line of the ground truth text can be found in the generated output \(C_{2}\), we trim the generated content that follows this line in \(C_{2}\). After the trim process, the EM-trim score is calculated using the trimmed content \(C_{2}^{\prime}\) and the ground truth text. The EM-trim metric is more lenient than the traditional EM metric, as it ignores irrelevant additional information.
**BLEU** is a common metric used to measure the quality of generated text in neural translation models (Zhu et al., 2017). We use the BLEU-4 variant, which calculates the overlap of 4-grams between \(C_{2}\) and the ground truth (Kotlin, 2017; Zhu et al., 2017; Zhu et al., 2017). The range of BLEU-4 scores lies between 0% and 100%, with 100% indicating a perfect match. The average BLEU-4 score of all test samples serves as the overall evaluation result. Similar to EM-trim, we also design BLEU-trim that calculates the BLEU-4 score between the trimmed output \(C_{2}^{\prime}\) and the ground truth text.
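A minimal sketch of the trim-then-score procedure described above, assuming exact line matching for the trim step; BLEU-4 is computed here with NLTK's sentence-level implementation as a stand-in for whichever BLEU implementation is used in the paper.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def trim(generated: str, ground_truth: str) -> str:
    """Drop generated content before the first and after the last ground-truth line."""
    gen_lines = generated.splitlines()
    gt_lines = ground_truth.splitlines()
    first, last = gt_lines[0], gt_lines[-1]
    if first in gen_lines:
        gen_lines = gen_lines[gen_lines.index(first):]
    if last in gen_lines:
        gen_lines = gen_lines[:len(gen_lines) - gen_lines[::-1].index(last)]
    return "\n".join(gen_lines)

def em_trim(generated: str, ground_truth: str) -> bool:
    return trim(generated, ground_truth).strip() == ground_truth.strip()

def bleu4_trim(generated: str, ground_truth: str) -> float:
    hypothesis = trim(generated, ground_truth).split()
    references = [ground_truth.split()]
    return 100 * sentence_bleu(references, hypothesis,
                               weights=(0.25, 0.25, 0.25, 0.25),
                               smoothing_function=SmoothingFunction().method1)
```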
## 4. Evaluation Results
### RQ1 Impact of Prompts and Temperatures
#### 4.1.1. Setup
Prompts and temperatures are two crucial parameters that can significantly impact the performance of ChatGPT in code refinement tasks. To determine the optimal values for these parameters, we conducted an experiment to evaluate their impact on code refinement. Note that while temperatures and prompts are parameters utilized by ChatGPT, they are not applicable to CodeReviewer, which relies solely on the concatenation of the old code and the code review as its input.
Specifically, temperature is a parameter that controls the level of randomness and creativity in the generated output of ChatGPT. Higher temperature settings tend to produce more diverse and innovative responses, but with a higher risk of generating nonsensical or irrelevant output. To explore the effects of the temperature setting of ChatGPT, which ranges from 0 to 2, we chose five specific temperature values (i.e., 0, 0.5, 1.0, 1.5, and 2.0) due to the high cost of the ChatGPT API.
To select the prompts, we followed the established best practices (Beng et al., 2017; Chen et al., 2018), which suggest that prompts could include four types of elements, i.e., _Instruction, Context, Input Data_ and _Output Indicator_. We have tried prompts with various combinations of these four elements. During our preliminary exploration stage, we experimented with a total of 14 prompts. Due to budget constraints, we selected the 5 best-performing and representative prompts:
1. **Prompt 1 (P1): the simplest prompt.** We only provided the basic requirement of generating new code based on the old code and review, without additional description.
2. **Prompt 2 (P2): P1 + Scenario Description.** P2 was designed based on Prompt 1 but included a scenario description that asked ChatGPT to act as a developer and modify the code based on the review information from a pull request, where the review is written by the team leader.
3. **Prompt 3 (P3): P1 + Detailed Requirements.** P3 included detailed requirement information, such as keeping the original content and format of the code as much as possible and not completing any code snippets in the old code or modifying any code not mentioned in the review.
4. **Prompt 4 (P4): P1 + Concise Requirements.** Similar to P3, P4 also included requirement information, but in a more concise form.
5. **Prompt 5 (P5): P4 + Scenario Description.** P5 was a combination of Prompts 2 and 4, containing both the scenario description and the requirement information.
Specifically, the instruction, context, and output indicator in P1 are all kept as simple as possible. P2, building upon P1, provides a more detailed context description, while P3, also building upon P1, offers a more detailed output indicator (Chen et al., 2018). Figure 2 illustrates the construction strategies for Prompt 1 and Prompt 2. The details of the other prompts are available on our website (Chen et al., 2018).
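For concreteness, the sketch below approximates the construction strategy of Figure 2; the wording is illustrative only, since the exact prompt texts are published on the authors' website.

```python
def build_prompt_p1(old_code: str, review: str) -> str:
    # P1: simplest prompt - basic instruction plus the old code and the review.
    return ("Generate the new code based on the old code and the review comment.\n"
            f"Old code:\n{old_code}\n\nReview comment:\n{review}")

def build_prompt_p2(old_code: str, review: str) -> str:
    # P2: P1 plus a scenario description (a developer handling a pull-request
    # review written by the team leader).
    scenario = ("Act as a developer. Your team leader has reviewed your pull "
                "request and left the comment below on your code.\n")
    return scenario + build_prompt_p1(old_code, review)
```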
To evaluate the effectiveness of ChatGPT under different parameters, we accessed the ChatGPT API and performed code refinement on the CodeReview dataset. Due to the cost of running the ChatGPT API, we randomly selected 500 data entries from the test set of the CodeReview dataset to reduce the number of API calls. To account for the randomness of ChatGPT predictions, we repeated each setting ten times, i.e., making ten ChatGPT API requests on each sample under each setting. We obtained the average of the ten repetitions as the final results.
#### 4.1.2. Results
Table 2 displays the results of our evaluation of ChatGPT under different temperature and prompt settings. Values in parentheses represent standard deviations. Notably, the evaluation results indicate that setting the temperature to 0 achieves the best performance for each prompt. As the temperature increases, the performance of ChatGPT decreases significantly. For example, the temperature of 2.0 achieves the worst results. This phenomenon may be due to the fact that generating new code is a complex and precise task, and a high temperature can result in unstable and random results, which are more creative but less reliable. Furthermore, we investigated the results of the 500 sampled data points with the temperature set to 0 and prompt P2, and found that most of the results remain consistent. Specifically, 309 of the samples produced identical answers across all 10 runs, while another 110 produced only 2 distinct answers among the 10 runs. This finding further underscores the strong stability of setting the temperature to 0 for code generation tasks. Overall, the results suggest that using lower temperature settings tends to produce more stable and better output for code generation tasks.
Comparing the effects of different prompts under stable temperature settings (0, 0.5, and 1.0), we observed that P2 and P5 achieved significantly better results than others. Considering the comparative results between P1 and P2, as well as the results between P4 and P5, we can infer that the inclusion of additional scenario descriptions is beneficial in improving the understanding and performance of ChatGPT. Furthermore, we noticed that P3 performed worse than P4, despite both prompts containing more requirement information. Sometimes, P3 even performed worse than the simplest prompt, P1. For example, P1 achieved higher EM-trim scores than P3 in all three temperature settings, but P1 was generally worse than P4. This indicates that while providing additional requirement
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c} \hline \hline
 & \multicolumn{9}{c}{_CRNT_} & \multicolumn{7}{c}{_CRNL_} \\ \cline{2-17}
Language & Ruby & Go & Py & C\# & JS & C++ & Java & C & PHP & Swift & Obj-C & Kt & SQL & PL & Scala & R \\ \hline
\#Samples & 377 & 2,843 & 2,115 & 703 & 427 & 700 & 1,194 & 335 & 423 & 864 & 81 & 1,932 & 96 & 116 & 1,682 & 680 \\ \hline
Total & \multicolumn{9}{c}{9,117} & \multicolumn{7}{c}{5,451} \\ \hline \hline
\end{tabular}
\end{table}
Table 1. The statistics of CodeReview-New dataset.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{
\begin{tabular}{c} P\({}_{k}\) \\ \end{tabular} } & \multicolumn{6}{c}{Temperature\({}_{w}\)=0} & \multicolumn{6}{c}{Temperature\({}_{w}\)=1.0} & \multicolumn{6}{c}{Temperature\({}_{w}\)=1.5} & \multicolumn{6}{c}{Temperature\({}_{w}\)=2.0} & \multicolumn{6}{c}{Avg (Tem.5,1.5)} \\ \cline{2-13} & EM-T & BLEU-T & BLEU-T & & EM-T & BLEU-T & BLEU-T & BLEU-T & EART-T & BLEU-T & EART-T & BLEU-T & BLEU-T & EART-T & BLEU-T \\ \hline P1 & 12.92 (0.22) & 73.58 (0.22) & 73.28 (0.34) & 72.82 (0.33) & 16.48 (0.77) & 7.125 (0.48) & 12.27 (1.63) & 64.42 (0.57) & 6.46 (0.57) & 5.69 (0.57) & 12.66 (1.21) & 16.57 & 70.54 \\ P2 & **21.48**(0.33) & **77.99** (0.27) & 19.76 (1.01) & 76.40 (0.95) & 16.66 (0.77) & 74.12 (0.29) & 11.69 (0.71) & 65.48 (0.10) & 3.59 (0.57) & 14.82 (0.24) & 17.40 & 73.37 \\ P3 & 16.40 (0.29) & 75.37 (0.17) & 15.76 (0.27) & 74.66 (0.41) & 13.02 (1.02) & 71.92 (1.33) & 9.06 (0.69) & 63.36 (0.85) & 3.39 (0.25) & 21.50 (0.37) & 13.56 & 71.33 \\ P4 & 19.22 (0.10) & 75.59 (0.16) & 18.62 (0.59) & 76.48 (0.42) & 16.98 (0.36) & 72.66 (0.81) & 11.83 (0.77) & 65.20 (0.22) & 6.39 (0.24) & 25.21 (0.93) & 16.66 & 72.06 \\ P5 & 21.16 (0.40) & 76.66 (0.29) & 19.93 (0.37) & 76.35 (0.43) & 16.29 (0.35) & 74.69 (0.78) & 10.48 (0.50) & 63.96 (1.08) & 1.78 (0.75) & 14.25 (0.29) & 17.11 & 72.92 \\ \hline Avg & 19.50 & 75.68 & 18.47 & 74.98 & 16.01 & 72.91 & 11.66 & 64.61 & 4.43 & 20.91 & 16.26 & 72.05 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Impact of different prompts and temperatures on performance of ChatGPT.
Figure 2. Construction strategies of Prompt 1 and Prompt 2
information could be helpful (compared to P1 and P4), overly complex information could harm the performance (P3). This could be because detailed requirement information is harder for ChatGPT to understand, leading to unstable results.
To investigate whether the findings of prompts and temperatures also hold across the entire dataset, we conducted an additional experiment. We randomly selected 1,000 data points from the training sets and validation sets of CodeReview dataset, and replicated the experiment. Due to budget constraints, we repeated the experiments for temperatures greater than 1.5 only twice, whereas for other temperature settings, we repeated them 10 times. The results, presented in Table 3, align closely with the findings in Table 2. Overall, both the EM and BLEU metrics demonstrate comparable performance to that on the test data, further reinforcing the consistent conclusions drawn concerning the influence of temperature and prompt settings as mentioned above.
Table 4 shows the p-values regarding EM-T and BLEU-T between P2 and the other prompts, obtained with a t-test (Wang et al., 2019). We can observe that, except for the EM-T p-value (0.5320) between P2 and P5, all p-values are less than 0.005. This implies that P2 significantly outperforms P1, P3, and P4 in terms of both EM-T and BLEU-T scores. As for P5, in terms of EM-T, there is no significant difference between P2 and P5. However, considering the BLEU-T values, P2 is significantly better than P5. Taking into account these factors, we finally selected P2 for conducting the experiments in this paper.
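For illustration, a significance test of this kind could be computed as follows; the per-run scores shown are placeholders, and whether the paper uses a paired or an independent two-sample t-test is not stated, so an independent one-sided test is assumed here.

```python
from scipy import stats

# Per-run EM-trim scores (10 repetitions each) for prompts P2 and P5 (placeholder values).
em_p2 = [21.2, 21.5, 21.4, 21.6, 21.3, 21.5, 21.7, 21.4, 21.5, 21.6]
em_p5 = [21.0, 21.3, 20.9, 21.2, 21.1, 21.4, 21.2, 21.0, 21.3, 21.2]

t_stat, p_two_sided = stats.ttest_ind(em_p2, em_p5)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
print(f"t = {t_stat:.3f}, one-sided p-value (P2 superior) = {p_one_sided:.4f}")
```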
In the case of unstable temperature settings (1.5 and 2.0), we observed that the overall performance decreased. Note that we also tried a fine-grained temperature interval (i.e., 0, 0.1, 0.2,..., 0.9, 1.0) with P2; the results show a trend similar to that obtained with the larger interval of 0.5 and can be found on our website. However, we still noticed that P1 and P4 outperformed the other prompts in general. This could be because P1 and P4 are simpler and provide less information, resulting in more stable results under higher temperature settings. In contrast, prompts with more information may make ChatGPT more creative but also more unstable when set to a higher temperature.
**Answers to RQ1**: The configuration of prompts and temperatures has a significant impact on ChatGPT's performance on code refinement. In most cases, lower temperature settings tend to produce better and more stable results. Prompts involving concise scenario descriptions tend to produce better results.
### RQ2 Effectiveness of ChatGPT
Based on the best parameters from RQ1 (i.e., temperature = 0 and prompt 2), we then evaluate ChatGPT on the test datasets of CodeReview (CR) and CodeReview-New (CRN). Table 5 presents the comparative results between ChatGPT and CodeReviewer. The column #Samples shows the number of samples. CodeReview-NewTime (CRNT) and CodeReview-NewLanguage (CRNL) represent the results on the two new datasets we constructed (see Table 1), respectively, where CodeReview-NewTime refers to code reviews from the same repositories as CodeReview but collected more recently, and CodeReview-NewLanguage refers to code reviews from different repositories with new programming languages. Note that we have also evaluated the performance of ChatGPT on the training and validation datasets of CodeReview. The detailed results of these evaluations are available on our website (Chen et al., 2020) due to space limitations. The results demonstrate similar performance to that observed on the test dataset and support the conclusions drawn regarding the impact of temperature and prompt settings in RQ1. We can see that ChatGPT achieves stable results across different datasets. In particular, the evaluation results suggest that ChatGPT performs better on CodeReview-New compared to CodeReview due to the higher quality of reviews in CodeReview-New.
We further conducted an in-depth analysis to understand the lower performance of CodeReviewer compared to ChatGPT on the new dataset. We identified 2,283 cases from the new dataset where ChatGPT provided a correct response while CodeReviewer did not. We randomly selected 150 of them for the manual analysis. Through our analysis, we identified 4 main root causes:
* _(34) Inaccurate understanding of the review content_. We have observed that some code reviews contain unclear information, such as ambiguous location references, unclear changes, or the need for domain-specific knowledge, which is challenging for the CodeReviewer model to comprehend.
* _(62) Over deletion_. CodeReviewer model exhibits a tendency to inaccurately delete code snippets. Specifically, in 30 cases, the CodeReviewer model erroneously deleted correct code snippets that should have been preserved. Additionally, in 32 cases, the model deleted a significant portion of code snippets that required modifications, resulting in excessive deletions.
* _(10) Extra modification_. In some cases, CodeReviewer model may introduce unnecessary modifications to code snippets that do not require any changes.
* _(44) Hard to understand the ground truth provided in the code block_. Our analysis has revealed that, in some cases, reviewers have accurately suggested changes within the code block. However,
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Prompts & P1 & P3 & P4 & P5 \\ \hline EM-T P-value (P2 is superior) & 4.20E-06 & 7.69E-09 & 2.24E-06 & 0.5320 \\ BLEU-T P-value (P2 is superior) & 9.44E-09 & 2.30E-09 & 1.26E-07 & 0.0039 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Comparisons between Prompt 2 and other prompts.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Pr.} & \multicolumn{2}{c}{Temperature=0} & \multicolumn{2}{c}{Temperature=0.5} & \multicolumn{2}{c}{Temperature=1} & \multicolumn{2}{c}{Temperature=1.5} & \multicolumn{2}{c}{Temperature=2} \\ \cline{2-10} & EM-T & BLEU-T & EM-T & BLEU-T & EM-T & BLEU-T & EM-T & BLEU-T & EM-T & BLEU-T \\ \hline P1 & 18.1 & 70.77 & 18.28 & 70.44 & 16.15 & 68.91 & 14.08 & 63.21 & 2.31 & 6.93 \\ P2 & 21.55 & 74.21 & 20.23 & 73.52 & 17.99 & 71.42 & 13.45 & 61.94 & 1.26 & 3.57 \\ P3 & 16.21 & 71.2 & 16.15 & 71.32 & 13.97 & 69.14 & 10.4 & 62.87 & 1.59 & 4.34 \\ P4 & 18.28 & 71.45 & 17.82 & 71.32 & 16.44 & 68.82 & 12.36 & 62.48 & 1.82 & 5.02 \\ P5 & 20.11 & 76.17 & 19.48 & 75.62 & 17.7 & 72.88 & 9.94 & 51.69 & 0.37 & 2.62 \\ \hline Avg & 18.85 & 72.76 & 18.39 & 72.44 & 16.45 & 70.23 & 12.05 & 60.44 & 1.47 & 4.50 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Impact on trainset and validset.
CodeReviewer fails to recognize that the code within these blocks represents the ground truth, leading to incorrect modifications.
In summary, the main root cause appears to be the differing comprehension abilities of the models. The CodeReviewer model struggles to comprehend some unclear reviews, while ChatGPT demonstrates a stronger ability to capture the underlying semantics accurately. We have included examples that illustrate the root causes and the different performance of the models on our website (Beng et al., 2019).
Although ChatGPT outperforms CodeReviewer on the new dataset, the results are still not as good as expected, with an EM-trim score of only 22.78. This indicates that ChatGPT still requires significant improvement in code refinement tasks, motivating further exploration of its strengths and weaknesses in RQ3 and RQ4.
Furthermore, our observations indicate that ChatGPT often generates additional text that explains its code refinements. This extra text can offer both advantages and disadvantages. On one hand, it provides explanations that assist users in understanding the code refinements and assessing the reasonableness of the changes made. On the other hand, it may require users to make an additional effort to remove this extra text when submitting the refined code. However, we believe that automatically filtering out such extra text is relatively easy, since ChatGPT frequently encloses the code in code blocks, typically delimited by three backticks.
**Answers to RQ2**: Overall, ChatGPT demonstrates better generalization capabilities than CodeReviewer when applied to the unseen dataset. However, its effectiveness is still limited, with EM-trim and BLEU-trim scores of only 22.78 and 76.55, respectively.
### RQ3 Strengths and Weaknesses of ChatGPT
#### 4.3.1. Setup
To gain a deeper understanding of the strengths and weaknesses of ChatGPT, we conducted a qualitative analysis on the results of RQ2. Specifically, we randomly selected 400 samples, including 200 samples each from the CodeReview and CodeReview-New datasets, which achieves a 90% confidence level with a 5.8% confidence interval. We then manually annotated them along three dimensions: the relevance of the review comment to the code refinement (_Comment Relevance_), the information provided by the review comment (_Comment Information_), and the categories of code changes (_Code Change Category_). Our aim was to identify the strengths and weaknesses of ChatGPT based on these three dimensions.
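As a sanity check (not part of the original methodology), the reported figures are consistent with the standard normal-approximation margin of error for a proportion, assuming the most conservative choice p = 0.5:

```python
import math

def margin_of_error(n: int, z: float = 1.645, p: float = 0.5) -> float:
    """Normal-approximation margin of error for a proportion estimated from n samples
    (z = 1.645 for a 90% confidence level; p = 0.5 is the most conservative choice)."""
    return z * math.sqrt(p * (1.0 - p) / n)

print(f"{margin_of_error(200):.3f}")  # ~0.058, i.e. the reported 5.8% confidence interval
```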
We employed a rigorous annotation process for the manual study of ChatGPT on the selected samples. To facilitate the annotation process, we developed an annotation website that allowed annotators to view the review comment, the original code \(C_{1}\), the ground truth revised code \(C_{2}\), and the original pull request link on a single page. The annotators were able to refer to the code, discussions, and commits in the original pull request if necessary to determine the annotation categories. Two co-authors independently annotated the samples along the three dimensions. When discrepancies occurred between the annotations of the two co-authors, a third author was consulted to resolve the issue through discussion. Conflicts were resolved every 50 samples, and annotation standards were aligned over eight rounds to ensure consistency and accuracy in the annotation process. It took 14 person-days to perform the annotation in total. The final Cohen's Kappa coefficient (Kappa et al., 2016) for Comment Relevance, Comment Information, and Code Change Category was 0.675, 0.696, and 0.888, respectively, suggesting moderate, moderate, and strong agreement between the two annotators.
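Agreement values of this kind can be computed with scikit-learn's implementation of Cohen's Kappa; the label lists below are hypothetical placeholders for the two annotators' assignments on one dimension, not our actual annotations.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels (e.g., Comment Relevance levels 1-3) from two annotators
annotator_a = [3, 2, 3, 1, 2, 3, 3, 2]
annotator_b = [3, 2, 3, 2, 2, 3, 3, 1]

print(f"Cohen's Kappa: {cohen_kappa_score(annotator_a, annotator_b):.3f}")
```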
**Comment Relevance** measures the degree of relevance between the review comments and the code changes in the test dataset, reflecting the quality of the dataset. The relevance of the comments is divided into three levels:
* **Level 1 (Not):** There is no apparent relationship between the code change and the review comment.
* **Level 2 (Partial):** The suggestions in the review comment are partially implemented in the code change, or some refinement in the code change is not present in the suggestions of the comment.
* **Level 3 (Perfect):** The code changes strictly follow the review comment, and there is a clear correspondence between them. In other words, the suggestion of the review comment is fully implemented in the code change, and the code refinement is entirely contained within the review comment.
**Comment Information** measures the sufficiency and clarity of the instructions contained in the comment regarding the code change, which reflects the difficulty for the contributor or a model to refine the code. For example, a comment like "There are spaces missing" is more informative than "This function name does not describe well what it does." We followed the definition of comment information from (Kappa et al., 2016), and divided the comment information into three levels:
* **Level 1 (Vague Question)**: The review comment only gives a general direction for modification (e.g., "we should maintain the consistency of variable naming") without clear suggestions for changes.
Figure 3. Data quality of CodeReview and CodeReview-New.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Dataset & Tool & \#Samples & EM & EM-T & BLEU & BLEU-T \\ \hline \multirow{2}{*}{\(CR\)} & CodeReviewer & \multirow{2}{*}{13,104} & **32.49** & **32.55** & **83.39** & **83.50** \\ & ChatGPT & & 16.70 & 19.47 & 68.26 & 75.12 \\ \hline \multirow{2}{*}{\(CRN\)} & CodeReviewer & \multirow{2}{*}{14,568} & 14.84 & 15.50 & 62.25 & 62.88 \\ & ChatGPT & & **19.52** & **22.78** & **72.56** & **76.44** \\ \hline \multirow{2}{*}{\(CRNT\)} & CodeReviewer & \multirow{2}{*}{9,117} & **15.75** & 16.31 & 62.01 & 62.47 \\ & ChatGPT & & **19.60** & **22.44** & **72.90** & **76.55** \\ \hline \multirow{2}{*}{\(CRNL\)} & CodeReviewer & \multirow{2}{*}{5,451} & 13.21 & 14.05 & 62.67 & 63.61 \\ & ChatGPT & & **19.39** & **23.40** & **71.97** & **76.25** \\ \hline \hline \end{tabular}
\end{table}
Table 5. Quantitative evaluation results.
* **Level 2 (Vague Suggestion)**: The review comment provides specific suggestions for modification (e.g., "changing it with camel case style"), but does not directly specify the location of the code that should be modified.
* **Level 3 (Concrete Suggestion)**: The review comment includes explicit requests for adding or modifying code snippets (e.g., "changing the variable name 'testfile' to 'testFile'") or explicitly identifies code snippets to be removed.
**Code Change Category** is used to measure the intention of the code changes. We followed the taxonomy in (Zhou et al., 2017) and defined the categories based on our annotations. There are 4 major categories, including _Documentation Category_, _Feature Category_, _Refactoring Category_, and _Documentation-and-Code Category_.
* **Documentation Category** represents code changes that only add, modify, or remove documentation. Modifications according to conventions (Documentation-conventions) may also involve additions, modifications, or deletions, but we separated it for easier analysis of the unique challenges it poses to the model's prediction of revised code.
* **Feature Category** represents code changes in terms of functional logic, such as adding, modifying, or removing code.
* **Refactoring Category** refers to non-functional code refactoring, including renaming code entities (Refactoring-rename), swapping two code snippets (Refactoring-swap), and updating code based on coding standards (Refactoring-conventions).
* **Documentation-and-Code Category** represents code changes that include both documentation and code modifications.
Figure 3 presents the annotation results on the CodeReview and CodeReview-New datasets in terms of comment relevance and comment information. The results show that, compared to the CodeReview dataset, the CodeReview-New dataset, constructed with stricter filtering rules, has more samples with the _perfect_ relevance level (150 vs. 135) and fewer samples with the _not_ relevance level (21 vs. 36), indicating higher quality. Furthermore, the CodeReview-New dataset has fewer samples with the _vague suggestion_ level (40 vs. 59) and more samples with the _vague question_ level (65 vs. 46) than the CodeReview dataset.
Figure 4 illustrates the results of ChatGPT on different comment relevance and information levels. The figure highlights that ChatGPT performs the best when the comments are classified as _perfect_ relevance, outperforming both _partial_ and _not_ relevance levels. In addition, ChatGPT performs the best on reviews that contain _concrete suggestion_ information, while performing similarly for _vague suggestions_ and _vague questions_. The results imply that the quality of data significantly impacts ChatGPT's performance, as reviews with low relevance and low information do not provide enough context and information for ChatGPT to make accurate predictions.
Table 6 summarizes the results across different code change categories. It shows that ChatGPT performs best in the Refactor category with an EM-trim of 37.50% and a BLEU-trim of 83.58%, indicating that ChatGPT has a good understanding of how to perform code refactoring. However, the _Documentation-and-Code_ category is the weakest performing category, with an EM-trim of 0% and a BLEU-trim of 64.09%, which highlights the difficulty in making simultaneous changes to code and documentation while maintaining consistency. When comparing minor categories, ChatGPT is best at handling _remove_-type code changes, followed by the _modify_ and _add_ categories. Additionally, we observed that some of the predictions about updates and adds are actually correct, but do not strictly match the ground truth answers, which will be discussed in RQ4. The results also suggest that ChatGPT is skilled at updating code based on conventions, with EM-trim values of 23.08% and 44.12% for Documentation-convention and Refactor-convention samples, respectively, while the average EM-trim for the Documentation and Refactor categories is lower at 17.78% and 37.50%, respectively.
### RQ4 Root Causes Analysis and Mitigation
In RQ4, we aim to further understand the root causes of ChatGPT's underperforming cases and identify potential solutions for improvement. Specifically, we collected 206 underperforming cases that met
Figure 4. Qualitative results of ChatGPT on data with different review information levels.
Figure 5. An example of unclear location and the mitigation.
two criteria: 1) the reviews have _perfect relevance_, and 2) the EM-trim scores calculated based on the outputs of ChatGPT were 0.
#### 4.4.1. Root Cause Analysis
Table 7 presents the results of the root cause analysis, which includes two major categories of root causes: _inaccurate measurement_ and _incorrect prediction_.
**Inaccurate Measurement Category** refers to false positives where the predicted refinement by ChatGPT is correct based on our manual inspection, but the measurement metrics, such as EM or EM-trim, are low due to the strict matching. Four types of root causes were identified in this category: _Insignificant Omission (IO)_, where ChatGPT did not return unmodified code segments but correctly returned the modified parts; _Unexpected Grammar Fix (UGF)_, where ChatGPT fixed grammar errors in the documentation that were not present in the ground truth revised code; _Code Style Difference (CSD)_, where the predicted code by ChatGPT is semantically identical to the ground truth revised code, with differences only in whitespace, line breaks, and other code style aspects that do not affect code semantics, and the review comment did not explicitly prohibit the change of code style; and _Reasonable Improvement (RI)_, which refers to cases where ChatGPT's modifications are highly reasonable and represent an improvement over the original version.
**Incorrect Prediction Category** refers to true positive cases where ChatGPT produced incorrect answers compared to the ground truth revised code. We identified four types of root causes in this category. _Need Domain Knowledge (NDK)_ refers to cases where the review comment does not provide the necessary repository-related domain knowledge to complete the modification (e.g., "change this as the style in _anotherFile_"). _Unclear Location (UL)_ refers to cases where the review comment does not provide a specific location for the code to be modified. For example, in Figure 5, the review does not clearly indicate the location of the changes, and ChatGPT (GPT-3.5) erroneously modifies the function name as well. Although contributors can see the specific location of the review comment on the GitHub pull request interface, such information is not provided to ChatGPT, following the same settings as CodeReviewer (2018). _Unclear Changes (UC)_ refers to cases where the review comment has a lower information level, causing ChatGPT to be unable to determine the specific modifications needed, resulting in underperformance. For example, in Figure 6, ChatGPT (GPT-3.5) mistakenly assumes that the review suggests returning the result of "data.apply..." to data itself due to the vague comment. _Model Fallacy (MF)_ refers to cases where the review is accurate and clear from a human perspective, yet ChatGPT fails to handle them correctly. It suggests that the observed issues are more likely to be inherent to the model itself rather than solely stemming from the quality of the review. As an illustration, in Figure 7, ChatGPT (GPT-3.5) mistakenly believes that the review suggests changing default(1) to default(false).
As presented in Table 7, 42 (20.39%) of the underperforming cases were caused by inaccurate EM measurement. For the remaining 164 (79.61%) cases where ChatGPT outputs incorrect answers, the majority, 107 cases (51.94%), were caused by the lack of domain knowledge required to complete the modification. Another 32 cases (15.53%) were due to unclear location information in the review comment, 11 cases (5.34%) were caused by unclear instructions provided in the review comments, and the remaining 14 cases (6.80%) were due to model fallacy.
#### 4.4.2. Mitigation Strategies
We further investigated potential mitigations to improve ChatGPT on the underperforming cases in the _Incorrect Prediction_ category, excluding the _Need Domain Knowledge_ cases, since they require additional repository-specific information beyond the review itself. In general, mitigation can be conducted from two main directions: _improving the quality of review comments_ and _enhancing the models_ used for code refinement. Improving the review quality can be achieved through two avenues: _designing best practices for reviewers to provide high-quality reviews_ and _developing
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline & Doc-add & Doc-rem & Doc-mod & Doc-con & Feat-add & Feat-rem & Feat-mod & Ref-ren & Ref-swap & Ref-con & Doc\&Code \\ \hline \#Sample & 14 & 8 & 55 & 13 & 21 & 52 & 153 & 24 & 6 & 34 & 20 \\ EM-T & 0.00 & 50.00 & 16.36 & 23.08 & 4.76 & 23.08 & 19.61 & 29.17 & 33.33 & 44.12 & 0.00 \\ BLEU-T & 52.65 & 87.24 & 81.16 & 67.45 & 75.40 & 73.27 & 79.43 & 85.88 & 82.14 & 82.22 & 64.09 \\ \hline \hline \end{tabular}
\end{table}
Table 6. Results of ChatGPT on different code changes.
Figure 6. An example of unclear changes and the mitigation.
Figure 7. An example of model fallacy.
tools to assist in refining low-quality reviews_ if the reviewers cannot provide high-quality ones. In this study, we would like to investigate whether providing more precise reviews and using more advanced models can improve the performance of LLMs on the code refinement task. We leave the study of advanced mitigation strategies (e.g., automatic review refinement) as future work.
For the cases related to _Unclear Location_ and _Unclear Changes_, we identified three strategies for improving the quality of reviews and models: incorporating specific location information in the review (abbreviated as Loc.), providing more explicit review comments (abbreviated as Exp.), and using more advanced GPT-4 model in ChatGPT. When utilizing GPT-4, in addition to employing the original review directly (abbreviated as Dir.), we can also add specific location information or provide more explicit review comments if needed. We aim to study whether the strategies could mitigate these challenges of ChatGPT.
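In implementation terms, the two review-side strategies amount to augmenting the review text before it is sent to the model. The sketch below shows one hypothetical way to do so; the helper name, arguments, and wording are illustrative assumptions rather than the prompts used in our experiments.

```python
from typing import Optional

def augment_review(review: str, location_hint: Optional[str] = None,
                   explicit_rewrite: Optional[str] = None) -> str:
    """Attach optional mitigation information to a review comment.

    `location_hint` implements the Loc. strategy (where the change applies);
    `explicit_rewrite` implements the Exp. strategy (a more concrete restatement).
    """
    parts = [review]
    if location_hint:
        parts.append(f"The change should be applied at: {location_hint}.")
    if explicit_rewrite:
        parts.append(f"More concretely: {explicit_rewrite}")
    return " ".join(parts)

# Example for an 'Unclear Location' case
print(augment_review("Please use camel case for this variable.",
                     location_hint="the variable 'testfile' in the changed function"))
```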
Table 8 shows the results with different mitigation strategies. The rows UL and UC refer to the cases under _Unclear Location_ and _Unclear Changes_, respectively. The results show that GPT-3.5, combined with the corresponding mitigation techniques, can resolve 24/32 (75%) of _Unclear Location_ cases and 6/11 (54.54%) of _Unclear Changes_ cases. Simply switching to GPT-4, even without mitigation techniques, resolves nearly as many cases as GPT-3.5 with mitigation techniques. After applying the mitigation techniques, GPT-4 can resolve 31/32 (96.88%) of Unclear Location and 10/11 (90.91%) of Unclear Changes cases. Figure 5 and Figure 6 show two examples with different mitigations. By revising the original review (i.e., adding location information and making it more explicit), ChatGPT (GPT-3.5) can accurately refine the code. Another method is to use a more advanced LLM, i.e., GPT-4, which is capable of directly producing correct results without the need for review revision. In addition, we show part of the explanations generated by GPT-4, which are clear and reasonable. Moreover, unlike GPT-3.5, GPT-4 often asks the reviewer for specific modification locations or content when it cannot infer them from the review comment. This is particularly useful when applied in real-world scenarios, as it allows for iteratively helping the reviewer refine their review comment until the model can better understand it, ultimately improving the accuracy of the predicted code changes.
**Answers to RQ4**: The main root causes identified in our analysis were the lack of domain knowledge, unclear location, and unclear changes. Two potential directions for mitigating these issues were identified: improving the large language model, such as using GPT-4 instead of GPT-3.5, and improving the quality of reviews, such as providing more clear information.
## 5. Implications
Our study provides implications for both developers seeking to automate code refinement and researchers working in the code review field.
**Developers:** Our findings show that ChatGPT has the potential to significantly aid developers in code refinement tasks. However, the results also suggest that developers must configure language models like ChatGPT carefully, ensure review quality, and validate output. Our study highlights the impact of temperature and prompt configuration on performance, suggesting that using lower temperatures and concise descriptions with scenario information can lead to better and more stable results. Developers should therefore carefully configure these parameters before using LLMs for code refinement tasks. Regarding the reviewers who create the code reviews, we have found that clearer reviews significantly aid ChatGPT in understanding modification suggestions. We suggest that reviewers write more specific and detailed review comments. In particular, reviewers should be careful when using specific syntax (e.g., code blocks) that may be difficult for ChatGPT to understand. A safe practice is for reviewers to check the clarity of the review content with ChatGPT. For developers who utilize ChatGPT for automated code modification, we recommend conducting a careful manual review of ChatGPT's results. Especially for modifications requiring strong domain knowledge or cases where the review information is ambiguous, it is important to verify whether ChatGPT correctly understands the reviewer's intent and to check for any unnecessary modifications or deletions made by ChatGPT. One possible way is to read ChatGPT's explanation carefully to check whether the model understands the review well. Furthermore, we recommend that users choose more advanced models if possible, such as GPT-4, which offer enhanced understanding capabilities.
**Researchers:** Our study demonstrates that ChatGPT achieves promising results but still has room for improvement. Specifically, we identify some root causes of the underperformance of ChatGPT and propose some strategies to mitigate these challenges. These findings provide important guidance for future research in improving the performance of LLMs and enhancing the quality of code reviews. Potential research directions include automatic generation of high-quality reviews, review refinement, and automatic low-quality review detection and filtering. Furthermore, our study highlights the limitations of existing metrics such as EM and BLEU, suggesting the need for more accurate and reliable metrics for evaluating the results of language models in code refinement tasks.
## 6. Threats to Validity
The selected baseline model and benchmark could be a threat to the validity of our results. We addressed this by selecting a state-of-the-art method as the reference and creating a new test dataset, _CRN_, with stricter filtering rules. The randomness of ChatGPT predictions is another potential threat to the validity of our results. To mitigate this, we ran each setting ten times in RQ1, which provided us with more reliable and stable results. In RQ2, we did not run multiple times due to the high cost of accessing the ChatGPT API. The prompt settings we used for ChatGPT could be a threat, as there may be
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Inaccurate Measurement} & \multicolumn{3}{c}{Incorrect Prediction} \\ \cline{2-9} Type & IO & UGF & CSD & RI & NDK & UL & UC & MF \\ \hline \#Samples & 13 & 2 & 19 & 8 & 107 & 32 & 11 & 14 \\ \hline \hline \end{tabular}
\end{table}
Table 7. Results of root cause analysis.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Strategy} & \multirow{2}{*}{\#Samples} & \multicolumn{4}{c}{GPT-3.5} & \multicolumn{4}{c}{GPT-4} \\ \cline{3-8} & & Loc. & Exp. & Total & Dir. & Loc. & Exp. & Total \\ \hline UL & 32 & 24 & - & 24 & 22 & 9 & - & 31 \\ UC & 11 & - & 6 & 6 & 6 & - & 4 & 10 \\ \hline \hline \end{tabular}
\end{table}
Table 8. Results of mitigation strategies.
other optimal prompts for code refinement tasks. Moreover, the different wording of the prompts could also impact the results. We tried to address this by following existing best practices and selecting a range of prompts with varying levels of complexity and specificity, which allowed us to study which types of prompts worked best in different contexts. Another potential threat arises from the comparison between ChatGPT and CodeReviewer, which involves different settings. Specifically, in RQ1, we empirically determined the optimal parameters for temperature and prompts in ChatGPT. We assume that CodeReviewer also achieves its best performance with its hyper-parameter settings.
The randomness of the selection of samples for the manual annotation process could also be a threat. However, we believe that this would not affect the overall conclusions drawn from our results, especially on the performance of ChatGPT on different categories in RQ3. The subjective nature of human decisions in the manual annotation process is another potential threat to the validity of our results. To address this, we followed a rigorous annotation process with two co-authors independently annotating each sample and a third author resolving any inconsistencies or conflicts through discussion. Moreover, the final Cohen's Kappa coefficient indicates relatively high agreement between the two annotators.
## 7. Related Work
**Pre-trained Models for SE:** Large-scale pre-trained models have revolutionized the field of natural language processing (Dong et al., 2018; Chen et al., 2018), and their application in the software engineering domain has shown promising results (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). Currently, pre-trained model architectures are mainly divided into encoder-only, decoder-only, and encoder-decoder models (Chen et al., 2018; Chen et al., 2018).
Encoder-only models pre-train a bidirectional Transformer, which can access token information before and after the current token when training (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). Decoder-only models allow the model to access only the tokens preceding the current token during the training process (Chen et al., 2018; Chen et al., 2018). GPT-3 (Chen et al., 2018) also employs a decoder-only architecture and has a significantly larger parameter size (175 billion, 10x more than any previous LLM). Additionally, GPT-3.5-Turbo (Li et al., 2019), the default model of ChatGPT, adopts Reinforcement Learning from Human Feedback (RLHF) to enhance GPT-3's ability to understand instructions and generate content aligned with human expectations. CodeT5 (Chen et al., 2018) is a typical pre-trained model for code utilizing an encoder-decoder architecture. It adopts the T5 (Chen et al., 2018) model and considers crucial token type information from identifiers during pretraining. CommitBART (Chen et al., 2018) also employs an encoder-decoder architecture and is specially trained for commit representation. There are also some works focusing on exploring the learned program semantics of these pre-trained models in SE (Chen et al., 2018; Chen et al., 2018) and analyzing the robustness (Chen et al., 2018) and security (Li et al., 2019) of these models.
**Automating Code Review Activities:** Studies have presented evidence that developers spend a considerable amount of time on code review activities (Chen et al., 2018; Chen et al., 2018), both writing review comments for others' code and performing code changes according to others' comments (Chen et al., 2018; Chen et al., 2018). Consequently, numerous studies (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018) have been carried out on automating the code review (ACR) activities, emphasizing their significance and potential impact (Chen et al., 2018).
According to the stages of code review, prior studies on ACR can be categorized into three tasks (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018): (1) _Code Change Recommendation (Chen et al., 2018)_: Before the contributor submits the original code for review, the ACR model provides potential code changes that the reviewer might suggest. (2) _Review Comment Generation (Chen et al., 2018)_: After the contributor submits the code for review, the model provides possible review comments for the reviewer, serving as a draft for review comments. (3) _Code Refinement (Chen et al., 2018; Chen et al., 2018)_: After the reviewer provides review comments, the model suggests code changes for the contributor by considering both the review comments and submitted code. In this paper, we focus on the last task, _Code Refinement_, as it is the final and most crucial step in code review activities.
Tufano et al. (Tufano et al., 2018) introduced a Recurrent Neural Network (RNN) based Neural Machine Translation (NMT) model for the code refinement task. CodeReviewer (Chen et al., 2018) utilized the CodeT5 model and designed four pre-training tasks related to code review. Recently, Zhou et al. (Zhou et al., 2018) compared existing ACR techniques, including Trans-Review (Chen et al., 2018), AutoTransform (Chen et al., 2018), and T5-Review (Chen et al., 2018). They discovered that CodeT5 outperformed existing ACR techniques in both code change recommendation and code refinement tasks.
Although they evaluated large language models for code, such as CodeT5 and CodeBERT, ChatGPT is significantly different from these LLMs due to RLHF and the emergent abilities that come with a much larger number of parameters (Li et al., 2019), and thus needs further evaluation. Although ChatGPT has been evaluated on numerous NLP tasks (Chen et al., 2018) and several software engineering tasks (Chen et al., 2018; Chen et al., 2018), this paper presents, to the best of our knowledge, the first comprehensive empirical study exploring ChatGPT's capabilities in the code refinement task.
## 8. Conclusion
In this paper, we conduct an empirical study to investigate the potential of ChatGPT in automating code review tasks, with a focus on code refinement based on code reviews. We assess the impact of various ChatGPT configurations and examine its effectiveness on both standard code review benchmarks and a new dataset collected by us. Our findings highlight the promising potential of ChatGPT for code refinement, unveil the root causes of its underperformance, and suggest potential strategies to overcome these challenges.
## 9. Acknowledgment
This work was partially supported by the National Key R&D Project (2021YFF1201102), the National Key R&D Program of China (2021ZD 0112903), the National Natural Science Foundation of China (Grant No. 61872262), the National Research Foundation, Singapore, and the Cyber Security Agency under its National Cybersecurity R&D Programme (NCRP25-P04-TAICeN). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore and Cyber Security Agency of Singapore. |
2309.07112 | A statistical mechanics framework for constructing non-equilibrium
thermodynamic models | Far-from-equilibrium phenomena are critical to all natural and engineered
systems, and essential to biological processes responsible for life. For over a
century and a half, since Carnot, Clausius, Maxwell, Boltzmann, and Gibbs,
among many others, laid the foundation for our understanding of equilibrium
processes, scientists and engineers have dreamed of an analogous treatment of
non-equilibrium systems. But despite tremendous efforts, a universal theory of
non-equilibrium behavior akin to equilibrium statistical mechanics and
thermodynamics has evaded description. Several methodologies have proved their
ability to accurately describe complex non-equilibrium systems at the
macroscopic scale, but their accuracy and predictive capacity is predicated on
either phenomenological kinetic equations fit to microscopic data, or on
running concurrent simulations at the particle level. Instead, we provide a
framework for deriving stand-alone macroscopic thermodynamics models directly
from microscopic physics without fitting in overdamped Langevin systems. The
only necessary ingredient is a functional form for a parameterized, approximate
density of states, in analogy to the assumption of a uniform density of states
in the equilibrium microcanonical ensemble. We highlight this framework's
effectiveness by deriving analytical approximations for evolving mechanical and
thermodynamic quantities in a model of coiled-coil proteins and double stranded
DNA, thus producing, to the authors' knowledge, the first derivation of the
governing equations for a phase propagating system under general loading
conditions without appeal to phenomenology. The generality of our treatment
allows for application to any system described by Langevin dynamics with
arbitrary interaction energies and external driving, including colloidal
macromolecules, hydrogels, and biopolymers. | Travis Leadbetter, Prashant K. Purohit, Celia Reina | 2023-09-13T17:34:58Z | http://arxiv.org/abs/2309.07112v1 | # A statistical mechanics framework for constructing non-equilibrium thermodynamic models
###### Abstract
Far-from-equilibrium phenomena are critical to all natural and engineered systems, and essential to biological processes responsible for life. For over a century and a half, since Carnot, Clausius, Maxwell, Boltzmann, and Gibbs, among many others, laid the foundation for our understanding of equilibrium processes, scientists and engineers have dreamed of an analogous treatment of non-equilibrium systems. But despite tremendous efforts, a universal theory of non-equilibrium behavior akin to equilibrium statistical mechanics and thermodynamics has evaded description. Several methodologies have proved their ability to accurately describe complex non-equilibrium systems at the macroscopic scale, but their accuracy and predictive capacity is predicated on either phenomenological kinetic equations fit to microscopic data, or on running concurrent simulations at the particle level. Instead, we provide a framework for deriving stand-alone macroscopic thermodynamics models directly from microscopic physics without fitting in overdamped Langevin systems. The only necessary ingredient is a functional form for a parameterized, approximate density of states, in analogy to the assumption of a uniform density of states in the equilibrium microcanonical ensemble. We highlight this framework's effectiveness by deriving analytical approximations for evolving mechanical and thermodynamic quantities in a model of coiled-coil proteins and double stranded DNA, thus producing, to the authors' knowledge, the first derivation of the governing equations for a phase propagating system under general loading conditions without appeal to phenomenology. The generality of our treatment allows for application to any system described by Langevin dynamics with arbitrary interaction energies and external driving, including colloidal macromolecules, hydrogels, and biopolymers.
## Significance
The beautiful connection between statistical mechanics and equilibrium thermodynamics is one of the crowning achievements in modern physics. Significant efforts have extended this connection into the non-equilibrium regime. Impactful, and in some cases surprising, progress has been achieved at both the macroscopic and microscopic scales, but a key challenge of bridging these scales remains. In this work, we provide a framework for constructing macroscopic non-equilibrium thermodynamic models from microscopic physics without relying on phenomenology, fitting to data, or concurrent particle simulations. We demonstrate this methodology on a model of coiled-coil proteins and double stranded DNA, producing the first analytical approximations to the governing equations for a phase transforming system without phenomenological assumptions.
## Introduction
Understanding and predicting far-from-equilibrium behavior is of critical importance for advancing a wide range of research and technological areas including dynamic behavior of materials [18, 24], complex energy systems [15], as well as geological and living matter [3, 9]. Although our understanding of each of these diverse fields continues to grow, a universal theory of non-equilibrium processes has remained elusive. The past century, however, has seen numerous significant breakthroughs towards this ultimate goal, of which we detail only a few below. At the macroscopic scale, classical irreversible thermodynamics leverages the local equilibrium assumption to allow classical thermodynamic quantities to vary over space and time, enabling one to describe well known linear transport equations such as Fourier's and Fick's laws [25]. Extended irreversible thermodynamics further promotes the fluxes of these quantities to the level of independent variables in order to capture more general transport laws [20]. Further extensions to allow for arbitrary state variables (not just fluxes), or history dependence take the names of thermodynamics with internal variables (TIV) or rational thermodynamics, respectively [28, 29, 2, 47]. More recently, the General Equation for Non-Equilibrium Reversible-Irreversible Coupling (GENERIC) framework and Onsager's variational formalism have proven to be successful enhancements of the more classical methods [11, 34, 5, 30]. On the other hand, linear response theory and fluctuation dissipation relations constitute the first steps towards a theory of statistical physics away from equilibrium. In the last few decades, interest in microscopic far-from-equilibrium processes has flourished due to the unforeseen discovery of the Jarzynski equality and other fluctuation theorems, as well as the advent of stochastic thermodynamics [19, 4, 40, 42, 16], and the application of large deviation theory to statistical physics [8, 39, 31].
These advances have changed the way scientists view thermodynamics, entropy, and the second law particularly at small scales.
More specific to this work is the challenge of uniting scales. Given the success of the aforementioned macroscopic thermodynamic theories, how can one derive and inform the models within them using microscopic physics? Describing this connection constitutes the key challenge in formulating a unified far-from-equilibrium theory. As of yet, the GENERIC framework possesses the strongest microscopic foundation. Starting from a Hamiltonian system, one can either coarse grain using the projection operator formalism [36] or a statistical lack-of-fit optimization method [49, 38] in order to derive the GENERIC equations. However, these methods are either challenging to implement, analytically or numerically, or contain fitting parameters which must be approximated from data. Alternatively, one can begin from a special class of stochastic Markov processes and use fluctuation-dissipation relations or large deviation theory to the same effect [27, 32]. So far, numerical implementations of these methods have only been formulated for purely dissipative systems, with no reversible component.
For this work, we shall utilize the less stringent framework of TIV, but recover GENERIC in an important case utilized in the examples. We will show how to leverage a variational method proposed by Eyink [7] for evolving approximate non-equilibrium probability distributions to derive the governing equations of TIV for systems whose microscopic physics is well described by Langevin dynamics. Furthermore, in the approach proposed here, the variational parameters of the probability density are interpreted as macroscopic internal variables, with dynamical equations fully determined through the variational method. Once the approximate density is inserted into the stochastic thermodynamics framework, the equations for the classical macroscopic thermodynamics quantities including work rate, heat rate, and entropy production appear naturally, and possess the TIV structure. For example, the internal variables do not explicitly appear in the equation for the work rate, and the entropy production factors into a product of fluxes and their conjugate affinities, which themselves are given by the gradient of a non-equilibrium free energy. Moreover, we show that when the approximating density is assumed to be Gaussian, the internal variables obey a gradient flow dynamics with respect to the non-equilibrium free energy, and so the resulting rate of entropy production is guaranteed to be non-negative. This direct link between microscopic physics and TIV has not been elaborated elsewhere, and we refer to this method as stochastic thermodynamics with internal variables (STIV).
To illustrate and highlight the effectiveness of this method, we provide the results of two examples. The first is a paradigmatic example from stochastic thermodynamics: a single colloidal particle acted on by a linear external force, mimicking a macromolecule in an optical trap. It demonstrates all of the key features of the method while being simple enough to allow for comparison to exact solutions. The second example features a model system for studying phase transitions of bio-molecules, for example in coiled-coil proteins [22, 46] (depicted in Fig. 1) or double stranded DNA [10, 50]: a colloidal mass-spring-chain system with double-well interactions between neighboring masses. By comparing to Langevin simulations, we show that STIV not only produces accurate analytical approximations to relevant thermodynamic quantities, but also predicts the speed of a traveling phase front induced by external driving.
Figure 1: The stochastic thermodynamics with internal variables (STIV) framework proposed here provides kinetic and thermodynamic equations for a broad class of systems described by Langevin dynamics, including the coiled-coil protein depicted in these snapshots. Taken from molecular dynamics simulations, atomic level structures are depicted in (A) and (B), while the unfolding due to an externally applied load becomes clear in the secondary structures shown in (C) and (D). Vital for the coiled-coil protein’s function, we study the dynamics of this transition from folded to unfolded configuration as a demonstration of the power of the STIV framework. Reproduced from [46] Fig. 1 with permission from the Royal Society of Chemistry.
## Theory
### Stochastic thermodynamics
We begin by outlining the key ideas of stochastic thermodynamics which defines classical thermodynamic quantities at the trajectory level for systems obeying Langevin dynamics, such as those embedded in an aqueous solution. These quantities include work, heat flow, and entropy production among others, and these new definitions allow for an expanded study of far-from-equilibrium behavior at the level of individual, fluctuating trajectories. Stochastic thermodynamics is a highly active area of study, and has been developed far beyond what is detailed here, as we have limited our presentation to only what we need for introducing STIV. See [42] and the references therein for further details.
The paradigmatic example within stochastic thermodynamics is a colloidal particle in a viscous fluid at constant temperature, \(T\), acted on by an external driving (we present the theory for a single particle in one dimension as the generalization to many particles in multiple dimensions is straightforward). This system is well described by an overdamped Langevin equation, which can be written as a stochastic differential equation of the form
\[\mathrm{d}x(t)=-\frac{1}{\eta}\frac{\partial e}{\partial x}(x,\lambda)\,\mathrm{d}t+\sqrt{2d}\,\mathrm{d}b(t),\]
where \(x(t)\) denotes the particle's position at time \(t\in[t_{\mathrm{i}},t_{\mathrm{f}}]\), \(\eta\) is the drag coefficient of the particle in the fluid, \(-\frac{\partial e}{\partial x}(x,\lambda)\) is the force acting on the particle coming from a potential energy, \(e\), \(\lambda(t)\) is a prescribed external control protocol, \(d=\frac{1}{\eta\beta}\) is the diffusion coefficient, \(\beta=1/k_{B}T\) the inverse absolute temperature in energy units, and \(b(t)\) is a standard Brownian motion.
Given this system, stochastic thermodynamics enables one to define the internal energy, work, heat, and entropy at the level of the trajectory. Naturally, \(e(x(t),\lambda(t))\) defines the internal energy of the system. One does work on the system by changing \(e\) via the external control, \(\lambda\). Thus, the incremental work reads
\[\mathrm{d}w=\frac{\partial e}{\partial\lambda}\ \dot{\lambda}\,\mathrm{d}t. \tag{1}\]
Using the first law of thermodynamics, we conclude that the incremental heat flowing out of the system is
\[\mathrm{d}q=\mathrm{d}w-\mathrm{d}e.\]
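These trajectory-level definitions are straightforward to evaluate numerically. The sketch below is a minimal Euler-Maruyama discretization that accumulates the incremental work and heat along a single trajectory; the harmonic trap energy, parameter values, and the first-order treatment of the increments are illustrative assumptions (anticipating the example treated in the Results section).

```python
import numpy as np

def simulate_trajectory(e, de_dx, de_dl, lam, t_f, dt=1e-4, eta=1.0, beta=1.0, x0=0.0, seed=0):
    """Euler-Maruyama integration of dx = -(1/eta) de/dx dt + sqrt(2 d) db, d = 1/(eta*beta),
    accumulating the stochastic work (dw = de/dlambda dlambda) and heat (dq = dw - de)."""
    rng = np.random.default_rng(seed)
    d = 1.0 / (eta * beta)
    n_steps = int(t_f / dt)
    x, w, q = x0, 0.0, 0.0
    e_old = e(x, lam(0.0))
    for n in range(n_steps):
        t = n * dt
        dw = de_dl(x, lam(t)) * (lam(t + dt) - lam(t))  # work increment, Eq. 1
        x += -de_dx(x, lam(t)) / eta * dt + np.sqrt(2.0 * d * dt) * rng.standard_normal()
        e_new = e(x, lam(t + dt))
        w += dw
        q += dw - (e_new - e_old)                       # first law: dq = dw - de
        e_old = e_new
    return x, w, q

# Stand-in harmonic trap e(x, lambda) = (k/2)(lambda - x)^2, pulled at constant velocity
k = 1.0
x_end, work, heat = simulate_trajectory(
    e=lambda x, l: 0.5 * k * (l - x) ** 2,
    de_dx=lambda x, l: -k * (l - x),
    de_dl=lambda x, l: k * (l - x),
    lam=lambda t: t,
    t_f=1.0,
)
```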
An additional important quantity is the total entropy, \(s^{\mathrm{tot}}\). From the second law of thermodynamics, its macroscopic counterpart, \(S^{\mathrm{tot}}\) (to be defined), should be non-decreasing and describe the level of irreversibility of the trajectory. To that end, the change in total entropy is defined using the log of the (Radon-Nikodym) derivative of the probability of observing the given trajectory, \(\mathbb{P}[x(t)\mid\lambda]\), with respect to the probability of observing the reversed trajectory under the time reversed external protocol, \(\tilde{\mathbb{P}}[\tilde{x}(t)\mid\tilde{\lambda}]\)
\[\Delta s^{\mathrm{tot}}[x(t)]=k_{B}\log\!\left(\frac{\mathrm{d}\mathbb{P}[x(t )\mid\lambda]}{\mathrm{d}\tilde{\mathbb{P}}[\tilde{x}(t)\mid\tilde{\lambda}]}\right)\]
where \(\tilde{x}(t)=x(t_{\mathrm{f}}-t)\) and likewise for \(\tilde{\lambda}\). Upon taking the expectation with respect to all possible trajectories (and any probabilistic initial conditions),
\[\Delta S^{\mathrm{tot}}=\left\langle\Delta s^{\mathrm{tot}}\right\rangle_{ \mathrm{paths}}=\int\Delta s^{\mathrm{tot}}[x(t)]\,\mathrm{d}\mathbb{P}[x(t) \mid\lambda]\]
is recognized as \(k_{B}\) times the Kullback-Leibler divergence between the distributions of forward and backwards trajectories. As such, \(\Delta S^{\mathrm{tot}}\) must be non-negative. It is also useful to break up the total entropy change into the change in the entropy of the system,
\[\Delta s[x(t)]=-k_{B}\log\!\left(\frac{p(x(t_{\mathrm{f}}),t_{\mathrm{f}}\mid\lambda)}{p(x(t_{\mathrm{i}}),t_{\mathrm{i}}\mid\lambda)}\right)\!,\]
where \(p(x,t\mid\lambda)\) is the probability density of observing the particle at position \(x\) at time \(t\), and the change in the entropy of the medium
\[\Delta s^{\mathrm{m}}=\Delta s^{\mathrm{tot}}-\Delta s. \tag{2}\]
Finally, one defines the microscopic non-equilibrium free energy in terms of the potential and entropy as \(a^{\mathrm{neq}}=e-Ts\)[45]. Using the path integral representation of \(\mathbb{P}[x(t)\mid\lambda]\) and \(\tilde{\mathbb{P}}[\tilde{x}(t)\mid\tilde{\lambda}]\), one finds that the incremental heat dissipated into the medium equals the incremental entropy change in the medium \(T\mathrm{d}s^{\mathrm{m}}=\mathrm{d}q\)[41]. This allows one to relate the change in non-equilibrium free energy to the work done and the change in total entropy
\[\mathrm{d}a^{\mathrm{neq}} =\mathrm{d}e-T\mathrm{d}s\] \[=\mathrm{d}w-\mathrm{d}q-T\mathrm{d}s\] \[=\mathrm{d}w-T\mathrm{d}s^{\mathrm{tot}}. \tag{3}\]
As we saw with \(\Delta S^{\mathrm{tot}}\), each microscopic quantity has a macroscopic counterpart defined by taking the expectation with respect to all possible paths. Throughout, we use the convention that macroscopic (averaged) quantities are written in capital, and microscopic quantities are written in lower case, e.g., \(A^{\mathrm{neq}}=\left\langle a^{\mathrm{neq}}\right\rangle_{\mathrm{paths}}\).
### Thermodynamics with internal variables
Now we turn to the macroscopic description, and give a brief overview of Thermodynamics with internal variables (TIV). TIV has enjoyed decades of application as an important tool of study for irreversible processes in solids, fluids, granular media, and viscoelastic materials [35, 33, 43, 13, 6]. Originally formulated as an extension to the theory of irreversible processes, TIV posits that a non-equilibrium description without history dependence requires further state variables beyond the classical temperature, number of particles, and applied strain (in the canonical ensemble, for example) in order to determine the system's evolution [28, 17]. These additional variables, the internal variables, encode the effects of the microscopic degrees of freedom on the observable macrostate. Thus, the relevant state functions take both classical and internal variables as input. The flexibility of the theory is apparent from the wide range of material behavior it can describe. The challenge, however, is in selecting descriptive internal variables, and in defining their kinetic equations in a way which is consistent with microscopic physics. Here, we take on the latter challenge.
### Variational method of Eyink
The key mathematical tool we utilize for connecting TIV to stochastic thermodynamics is a variational method for approximating non-equilibrium systems laid out by Eyink [7]. This method generalizes the Rayleigh-Ritz variational method of quantum mechanics to non-Hermitian operators. The method assumes the system in question can be described by a probability density function governed by an equation of the form \(\frac{\partial}{\partial t}p=\mathcal{L}p\) (e.g., a Fokker-Planck equation associated with Langevin particle dynamics). Since the operator \(\mathcal{L}\) is not Hermitian, \(\mathcal{L}\neq\mathcal{L}^{\dagger}\), one must define a variational method over both probability densities \(p\) and test functions \(\psi\). Begin by defining the non-equilibrium action functional
\[\Gamma[\psi,p]=\int_{0}^{\infty}\int_{X}\psi(\frac{\partial}{\partial t}- \mathcal{L})p\,\mathrm{d}x\,\mathrm{d}t.\]
Under the constraint that
\[\int_{X}\psi\ p\,\mathrm{d}x\Big{|}_{t=\infty}=\int_{X}\psi\ p\,\mathrm{d}x \Big{|}_{t=0},\]
this action is stationary, \(\delta\Gamma[\psi^{*},p^{*}]=0\), if and only if \((\frac{\partial}{\partial t}-\mathcal{L})p^{*}=0\) and \((\frac{\partial}{\partial t}+\mathcal{L}^{\dagger})\psi^{*}=0\). By defining the non-equilibrium "Hamiltonian" \(\mathcal{H}[\psi,p]=\int_{X}\psi\ \mathcal{L}p\ dx\), one can recast the variational equation \(\delta\Gamma[\psi^{*},p^{*}]=0\) in Hamiltonian form
\[\frac{\partial}{\partial t}p^{*} =\frac{\delta}{\delta\psi}\mathcal{H}[\psi^{*},p^{*}] \tag{4}\] \[\frac{\partial}{\partial t}\psi^{*} =-\frac{\delta}{\delta p}\mathcal{H}[\psi^{*},p^{*}]. \tag{5}\]
As it stands, the variation is taken over two infinite dimensional function spaces, and as such, it is only possible to find exact solutions in a handful of systems. However, one can still make use of these dynamical equations to find a variational approximation to the true solution which lies within some fixed subspace. To do so, one begins by assuming the true density, \(p^{*}(x,t)\), and test function \(\psi^{*}(x,t)\), can be approximated by a parameterized density \(\hat{p}(x,\alpha(t))\) and test function \(\hat{\psi}(x,\alpha(t))\) respectively, so that all of the time dependence is captured by the variables \(\alpha(t)=(\alpha_{1}(t),...,\alpha_{N}(t))\). For example, a standard method for choosing a parameterization is to pick an exponential family [1], or specifically a collection of quasi-equilibrium distributions [49]. In this case, one selects a finite number of linearly independent functions of the state \(\{\phi_{i}(x)\}_{i=1}^{N}\) to serve as observables describing the system. The parameterized densities \(\hat{p}(x,\alpha(t))\) are defined as (for time dependent "natural" parameters \(\alpha(t)\))
\[\hat{p}(x,\alpha(t))=\exp(\sum_{i=1}^{N}\alpha_{i}(t)\phi_{i}(x)+\mathcal{F} (\alpha(t)))\]
where \(\mathcal{F}(\alpha)=-\log\!\Big(\int\exp\!\Big(\sum_{i=1}^{N}\alpha_{i}\phi_{i}(x)\Big)\,\mathrm{d}x\Big)\) is a log-normalizing constant. The primary reason for using this parameterization is that for each \(\alpha\), this \(\hat{p}(x,\alpha)\) has maximum Shannon entropy with respect to all other probability densities subject to the constraint that the averages \(\left\langle\phi_{i}(x)\right\rangle_{\hat{p}}\) take on prescribed values. In the quasi-equilibrium case, \(\phi_{1}(x)\) is almost always taken as the system energy, and hence \(\alpha_{1}(t)\) becomes \(\beta\).
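As a simple numerical illustration of this construction (with observables, grid, and parameter values chosen arbitrarily rather than taken from any particular model), such a parameterized density can be evaluated on a one-dimensional grid by computing the log-normalizing constant with quadrature:

```python
import numpy as np

def quasi_equilibrium_density(x, phis, alpha):
    """Evaluate p_hat(x, alpha) = exp( sum_i alpha_i phi_i(x) + F(alpha) ) on a 1-D grid x,
    with F(alpha) = -log( integral exp(sum_i alpha_i phi_i(x)) dx ) computed by quadrature."""
    exponent = sum(a * phi(x) for a, phi in zip(alpha, phis))
    log_integral = np.log(np.trapz(np.exp(exponent), x))  # equals -F(alpha)
    return np.exp(exponent - log_integral)

# Illustrative observables: phi_1(x) = x, phi_2(x) = -x^2
x_grid = np.linspace(-5.0, 5.0, 2001)
phis = [lambda x: x, lambda x: -x ** 2]
p_hat = quasi_equilibrium_density(x_grid, phis, alpha=[0.5, 1.0])
print(np.trapz(p_hat, x_grid))  # ~1.0, i.e. normalized
```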
Given any parameterization, quasi-equilibrium or otherwise, the dynamical equations Eq. 4 and Eq. 5 reduce to a coupled system of ordinary differential equations (ode)
\[\big{\{}\alpha_{i},\alpha_{j}\big{\}}\frac{\mathrm{d}\alpha_{j}}{\mathrm{d}t}= \frac{\partial\mathcal{H}}{\partial\alpha_{i}} \tag{6}\]
where
\[\big{\{}\alpha_{i},\alpha_{j}\big{\}}=\int_{X}\frac{\partial\hat{\psi}}{ \partial\alpha_{i}}\frac{\partial\hat{p}}{\partial\alpha_{j}}-\frac{\partial \hat{\psi}}{\partial\alpha_{j}}\frac{\partial\hat{p}}{\partial\alpha_{i}}\, \mathrm{d}x.\]
The solution to Eq. 6, \(\alpha^{*}(t)\), offers the best approximations to the true solution \(p^{*}(x,t)\approx\hat{p}(x,\alpha^{*}(t))\), \(\psi^{*}(x,t)\approx\hat{\psi}(x,\alpha^{*}(t))\), lying within the parameterized subspace.
### Stochastic thermodynamics with internal variables
Finally, we fuse stochastic thermodynamics with this variational framework to provide a general method for constructing TIV models. Stochastic thermodynamics provides the appropriate thermodynamic definitions, while the variational formalism of Eyink will allow us to derive dynamical equations for the internal variables consistent with the microscopic physics.
We return to the colloidal particle system with governing stochastic differential equation
\[\mathrm{d}x(t)=-\frac{1}{\eta}\frac{\partial e}{\partial x}(x,\lambda)\ \mathrm{d}t+\sqrt{2d}\ \mathrm{d}b(t).\]
If \(p(x,t\mid\lambda)\) is the probability density of observing the system in state \(x\) at time \(t\) given a prespecified external protocol, \(\lambda(t)\), then \(p(x,t\mid\lambda)\) obeys the Fokker-Planck equation
\[\frac{\partial p}{\partial t}=\mathcal{L}\ p=\frac{1}{\eta}\frac{\partial}{ \partial x}\cdot(\frac{\partial e}{\partial x}\ p)+d\Delta_{x}p.\]
When \(\lambda(t)\) is held constant, the true density tends towards the equilibrium Boltzmann distribution, \(p^{*}(x,t\mid\lambda)\propto\exp(-\beta e(x,\lambda))\). Away from equilibrium, \(p^{*}(x,t\mid\lambda)\) may be highly complex, and in that case we would like to find a low dimensional representation which captures the physical phenomena of interest. To do so, we choose a class of parameterized densities \(\hat{p}(x,\alpha)\) to use in the variational method of Eyink, keeping in mind that the variables \(\alpha(t)\) are to become the internal variables in the macroscopic description. This is in direct analogy with the assumption of a uniform probability density in the microcanonical ensemble, or the Maxwellian distribution in the canonical ensemble. Note, also that in keeping with ensembles in which volume or strain is controlled rather than force or stress, we assume no explicit dependence on the external protocol \(\lambda\) in \(\hat{p}(x,\alpha)\). This will prove necessary mathematically in what follows. Finally, we do not explicitly consider the dependence of \(\hat{p}\) on \(\beta\), as we have assumed that temperature is constant.
We next define the approximate entropy \(\hat{s}(x,\alpha)=-k_{B}\log(\hat{p}(x,\alpha))\) and use its derivatives with respect to the internal variables to define the test functions in the variational formalism
\[\hat{\psi}(x,\alpha,\gamma)=1+\gamma\cdot\frac{\partial\hat{s}}{\partial \alpha}.\]
Since the true solution to the adjoint equation \(\frac{\partial\hat{\psi}^{*}}{\partial t}=-\mathcal{L}^{\dagger}\psi^{*}\) is \(\psi^{*}\equiv\mathrm{const.}\), the variables \(\gamma\) serve as expansion coefficients about the true solution \(\psi^{*}\equiv 1\). In the SI Appendix, we show that they essentially function as dummy variables, as the variational solution fixes \(\gamma(t)\equiv 0\) for all time. Hence, the vector \(\alpha(t)\) will be the only relevant variable. Assuming this choice of density and test functions, the variational formalism of Eyink yields the dynamical equation
\[\left\langle\frac{\partial\hat{s}}{\partial\alpha}\left(\frac{\partial\hat{s}}{\partial\alpha}\right)^{T}\right\rangle_{\hat{p}}\cdot\dot{\alpha}=-k_{B}\left\langle\mathcal{L}^{\dagger}\frac{\partial\hat{s}}{\partial\alpha}\right\rangle_{\hat{p}} \tag{7}\]
where \(\langle g\rangle_{\hat{p}}=\int g(x)\hat{p}(x,\alpha)dx\) denotes averaging with respect to \(\hat{p}\). This equation reveals the utility of our choice of \(\hat{\psi}\). The matrix on the left hand side \(\mathbb{F}_{ij}=\left\langle\frac{\partial\hat{s}}{\partial\alpha_{i}}\frac{ \partial\hat{s}}{\partial\alpha_{j}}\right\rangle_{\hat{p}}\) is \(k_{B}^{2}\) times the Fisher information matrix of the density \(\hat{p}(x,\alpha)\)[49]. This matrix is always symmetric, and is positive definite so long as the functions \(\{\frac{\partial\hat{s}}{\partial\alpha_{i}}(x,\alpha)\}_{i=1}^{N}\) are linearly independent as functions of \(x\) for all \(\alpha\). Picking \(\alpha(0)\) such that \(\hat{p}(x,\alpha(0))\approx p^{*}(x,0\mid\lambda)\), and using Eq. 7 to solve for \(\alpha(t)\) gives us the variational solution for \(\hat{p}(x,\alpha(t))\approx p^{*}(x,t\mid\lambda)\) for all time.
Having approximated the density using the internal variables, we turn to stochastic thermodynamics to impose the thermodynamic structure. In order to make use of the approximate density, \(\hat{p}\), we simply use the stochastic thermodynamics definitions of thermodynamic quantities at the macroscale, but make the substitution \(p^{*}(x,t\mid\lambda)\rightarrow\hat{p}(x,\alpha(t))\). Following this rule, we generate the thermodynamic quantities as
\[\hat{E}(\alpha,\lambda) =\langle e\rangle_{\hat{p}}\] \[\hat{S}(\alpha) =-k_{B}\left\langle\log(\hat{p})\right\rangle_{\hat{p}}\] \[\hat{A}^{\mathrm{neq}}(\alpha,\lambda) =\hat{E}-T\hat{S}\] \[\frac{d}{dt}\hat{W}(\alpha,\lambda) =\left\langle\frac{\partial e}{\partial\lambda}\dot{\lambda}\right\rangle_{\hat{p}} \tag{8}\] \[T\frac{d}{dt}\hat{S}^{\mathrm{tot}}(\alpha,\lambda) =\frac{d}{dt}\hat{W}-\frac{d}{dt}\hat{A}^{\mathrm{neq}} \tag{9}\] \[\frac{d}{dt}\hat{S}^{\mathrm{m}}(\alpha,\lambda) =\frac{d}{dt}\hat{S}^{\mathrm{tot}}-\frac{d}{dt}\hat{S} \tag{10}\]
where Eq. 8, 9, and 10 are derived from Eq. 1, 3, and 2 respectively, as shown in the SI Appendix. Since we have assumed a constant bath temperature for the governing Langevin equation, we do not explicitly write the dependence of the quantities
above on \(\beta\). Recall, a key assumption is that the approximate density should be independent of \(\lambda\) for fixed \(\alpha\). Hence, the approximate entropy, \(\hat{S}\), is a function of \(\alpha\) alone. This means that the partial derivative with respect to \(\lambda\) can be factored out of the expectation in Eq. 8. Since \(\hat{S}\) does not depend on \(\lambda\), we may write
\[\frac{d}{dt}\hat{W}=\frac{\partial\hat{E}}{\partial\lambda}\dot{\lambda}=\frac{ \partial}{\partial\lambda}\left(\hat{E}-T\hat{S}\right)\dot{\lambda}=\frac{ \partial\hat{A}^{\text{neq}}}{\partial\lambda}\dot{\lambda}\equiv\hat{F}^{ \text{ex}}\dot{\lambda},\]
so that the approximate external force is given by the gradient of \(\hat{A}^{\text{neq}}\) with respect to the external protocol, \(\hat{F}^{\text{ex}}\equiv\frac{\partial\hat{A}^{\text{neq}}}{\partial\lambda}\). Moreover, Eq. 9 and Eq. 10 simplify to
\[T\frac{d}{dt}\hat{S}^{\text{tot}}=-\frac{\partial\hat{A}^{\text{neq}}}{ \partial\alpha}\dot{\alpha}\qquad\qquad\frac{d}{dt}\hat{Q}=-\frac{\partial \hat{E}}{\partial\alpha}\dot{\alpha}.\]
Thus, the approximate work rate and the approximate rate of entropy production of the medium are given by the derivatives of \(\hat{E}\) (with respect to \(\lambda\) and \(\alpha\), respectively), while the approximate work rate and the approximate rate of total entropy production are given by the corresponding derivatives of \(\hat{A}^{\text{neq}}\). In particular, the rate of total entropy production takes the form of a product of fluxes, \(\dot{\alpha}\), and affinities, \(\mathcal{A}_{\alpha}=-\frac{\partial\hat{A}^{\text{neq}}}{\partial\alpha}\). Likewise, the internal variables do not explicitly enter into the equation for the work rate, just as in TIV. Moreover, in the SI Appendix, we prove that for an arbitrary interaction energy \(e(x,\lambda)\), the internal variables obey the stronger GENERIC structure [37], following a gradient flow equation with respect to the non-equilibrium free energy, whenever the approximate probability density is assumed to be Gaussian. In this case, the internal variables are the mean and inverse covariance (\(\alpha=(\mu,\Sigma^{-1})\)) of the probability density of the state, \(x\in\mathbb{R}^{N}\). Symbolically, we define
\[\hat{p}(x,\mu,\Sigma^{-1})=\sqrt{\det\!\left(\frac{\Sigma^{-1}}{2\pi}\right) }\exp\!\left(-\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)\right)\!. \tag{11}\]
This choice of form for the approximate density is a standard choice in popular approximation methods including Gaussian phase packets [14, 12] and diffusive molecular dynamics [23, 26] primarily for its tractable nature.
As mentioned, the dynamics of \(\mu\) and \(\Sigma^{-1}\) are given in terms of gradients with respect to the non-equilibrium free energy
\[\dot{\mu}=-\frac{1}{\eta}\frac{\partial\hat{A}^{\text{neq}}}{\partial\mu}, \qquad\dot{\Sigma}^{-1}=-M(\Sigma^{-1}):\frac{\partial\hat{A}^{\text{neq}}}{ \partial\Sigma^{-1}} \tag{12}\]
for a positive semi-definite dissipation tensor \(M(\Sigma^{-1})\), and hence, the total rate of entropy production is guaranteed to be non-negative
\[T\frac{d}{dt}\hat{S}^{\text{tot}}=\frac{1}{\eta}\!\left\|\frac{\partial\hat{ A}^{\text{neq}}}{\partial\mu}\right\|^{2}+\frac{\partial\hat{A}^{\text{neq}}}{ \partial\Sigma^{-1}}:M:\frac{\partial\hat{A}^{\text{neq}}}{\partial\Sigma^{-1 }}. \tag{13}\]
Thus, we see that the thermodynamic structure emerges naturally by utilizing the variational method of Eyink within the context of stochastic thermodynamics, and that we are not forced to postulate phenomenological equations for \(\alpha(t)\). They emerge directly from the variational structure.
## Results
### A single colloidal particle
To illustrate the STIV framework we apply it to a toy model: an overdamped, colloidal particle acted on by an external force that is linear in the extension of a spring connected to the particle. Despite its simplicity, this model is often used to describe a molecule caught in an optical trap. In one dimension, the governing Langevin equation for the particle's position is given by \(\text{d}x=-\frac{1}{\eta}\frac{\partial e}{\partial x}(x,\lambda)\text{d}t+\sqrt{2d}\,\text{d}b\), where \(e(x,\lambda)=\frac{k}{2}(\lambda-x)^{2}\) is the energy of the spring or the trapping potential, and \(\lambda(t)\) is an arbitrary external protocol. The corresponding Fokker-Planck operator is \(\mathcal{L}\,p=\frac{1}{\eta}\frac{\partial}{\partial x}\left(\frac{\partial e}{\partial x}p\right)+d\frac{\partial^{2}p}{\partial x^{2}}\). The true solution is an Ornstein-Uhlenbeck (O.U.) process, thus providing an exactly solvable model for comparison [44]. Since the probability density of the O.U. process is Gaussian for all time (assuming a Gaussian initial distribution), we use a Gaussian approximate distribution with mean \(\mu\) and standard deviation \(\sigma\) as internal variables (Eq. 11 with \(\Sigma^{-1}=1/\sigma^{2}\)). It is straightforward to input this density into the variational formalism of Eyink and compute the dynamics. The details of the derivation are written out in the SI Appendix. The resulting dynamical equations recover those of the O.U. process
\[\dot{\mu}=-\frac{k}{\eta}(\mu-\lambda),\qquad\dot{\sigma}=-\frac{k}{\eta} \sigma\left(1-\frac{1}{k\beta\sigma^{2}}\right).\]
Now that we have the dynamics, we turn to computing the thermodynamics quantities. Of particular interest is the fact that the fluxes of the internal variables are linear in the affinities, \(-\frac{\partial\hat{A}^{\text{neq}}}{\partial\mu}=\eta\dot{\mu}\), \(-\frac{\partial\hat{A}^{\text{neq}}}{\partial\sigma}=\eta\dot{\sigma}\), hence ensuring a non-negative entropy production. We can also find the approximate work rate, heat rate, and rate of total entropy production explicitly
\[\frac{\text{d}}{\text{d}t}\hat{W}=\eta\dot{\mu}\dot{\lambda},\qquad\frac{\text{ d}}{\text{d}t}\hat{Q}=\eta\dot{\mu}^{2}-k\sigma\dot{\sigma},\qquad T\frac{\text{d}}{ \text{d}t}\hat{S}^{\text{tot}}=\eta\dot{\mu}^{2}+\eta\dot{\sigma}^{2}.\]
Although a toy system, this example highlights the fact that when the true solution to the governing PDE for the probability density lies in the subspace spanned by the trial density, the true solution is recovered and relevant thermodynamic quantities can be exactly computed via the non-equilibrium free energy, as can be seen in Fig. 2.
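To make the toy example concrete, the following is a minimal numerical sketch (not part of the original analysis) that integrates the STIV equations for \(\mu\) and \(\sigma\) with a forward-Euler step and evaluates the approximate work, heat, and total entropy-production rates given above; the parameter values and the linear protocol \(\lambda(t)=vt\) are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the text): spring constant k,
# drag coefficient eta, inverse temperature beta, and pulling speed v.
k, eta, beta, v = 1.0, 1.0, 1.0, 0.5
dt, t_end = 1e-3, 10.0
steps = int(t_end / dt)

mu, sigma = 0.0, 1.0 / np.sqrt(k * beta)        # start from the equilibrium width
work_rate = np.empty(steps)
heat_rate = np.empty(steps)
entropy_rate = np.empty(steps)                   # stores T * d/dt S_tot

for i in range(steps):
    lam = v * i * dt                             # external protocol lambda(t) = v t
    lam_dot = v
    mu_dot = -(k / eta) * (mu - lam)             # STIV equation for the mean
    sigma_dot = -(k / eta) * sigma * (1.0 - 1.0 / (k * beta * sigma**2))
    work_rate[i] = eta * mu_dot * lam_dot                     # d/dt W_hat
    heat_rate[i] = eta * mu_dot**2 - k * sigma * sigma_dot    # d/dt Q_hat
    entropy_rate[i] = eta * mu_dot**2 + eta * sigma_dot**2    # T d/dt S_tot_hat
    mu += dt * mu_dot
    sigma += dt * sigma_dot

print(f"final mu = {mu:.3f}, sigma = {sigma:.3f}, T dS_tot/dt = {entropy_rate[-1]:.3e}")
```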
### Double-well colloidal mass-spring-chain
For our primary example, we study a colloidal mass-spring-chain system with double-well interaction between masses. Depicted in the inset of Fig. 4 E, this model of phase front propagation in coiled-coil proteins and double stranded DNA contains several metastable configurations corresponding to the different springs occupying one of the two minima in the interaction energy, and exhibits phase transitions between them. A key test for the STIV framework is whether or not the phase can accurately be predicted, and more importantly, whether the kinetics and thermodynamics of phase transitions can be captured without phenomenological kinetic equations. An almost identical model to the one studied here is considered in [48], but in a Hamiltonian setting rather than as a colloidal system. There, the authors make use of the piece-wise linearity of the force, \(-\frac{\partial e}{\partial x}\), to derive an exact solution for the strain in the presence of a phase front traveling at constant velocity, and the kinetic relation for this phase front without the use of phenomenological assumptions. Our solution, on the other hand, is inherently approximate (though accurate), but does not depend on either the assumption of constant velocity of the phase front or the specific piece-wise linear form of the force. The choice of interaction potential is simply a matter of convenience, and the STIV method could be easily applied to quartic or other double-well interaction potentials.
We assume each spring has internal energy described by the following double well potential:
\[u(z)=\begin{cases}\frac{k_{1}}{2}(z+l_{1})^{2}&z\leq 0\\ \frac{k_{2}}{2}(z-l_{2})^{2}+h_{2}&z>0\end{cases}\]
where \(h_{2}\) is chosen so that \(u(z)\) is continuous (i.e., \(h_{2}=(k_{1}l_{1}^{2}-k_{2}l_{2}^{2})/2\)). For simplicity, we have placed one well on each side of the origin so that the transition point falls at \(z=0\). Letting \(x=(x_{1},...,x_{N})\) be the positions of the \(N\) interior masses, the total energy, given an external protocol \(\lambda\), is \(e(x,\lambda)=\sum_{i=1}^{N}u(x_{i}-x_{i-1})+u(\lambda-x_{N})\) where \(x_{0}\equiv 0\).
We begin by assuming that the positions of the masses can be well described using a multivariate Gaussian distribution, and set the internal variables to be the mean \(\mu\) and the inverse covariance \(\Sigma^{-1}\) as in Eq. 11. The exact form of the dynamical equations for the internal variables induced by the STIV framework can be found in the SI Appendix. As expected, the equations obey the gradient flow structure given by Eq. 12, where in this case we have \(M_{ij,kl}=\frac{1}{\eta}(\Sigma_{ik}^{-1}\Sigma_{jl}^{-2}+\Sigma_{ik}^{-2}\Sigma_{jl}^{-1}+\Sigma_{il}^{-1}\Sigma_{jk}^{-2}+\Sigma_{il}^{-2}\Sigma_{jk}^{-1})\). The rate of total entropy production, given by Eq. 13, is thus non-negative. It is interesting to note that the dynamical equations for \(\mu\) and \(\Sigma^{-1}\) are coupled through an approximation of the phase fraction of springs occupying the right well
\[\hat{\Phi}_{i}(t)\equiv\int_{-\infty}^{\infty}\mathds{1}_{(x_{i}-x_{i-1}>0)}\,\hat{p}(x,\mu(t),\Sigma^{-1}(t))\mathrm{d}x.\]
As an important special case, fixing the interaction parameters to produce a quadratic interaction, \(l_{1}=-l_{2}\) and \(k_{1}=k_{2}=k\), causes the dependence on \(\hat{\Phi}\) to drop out, and the equations from \(\mu\) and \(\Sigma^{-1}\) decouple.
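For a Gaussian \(\hat p\), each phase fraction above reduces to a Gaussian tail probability of the corresponding spring strain. A short sketch of this evaluation (the values of \(\mu\) and \(\Sigma\) below are purely illustrative, and only the interior springs are treated):

```python
import numpy as np
from scipy.stats import norm

def phase_fractions(mu, Sigma):
    """hat_Phi_i = P(x_i - x_{i-1} > 0) under N(mu, Sigma), with x_0 = 0."""
    N = len(mu)
    phi = np.empty(N)
    for i in range(N):
        d = np.zeros(N)
        d[i] = 1.0
        if i > 0:
            d[i - 1] = -1.0                    # spring strain x_i - x_{i-1}
        mean = d @ mu                          # mean of the spring strain
        std = np.sqrt(d @ Sigma @ d)           # its standard deviation
        phi[i] = norm.cdf(mean / std)          # probability of lying in the right well
    return phi

# Illustrative state of a 3-mass chain (values assumed for demonstration only).
mu = np.array([-0.8, -0.2, 0.5])
Sigma = 0.05 * np.eye(3) + 0.01 * np.ones((3, 3))
print(phase_fractions(mu, Sigma))
```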
In Fig. 3, we show a comparison of the probability densities produced by the STIV framework for a two mass system to those obtained from Langevin simulations of the governing stochastic differential equation. Although fine details of the
Figure 2: A comparison of the STIV method (black solid line) to Langevin simulations (red dashes, 100,000 simulations) for a single colloidal particle in a harmonic optical trap. (A) The mean mass position, \(\mu\approx\langle x\rangle\), as well as the external pulling protocol, \(\lambda(t)\), in blue. (B) The standard deviation, \(\sigma\approx\sqrt{\langle(x-\langle x\rangle)^{2}\rangle}\), of mass positions. (C) The external force on the optical trap. (D) The total rate of entropy production.
multimodal structure are missed (as is to be expected when using a Gaussian model), the size and location of the dominant region of non-zero probability is captured, making it possible to compute the relevant macroscopic thermodynamic quantities, as we discuss next.
Since the exact form of the true solution \(p^{*}(x,t\mid\lambda)\) is unknown, we compare the results of the framework to simulations of the Langevin dynamics of a system with 8 free masses in Fig. 4. Despite the fact that the true solution is multimodal due to the existence of several metastable configurations, it is clear that the approximations of the mean mass position (A), phase fraction (B), external force (\(\frac{\partial E}{\partial\lambda}\approx\frac{\partial\hat{E}}{\partial\lambda}=\frac{\partial\hat{A}^{\text{neq}}}{\partial\lambda}\)) (C), and total rate of entropy production (D) are all highly accurate. This holds true for a variety of pulling protocols including linear (1), sinusoidal (2), and a step displacement (3,4), as well as for symmetric (1,2,3) and asymmetric (4) interaction potentials. Returning to (B), we see that for a system with an initial configuration in which all the springs begin in the left well we can observe a propagating phase front as the springs, one by one, transition from the left to the right well. This transition is captured by the internal variable model with high accuracy, allowing one to directly approximate the velocity of the phase front. We note, however, that the quantitative accuracy of the method appears to hold most strongly in the case that the thermal energy is significantly larger or smaller than the scale of the energy barrier separating the two potential energy wells in the spring interaction. When the thermal energy and potential energy barriers are at the same scale, the true density of states is highly multimodal, and not well approximated by a multivariate Gaussian, see Movie S2. In this case, the STIV approximation captures the behavior of only the dominant mode. When the thermal energy is large relative to the barrier, the thermal vibrations cause the modes to collapse into a single "basin" which can be well approximated by the STIV density, see Movie S1. Finally, when the thermal energy is small, the true density is unimodal, and undergoes rapid jumps between the different energy minima. The Gaussian STIV density, again, becomes an effective choice for approximation.
The dynamical equations for the internal variables take the form of a discretized partial differential equation (PDE). Assuming we properly rescale the parameters of the interaction potential, the viscosity, and temperature so that the equilibrium system length, energy, entropy, and quasistatic viscous dissipation are independent of the number of masses (\(l_{i}=l_{i}^{0}/N\), \(k_{i}=Nk_{i}^{0}\), \(\eta=\eta^{0}/N\), \(\beta=N\beta^{0}\), \(i\in\{1,2\}\)), then, in the limit as the number of masses tends to infinity, the internal variables \(\mu_{i}\) and \(\Sigma_{ij}^{-1}\) become functions of continuous variables \(x\in[0,1]\) and \((x,y)\in[0,1]\times[0,1]\), respectively. Since it is challenging to invert a continuum function \(\Sigma^{-1}(x,y,t)\), we make use of the identity \(\dot{\Sigma}_{ij}=-\big(\Sigma\,\tfrac{\mathrm{d}\Sigma^{-1}}{\mathrm{d}t}\,\Sigma\big)_{ij}\) to derive the following limiting PDE
Figure 3: A comparison of the probability density for the spring lengths for a two mass mass-spring-chain system with double-well spring energies. The colored histograms depict densities collected from 100,000 Langevin simulations of the solution to the governing stochastic differential equation, while the grey-scale contour lines show the approximation using STIV. On each panel, the horizontal axis gives the length of the first spring, \(x_{1}\), and the vertical axis gives the length of the second, \(x_{2}-x_{1}\). Panels from left to right show equal increments in time. We see that despite missing the details of the multi-modal behavior apparent in the Langevin simulations, the STIV approximation successfully tracks the location and size of the dominant region of non-zero probability.
for \(\mu(x,t)\), \(\Sigma(x,y,t)\), the strain, \(\epsilon(x,t)\equiv\frac{\partial\mu}{\partial x}(x,t)\), and the covariance of the strain, \(\mathcal{E}(x,y,t)\equiv\frac{\partial^{2}\Sigma}{\partial x\partial y}(x,y,t)\)
\[\frac{\partial\mu}{\partial t} =\frac{1}{\eta_{0}}\frac{\partial}{\partial x}\bigg{\{}k_{1}^{0 }\left(\epsilon+l_{1}^{0}\right)(1-\hat{\Phi})+k_{2}^{0}\left(\epsilon-l_{2}^ {0}\right)\hat{\Phi}+(k_{2}^{0}-k_{1}^{0})\mathcal{E}\frac{\partial\hat{\Phi}} {\partial\epsilon}\bigg{\}}\] \[\frac{\partial\Sigma}{\partial t} =2\Delta^{w}\Sigma\]
\[\mu(x=0,t)=0,\qquad\mu(x=l_{0},t)=\lambda(t)\] \[\Sigma(x=0,y,t) =\Sigma(x=l_{0},y,t)=0\] \[\Sigma(x,y=0,t) =\Sigma(x,y=l_{0},t)=0\]
with the approximate phase fraction defined through
\[\hat{\Phi}(x,t)=\hat{\Phi}(\epsilon,\mathcal{E})=\Phi\left(\frac{\epsilon(x, t)}{\sqrt{\mathcal{E}(x,x,t)}}\right).\]
Here, \(\Delta^{w}=\frac{\partial}{\partial x}w(x,t)\frac{\partial}{\partial x}+ \frac{\partial}{\partial y}w(y,t)\frac{\partial}{\partial y}\), \(w(x,t)=\frac{k_{1}^{0}}{\eta^{0}}(1-\hat{\Phi})+\frac{k_{2}^{0}}{\eta^{0}} \hat{\Phi}-\frac{1}{\eta^{0}}(k_{1}^{0}l_{1}^{0}+k_{2}^{0}l_{2}^{0})\frac{ \partial\hat{\Phi}}{\partial\epsilon}\), and \(\Phi(\xi)\) is the cumulative distribution function of a standard Gaussian (mean zero, variance one). Both equations for \(\frac{\partial\mu}{\partial t}\) and \(\frac{\partial\Sigma}{\partial t}\) contain contributions from the left well (the terms multiplying \((1-\hat{\Phi})\)), the right well (the terms multiplying \(\hat{\Phi}\)), and the phase boundary (the terms multiplying \(\frac{\partial\hat{\Phi}}{\partial\epsilon}\)), and in the SI Appendix we give assumptions on the continuum limit for \(\Sigma(x,y,t)\) such that these dynamical
Figure 4: (A) A comparison of the predicted mean mass locations using STIV (black lines) and empirical mean of 100,000 Langevin simulations (red dashes) for the 8 mass colloidal mass-spring-chain with double-well interactions and a linear external protocol (external protocol shown in blue throughout). Except in (C4) and (D4), the parameters of the symmetric interaction potential are \(k_{1}=k_{2}=l_{1}=l_{2}=1\). (B) The predicted and simulated phase fractions of springs in the right well for the same system as (A). (C) The predicted versus simulated external force for four different pulling protocols: (1) linear, (2) sinusoidal, (3) step, (4) step with an asymmetric interaction potential between masses (\(k_{1}=1,l_{1}=1,k_{2}=2,l_{2}=1/2\)). (D) The predicted versus simulated rate of total entropy production for the same four pulling protocols as in (C). The external protocols used are shown in the insets of (C,D). (E) Cartoon of the mass-spring-chain configuration. One side is held fixed while the other is controlled by the external protocol.
equations maintain the gradient flow structure
\[\frac{\partial\mu}{\partial t} =-\frac{1}{\eta}\frac{\delta\hat{A}^{\text{neq}}}{\delta\mu}\] \[\frac{\partial\Sigma}{\partial t} =-\int_{0}^{1}\int_{0}^{1}M(x,y,z,w,t)\frac{\delta\hat{A}^{\text{neq}}}{\delta\Sigma}(z,w,t)\text{d}z\text{d}w.\]
In Fig. 5 (A), we demonstrate that the continuum response of the system can be well approximated through the STIV framework with finitely many masses. We see agreement between the mean mass positions observed in Langevin simulations and those predicted using the STIV framework for both 17 and 62 masses, verifying that both discretizations capture the continuum response. This allows us to use the 17 mass system to accurately predict important continuum level quantities such as the external force as a function of extension \(\lambda\) (Fig. 5 B), the phase front speed (Fig. 5 C) for different applied strain rates, and finally the rate of entropy production due to the phase front (Fig. 5 D) as a function of the system extension for each of the strain rates shown in (C). Methods for computing the front speed and the rate of entropy production due to the phase front can be found in the SI Appendix.
Finally, in the continuum limit, one can differentiate in time the defining equation for the location of the phase front in the reference configuration, \(\hat{\Phi}(\tilde{I}(t),t)\equiv\frac{1}{2}\) to yield the following ordinary differential equation for the location of the phase front
\[\frac{\text{d}}{\text{d}t}\tilde{I}(t)=-\frac{\frac{\partial^{2}}{\partial x^{2}}\frac{\delta\hat{A}^{\text{neq}}}{\delta\epsilon}(x,t)}{\eta\,\frac{\partial^{2}\mu}{\partial x^{2}}(x,t)}\Bigg{|}_{x=\tilde{I}(t)}.\]
This equation reveals that the phase front velocity is directly proportional to the ratio of the curvature of the thermodynamic affinity conjugate to the strain \(\mathcal{A}_{\epsilon}\equiv-\frac{\delta\hat{A}^{\text{neq}}}{\delta\epsilon}\) and the curvature of \(\mu\) at the location of the phase front.
## Discussion
Our results demonstrate the utility and accuracy of the STIV framework as a method for constructing TIV models which are consistent with microscopic physics. After assuming a functional form for a set of parameterized probability densities which serve to approximate the true density of states, inserting this approximation into the thermodynamic definitions taken from stochastic thermodynamics directly yields the internal variables structure, and the dynamics of these internal variables are fully determined by the variational method of Eyink. The resulting macroscopic model encodes the microscopic features of the system to the degree allowed within the provided probability density without any need for further reference back to smaller scales. Moreover, in the important case of a Gaussian form for the approximate probability density, \(\hat{p}(x,\alpha)\), we recover the gradient flow dynamics and the GENERIC structure which is commonly assumed without direct microscopic justification. In this work, we have focused on examples yielding analytically tractable approximations. However, it is equally possible to extend the method beyond such constraints by creating a numerical implementation based on sampling techniques
Figure 5: (A) Mean mass positions for Langevin and STIV approximations to a 17 mass (Langevin: red dashes, STIV: solid black) and a 62 mass (Langevin: pink short dashes, STIV: grey long dashes) double-well mass-spring-chain system, with parameters rescaled for the same effective behavior. For both systems, only the 8 masses expected to overlap are plotted. Throughout (B,C,D), darker colors, dashed lines, and + scatter points denote results from Langevin simulations, whereas lighter colors, solid lines, and x scatter points denote results from the STIV approximation. (B) The external force as a function of extension for the 17 mass system at ten different strain rates (shown in (C)). (C) The phase front speed as a function of strain rate in the 17 mass system. (D) The rate of entropy production due to the phase front as a function of extension for each of the strain rates shown in (C).
using modern statistical and machine learning techniques. Furthermore, extensions to Hamiltonian systems, active noise and models exhibiting significant coarse graining constitute important future steps for the STIV framework.
## Acknowledgment
T.L. acknowledges that this project was supported in part by a fellowship award under contract FA9550-21-F-0003 through the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program, sponsored by the Air Force Research Laboratory (AFRL), the Office of Naval Research (ONR) and the Army Research Office (ARO). P.K.P. acknowledges support from ACS, USA grant number PRF-61793 ND10. C.R. gratefully acknowledges support from NSF CAREER Award, CMMI-2047506.
|
2306.17584 | Flexible and Accurate Methods for Estimation and Inference of Gaussian
Graphical Models with Applications | The Gaussian graphical model (GGM) incorporates an undirected graph to
represent the conditional dependence between variables, with the precision
matrix encoding partial correlation between pair of variables given the others.
To achieve flexible and accurate estimation and inference of GGM, we propose
the novel method FLAG, which utilizes the random effects model for pairwise
conditional regression to estimate the precision matrix and applies statistical
tests to recover the graph. Compared with existing methods, FLAG has several
unique advantages: (i) it provides accurate estimation without sparsity
assumptions on the precision matrix, (ii) it allows for element-wise inference
of the precision matrix, (iii) it achieves computational efficiency by
developing an efficient PX-EM algorithm and a MM algorithm accelerated with
low-rank updates, and (iv) it enables joint estimation of multiple graphs using
FLAG-Meta or FLAG-CA. The proposed methods are evaluated using various
simulation settings and real data applications, including gene expression in
the human brain, term association in university websites, and stock prices in
the U.S. financial market. The results demonstrate that FLAG and its extensions
provide accurate precision estimation and graph recovery. | Yueqi Qian, Xianghong Hu, Can Yang | 2023-06-30T12:06:10Z | http://arxiv.org/abs/2306.17584v1 | Flexible and Accurate Methods for Estimation and Inference of Gaussian Graphical Models with Applications
###### Abstract
The Gaussian graphical model (GGM) incorporates an undirected graph to represent the conditional dependence between variables, with the precision matrix encoding partial correlation between pair of variables given the others. To achieve flexible and accurate estimation and inference of GGM, we propose the novel method FLAG, which utilizes the random effects model for pairwise conditional regression to estimate the precision matrix and applies statistical tests to recover the graph. Compared with existing methods, FLAG has several unique advantages: (i) it provides accurate estimation without sparsity assumptions on the precision matrix, (ii) it allows for element-wise inference of the precision matrix, (iii) it achieves computational efficiency by developing an efficient PX-EM algorithm and a MM algorithm accelerated with low-rank updates, and (iv) it enables joint estimation of multiple graphs using FLAG-Meta or FLAG-CA. The proposed methods are evaluated using various simulation settings and real data applications, including gene expression in the human brain, term association in university websites, and stock prices in the U.S. financial market. The results demonstrate that FLAG and its extensions provide accurate precision estimation and graph recovery.
## 1 Introduction
Quantifying the relationships among components in a complex system based on observations is a fascinating yet challenging problem. Graphical models utilize probability models to represent relationships as a graph, where the nodes are random variables and the edges denote their dependencies. Graphical models have wide real-world applications in various research fields, including genetics Feng and Ning (2019); Zhao and Duan (2019); Yi et al. (2022), economics Anufriev and Panchenko (2015); Bernardini et al. (2022), psychology Epskamp et al. (2018); Williams (2021), and environmental science Engelke and Hitz (2020).
To model the components in the system, we consider a \(p\)-dimensional random vector following a multivariate normal distribution with zero mean, without loss of generality, as \(z\sim\mathcal{N}(0,\Sigma)\), with \(\Theta\coloneqq\Sigma^{-1}\). Then, we have \(p(z_{i}|z_{-i})\propto\exp(-\frac{1}{2}\Theta_{ii}z_{i}^{2}-\sum_{j\neq i}z_{i}\Theta_{ij}z_{j})\), which implies that the probability of the \(i\)-th component only depends on the components with nonzero entries in the \(i\)-th column of the precision matrix \(\Theta\). Also, \(p(z_{i},z_{j}|z_{-ij})\propto\exp(-\frac{1}{2}\begin{bmatrix}z_{i}&z_{j}\end{bmatrix}\begin{bmatrix}\Theta_{ii}&\Theta_{ij}\\ \Theta_{ji}&\Theta_{jj}\end{bmatrix}\begin{bmatrix}z_{i}\\ z_{j}\end{bmatrix}-\sum_{k\neq i,j}(z_{i}\Theta_{ik}+z_{j}\Theta_{jk})z_{k})\), so the conditional precision of the pair \((z_{i},z_{j})\) given the remaining variables is the \(2\times 2\) submatrix of \(\Theta\) indexed by \(\{i,j\}\).
Section 4. Finally, we conclude this manuscript with a brief discussion in Section 5.
## 2 Methods
The proposed method does not depend on any explicit structural assumptions on the precision matrix, and it does not introduce bias into the estimates. Instead, it utilizes the conditional Gaussian property and rewrites the estimation of each entry in the precision matrix as the estimation of the covariance of residuals obtained by regressing two variables on the remaining \(p-2\) variables. Unlike the asymptotic normal thresholding (ANT) method from Ren et al. (2015), we neither impose sparsity assumptions on the regression coefficients nor assume column-wise sparsity in the precision matrix, as the shrinkage on parameters may introduce bias to the residuals of regressions and the precision entries. In addition to estimation, the proposed method enables inference and quantification of the uncertainty of each entry in the precision matrix and the corresponding edge in the graph.
### 2.1 Model Setting
Utilizing the conditional Gaussian property, the proposed method estimates a two-by-two submatrix of the precision matrix each time by taking the inverse of the residual covariance obtained from a two-versus-rest bivariate regression. To achieve an unbiased estimation of the covariance of residuals and the precision entries, each regression is solved with a random effects model.
Consider a pair of random variables \(a=\{i,j\}\) versus the other \(p-2\) variables each time. Take the \(i\)-th and \(j\)-th elements from the \(p\)-dimensional random vector \(z\) as responses \(y=[z_{i},z_{j}]\), while the remaining \(p-2\)-dimensional random vector \(x=[z_{i},z_{j}]^{c}\) indicates the explanatory variables. The conditional probability \(y|x\sim\mathcal{N}(x^{T}\beta,\Theta_{aa}^{-1})\) can be expressed as \(y=x^{T}\beta+\epsilon\), where \(\epsilon\sim\mathcal{N}(0,\Gamma_{\epsilon})\) and \(\Theta_{aa}=\Gamma_{\epsilon}^{-1}\).
Let \(Z\in\mathbb{R}^{n\times p}\) denote a collection of \(n\) realizations of the random vector \(z\), the \(i\)-th and \(j\)-th columns from observation \(Z\) as responses \(Y=[Z_{\cdot i},Z_{\cdot j}]\in\mathbb{R}^{n\times 2}\), while the remaining columns \(X=[Z_{\cdot i},Z_{\cdot j}]^{c}\in\mathbb{R}^{n\times(p-2)}\) indicate the explanatory variables. Subsequently, a bivariate regression model is constructed based on the conditional Gaussian property, as \(Y=X\boldsymbol{\beta}+\boldsymbol{\epsilon}\), where the coefficient matrix \(\boldsymbol{\beta}\in\mathbb{R}^{(p-2)\times 2}\), and the covariance of each row \(\epsilon_{k\cdot}\) in the \(\epsilon\in\mathbb{R}^{n\times 2}\) is a 2-by-2 positive definite matrix \(\Gamma_{\epsilon}\), satisfying \(\mathrm{cov}(\epsilon_{k\cdot}^{T})=\Gamma_{\epsilon}=\Theta_{aa}^{-1}\).
To solve this bivariate regression, we consider a random effects model
\[\begin{split}& Y=X\boldsymbol{\beta}+\boldsymbol{\epsilon},\\ &\beta_{k\cdot}^{T}\sim\mathcal{N}(0,\Gamma_{\beta}),\epsilon_{k \cdot}^{T}\sim\mathcal{N}(0,\Gamma_{\epsilon}),\end{split} \tag{1}\]
where \(\boldsymbol{\beta}\) is treated as random effects, and the \(k\)-th row \(\beta_{k\cdot}\) is assumed to follow a normal distribution with zero mean and covariance \(\Gamma_{\beta}\).
After vectorizing \(Y\) and \(X\boldsymbol{\beta}\), we obtain \(\mathrm{vec}Y|X\sim\mathcal{N}(\mathrm{vec}(X\boldsymbol{\beta}),\Gamma_{ \epsilon}\otimes I_{n})\). By integrating over \(\beta\), the random effects model can be expressed as
\[\begin{split}&\mathrm{vec}Y\sim\mathcal{N}(0,\Omega),\\ &\Omega=\Gamma_{\beta}\otimes XX^{T}+\Gamma_{\epsilon}\otimes I_{n}.\end{split} \tag{2}\]
The parameters in this model are denoted as \(\Gamma_{\beta}=\begin{bmatrix}\sigma_{1}^{2}&\tau\\ \tau&\sigma_{2}^{2}\end{bmatrix},\Gamma_{\epsilon}=\begin{bmatrix} \sigma_{3}^{2}&\eta\\ \eta&\sigma_{4}^{2}\end{bmatrix},\) where \(\Gamma_{\beta}\) and \(\Gamma_{\epsilon}\) are symmetric and positive semi-definite matrices.
Firstly, the variance components \(\Gamma_{\beta}\) and \(\Gamma_{\epsilon}\) are estimated for the pair \((i,j)\), using efficient algorithms designed for the random effects model. Based on the conditional Gaussian property, the submatrix of the precision matrix with respect to this pair can be estimated by \(\Theta_{aa}=\Gamma_{\epsilon}^{-1}\). Furthermore, to quantify the uncertainty of each entry in the precision matrix, inference can be performed based on the proposed model. In addition, edges in the graph are detected through hypothesis testing, rather than relying solely on point estimates.
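The identity \(\Theta_{aa}=\Gamma_{\epsilon}^{-1}\) underlying this construction can be checked numerically: for a Gaussian vector, the conditional covariance of \((z_{i},z_{j})\) given the remaining variables is the Schur complement \(\Sigma_{aa}-\Sigma_{ab}\Sigma_{bb}^{-1}\Sigma_{ba}\), and its inverse equals the corresponding \(2\times2\) block of the precision matrix. A small sketch with a randomly generated covariance (purely illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 6
A = rng.standard_normal((p, p))
Sigma = A @ A.T + p * np.eye(p)      # a random positive-definite covariance
Theta = np.linalg.inv(Sigma)         # precision matrix

i, j = 1, 4
a = [i, j]
b = [k for k in range(p) if k not in a]

# Conditional covariance of (z_i, z_j) given the rest (Schur complement),
# i.e. the covariance Gamma_eps of the residuals in the pairwise regression.
Gamma_eps = Sigma[np.ix_(a, a)] - Sigma[np.ix_(a, b)] @ np.linalg.solve(
    Sigma[np.ix_(b, b)], Sigma[np.ix_(b, a)]
)

# Its inverse matches the 2x2 submatrix Theta_aa of the precision matrix.
print(np.allclose(np.linalg.inv(Gamma_eps), Theta[np.ix_(a, a)]))   # True
```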
### 2.2 Algorithms
To estimate the variance component \(\Gamma_{\epsilon}\), two approaches based on maximum likelihood estimation (MLE) are provided: the minorize-maximization (MM, Hunter and Lange [2000]) algorithm and the parameter-expanded expectation-maximization (PX-EM, Liu et al. [1998]) algorithm.
According to the random effects model as shown in the formula 2, the log-likelihood function with respect to the random components \(\Gamma=\{\Gamma_{\beta},\Gamma_{\epsilon}\}\) in a two-versus-rest conditional regression for each pair is
\[\ell(\Gamma)=\ln\mathbb{P}(Y|X;\Gamma_{\beta},\Gamma_{\epsilon})=-\frac{1}{2} \ln\det\Omega-\frac{1}{2}\text{vec}Y^{T}\Omega^{-1}\text{vec}Y+c, \tag{3}\]
where \(c\) is a trivial constant. Two MLE-based algorithms have been developed for estimating the variance components in order to achieve unbiased estimation and statistical inference.
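For concreteness, a minimal dense-algebra sketch (an illustration, not the implementation used in the paper) of how \(\Omega\) and the log-likelihood of Eq. 3 can be evaluated for a single pair, ignoring the eigen-decomposition speed-ups developed below:

```python
import numpy as np

def build_omega(X, Gamma_beta, Gamma_eps):
    """Omega = Gamma_beta kron X X^T + Gamma_eps kron I_n, a (2n) x (2n) matrix."""
    n = X.shape[0]
    K = X @ X.T
    return np.kron(Gamma_beta, K) + np.kron(Gamma_eps, np.eye(n))

def log_likelihood(Y, X, Gamma_beta, Gamma_eps):
    """Log-likelihood of Eq. 3 up to the additive constant c."""
    Omega = build_omega(X, Gamma_beta, Gamma_eps)
    y = Y.reshape(-1, order="F")          # vec(Y): stack the two columns of Y
    sign, logdet = np.linalg.slogdet(Omega)
    return -0.5 * logdet - 0.5 * y @ np.linalg.solve(Omega, y)
```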
#### 2.2.1 MM Algorithm
Direct maximum likelihood estimation of variance components models is numerically challenging. The minorize-maximization (MM) algorithm first finds a surrogate function \(g\) that minorizes the log-likelihood function 3, such that \(g(\Gamma|\Gamma^{(m)})\leq\mathcal{L}(\Gamma)\). Then, the optimization variable is updated according to the current surrogate function, i.e., \(\Gamma^{(m+1)}=\operatorname*{argmax}_{\Gamma}g(\Gamma|\Gamma^{(m)})\).
The surrogate function for the log-likelihood function with respect to variance components is constructed using two minorizations based on two inequalities Zhou et al. [2019]. The convexity of the negative log determinant function implies \(-\ln\det\Omega\geq-\ln\det\Omega^{(m)}-\operatorname{tr}[\Omega^{-(m)}( \Omega-\Omega^{(m)})]\). Since the variance components \(\Gamma_{\beta}\) and \(\Gamma_{\epsilon}\) are positive definite matrices, \(\Omega\) is also positive definite, then we have \(\Omega^{-1}\preceq\Omega^{-(m)}[(\Gamma_{\beta}^{(m)}\Gamma_{\beta}^{-1} \Gamma_{\beta}^{(m)})\otimes XX^{T}+(\Gamma_{\epsilon}^{(m)}\Gamma_{\epsilon} ^{-1}\Gamma_{\epsilon}^{(m)})\otimes I_{n}]\Omega^{-(m)}.\) The surrogate function for the MM algorithm is then given by
\[\begin{split} g(\Gamma|\Gamma^{(m)}):=&-\operatorname {tr}[\Omega^{-(m)}(\Gamma_{\beta}\otimes XX^{T})]-\operatorname{tr}[\Gamma_{ \beta}^{(m)}R^{(m)T}XX^{T}R^{(m)}\Gamma_{\beta}^{(m)}\Gamma_{\beta}^{-1}]\\ &-\operatorname{tr}[\Omega^{-(m)}(\Gamma_{\epsilon}\otimes I_{n} )]-\operatorname{tr}[\Gamma_{\epsilon}^{(m)}R^{(m)T}R^{(m)}\Gamma_{\epsilon}^ {(m)}\Gamma_{\epsilon}^{-1}]+c^{(m)},\end{split} \tag{4}\]
where \(c^{(m)}\) is a constant in the \(m\)-th iteration, and the matrix \(R\in\mathbb{R}^{n\times 2}\) satisfies \(\text{vec}(R^{(m)})=\Omega^{-(m)}\text{vec}Y\) in all iterations.
In each iteration, the parameters in \(\Gamma\) are updated by setting the derivative of \(g(\Gamma|\Gamma^{(m)})\) to be zero, as \(\Gamma_{\beta}\) is updated by \(\nabla_{\Gamma_{\beta}}g(\Gamma|\Gamma^{(m)})=0\) and \(\Gamma_{\epsilon}\) is updated by \(\nabla_{\Gamma_{\epsilon}}g(\Gamma|\Gamma^{(m)})=0\). The log-likelihood is then calculated after update. Once the change in log-likelihood becomes arbitrarily small, the MM algorithm is considered to have converged.
Due to the high computational cost of inverting the large matrix \(\Omega\in\mathbb{R}^{(2n)\times(2n)}\) in each iteration, eigen-decomposition is used to reduce such consumption on frequent matrix inverting. Let the eigen-decomposition of \(XX^{T}\) be \(U^{T}XX^{T}U=D=\text{diag}(d)\), where \(D\) is a diagonal matrix with its diagonal elements denoted by the vector \(d\in\mathbb{R}^{n}\). The simultaneous congruence decomposition of \((\Gamma_{\beta},\Gamma_{\epsilon})\) is \((\Lambda,\Phi)\), such that \(\Phi^{T}\Gamma_{\beta}\Phi=\Lambda,\Phi^{T}\Gamma_{\epsilon}\Phi=I_{2}\). Then,
\(\Gamma_{\beta}=\Phi^{-T}\Lambda\Phi^{-1},\Gamma_{\epsilon}=\Phi^{-T}I_{2}\Phi^{-1}\). The inverse of \(\Omega\) can be efficiently calculated in each iteration according to the following equations
\[\begin{split}\Omega^{(m)}=(\Phi^{-(m)}\otimes U^{-1})^{T}(\Lambda ^{(m)}\otimes D+I_{2}\otimes I_{n})(\Phi^{-(m)}\otimes U^{-1}),\\ \Omega^{-(m)}=(\Phi^{(m)}\otimes U)(\Lambda^{(m)}\otimes D+I_{2} \otimes I_{n})^{-1}(\Phi^{(m)}\otimes U)^{T}.\end{split} \tag{5}\]
Additionally, the determinant of \(\Omega\) can be calculated accordingly as \(|\Omega^{(m)}|=|\Lambda^{(m)}\otimes D+I_{2}\otimes I_{n}||\Gamma_{\epsilon}^{(m)}|^{n}\).
In each iteration, \(\Gamma_{\beta}\) is updated by setting the derivative of \(g(\Gamma|\Gamma^{(m)})\) with respect to \(\Gamma_{\beta}\) to zero, and \(\Gamma_{\epsilon}\) is updated similarly. The former trace term in 4 is linear in \(\Gamma\), with the coefficients collected in the \(2\times 2\) matrices \(M_{\beta}\) and \(M_{\epsilon}\) as
\[\begin{split} M_{\beta}&=\Phi^{(m)}\operatorname{ diag}\{\operatorname{tr}[D(\lambda_{l}^{(m)}D+I_{n})^{-1}]\}\Phi^{(m)T},\\ M_{\epsilon}&=\Phi^{(m)}\operatorname{diag}\{ \operatorname{tr}[(\lambda_{l}^{(m)}D+I_{n})^{-1}]\}\Phi^{(m)T}.\end{split} \tag{6}\]
The latter trace term, which involves the inverse of \(\Gamma\) can be rewritten in the general form \(-\operatorname{tr}[A\Gamma^{-1}]\). Its derivative with respect to \(\Gamma\) is \(\Gamma^{-1}A\Gamma^{-1}\). For positive definite matrices \(A\) and \(M\), the unique positive definite solution for \(\Gamma\) with respect to the Riccati equation \(M=\Gamma^{-1}A\Gamma^{-1}\) is given by \(L^{-T}(L^{T}AL)^{\frac{1}{2}}L^{-1}\), where \(L\) is the Cholesky factor of \(M\).
After bypassing the computational cost of matrix inversion and solving the updated \(\Gamma\) in the Riccati equation, we further reduce the computational cost by simplifying the coefficients of the latter trace term in the surrogate function that were generalized as \(A\) before. Specifically, \(A_{\beta}=\Gamma_{\beta}^{(m)}R^{(m)T}XX^{T}R^{(m)}\Gamma_{\beta}^{(m)}\) and \(A_{\epsilon}=\Gamma_{\epsilon}^{(m)}R^{(m)T}R^{(m)}\Gamma_{\epsilon}^{(m)}\) are \(2\times 2\) symmetric matrices, but the inner calculation involves matrix multiplication of a large matrix with dimension \(n\) which is repeated in each iteration. To moderate this computational cost, the coefficients of the inverse terms are denoted by matrices \(N_{\beta}^{T}N_{\beta}\) and \(N_{\epsilon}^{T}N_{\epsilon}\) as
\[\begin{split}\Gamma_{\beta}^{(m)}R^{(m)T}XX^{T}R^{(m)}\Gamma_{ \beta}^{(m)}&=N_{\beta}^{T}N_{\beta},\\ \Gamma_{\epsilon}^{(m)}R^{(m)T}R^{(m)}\Gamma_{\epsilon}^{(m)}& =N_{\epsilon}^{T}N_{\epsilon}.\end{split} \tag{7}\]
Taking all the aforementioned techniques into consideration to solve the optimization problem of maximizing the surrogate function 4 and further speed it up, the MM algorithm can be summarized as follows,
Note that \(\oslash\) denotes the Hadamard quotient used in Algorithm 1. The MM algorithm estimates the two variance component matrices \(\Gamma_{\beta}\) and \(\Gamma_{\epsilon}\), and the corresponding \(2\times 2\) submatrix of the precision matrix is then estimated as the inverse of \(\hat{\Gamma}_{\epsilon}\).
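As an illustration of these updates, the following is a minimal, unoptimized sketch of one MM iteration for a single pair; it forms the \(2\times2\) matrices \(M_{\beta},M_{\epsilon}\) and \(A_{\beta},A_{\epsilon}\) directly with dense linear algebra and solves the two Riccati equations through their Cholesky factors, omitting the eigen-decomposition and the low-rank speed-ups that Algorithm 1 and Section 3.1 add.

```python
import numpy as np

def _psd_sqrt(S):
    """Symmetric positive semi-definite square root via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def riccati_solution(M, A):
    """Unique PD solution Gamma of M = Gamma^{-1} A Gamma^{-1}, with M = L L^T."""
    M = 0.5 * (M + M.T)                          # guard against numerical asymmetry
    L = np.linalg.cholesky(M)
    Linv = np.linalg.inv(L)
    return Linv.T @ _psd_sqrt(L.T @ A @ L) @ Linv

def mm_iteration(Y, X, Gb, Ge):
    """One MM update of (Gamma_beta, Gamma_eps) for a single pair (unoptimized)."""
    n = X.shape[0]
    K = X @ X.T
    Omega = np.kron(Gb, K) + np.kron(Ge, np.eye(n))
    Oinv = np.linalg.inv(Omega)
    y = Y.reshape(-1, order="F")
    R = (Oinv @ y).reshape(n, 2, order="F")      # vec(R) = Omega^{-1} vec(Y)
    # 2x2 coefficient matrices of the linear trace terms in the surrogate
    Mb = np.array([[np.trace(Oinv[i*n:(i+1)*n, j*n:(j+1)*n] @ K)
                    for j in range(2)] for i in range(2)])
    Me = np.array([[np.trace(Oinv[i*n:(i+1)*n, j*n:(j+1)*n])
                    for j in range(2)] for i in range(2)])
    # 2x2 coefficient matrices A_beta, A_eps of the Gamma^{-1} trace terms
    Ab = Gb @ (R.T @ K @ R) @ Gb
    Ae = Ge @ (R.T @ R) @ Ge
    return riccati_solution(Mb, Ab), riccati_solution(Me, Ae)
```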
#### 2.2.2 PX-EM Algorithm
The parameter-expanded expectation-maximization (PX-EM) algorithm Liu et al. (1998) is an accelerated version of the EM algorithm that is fast and stable in estimating variance-covariance components in linear mixed models Foulley and Van Dyk (2000).
The linear model 1 is rewritten in a parameter-expanded form as
\[\begin{split}& Y=\delta X\boldsymbol{\beta}+\boldsymbol{\epsilon}, \\ &\beta_{k}^{T}\sim\mathcal{N}(0,\Gamma_{\beta}),\epsilon_{k}^{T} \sim\mathcal{N}(0,\Gamma_{\epsilon}),\end{split} \tag{8}\]
where \(\delta\in\mathbb{R}^{1}\) is the expanded parameter. The data and parameters are vectorized as follows, \(\bar{X}=I_{2}\otimes X\in\mathbb{R}^{2n\times 2(p-2)},\bar{\beta}=\text{vec}\beta\in\mathbb{R}^{2(p-2)}\) with \(\bar{\beta}\sim\mathcal{N}(0,\Gamma_{\beta}\otimes I_{p-2}),\bar{\epsilon}=\text{vec}\boldsymbol{\epsilon}\in\mathbb{R}^{2n}\) with \(\bar{\epsilon}\sim\mathcal{N}(0,\Gamma_{\epsilon}\otimes I_{n})\), and \(\bar{Y}=\text{vec}Y=\delta\bar{X}\bar{\beta}+\bar{\epsilon}\in\mathbb{R}^{2n}\).
The complete data log-likelihood is
\[\begin{split}\ell(\Gamma)=&\text{logPr}(\bar{Y},\bar{ \beta}|\Gamma_{\beta},\Gamma_{\epsilon};\bar{X})\\ =&-\frac{n}{2}\text{log}|\Gamma_{\epsilon}|-\frac{1} {2}(\bar{Y}-\delta\bar{X}\bar{\beta})^{T}(\Gamma_{\epsilon}^{-1}\otimes I_{n} )(\bar{Y}-\delta\bar{X}\bar{\beta})\\ &-\frac{p-2}{2}\text{log}|\Gamma_{\beta}|-\frac{1}{2}\bar{\beta} ^{T}(\Gamma_{\beta}^{-1}\otimes I_{p-2})\bar{\beta}.\end{split} \tag{9}\]
The terms involving \(\bar{\beta}\) are in a quadratic form given by \(\bar{\beta}^{T}(-\frac{\delta^{2}}{2}\Gamma_{\epsilon}^{-1}\otimes X^{T}X-\frac{1}{2}\Gamma_{\beta}^{-1}\otimes I_{p-2})\bar{\beta}+\delta\bar{Y}^{T}(\Gamma_{\epsilon}^{-1}\otimes X)\bar{\beta}\). The posterior distribution of \(\bar{\beta}\) is \(N(\bar{\beta}|\mu_{\bar{\beta}},\Sigma_{\bar{\beta}})\), where
\[\Sigma_{\bar{\beta}}^{-1}=\delta^{2}\Gamma_{\epsilon}^{-1}\otimes X^{T}X+ \Gamma_{\beta}^{-1}\otimes I_{p-2},\]
\[\mu_{\bar{\beta}}=(\delta^{2}\Gamma_{\epsilon}^{-1}\otimes X^{T}X+\Gamma_{ \beta}^{-1}\otimes I_{p-2})^{-1}\delta(\Gamma_{\epsilon}^{-1}\otimes X^{T}) \bar{Y}.\]
During the E-step of the PX-EM algorithm, the \(\mathcal{Q}\)-function is evaluated by taking the expectation of the complete data log-likelihood with respect to the posterior \(N(\bar{\beta}|\mu_{\bar{\beta}},\Sigma_{\bar{\beta}})\). The quadratic terms involving \(\bar{\beta}\) are taken as expectation values:
\[\begin{split}&\mathbb{E}[(\bar{Y}-\delta\bar{X}\bar{\beta})^{T}( \Gamma_{\epsilon}^{-1}\otimes I_{n})(\bar{Y}-\delta\bar{X}\bar{\beta})]\\ =&(\bar{Y}-\delta\bar{X}\mu_{\bar{\beta}})^{T}( \Gamma_{\epsilon}^{-1}\otimes I_{n})(\bar{Y}-\delta\bar{X}\mu_{\bar{\beta}}) +\delta^{2}\text{tr}[(\Gamma_{\epsilon}^{-1}\otimes X^{T}X)\Sigma_{\bar{\beta }}].\\ &\mathbb{E}[\bar{\beta}^{T}(\Gamma_{\beta}^{-1}\otimes I_{p-2}) \bar{\beta}]=\mu_{\bar{\beta}}^{T}(\Gamma_{\beta}^{-1}\otimes I_{p-2})\mu_{ \bar{\beta}}+\text{tr}[(\Gamma_{\beta}^{-1}\otimes I_{p-2})\Sigma_{\bar{ \beta}}]\end{split} \tag{10}\]
The \(\mathcal{Q}\)-function given the estimated parameter in the previous iteration as \(\theta_{old}\), is expressed
as follows,
\[\begin{split}\mathcal{Q}(\theta|\theta_{old})=&-\frac{n}{2 }\text{log}|\Gamma_{\epsilon}|-\frac{p-2}{2}\text{log}|\Gamma_{\beta}|\\ &-\frac{1}{2}\text{tr}\Big{[}(\Gamma_{\epsilon}^{-1}\otimes I_{n} )[(\bar{Y}-\delta\bar{X}\mu_{\bar{\beta}})(\bar{Y}-\delta\bar{X}\mu_{\bar{\beta }})^{T}+\delta^{2}\bar{X}\Sigma_{\bar{\beta}}\bar{X}^{T}]\Big{]}\\ &-\frac{1}{2}\text{tr}\Big{[}(\Gamma_{\beta}^{-1}\otimes I_{p-2} )(\mu_{\beta}\mu_{\bar{\beta}}^{T}+\Sigma_{\bar{\beta}})\Big{]}\\ =&-\frac{n}{2}\text{log}|\Gamma_{\epsilon}|-\frac{p- 2}{2}\text{log}|\Gamma_{\beta}|\\ &-\frac{1}{2}\text{tr}\Big{[}\Gamma_{\epsilon}^{-1}\begin{pmatrix} \text{tr}[S_{11}]&\text{tr}[S_{12}]\\ \text{tr}[S_{21}]&\text{tr}[S_{22}]\end{pmatrix}\Big{]}-\frac{1}{2}\text{tr} \Big{[}\Gamma_{\beta}^{-1}\begin{pmatrix}\text{tr}[W_{11}]&\text{tr}[W_{12}] \\ \text{tr}[W_{21}]&\text{tr}[W_{22}]\end{pmatrix}\Big{]},\end{split} \tag{11}\]
where \(S=\begin{pmatrix}S_{11}&S_{12}\\ S_{21}&S_{22}\end{pmatrix}=(\bar{Y}-\delta\bar{X}\mu_{\bar{\beta}})(\bar{Y}- \delta\bar{X}\mu_{\bar{\beta}})^{T}+\delta^{2}\bar{X}\Sigma_{\bar{\beta}}\bar {X}^{T}\), \(W=\begin{pmatrix}W_{11}&W_{12}\\ W_{21}&W_{22}\end{pmatrix}=\mu_{\bar{\beta}}\mu_{\bar{\beta}}^{T}+\Sigma_{\bar{ \beta}}\). Denote \(\bar{Y}=\begin{pmatrix}\bar{Y}_{1}\\ \bar{Y}_{2}\end{pmatrix}\), \(\mu_{\bar{\beta}}=\begin{pmatrix}\bar{\mu}_{1}\\ \bar{\mu}_{2}\end{pmatrix}\), \(\Sigma_{\bar{\beta}}=\begin{pmatrix}\bar{\Sigma}_{11}&\bar{\Sigma}_{12}\\ \bar{\Sigma}_{21}&\bar{\Sigma}_{22}\end{pmatrix}\). Then, for \(i=1,2;j=1,2\),
\[\begin{split}&\text{tr}[S_{ij}]=\text{tr}[(\bar{Y}_{i}-\delta X \bar{\mu}_{i})(\bar{Y}_{j}-\delta X\bar{\mu}_{j})^{T}+\delta^{2}X\bar{\Sigma} _{ij}X^{T}],\\ &\text{tr}[W_{ij}]=\text{tr}[\bar{\mu}_{i}\bar{\mu}_{j}^{T}+\bar{ \Sigma}_{ij}].\end{split} \tag{12}\]
In the subsequent M-step, the new estimates of the parameters are obtained by setting the derivative of the \(\mathcal{Q}\)-function to be zero. From the detailed calculations in the supplementary materials 5, the updated parameters are \(\Gamma_{\epsilon}=\frac{1}{n}\begin{pmatrix}\text{tr}[S_{11}]&\text{tr}[S_{12}]\\ \text{tr}[S_{21}]&\text{tr}[S_{22}]\end{pmatrix}\), \(\Gamma_{\beta}=\frac{1}{p-2}\begin{pmatrix}\text{tr}[W_{11}]&\text{tr}[W_{12}]\\ \text{tr}[W_{21}]&\text{tr}[W_{22}]\end{pmatrix}\), \(\delta=\frac{\bar{Y}^{T}(\Gamma_{\epsilon}^{-1}\otimes X)\mu_{\bar{\beta}}}{\text{tr}[(\Gamma_{\epsilon}^{-1}\otimes X^{T}X)(\mu_{\bar{\beta}}\mu_{\bar{\beta}}^{T}+\Sigma_{\bar{\beta}})]}\).
To avoid frequent inversion the \(2(p-2)\times 2(p-2)\) matrix \(\Sigma_{\bar{\beta}}^{-1}\) in the iterations, an eigen-decomposition \(X^{T}X=VQV^{T}\) is performed, where \(Q\in\mathbb{R}^{(p-2)\times(p-2)}\) is a diagonal matrix with diagonal elements given by the vector \(q\) of eigenvalues. Hence, matrix \(\Sigma_{\bar{\beta}}^{-1}\) can be written as
\[\begin{split}&\Sigma_{\bar{\beta}}^{-1}=\begin{pmatrix}\delta^{2 }(\Gamma_{\epsilon}^{-1})_{11}X^{T}X+(\Gamma_{\beta}^{-1})_{11}I_{p-2}&\delta^{2 }(\Gamma_{\epsilon}^{-1})_{12}X^{T}X+(\Gamma_{\beta}^{-1})_{12}I_{p-2}\\ &\delta^{2}(\Gamma_{\epsilon}^{-1})_{21}X^{T}X+(\Gamma_{\beta}^{-1})_{21}I_{p-2 }&\delta^{2}(\Gamma_{\epsilon}^{-1})_{22}X^{T}X+(\Gamma_{\beta}^{-1})_{22}I_{ p-2}\end{pmatrix}\\ &=\begin{pmatrix}V&0\\ 0&V\end{pmatrix}\underbrace{\begin{pmatrix}\delta^{2}(\Gamma_{\epsilon}^{-1})_{1 1}Q+(\Gamma_{\beta}^{-1})_{11}I_{p-2}&\delta^{2}(\Gamma_{\epsilon}^{-1})_{12} Q+(\Gamma_{\beta}^{-1})_{12}I_{p-2}\\ \delta^{2}(\Gamma_{\epsilon}^{-1})_{21}Q+(\Gamma_{\beta}^{-1})_{21}I_{p-2}& \delta^{2}(\Gamma_{\epsilon}^{-1})_{22}Q+(\Gamma_{\beta}^{-1})_{22}I_{p-2} \end{pmatrix}}_{\begin{pmatrix}V&0\\ 0&V\end{pmatrix}^{T}.}\\ \begin{pmatrix}A&B\\ C&H\end{pmatrix}\hskip-2.845276pt=\hskip-2.845276pt\begin{pmatrix}\text{diag}(a)& \text{diag}(b)\\ \text{diag}(c)&\text{diag}(h)\end{pmatrix}\end{split} \tag{13}\]
Since \(X^{T}X\) is a real symmetric matrix, \(V\) is an orthogonal matrix, as is the block matrix \(\begin{pmatrix}V&0\\ 0&V\end{pmatrix}\), whose inverse is trivial to obtain. The matrix \(\begin{pmatrix}A&B\\ C&H\end{pmatrix}\) in the middle consists of blocks with diagonal matrices, which makes it easier to calculate the inverse. Specifically, the inverse of the middle matrix is \(\begin{pmatrix}A&B\\ C&H\end{pmatrix}^{-1}=\begin{pmatrix}\text{diag}(h\oslash(a\odot h-c\odot b))&\text{diag}(-b\oslash(a\odot h-c\odot b))\\ \text{diag}(-c\oslash(a\odot h-c\odot b))&\text{diag}(a\oslash(a\odot h-c\odot b))\end{pmatrix},\) and then \(\Sigma_{\bar{\beta}}=\begin{pmatrix}V&0\\ 0&V\end{pmatrix}\begin{pmatrix}A&B\\ C&H\end{pmatrix}^{-1}\begin{pmatrix}V^{T}&0\\ 0&V^{T}\end{pmatrix}\). The PX-EM algorithm with the eigen-decomposition of \(X^{T}X\) is summarized as follows,
```
1: Initialization: \(\Gamma_{\beta}=\Gamma_{\epsilon}=\frac{\mathrm{cov}(Y)}{2}\).
2: Eigen-decomposition: \(X^{T}X=VQV^{T}\).
3: repeat
4: E-step: set \(\delta^{(m)}=1\), \[\begin{split}\Sigma_{\bar{\beta}}=&\begin{pmatrix}V&0\\ 0&V\end{pmatrix}\\ &\begin{pmatrix}\mathrm{diag}(\delta^{2}(\Gamma_{\epsilon}^{-1})_{11}q+(\Gamma_{\beta}^{-1})_{11}\mathbb{1}_{p-2})&\mathrm{diag}(\delta^{2}(\Gamma_{\epsilon}^{-1})_{12}q+(\Gamma_{\beta}^{-1})_{12}\mathbb{1}_{p-2})\\ \mathrm{diag}(\delta^{2}(\Gamma_{\epsilon}^{-1})_{21}q+(\Gamma_{\beta}^{-1})_{21}\mathbb{1}_{p-2})&\mathrm{diag}(\delta^{2}(\Gamma_{\epsilon}^{-1})_{22}q+(\Gamma_{\beta}^{-1})_{22}\mathbb{1}_{p-2})\end{pmatrix}^{-1}\\ &\begin{pmatrix}V^{T}&0\\ 0&V^{T}\end{pmatrix},\\ &\mu_{\bar{\beta}}=\Sigma_{\bar{\beta}}\delta(\Gamma_{\epsilon}^{-1}\otimes X^{T})\bar{Y},\\ &ELBO^{(m)}=Q(\Omega^{(m)})+\frac{1}{2}\mathrm{log}|\Sigma_{\bar{\beta}}|.\end{split}\]
5: M-step: Update the model parameters by \[\delta^{(t+1)}=\frac{\bar{Y}^{T}(\Gamma_{\epsilon}^{-1}\otimes X)\mu_{\bar{ \beta}}}{\mu_{\bar{\beta}}^{T}(\Gamma_{\epsilon}^{-1}\otimes X^{T}X)\mu_{\bar{ \beta}}+\mathrm{tr}[(\Gamma_{\epsilon}^{-1}\otimes X^{T}X)\Sigma_{\bar{\beta}} ]},\] \[\Gamma_{\epsilon}^{(t+1)}=\frac{1}{n}\begin{pmatrix}\mathrm{tr}[S_{11} ]&\mathrm{tr}[S_{12}]\\ \mathrm{tr}[S_{21}]&\mathrm{tr}[S_{22}]\end{pmatrix},\] \[\Gamma_{\beta}^{(t+1)}=\frac{1}{p-2}\begin{pmatrix}\mathrm{tr}[W_ {11}]&\mathrm{tr}[W_{12}]\\ \mathrm{tr}[W_{21}]&\mathrm{tr}[W_{22}]\end{pmatrix}.\]
6: Reduction-step: Rescale \(\Gamma_{\beta}^{(t+1)}=(\delta^{(t+1)})^{2}\Gamma_{\beta}^{(t+1)}\) and reset \(\delta^{(t+1)}=1\).
7: until the incomplete data log-likelihood \(ELBO^{(m)}\) stops increasing
```
**Algorithm 2** PX-EM algorithm with eigen-decomposition
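The only non-trivial inversion in the E-step above is that of a \(2\times2\) block matrix whose four blocks are diagonal. A small sketch of the Hadamard-quotient formula for this inverse, checked against a dense inverse (the vectors \(a,b,c,h\) are illustrative):

```python
import numpy as np

def invert_2x2_diag_blocks(a, b, c, h):
    """Inverse of [[diag(a), diag(b)], [diag(c), diag(h)]] when all blocks are diagonal."""
    det = a * h - c * b                      # elementwise (Hadamard) determinant
    top = np.hstack([np.diag(h / det), np.diag(-b / det)])
    bot = np.hstack([np.diag(-c / det), np.diag(a / det)])
    return np.vstack([top, bot])

rng = np.random.default_rng(1)
a, h = rng.uniform(1, 2, 5), rng.uniform(1, 2, 5)
b = rng.uniform(-0.3, 0.3, 5)
c = b.copy()                                 # symmetric case, as in the E-step
M = np.block([[np.diag(a), np.diag(b)], [np.diag(c), np.diag(h)]])
print(np.allclose(invert_2x2_diag_blocks(a, b, c, h), np.linalg.inv(M)))  # True
```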
#### 2.2.3 Initialization
In the previous algorithm design, we simply used the covariance of \(Y\) to initialize the parameters, setting \(\Gamma_{\beta}=\Gamma_{\epsilon}=\frac{1}{2}\text{cov}(Y)\). Although the method of moments (MoM) estimators may not be optimal, they are easy to compute and can be used to calculate initial parameter values for MLE-based iterative methods Wasserman (2004) like the MM algorithm and the PX-EM algorithm.
The parameters in the variance component set \(\Gamma=\{\Gamma_{\beta},\Gamma_{\epsilon}\}\) are denoted by \(\gamma=[\sigma_{1}^{2},\sigma_{3}^{2},\sigma_{2}^{2},\)\(\sigma_{4}^{2},\tau,\eta]^{T}\). The MoM estimator is obtained by solving the ordinary least squares (OLS) problem
\[\operatorname*{argmin}_{\gamma}\left\|\text{vec}Y\text{vec}Y^{T}-(\Gamma_{ \beta}\otimes XX^{T}+\Gamma_{\epsilon}\otimes I_{n})\right\|_{F}^{2}.\]
Denote \(Y=[y_{1},y_{2}]\), the MoM estimate of parameter \(\gamma\) is
\[\hat{\gamma}=\begin{bmatrix}\frac{1}{2}S_{0}^{-1}&0&0\\ 0&\frac{1}{2}S_{0}^{-1}&0\\ 0&0&\frac{1}{2}S_{0}^{-1}\end{bmatrix}\begin{bmatrix}2y_{1}^{T}XX^{T}y_{1}\\ 2y_{1}^{T}y_{1}\\ 2y_{2}^{T}XX^{T}y_{2}\\ 2y_{2}^{T}y_{2}\\ 4y_{2}^{T}XX^{T}y_{1}\\ 4y_{2}^{T}y_{1}\end{bmatrix}, \tag{15}\]
where \(S_{0}=\begin{bmatrix}\text{tr}[(XX^{T})^{2}]&\text{tr}[XX^{T}]\\ \text{tr}[XX^{T}]&n\end{bmatrix}\).
### 2.3 Inference
For these maximum likelihood-based methods, such as the MM algorithm for the incomplete data log-likelihood function in Formula 3 and the PX-EM algorithm for the complete data log-likelihood function in Formula 9, the difference between the maximum likelihood estimate and the true parameter converges in distribution to a normal distribution with mean zero and covariance matrix equal to the inverse of the Fisher information matrix, as \(\sqrt{n}(\hat{\Gamma}-\Gamma^{*})\xrightarrow{d}\mathcal{N}(0,I^{-1})\). The maximum likelihood estimator is \(\sqrt{n}\)-consistent and asymptotically efficient, with the smallest variance.
In addition to estimating the precision matrix, we further quantify the uncertainty of each entry in the precision matrix, and the existence and weight of the corresponding edge in the graph.
The parameters in the variance component set \(\Gamma=\{\Gamma_{\beta},\Gamma_{\epsilon}\}\) are denoted by \(\gamma=[\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\gamma_{5},\gamma_{6}]^{T}:=[\sigma_{1}^{2},\sigma_{3}^{2},\sigma_{2}^{2},\sigma_{4}^{2},\tau,\eta]^{T}\). The covariance matrix of the maximum likelihood estimates can be calculated using the inverse of the Fisher Information Matrix (FIM), where the FIM is \(I(\gamma)=-E[\frac{\partial^{2}}{\partial\gamma^{2}}\log\Pr(\text{vec}Y|X;\Gamma)]\). Denote the derivative matrices \(M_{i}=\frac{\partial\Omega}{\partial\gamma_{i}}\), namely \(M_{1}=\begin{bmatrix}XX^{T}&0\\ 0&0\end{bmatrix}\), \(M_{2}=\begin{bmatrix}I_{n}&0\\ 0&0\end{bmatrix}\), \(M_{3}=\begin{bmatrix}0&0\\ 0&XX^{T}\end{bmatrix}\), \(M_{4}=\begin{bmatrix}0&0\\ 0&I_{n}\end{bmatrix}\), \(M_{5}=\begin{bmatrix}0&XX^{T}\\ XX^{T}&0\end{bmatrix}\), \(M_{6}=\begin{bmatrix}0&I_{n}\\ I_{n}&0\end{bmatrix}\), then we have
\[\frac{\partial^{2}}{\partial\gamma_{i}\partial\gamma_{j}}\ln P(\text{vec}Y|X; \Gamma) =\text{tr}[(\frac{1}{2}I_{2n}-\Omega^{-1}\text{vec}Y\text{vec}Y^{ T})(\Omega^{-1}M_{i}\Omega^{-1}M_{j})]. \tag{16}\]
For MLE-based methods, the covariance matrix of the estimated parameters \(\gamma\) is equal to the inverse of the Fisher information matrix, denoted as \(\text{cov}(\gamma)=I(\gamma)^{-1}\). Using this, the variance of \(\eta\) and its standard error can be obtained.
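A direct (dense) sketch of this computation: with the derivative matrices \(M_{i}=\partial\Omega/\partial\gamma_{i}\) listed above, the expected Fisher information of a zero-mean Gaussian with covariance \(\Omega(\gamma)\) is \(I(\gamma)_{ij}=\tfrac{1}{2}\mathrm{tr}[\Omega^{-1}M_{i}\Omega^{-1}M_{j}]\), and the standard error of \(\eta\) is read off from the corresponding diagonal entry of its inverse.

```python
import numpy as np

def fisher_information(X, Gamma_beta, Gamma_eps):
    """Expected FIM for gamma = (s1^2, s3^2, s2^2, s4^2, tau, eta), dense version."""
    n = X.shape[0]
    K, I = X @ X.T, np.eye(n)
    E11 = np.array([[1.0, 0.0], [0.0, 0.0]])
    E22 = np.array([[0.0, 0.0], [0.0, 1.0]])
    E12 = np.array([[0.0, 1.0], [1.0, 0.0]])
    # M_i = dOmega/dgamma_i for the six parameters, in the order above
    Ms = [np.kron(E11, K), np.kron(E11, I), np.kron(E22, K),
          np.kron(E22, I), np.kron(E12, K), np.kron(E12, I)]
    Omega = np.kron(Gamma_beta, K) + np.kron(Gamma_eps, I)
    Oinv = np.linalg.inv(Omega)
    return np.array([[0.5 * np.trace(Oinv @ Mi @ Oinv @ Mj) for Mj in Ms]
                     for Mi in Ms])

def se_eta(X, Gamma_beta, Gamma_eps):
    """Standard error of eta from the inverse FIM (eta is the last parameter)."""
    cov = np.linalg.inv(fisher_information(X, Gamma_beta, Gamma_eps))
    return np.sqrt(cov[5, 5])
```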
Recall that the non-zero precision entries correspond to edges in the graph, and a zero off-diagonal entry \(\Theta_{ij}\) is equivalent to a zero \(\eta\) in \(\Gamma_{\epsilon}\) for the pair \((i,j)\), since a zero off-diagonal entry remains zero after inverting a 2-by-2 matrix.
A null hypothesis is set as \(H_{0}:\eta=0\), and the Wald test can be applied with a test statistic given by \(W=\frac{(\eta-\eta_{0})^{2}}{\mathrm{var}(\eta)}\), where \(\eta_{0}=0\). The p-value of the test for the existence of an edge between the pair \((i,j)\) is collected.
Alternatively, the likelihood ratio test can be applied by calculating the difference between the log-likelihoods of the original parameter space \(\gamma\) and the restricted parameter space where \(\eta\) in \(\Gamma_{\epsilon}\) is constrained to zero. The test statistic is given by \(-2[\mathcal{L}(\Gamma_{0})-\mathcal{L}(\Gamma)]\) where the two parameter sets are optimized separately with respect to the log-likelihood function \(\mathcal{L}\) in Formula 3, and \(\Gamma_{0}\) denotes the parameters when \(\eta\) in \(\Gamma_{\epsilon}\) is set to zero.
FLAG not only calculates the point estimates of the precision matrix but also computes standard errors and performs hypothesis testing on the precision entries, while many existing methods can only provide point estimates without efficient element-wise inference. After collecting the p-value of each entry in the precision matrix, large-scale hypothesis testing is considered to control the false discovery rate (FDR) based on the Benjamini-Hochberg procedure Benjamini and Hochberg (1995). Alternatively, the Bonferroni correction can be applied to control the family-wise error rate (FWER), which is relatively conservative Hastie et al. (2009). This inference on the precision matrix can be used to extend the usage of FLAG when utilizing meta-analysis to jointly estimate multiple graphs dealing with data from various groups.
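As a sketch of this selection step, suppose the point estimates \(\hat\eta\) and their standard errors have been collected for all \(p(p-1)/2\) pairs; the Wald p-values and a Benjamini–Hochberg selection at FDR level \(q\) can then be computed as follows (the input arrays are placeholders):

```python
import numpy as np
from scipy.stats import chi2

def wald_pvalues(eta_hat, eta_se):
    """Wald p-values for H0: eta = 0, one entry per pair (chi-square with 1 df)."""
    W = (np.asarray(eta_hat) / np.asarray(eta_se)) ** 2
    return chi2.sf(W, df=1)

def benjamini_hochberg(pvals, q=0.1):
    """Boolean mask of rejected hypotheses at FDR level q (BH step-up procedure)."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                 # reject the k smallest p-values
    return reject

# Illustrative usage over all pairs (eta_hat_all, eta_se_all are placeholders):
# pvals = wald_pvalues(eta_hat_all, eta_se_all)
# edges = benjamini_hochberg(pvals, q=0.1)
```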
## 3 Accelerated Algorithms and Extended Model
### 3.1 Low-rank Update for Multiple Pairs
The most computationally intensive part of Algorithm 1, designed to estimate the variance components, is the eigen-decomposition with a computational complexity of \(\mathcal{O}(n^{3})\), which becomes increasingly burdensome as \(n\) grows. Although the eigen-decomposition is performed only once when estimating the precision of each pair of variables, a total of \(\frac{p(p-1)}{2}\) eigen-decompositions are required to estimate the entire precision matrix for all pairs. It is worth noting that the eigen-decomposition is calculated with respect to \(XX^{T}\), where each \(X\) for one pair of variables \((z_{i},z_{j})\) is the matrix \(Z\) with the \(i\)-th and \(j\)-th columns removed, denoted as \(X=Z_{-\{ij\}}\). To improve the computational efficiency, the eigen-decomposition of \(ZZ^{T}\) is performed first, and the eigen-decomposition of \(XX^{T}=ZZ^{T}-Z_{\{ij\}}(Z_{\{ij\}})^{T}\) is then replaced by a low-rank update based on that of \(ZZ^{T}\).
Denote the eigen-decomposition of the symmetric matrix \(ZZ^{T}\) as \(ZZ^{T}=UDU^{T}\); then \(XX^{T}=UDU^{T}-Z_{\{ij\}}(Z_{\{ij\}})^{T}\), and the variance-covariance matrix \(\Omega=\Gamma_{\beta}\otimes XX^{T}+\Gamma_{\epsilon}\otimes I_{n}\) in the random effects model 2 can be written as
\[\begin{split}\Omega&=\Gamma_{\beta}\otimes(UDU^{T} )+\Gamma_{\epsilon}\otimes I_{n}-\Gamma_{\beta}\otimes(Z_{\{ij\}}Z_{\{ij\}}^ {T})\\ &=(\Phi^{-T}\otimes U)[\Lambda\otimes D+I_{2}\otimes I_{n}- \Lambda\otimes(U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T})](\Phi^{-1}\otimes U^{T} ).\end{split} \tag{17}\]
In the MM algorithm, the log-likelihood function 3 involves both the log-determinant and inverse terms with respect to \(\Omega\), which need to be revised based on the low-rank update of the eigen-decomposition of \(ZZ^{T}=UDU^{T}\).
Using the matrix determinant lemma, we have \(|\Omega|=|\Gamma_{\epsilon}|^{n}\prod_{l=1}^{2}|\lambda_{l}D+I_{n}|\,|I_{2}-\lambda_{l}(U^{T}Z_{\{ij\}})^{T}(\lambda_{l}D+I_{n})^{-1}U^{T}Z_{\{ij\}}|\). The inverse term is
\[\begin{split}\Omega^{-1}=&(\Phi\otimes U)[\Lambda \otimes D+I_{2}\otimes I_{n}-\Lambda\otimes(U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T })]^{-1}(\Phi\otimes U)^{T}\\ =&\begin{bmatrix}\Phi_{11}U&\Phi_{12}U\\ \Phi_{21}U&\Phi_{22}U\end{bmatrix}\\ &\begin{bmatrix}(\lambda_{1}D+I_{n}-\lambda_{1}U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}}) ^{T})^{-1}&0\\ 0&(\lambda_{2}D+I_{n}-\lambda_{2}U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T})^{-1} \end{bmatrix}\\ &\begin{bmatrix}\Phi_{11}U^{T}&\Phi_{21}U^{T}\\ \Phi_{12}U^{T}&\Phi_{22}U^{T},\end{bmatrix}\end{split} \tag{18}\]
where the block matrix \([\lambda_{l}D+I_{n}-\lambda_{l}(U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T})]^{-1},l= 1,2\) in the diagonal of the center matrix is the inverse of a diagonal matrix with rank-2 correction. This inversion can be calculated efficiently using the Woodbury matrix identity, then we have
\[\begin{split}&[\lambda_{l}D+I_{n}-\lambda_{l}(U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T})]^{-1}\\ =&(\lambda_{l}D+I_{n})^{-1}+(\lambda_{l}D+I_{n})^{-1}U^{T}Z_{\{ij\}}\Big{(}\frac{1}{\lambda_{l}}I_{2}-(U^{T}Z_{\{ij\}})^{T}(\lambda_{l}D+I_{n})^{-1}U^{T}Z_{\{ij\}}\Big{)}^{-1}(U^{T}Z_{\{ij\}})^{T}(\lambda_{l}D+I_{n})^{-1},\end{split} \tag{19}\]
for l=1,2.
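A small numerical sketch of this rank-two update: it applies the inverse of \(\lambda_{l}D+I_{n}-\lambda_{l}WW^{T}\) (with \(W=U^{T}Z_{\{ij\}}\)) to a vector using only diagonal operations and a \(2\times2\) solve, and checks the result against a dense solve on illustrative data.

```python
import numpy as np

def woodbury_solve(lam, d, W, v):
    """Solve (lam*diag(d) + I_n - lam*W W^T) x = v, with W of shape (n, 2)."""
    a = lam * d + 1.0                     # diagonal of A = lam*D + I_n
    Ainv_v = v / a
    Ainv_W = W / a[:, None]
    cap = np.eye(2) / lam - W.T @ Ainv_W  # 2x2 capacitance matrix
    return Ainv_v + Ainv_W @ np.linalg.solve(cap, W.T @ Ainv_v)

# Illustrative check against a dense solve.
rng = np.random.default_rng(2)
n, lam = 50, 0.7
d = rng.uniform(0.5, 3.0, n)
W = 0.1 * rng.standard_normal((n, 2))
v = rng.standard_normal(n)
M = lam * np.diag(d) + np.eye(n) - lam * W @ W.T
print(np.allclose(woodbury_solve(lam, d, W, v), np.linalg.solve(M, v)))  # True
```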
Then the log-likelihood function 3 can be rewritten as
\[\begin{split}&\ell(\Gamma)=-\frac{1}{2}\ln\det\Omega-\frac{1}{2 }\text{vec}(\tilde{Y})^{T}\\ &\begin{bmatrix}[\lambda_{1}D+I_{n}-\lambda_{1}U^{T}Z_{\{ij\}}(U^ {T}Z_{\{ij\}})^{T}]^{-1}&0\\ 0&[\lambda_{2}D+I_{n}-\lambda_{2}U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T}]^{- 1}\end{bmatrix}\text{vec}(\tilde{Y}),\end{split} \tag{20}\]
where \(\text{vec}(\tilde{Y})=(\Phi\otimes U)^{T}\text{vec}Y=\text{vec}(U^{T}Y\Phi)\) is calculated only once for each pair before the iteration.
The coefficients of the parameters \(\Gamma_{\beta}\) and \(\Gamma_{\epsilon}\) in the gradient of the surrogate function 4, which are collected in the matrices \(M_{\beta}\) and \(M_{\epsilon}\), are revised accordingly, with the details in Supplementary 5.
Similarly, the coefficients of the inverse terms \(\Gamma_{\beta}^{-1}\) and \(\Gamma_{\epsilon}^{-1}\) in the gradient of the surrogate function 4, which are collected in the matrices \(N_{\beta}^{T}N_{\beta}\) and \(N_{\epsilon}^{T}N_{\epsilon}\), are also revised based on the low-rank update as \(N_{\beta}^{T}N_{\beta}=(R^{(m)}\Gamma_{\beta}^{(m)})^{T}[U(D-U^{T}Z_{\{ij\}}(U^ {T}Z_{\{ij\}})^{T})U^{T}]R^{(m)}\Gamma_{\beta}^{(m)}\), where the term in the middle can be further simplified as \(EE^{T}=D-U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T}=D^{\frac{1}{2}}[I_{n}-D^{-\frac{1 }{2}}U^{T}Z_{\{ij\}}(D^{-\frac{1}{2}}U^{T}Z_{\{ij\}})^{T}]D^{\frac{1}{2}}=(D^{ \frac{1}{2}}F^{\frac{1}{2}})(D^{\frac{1}{2}}F^{\frac{1}{2}})^{T}\), then we have \(E=D^{\frac{1}{2}}F^{\frac{1}{2}}\).
Let \(J=D^{-\frac{1}{2}}U^{T}Z_{\{ij\}}\in\mathbb{R}^{n\times 2}\), then \(F^{\frac{1}{2}}=I_{n}+J(J^{T}J)^{-1}[(I_{2}-J^{T}J)^{\frac{1}{2}}-I_{2}]J^{T}\). According to the simultaneous congruence decomposition, we have \(\Gamma_{\beta}=\Phi^{-(t)T}\Lambda\Phi^{-(t)},\Gamma_{\epsilon}=\Phi^{-(t)T} \Phi^{-(t)}\). Then the matrices \(N_{\beta}\) and \(N_{\epsilon}\) can be obtained by
\[\begin{split} N_{\beta}=E^{T}U^{T}R^{(t)}\Phi^{-(t)T}\Lambda\Phi^ {-(t)}\\ N_{\epsilon}=U^{T}R^{(t)}\Phi^{-(t)T}\Phi^{-(t)}\end{split}\]
To further simplify the matrix \(N_{\beta}\), we can vectorize it to obtain
\[\begin{split}\text{vec}(N_{\beta})=(\Phi^{-T}\Lambda)\otimes E^{T }\text{vec}(G)=\text{vec}(E^{T}G\Lambda\Phi^{-1}),\\ \text{vec}(N_{\epsilon})=(\Phi^{-T})\otimes I_{n}\text{vec}(G)= \text{vec}(G\Phi^{-1}),\end{split} \tag{21}\]
with the details shown in Supplementary 5.
Hence, the compact equation is \(N_{\beta}=E^{T}G\Lambda\Phi^{-1},N_{\epsilon}=G\Phi^{-1}\), and the expanded form of \(N_{\beta}\) is
\[\begin{split} N_{\beta}=& E^{T}G\Lambda\Phi^{-1}=F^{ \frac{1}{2}}D^{\frac{1}{2}}G\Lambda\Phi^{-1}\\ =&\{I_{n}+J(J^{T}J)^{-1}[(I_{2}-J^{T}J)^{\frac{1}{2 }}-I_{2}]J^{T}\}D^{\frac{1}{2}}G\Lambda\Phi^{-1}\\ =&\Big{(}(D^{\frac{1}{2}}G)(\Lambda\Phi^{-1})\Big{)} +\Big{(}J(J^{T}J)^{-1}[(I_{2}-J^{T}J)^{\frac{1}{2}}-I_{2}]\Big{)}\Big{(}J^{T}( D^{\frac{1}{2}}G\Lambda\Phi^{-1})\Big{)},\end{split} \tag{22}\]
where the matrix \(J\) and the term \((J^{T}J)^{-1}[(I_{2}-J^{T}J)^{\frac{1}{2}}-I_{2}]\) remain the same in all iterations, and thus they are only calculated once before the iterations.
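A quick numerical sanity check of this square-root factor is given below (Python/NumPy, illustrative sizes; `W` stands in for \(U^{T}Z_{\{ij\}}\)). It verifies that \(F^{\frac{1}{2}}(F^{\frac{1}{2}})^{T}=I_{n}-JJ^{T}\) and hence \(EE^{T}=D-U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T}\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
D = np.diag(rng.uniform(1.0, 3.0, n))          # diagonal matrix D
W = 0.2 * rng.normal(size=(n, 2))              # stands in for U^T Z_{ij}

D_half = np.diag(np.sqrt(np.diag(D)))
J = np.diag(1.0 / np.sqrt(np.diag(D))) @ W     # J = D^{-1/2} U^T Z_{ij}
S = J.T @ J                                    # 2x2 Gram matrix J^T J

# matrix square root of I_2 - S via its eigendecomposition (S is small, so I_2 - S is PD here)
evals, evecs = np.linalg.eigh(np.eye(2) - S)
sqrt_I_minus_S = evecs @ np.diag(np.sqrt(evals)) @ evecs.T

F_half = np.eye(n) + J @ np.linalg.inv(S) @ (sqrt_I_minus_S - np.eye(2)) @ J.T
E = D_half @ F_half

print(np.allclose(F_half @ F_half.T, np.eye(n) - J @ J.T))   # F^{1/2}(F^{1/2})^T = I - J J^T
print(np.allclose(E @ E.T, D - W @ W.T))                      # E E^T = D - (U^T Z)(U^T Z)^T
```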
### Meta-analysis for Multiple Groups
A graph can be inferred individually for each group. Nevertheless, the limited sample size, particularly in high-dimensional settings, raises the follow-up research question of how to leverage data from different groups. For instance, university webpages for students, faculty, and courses may share many common phrases, such as "email address" and "home page", with steady relationships between words. The goal is to leverage this universality across groups to estimate commonly shared pairs more accurately, while maintaining the differences in the same pair across different groups, thus preserving individuality.
#### 3.2.1 One-to-one Meta-analysis
Denote \(\Gamma_{\epsilon}=\begin{bmatrix}\sigma_{3}^{2}&\eta\\ \eta&\sigma_{4}^{2}\end{bmatrix}=\begin{bmatrix}\sigma_{3}^{2}&\rho\sigma_{3} \sigma_{4}\\ \rho\sigma_{3}\sigma_{4}&\sigma_{4}^{2}\end{bmatrix}\), and the partial correlation is \(\rho=\frac{\eta}{\sigma_{3}\sigma_{4}}\). The partial correlations from two groups A and B are denoted as \(\rho^{(A)}=\frac{\eta^{(A)}}{\sigma_{3}^{(A)}\sigma_{4}^{(A)}},\rho^{(B)}= \frac{\eta^{(B)}}{\sigma_{3}^{(B)}\sigma_{4}^{(B)}}\).
The first step is to test whether the partial correlation of a pair of variables across two groups, A and B, is the same or not. The null hypothesis is \(H_{0}:\rho^{(A)}-\rho^{(B)}=0\), and the test statistic is given by \(\frac{\rho^{(A)}-\rho^{(B)}}{\sqrt{\text{se}(\rho^{(A)})^{2}+\text{se}(\rho^{ (B)})^{2}}}\). The standard error of partial correlation \(\rho\) can be obtained using the delta method, as
\[\text{se}(\rho)^{2}=\begin{bmatrix}-\frac{1}{2}\sigma_{3}^{-3}\sigma_{4}^{-1 }\eta&-\frac{1}{2}\sigma_{3}^{-1}\sigma_{4}^{-3}\eta&\sigma_{3}^{-1}\sigma_{4} ^{-1}\end{bmatrix}\Sigma_{\Gamma_{\epsilon}}\begin{bmatrix}-\frac{1}{2} \sigma_{3}^{-3}\sigma_{4}^{-1}\eta\\ -\frac{1}{2}\sigma_{3}^{-1}\sigma_{4}^{-3}\eta\\ \sigma_{3}^{-1}\sigma_{4}^{-1}\end{bmatrix},\]
where \(\Sigma_{\Gamma_{\epsilon}}\) is the covariance matrix of the parameter vector \(\begin{bmatrix}\sigma_{3}^{2}&\sigma_{4}^{2}&\eta\end{bmatrix}^{T}\) of \(\Gamma_{\epsilon}\), which is a submatrix of the inverse of the Fisher information matrix. Specifically, the rows and columns of the inverse Fisher information matrix that correspond to these three parameters of \(\Gamma_{\epsilon}\) are taken.
If the hypothesis is not rejected in this test, assume that \(\rho^{(k)}=\rho+e^{(k)}\), where \(k\in\{A,B\}\) and \(e\) is random noise. Then, we use inverse-variance weighting to aggregate \(\rho\) from different groups that share similar underlying \(\rho\) as \(\rho=\frac{\Sigma_{k}w^{(k)}\rho^{(k)}}{\Sigma_{k}w^{(k)}}\), with \(w^{(k)}=\frac{1}{\text{se}(\rho^{(k)})^{2}}\) as weights. The standard error of the shared underlying \(\rho\) is \(\text{se}(\rho)=\frac{1}{\sqrt{\Sigma_{k}w^{(k)}}}\). Then, we can adjust the parameter \(\eta\) in different groups by \(\eta^{(A,meta)}=\rho\sigma_{3}^{(A)}\sigma_{4}^{(A)},\eta^{(B,meta)}=\rho \sigma_{3}^{(B)}\sigma_{4}^{(B)}\), and the precision will change accordingly.
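For concreteness, a minimal sketch of this one-to-one step for a single edge is given below (Python; the function name, significance level, and the numbers in the example call are illustrative). It tests \(H_{0}:\rho^{(A)}=\rho^{(B)}\) with the z-statistic above and, when the test is not rejected, returns the inverse-variance-weighted \(\rho\), its standard error, and the adjusted \(\eta\) for each group.

```python
import numpy as np
from scipy.stats import norm

def one_to_one_meta(rho_a, se_a, rho_b, se_b,
                    sigma3_a, sigma4_a, sigma3_b, sigma4_b, alpha=0.05):
    """Test rho^(A) = rho^(B); if not rejected, pool by inverse-variance weighting."""
    z = (rho_a - rho_b) / np.sqrt(se_a**2 + se_b**2)
    p_value = 2 * norm.sf(abs(z))
    if p_value < alpha:                        # partial correlations differ: keep group-specific values
        return {"pooled": False, "p_value": p_value}
    w_a, w_b = 1.0 / se_a**2, 1.0 / se_b**2    # inverse-variance weights
    rho = (w_a * rho_a + w_b * rho_b) / (w_a + w_b)
    se_rho = 1.0 / np.sqrt(w_a + w_b)
    # adjust eta (the off-diagonal of Gamma_eps) in each group using the shared rho
    eta_meta = (rho * sigma3_a * sigma4_a, rho * sigma3_b * sigma4_b)
    return {"pooled": True, "p_value": p_value, "rho": rho, "se": se_rho, "eta_meta": eta_meta}

print(one_to_one_meta(0.32, 0.09, 0.19, 0.06, 1.1, 0.9, 1.0, 1.2))
```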
FLAG-Meta provides a comprehensive analysis of both the similarities and differences between graphs from different groups by adaptively applying hypothesis testing on each edge across groups. Unlike other methods, such as PNJGL for differences and CNJGL for common parts from Mohan et al. (2012, 2014), which require a different penalty function whenever the target changes, FLAG-Meta does not require any extra design.
FLAG-Meta utilizes element-wise, group-wise comparisons to obtain fine-grained structures across groups, rather than penalizing the same entry equivalently across groups regardless of the group relations, as in JGL Guo et al. (2011), JEMP Lee and Liu (2015), FGL and GGL Danaher et al. (2014), SCAN Hao et al. (2018), TFRE Bilgrau et al. (2020), and others. Furthermore, it is easy to incorporate prior information, such as group relations, group memberships, and relationships of edge subsets within group subsets, if available, into the FLAG-Meta framework.
The majority of existing joint estimation methods are designed at the precision level, where typically \(\|\theta_{ij}^{(k_{1})}-\theta_{ij}^{(k_{2})}\|,1\leq k_{1},k_{2}\leq K\), is penalized to encourage similarity Danaher et al. (2014), Price et al. (2015), Saegusa and Shojaie (2016), Price et al. (2021), Mohan et al. (2012). In contrast, FLAG-Meta is flexible in testing similarity at the level of partial correlation, i.e., the scaled precision, which is more robust for comparing conditional dependence between the same variables across different groups after adjusting for the varying variances and precisions coming from the diagonal elements of the covariance or precision matrix.
In conclusion, FLAG-Meta incurs only a little extra computational cost, in \(\mathcal{O}(K^{2}p^{2})\), on top of FLAG. It is flexible in identifying both similarities and differences with fine-grained, element-wise and group-wise structure, which makes it easy to incorporate prior information at any granularity, and it is accurate, with smaller standard errors and larger statistical power. Moreover, FLAG-Meta only requires summary statistics instead of raw data from different sources, making it more valuable, especially when data from different groups cannot be shared.
#### 3.2.2 Many-to-one Meta-analysis
The previous part explains the methodology for aggregating two groups through one-to-one meta-analysis, which can be further extended to more groups. Suppose that there exist \(K\) groups, and the set of all groups is denoted as \(G=\{1,...,K\}\) with cardinality \(|G|=K\); we first choose group 1 as the main target for explanation. For each pair of random variables \((i,j)\) with \(i\neq j\), the partial correlations from the other groups are compared with \(\rho_{ij}^{(1)}\) separately by testing whether \(\rho_{ij}^{(1)}-\rho_{ij}^{(k)}=0\) for \(k=2,...,K\). Then the groups other than group 1 for which the test is not rejected are collected in a subset of \(G\) as \(G_{1}^{(\text{meta})}=\{k\mid k\neq 1,\text{hypothesis }\rho_{ij}^{(1)}-\rho_{ij}^{(k)}=0\text{ is not rejected}\}\).
For the groups in this set, still under the assumption \(\rho^{(k)}=\rho+e^{(k)}\) for \(k\in G_{1}^{(\text{meta})}\), the shared underlying partial correlation is computed as \(\rho=\frac{\Sigma_{k\in G_{1}^{(\text{meta})}}w^{(k)}\rho^{(k)}}{\Sigma_{k\in G_{1}^{(\text{meta})}}w^{(k)}}\), where the weights \(w^{(k)}=\frac{1}{\text{var}(\rho^{(k)})}\) are the inverses of the variances of the estimated partial correlations from the different groups. The standard error of this shared partial correlation with respect to target group 1 is \(\text{se}(\rho^{(1)})=\frac{1}{\sqrt{\Sigma_{k\in G_{1}^{(\text{meta})}}w^{(k)}}}\), and \(\eta^{(\text{meta})}\) is adjusted accordingly, as is the corresponding entry \(\Theta_{ij}^{(1)}\) in the precision matrix for target group 1. All the pairs of random variables are evaluated through the same approach. In addition, this whole procedure can be applied to other target groups as well.
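The sketch below mirrors this selection-then-pooling step for a single edge (Python; the significance level and the numbers in the example call are illustrative). Groups whose estimate is not significantly different from the target's are pooled with the target by inverse-variance weighting.

```python
import numpy as np
from scipy.stats import norm

def many_to_one_meta(rho, se, target=0, alpha=0.05):
    """rho, se: per-group estimates and standard errors of one partial correlation.
    Returns the pooled estimate for the target group and the set of pooled groups."""
    rho, se = np.asarray(rho, float), np.asarray(se, float)
    keep = [target]
    for k in range(len(rho)):
        if k == target:
            continue
        z = (rho[target] - rho[k]) / np.sqrt(se[target]**2 + se[k]**2)
        if 2 * norm.sf(abs(z)) >= alpha:        # H0: rho_target = rho_k is not rejected
            keep.append(k)
    w = 1.0 / se[keep]**2                       # inverse-variance weights over the selected set
    rho_meta = np.sum(w * rho[keep]) / np.sum(w)
    se_meta = 1.0 / np.sqrt(np.sum(w))
    return rho_meta, se_meta, keep

print(many_to_one_meta([0.30, 0.17, 0.24, -0.05], [0.08, 0.06, 0.05, 0.06]))
```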
Another alternative approach is to apply one-to-one meta-analysis \(K-1\) times. For instance, again considering group 1 as the target group, we can apply one-to-one meta-analysis between group 1 and group \(i\) with \(i\in\{2,...,K\}\) first. Then, the resulting partial correlation and precision after meta-analysis with group \(i\) are used in a one-to-one meta-analysis with the result from group \(j\) for \(j\in G\backslash\{1,i\}\), and so forth. The strength of this procedure is that the
contribution of each additionally considered group can be explicitly shown. A demonstration of this procedure in a real application is given in Section 4.2.2, which deals with the university webpage dataset. Specifically, the group with the smallest sample size is taken as the target group, and the other groups are then used one by one for meta-analysis in ascending order of sample size.
A special case of this sequential one-to-one meta-analysis is to follow the original group index \(1,2,...,K\) when the data in different groups are collected as a time series and the group index corresponds to the time step. One-to-one meta-analysis can then be applied sequentially, starting from groups 1 and 2, and proceeding up to the data from group \(K\) with the most recent time.
In conclusion, there are various ways to apply meta-analysis in multiple groups, depending on the aims of analysis. FLAG-Meta is flexible because it is based on the most fine-grained granularity across entries and groups.
### Covariate-adjusted Model for Joint Estimation
In real-world applications, taking the gene co-expression network from human brain data as an example, sample properties such as brain region and age period can be considered as covariates.
The conditional Gaussian graphical model (cGGM) was first presented by Yin and Li (2011); it takes covariates into consideration as \(z|v\sim\mathcal{N}(\zeta v,\Theta^{-1})\), where \(v\in\mathbb{R}^{q}\) and \(\zeta\in\mathbb{R}^{p\times q}\), rather than regarding the means of the random variables as constants that are invariant to heterogeneity. The cGGM is estimated by a penalized likelihood-based method, where both \(\zeta\) and \(\Theta\) are penalized by the \(\ell_{1}\) norm based on their sparsity assumptions.
Then, a two-stage method was proposed by Cai et al. (2013) to solve the covariate-adjusted Gaussian graphical model \(z=\zeta v+\tilde{z}\), where \(\tilde{z}\) is a \(p\times 1\) random vector with mean zero and covariance matrix \(\Theta^{-1}\), using a constrained \(\ell_{1}\) minimization similar to that of Cai et al. (2011). The first step is to estimate the regression coefficient matrix \(\zeta\) by solving the optimization row by row: \(\hat{\zeta}=\operatorname*{argmin}_{\zeta\in\mathbb{R}^{p\times q}}|\zeta|_{1},\) s.t. \(|S_{vz}-\zeta S_{vv}|\leq\lambda_{1}\), where \(S_{vz}=\frac{1}{N}\Sigma_{i=1}^{N}(z_{i}-\bar{z})(v_{i}-\bar{v})^{T}\) and \(S_{vv}=\frac{1}{N}\Sigma_{i=1}^{N}(v_{i}-\bar{v})(v_{i}-\bar{v})^{T}\). In the second step, the precision matrix \(\Theta\) is estimated with \(\hat{\zeta}\) fixed from the previous step, by \(\hat{\Theta}=\operatorname*{argmin}_{\Theta\in\mathbb{R}^{p\times p}}|\Theta|_{1},\) s.t. \(|I_{p}-S_{zz}\Theta|_{\infty}\leq\lambda_{2}\), where \(S_{zz}=\frac{1}{N}\Sigma_{i=1}^{N}(z_{i}-\bar{z})(z_{i}-\bar{z})^{T}\).
Similarly, a two-step procedure was designed by Chen et al. (2016), known as asymptotically normal estimation with thresholding after adjusting covariates (ANTAC), to estimate \(\zeta\) and \(\beta\) separately using the scaled lasso. In the first step, they solve the following optimization problems: \(\hat{\zeta}_{j},\hat{\sigma}_{jj}=\operatorname*{argmin}_{\zeta_{j}\in\mathbb{R}^{q},\sigma_{jj}\in\mathbb{R}^{+}}\frac{\|Z_{j}-\Upsilon\zeta_{j}\|_{2}}{2n\sigma_{jj}}+\frac{\sigma_{jj}}{2}+\lambda_{1}\Sigma_{k=1}^{q}\frac{\|\Upsilon_{k}\|}{\sqrt{n}}|\zeta_{jk}|\), for \(j=1,...,p\), where the parameter is theoretically specified as \(\lambda_{1}=\sqrt{\frac{2(1+\frac{\log p}{n})}{n}}\). Next, the adjusted data \(\tilde{Z}=Z-\Upsilon\hat{\zeta}\), i.e., the regression residuals after estimating the coefficients, are used to estimate the precision matrix by solving \(\hat{\beta}_{l},\hat{\sigma}_{ll}=\operatorname*{argmin}_{\beta_{l}\in\mathbb{R}^{p-2},\sigma_{ll}\in\mathbb{R}^{+}}\frac{\|\tilde{Z}_{l}-\tilde{Z}_{A^{c}}\beta_{l}\|_{2}}{2n\sigma_{ll}}+\frac{\sigma_{ll}}{2}+\lambda_{2}\Sigma_{k\in A^{c}}\frac{\|\tilde{Z}_{k}\|}{\sqrt{n}}|\beta_{lk}|,\;l\in A=\{i,j\}\), where the parameter is theoretically specified as \(\lambda_{2}=\sqrt{\frac{2\log p}{n}}\).
One limitation of the methods from Cai et al. (2013); Chen et al. (2016) is that the two-stage estimation process induces error propagation, since the estimation of the precision matrix relies on \(\hat{\zeta}\) from the first step.
When taking covariates into consideration, the random effect model for the Gaussian graphical model as 1 can be extended to
\[Y=\Upsilon\zeta+X\beta+\epsilon,\beta_{i}^{T}\sim\mathcal{N}(0,\Gamma_{\beta}),\epsilon_{i}\sim\mathcal{N}(0,\Gamma_{\epsilon}), \tag{23}\]
where \(\Upsilon\in\mathbb{R}^{n\times q}\) is the covariate matrix and \(\zeta\in\mathbb{R}^{q\times 2}\). The advantage of the flexible and accurate Gaussian graphical model with covariate adjustment (FLAG-CA) is that it evaluates the fixed effect \(\zeta\) and the random effect \(\beta\) in a single unified model, rather than using two separate steps. When adjusting for the effect of covariates, the model can still be estimated easily, with little extra computational cost in each iteration.
#### 3.3.1 MM Algorithm for FLAG-CA
For the revision of the MM algorithm, the incomplete-data log-likelihood is
\[\begin{split}\ell(\Gamma)=&\ln\mathbb{P}(Y|X; \Gamma_{\beta},\Gamma_{\epsilon})\\ =&-\frac{1}{2}\ln\det\Omega-\frac{1}{2}(\bar{Y}- \bar{\Upsilon}\bar{\zeta})^{T}\Omega^{-1}(\bar{Y}-\bar{\Upsilon}\bar{\zeta})+ c,\end{split} \tag{24}\]
where \(\bar{Y}=\text{vec}Y\), \(\bar{\Upsilon}=I_{2}\otimes\Upsilon\in\mathbb{R}^{2n\times 2q}\), \(\bar{\zeta}=\text{vec}(\zeta)\), and \(c\) is a constant. The MM algorithm updates the fixed effect \(\zeta\) and the variance components \(\Gamma\) alternately, with one being updated while the other is fixed. In each iteration, the extra update of \(\zeta\) involves solving a weighted least squares problem, as \(\bar{\zeta}^{(m+1)}=\text{argmin}_{\bar{\zeta}}\frac{1}{2}(\bar{Y}-\bar{\Upsilon}\bar{\zeta})^{T}\Omega^{-(m)}(\bar{Y}-\bar{\Upsilon}\bar{\zeta})=(\bar{\Upsilon}^{T}\Omega^{-(m)}\bar{\Upsilon})^{-1}\bar{\Upsilon}^{T}\Omega^{-(m)}\bar{Y}\). The revised MM algorithm for FLAG-CA is summarized in Appendix 14.
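As an illustration, this extra \(\zeta\)-update is an ordinary generalized least-squares step. The sketch below (Python; names and dimensions are illustrative) forms \(\bar{\Upsilon}=I_{2}\otimes\Upsilon\) and solves the normal equations explicitly; the actual algorithm would exploit the structure of \(\Omega^{-1}\) rather than storing it as a dense \(2n\times 2n\) matrix.

```python
import numpy as np

def update_zeta(Y, Upsilon, Omega_inv):
    """Weighted least-squares update of the fixed effect zeta given the current Omega^{-(m)}.
    Y: n x 2 responses for the current variable pair; Upsilon: n x q covariate matrix."""
    n, q = Upsilon.shape
    Y_bar = Y.flatten(order="F")                # vec(Y), stacking the two columns
    Ups_bar = np.kron(np.eye(2), Upsilon)       # I_2 (x) Upsilon, shape 2n x 2q
    lhs = Ups_bar.T @ Omega_inv @ Ups_bar
    rhs = Ups_bar.T @ Omega_inv @ Y_bar
    zeta_bar = np.linalg.solve(lhs, rhs)        # solves the GLS normal equations
    return zeta_bar.reshape(q, 2, order="F")    # back to the q x 2 coefficient matrix

rng = np.random.default_rng(2)
n, q = 30, 3
Y, Upsilon = rng.normal(size=(n, 2)), rng.normal(size=(n, q))
print(update_zeta(Y, Upsilon, np.eye(2 * n)).shape)   # (3, 2), using a placeholder Omega^{-1}
```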
#### 3.3.2 PX-EM Algorithm for FLAG-CA
The model of PX-EM algorithm for the FLAG-CA method is \(Y=\Upsilon\zeta+\delta X\beta+\epsilon\), where \(\delta\in\mathbb{R}^{1}\) is the expanded parameter. The complete-data log-likelihood when adjusting for covariates is
\[\begin{split}\ell(\Gamma)=&\text{logPr}(\bar{Y}, \bar{\beta}|\Gamma_{\beta},\Gamma_{\epsilon};\bar{X})\\ =&-\frac{1}{2}\ln|\Omega|-\frac{1}{2}\text{vec}(Y- \Upsilon\zeta-\delta X\beta)^{T}\Omega^{-1}\text{vec}(Y-\Upsilon\zeta-\delta X \beta)\\ =&-\frac{n}{2}\ln|\Gamma_{\epsilon}|-\frac{1}{2}( \bar{Y}-\bar{\Upsilon}\bar{\zeta}-\delta\tilde{X}\bar{\beta})^{T}(\Gamma_{ \epsilon}^{-1}\otimes I_{n})(\bar{Y}-\bar{\Upsilon}\bar{\zeta}-\delta\tilde{X }\bar{\beta})\\ &-\frac{p-2}{2}\ln|\Gamma_{\beta}|-\frac{1}{2}\bar{\beta}^{T}( \Gamma_{\beta}^{-1}\otimes I_{p-2})\bar{\beta},\end{split} \tag{25}\]
where \(\bar{Y}=\text{vec}Y,\bar{X}=I_{2}\otimes X,\bar{\beta}=\text{vec}(\beta)\) are the same transformations as in the previous section, and \(\bar{\Upsilon}=I_{2}\otimes\Upsilon\in\mathbb{R}^{2n\times 2q},\bar{\zeta}=\text{vec}(\zeta)\). Then the posterior distribution of \(\bar{\beta}\) is \(\mathcal{N}(\bar{\beta}|\mu_{\bar{\beta}},\Sigma_{\bar{\beta}})\), where
\[\Sigma_{\bar{\beta}}^{-1}=\delta^{2}\Gamma_{\epsilon}^{-1}\otimes X^{T}X+ \Gamma_{\beta}^{-1}\otimes I_{p-2},\]
\[\mu_{\bar{\beta}}=(\delta^{2}\Gamma_{\epsilon}^{-1}\otimes X^{T}X+\Gamma_{ \beta}^{-1}\otimes I_{p-2})^{-1}\delta(\Gamma_{\epsilon}^{-1}\otimes X^{T})( \bar{Y}-\bar{\Upsilon}\bar{\zeta}).\]
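A direct transcription of these two posterior moments is shown below (Python; the dimensions and inputs are illustrative, and the Kronecker products are formed explicitly only for clarity, whereas an efficient implementation would exploit their structure).

```python
import numpy as np

def posterior_beta(Y, X, Upsilon, zeta, Gamma_beta, Gamma_eps, delta=1.0):
    """E-step: posterior mean and covariance of vec(beta) in the expanded FLAG-CA model."""
    pm2 = X.shape[1]                                    # pm2 = p - 2
    Ge_inv = np.linalg.inv(Gamma_eps)
    Gb_inv = np.linalg.inv(Gamma_beta)
    resid = (Y - Upsilon @ zeta).flatten(order="F")     # vec(Y - Upsilon zeta)
    Sigma_inv = delta**2 * np.kron(Ge_inv, X.T @ X) + np.kron(Gb_inv, np.eye(pm2))
    Sigma = np.linalg.inv(Sigma_inv)
    mu = Sigma @ (delta * np.kron(Ge_inv, X.T) @ resid)
    return mu, Sigma

rng = np.random.default_rng(3)
n, pm2, q = 40, 6, 2
Y, X, Ups = rng.normal(size=(n, 2)), rng.normal(size=(n, pm2)), rng.normal(size=(n, q))
mu, Sigma = posterior_beta(Y, X, Ups, np.zeros((q, 2)), np.eye(2), 0.5 * np.eye(2))
print(mu.shape, Sigma.shape)                            # (12,) (12, 12)
```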
In the E-step, the expectation of complete-data log-likelihood in Equation 25 is taken with respect to \(\beta\), given the parameters from last iteration, as
\[\mathcal{Q}(\Omega|\Omega_{old})= -\frac{n}{2}\text{log}|\Gamma_{\epsilon}|-\frac{p-2}{2}\text{log}| \Gamma_{\beta}|-\frac{1}{2}\{(\bar{Y}-\bar{\Upsilon}\bar{\zeta}-\delta\bar{X} \mu_{\bar{\beta}})^{T}(\Gamma_{\epsilon}^{-1}\otimes I_{n})(\bar{Y}-\bar{ \Upsilon}\bar{\zeta}-\delta\bar{X}\mu_{\bar{\beta}}) \tag{26}\] \[+\delta^{2}\text{tr}[(\Gamma_{\epsilon}^{-1}\otimes X^{T}X) \Sigma_{\bar{\beta}}]\}-\frac{1}{2}\{\mu_{\bar{\beta}}^{T}(\Gamma_{\beta}^{-1} \otimes I_{p-2})\mu_{\bar{\beta}}+\text{tr}[(\Gamma_{\beta}^{-1}\otimes I_{p- 2})\Sigma_{\bar{\beta}}]\}\] \[= -\frac{n}{2}\text{log}|\Gamma_{\epsilon}|-\frac{p-2}{2}\text{log} |\Gamma_{\beta}|\] \[-\frac{1}{2}\text{tr}\Big{[}(\Gamma_{\epsilon}^{-1}\otimes I_{n} )[(\bar{Y}-\bar{\Upsilon}\bar{\zeta}-\delta\bar{X}\mu_{\bar{\beta}})(\bar{Y}- \bar{\Upsilon}\bar{\zeta}-\delta\bar{X}\mu_{\bar{\beta}})^{T}+\delta^{2}\tilde {X}\Sigma_{\bar{\beta}}\tilde{X}^{T}]\Big{]}\] \[-\frac{1}{2}\text{tr}\Big{[}(\Gamma_{\beta}^{-1}\otimes I_{p-2}) (\mu_{\bar{\beta}}\mu_{\bar{\beta}}^{T}+\Sigma_{\bar{\beta}})\Big{]}\] \[= -\frac{n}{2}\text{log}|\Gamma_{\epsilon}|-\frac{p-2}{2}\text{log} |\Gamma_{\beta}|\] \[-\frac{1}{2}\text{tr}\Big{[}\Gamma_{\epsilon}^{-1}\begin{pmatrix} \text{tr}[S_{11}]&\text{tr}[S_{12}]\\ \text{tr}[S_{21}]&\text{tr}[S_{22}]\end{pmatrix}\Big{]}-\frac{1}{2}\text{tr} \Big{[}\Gamma_{\beta}^{-1}\begin{pmatrix}\text{tr}[W_{11}]&\text{tr}[W_{12}]\\ \text{tr}[W_{21}]&\text{tr}[W_{22}]\end{pmatrix}\Big{]},\]
where \(S=(\bar{Y}-\bar{\Upsilon}\bar{\zeta}-\delta\bar{X}\mu_{\bar{\beta}})(\bar{Y}- \bar{\Upsilon}\bar{\zeta}-\delta\bar{X}\mu_{\bar{\beta}})^{T}+\delta^{2}\tilde {X}\Sigma_{\bar{\beta}}\tilde{X}^{T}=\begin{pmatrix}S_{11}&S_{12}\\ S_{21}&S_{22}\end{pmatrix}\),
\(W=\mu_{\bar{\beta}}\mu_{\bar{\beta}}^{T}+\Sigma_{\bar{\beta}}=\begin{pmatrix}W _{11}&W_{12}\\ W_{21}&W_{22}\end{pmatrix}\).
In the M-step, the parameters \(\delta,\Gamma_{\beta},\Gamma_{\epsilon}\) are updated similarly, with the only difference being that, when adjusting for covariates, \(\bar{Y}\) is replaced by its mean-effect-offset version \((\bar{Y}-\bar{\Upsilon}\bar{\zeta})\) and an extra update for \(\zeta\) is added. The revised PX-EM algorithm for FLAG-CA is summarized in Appendix 7.
## 4 Numerical Examples
In this section, the proposed methods are evaluated using various simulation settings and real data applications.
### Simulation Studies
The critical advantage of FLAG is its ability to perform statistical inference on each entry in the precision matrix, which quantifies the uncertainty associated with each edge. To verify the effectiveness of false discovery rate (FDR) control for graph recovery, a simple simulation setting is designed with \(p=50\), \(n=300\), and randomly generated nonzero entries of value \(0.15\), with the nonzero proportion \(\pi\) varying over \(\{0.1,0.15,0.2,0.3,0.4,0.5,0.6,0.7\}\). The results from FLAG are compared with two methods that support statistical inference and FDR control: ANT and GGM estimation with false discovery rate control (GFC, Liu (2013)).
As shown in Figure 1, the FDR is controlled effectively by FLAG, while the FDRs of ANT and GFC are out of control when the nonzero proportion exceeds \(0.5\).
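For reference, a generic way to turn the edge-wise p-values produced by FLAG into an FDR-controlled graph estimate is the Benjamini-Hochberg procedure sketched below (Python; shown only as one standard choice, since the specific FDR procedure is not restated here).

```python
import numpy as np

def bh_select(pvals, q=0.1):
    """Benjamini-Hochberg step-up rule: boolean mask of edges declared nonzero at FDR level q."""
    p = np.asarray(pvals, float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True                     # reject the k smallest p-values
    return mask

pvals = np.array([1e-5, 0.003, 0.04, 0.2, 0.6, 0.9])
print(bh_select(pvals, q=0.1))                 # only the smallest p-values are selected
```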
#### 4.1.1 Block Magnified Matrix
To investigate the sensitivity of the methods to data scaling, unscaled data and scaled data (each column with variance 1) are used with the different methods for comparison. Since the estimated precision \(\hat{\theta}\) from the same method may differ depending on whether the data is scaled or not, the estimated partial correlation \(\rho_{ij}=-\frac{\theta_{ij}}{\sqrt{\theta_{ii}\theta_{jj}}}\) is used for comparison.
The ground truth is a block magnified matrix \(\Theta=\begin{pmatrix}\alpha_{1}\Theta_{0}&0&0\\ 0&\alpha_{2}\Theta_{0}&0\\ 0&0&\alpha_{3}\Theta_{0}\end{pmatrix}\), where \((\alpha_{1},\alpha_{2},\alpha_{3})=(1,5,25)\). The simulated submatrix \(\Theta_{0}\) has all diagonal elements equal to one, and its off-diagonal elements are non-zero with probability \(\pi=0.05\). The non-zero off-diagonal elements are sampled from \(\{0.2,0.4\}\). Under this simulation setting, all the non-zero partial correlations are on the same scale, with magnitudes in \(\{0.2,0.4\}\), which makes comparison easier.
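The construction of this ground truth, and the fact that the blocks share the same partial correlations, can be sketched as follows (Python; the sizes, seed, and positive-definiteness retry are illustrative details).

```python
import numpy as np

def block_magnified_precision(p0=16, alphas=(1, 5, 25), pi=0.05, values=(0.2, 0.4), seed=0):
    """Block-diagonal precision with the same base block Theta0 scaled by each alpha_k."""
    rng = np.random.default_rng(seed)
    Theta0 = np.eye(p0)
    for i in range(p0):
        for j in range(i + 1, p0):
            if rng.random() < pi:
                Theta0[i, j] = Theta0[j, i] = rng.choice(values)
    if np.min(np.linalg.eigvalsh(Theta0)) <= 0:            # rare: redraw if not positive definite
        return block_magnified_precision(p0, alphas, pi, values, seed + 1)
    K = len(alphas)
    Theta = np.block([[alphas[k] * Theta0 if k == l else np.zeros((p0, p0))
                       for l in range(K)] for k in range(K)])
    return Theta

Theta = block_magnified_precision()
d = np.sqrt(np.diag(Theta))
partial_corr = -Theta / np.outer(d, d)     # rho_ij = -theta_ij / sqrt(theta_ii theta_jj)
# scaling by alpha_k cancels, so nonzero off-diagonal partial correlations have magnitude 0.2 or 0.4
print(sorted(set(np.round(np.abs(partial_corr[np.triu_indices_from(Theta, k=1)]), 2))))
```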
Figure 2 shows the results, with the estimates from the centered data on the x-axis and those from the scaled data on the y-axis; points are expected to lie along the diagonal line. The estimated partial correlation of FLAG is not sensitive to data scaling, compared to CLIME, GLasso, Hub GLasso, and De-sparsified GLasso. Specifically, the penalty parameter \(\lambda\) of the GLasso method, tuned by 10-fold cross validation, is 0.063 for the centered data and 0.158 for the scaled data. This indicates different levels of sparsity in the estimated matrices depending on whether the input data is scaled or not. In the GLasso subfigure, the data points located on the x-axis or y-axis represent entries that are zero in one setting and nonzero in the other.
Methods with regularization of the precision matrix are particularly fragile when the entries in the precision matrix are of different scales. Specifically, when given unscaled data, such methods produce false positives in the region with relatively smaller magnitudes of entries, and false negatives in the region with relatively larger magnitudes of entries. Both the estimation error and the recovery performance of our method are not sensitive to data scaling, and they are comparable to the outcomes of the well-performing methods in this block-magnified matrix setting.
**Hub Structure** The ground truth for the precision matrix is the adjacency matrix of a weighted graph with a hub structure, where a hub is a node that connects with many other nodes, having a degree that exceeds the average Barabasi (2013). The hub structure exists
Figure 1: False discovery rate of graph recovery by FLAG, ANT, and GFC as the nonzero proportion varies.
\begin{table}
\begin{tabular}{l c c} \hline \hline Methods & Data centered & Data centered and scaled \\ \hline MLE & 0.859 & 0.859 \\ CLIME & 0.162 & 0.166 \\ FLAG & 0.178 & 0.178 \\ ANT & 0.157 & 0.157 \\ BGGM & 0.803 & 0.772 \\ GLasso & 0.246 & 0.193 \\ HubGLasso & 0.251 & 0.227 \\ DsGLasso & 0.628 & 0.574 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Relative Frobenius norm error of the estimated partial correlation matrix using different methods, with 100 replications.
Figure 2: Scatter plots of the estimated partial correlation using different methods, with each data point representing the result from the scaled data in Y versus the result from the centered data in X.
widely in real-world applications, such as the structural and functional connectivity hubs in the human brain Van den Heuvel and Sporns (2013), fragile financial instruments that can have a major impact on the financial market by influencing the prices of many related securities, and the source nodes of anomalous activity in the cyber security field Hero and Rajaratnam (2012).
The hub nodes in the ground truth are indexed by \(1,...,h\), where the number of hub nodes is smaller than the dimension, i.e., \(h<p\). The precision matrix can be split into blocks as \(\Theta=\begin{pmatrix}\Theta_{aa}&\Theta_{ab}\\ \Theta_{ba}&\Theta_{bb}\end{pmatrix}\), where \(a=\{1,...,h\}\) and \(b=\{h+1,...,p\}\). Specifically, \(\Theta_{aa}\) encodes the conditional dependence between hub nodes, \(\Theta_{ab}\) and \(\Theta_{ba}\) correspond to the edges between hub and non-hub nodes, and the dependencies between the non-hub nodes are in block \(\Theta_{bb}\).
The conditional Gaussian property gives \(\Theta_{ba}=-\beta\Theta_{aa}\), where \(\Theta_{ba}\in\mathbb{R}^{(p-h)\times h}\), \(\beta\in\mathbb{R}^{(p-h)\times h}\), and \(\Theta_{aa}\in\mathbb{R}^{h\times h}\). Once \(\Theta_{aa}\) and \(\beta\) are generated, the true \(\Theta_{ba}\) can be obtained through multiplication. According to the definition of a hub in a graph, each hub node has many connections with other nodes, and thus \(\Theta_{ba}\) is required to have a large proportion of non-zero entries. To investigate whether the sparsity of the true \(\beta\) influences the precision estimation, the \(h=10\) hubs are separated into five pairs, and the columns in \(\beta\) that correspond to the hub nodes with odd indices are fully populated with non-zero elements, while the proportion of non-zero entries in the columns with even indices is varied across \(\{0.9,0.7,0.5,0.3,0.1\}\). The remaining block matrix \(\Theta_{bb}\), which denotes the relationships between non-hub nodes, is a relatively sparse matrix with a non-zero proportion of \(\pi=0.3\). Specifically, the diagonal elements of \(\Theta_{bb}\) are set to \(50\), and the non-zero elements are uniformly generated from \(U[3,5]\). In this simulation, the dimension is set to \(p=50\) and the sample size is \(n=200\).
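The generating recipe can be summarized in code as below (Python). Only the sparsity pattern of \(\beta\), the relation \(\Theta_{ba}=-\beta\Theta_{aa}\), and the construction of \(\Theta_{bb}\) follow the description above; the magnitudes used for \(\Theta_{aa}\) and \(\beta\) are illustrative placeholders, and positive definiteness of the assembled matrix should still be checked.

```python
import numpy as np

def hub_precision(p=50, h=10, seed=0):
    """Hub-structured precision assembled via Theta_ba = -beta @ Theta_aa."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-1, 1, size=(h, h))
    Theta_aa = (A + A.T) / 2 + h * np.eye(h)            # illustrative hub-hub block
    beta = rng.uniform(0.02, 0.05, size=(p - h, h))     # illustrative magnitudes
    # hub columns with odd (1-based) index fully dense; even columns increasingly sparse
    nonzero_prop = [1.0, 0.9, 1.0, 0.7, 1.0, 0.5, 1.0, 0.3, 1.0, 0.1]
    for k in range(h):
        beta[:, k] *= rng.random(p - h) < nonzero_prop[k]
    Theta_ba = -beta @ Theta_aa                          # conditional Gaussian relation
    Theta_bb = 50.0 * np.eye(p - h)                      # non-hub block: diag 50, U[3,5] off-diagonals
    for i in range(p - h):
        for j in range(i + 1, p - h):
            if rng.random() < 0.3:
                Theta_bb[i, j] = Theta_bb[j, i] = rng.uniform(3, 5)
    return np.block([[Theta_aa, Theta_ba.T], [Theta_ba, Theta_bb]])

print(np.min(np.linalg.eigvalsh(hub_precision())) > 0)  # should hold for these illustrative scales
```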
Figure 3(a) shows the true precision matrix and the estimated precision matrices using different methods. Edges involving the hub nodes, which correspond to entries in block matrices A and B, are colored in purple for positive values and green for negative values. Edges between non-hub nodes, which correspond to entries in block matrix C, are colored in brown. Some entries in the estimated matrices are gray, indicating that the estimated value is far away from the range of the true values.
In block A, which encodes the conditional dependencies between hub nodes, several methods, including MLE, CLIME, Hub GLasso, ANT, and BGGM, produce false positives. The non-zero entries in the block matrix \(\Theta_{aa}\) are underestimated by the GLasso, CLIME, Hub GLasso, and Desparsified GLasso methods, and overestimated by the BGGM method. In block B, which captures the edges between hubs and non-hub nodes, the results from the CLIME and Hub GLasso methods miss the majority of non-zero elements. In block C, whose non-zero entries indicate the conditional dependencies between non-hub nodes, several methods, including MLE, GLasso, Hub GLasso, and BGGM, produce inaccurate estimates of the diagonal elements. A large proportion of the estimates in the block matrix \(\Theta_{bb}\) from MLE and Desparsified GLasso fall far away from the true range. By contrast, FLAG performs well in both precision matrix estimation and graph recovery, producing estimates that fall within a similar range to the ground truth in all blocks, with fewer false positives. More detailed comparisons are provided in the following two parts, based on repeated experiments.
**Precision Matrix Estimation** Figure 3(b) shows the comparisons of the estimated precision between hub nodes as the sparsity of the coefficient \(\beta\) varies. MLE overestimates the precision values, while the penalized likelihood-based methods, including GLasso, CLIME, Hub GLasso, and Desparsified GLasso, underestimate them. The underestimation by the ANT method becomes more obvious as the non-zero proportion increases. To further explain this observation, a detailed comparison between the ANT and FLAG methods is conducted.
Figure 10 shows a detailed explanation of entries in the precision matrix, with varying sparsity
Figure 3: The results of precision estimation and graph recovery using different methods.
of intrinsic \(\beta\), and a comparison between FLAG and ANT. Based on the sparsity assumption assigned to \(\beta\) by ANT, \(\beta^{(ANT)}\) has many zero entries, which induces an underestimation of \(\text{var}(X\beta)\) and an overestimation of \(\text{var}(\epsilon)\). As a result, the estimated precision by ANT is underestimated, while FLAG can still estimate the precision accurately in this case.
Table 2 shows that FLAG has accurate estimation in the whole precision matrix, with a particularly pronounced advantage in submatrix A, which denotes the conditional dependence among the hub nodes.
**Graph Recovery** As shown in Figure 3(c), FLAG achieves the best graph recovery in blocks A and C, with an Area Under the ROC Curve (AUC) of 0.992 in the block determining edges between hub nodes and an AUC of 0.634 in the block of edges between non-hub nodes. It should be noted that all the entries in block B are non-zero in the ground truth, and therefore no false positives exist.
The False Discovery Rate (FDR) is well-controlled in the entire precision matrix as shown in the leftmost subplot of Figure 11. The actual FDR is relatively conservative in the whole precision matrix due to the dense connections between hubs and non-hub nodes, where the false discovery rate in the block B is zero in this setting. This observation is consistent with the findings of smaller actual FDR than controlled in graphs with hub structures, as reported in Liu (2013).
In conclusion, FLAG is the only method that performs well in both precision matrix estimation and graph recovery across all blocks, particularly for the edges between hubs, where it outperforms the other methods without any explicit assumption on the graph structure. In graphs with a hub structure, hub nodes are crucial components due to their numerous connections and greater influence on other nodes and on the graph as a whole. Consequently, the edges of hub nodes are more informative, and FLAG exhibits better performance than other methods in this setting.
#### 4.1.2 Multiple Graphs
For each graph, a cluster structure is constructed, and the corresponding precision matrix is a block diagonal matrix. The dimension for each group is \(p=20\), and the sample sizes for the two groups are \(n_{1}=100\), and \(n_{2}=200\). Within each cluster, all nodes are connected to the node with the smallest index to ensure the connectivity, and then the probability of the existence of edges between nodes other than that one is \(\pi=0.3\). The diagonal elements in the precision matrix are set as one, and other non-zero entries are set as 0.2 for easier comparison.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Methods & Precision Matrix & Block A & Block B & Block C \\ \hline MLE & 0.772 (0.005) & 0.492 (0.007) & 1.129 (0.009) & 0.771 (0.005) \\ GLasso & 0.504 (0.007) & 0.320 (0.006) & 0.610 (0.007) & 0.504 (0.007) \\ CLIME & 0.286 (0.001) & 0.606 (0.001) & 1 (0) & 0.279 (0.001) \\ HubGLasso & 0.664 (3e-4) & 0.549 (0.001) & 0.927 (0.001) & 0.663 (3e-4) \\ DsGLasso & 0.412 (0.001) & 0.452 (0.002) & 0.662 (0.002) & 0.411 (0.001) \\ ANT & 0.334 (0.001) & 0.181 (0.004) & 0.758 (0.003) & 0.331 (0.001) \\ BGGM & 1.067 (0.005) & 59.22 (0.687) & 5.300 (0.074) & 0.819 (0.007) \\
**FLAG** & **0.329** (0.001) & **0.160** (0.004) & 0.847 (0.003) & 0.325 (0.001) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Relative Frobenius norm error of estimated precision matrix using different methods, with 100 replications.
First, the entries of the partial correlation matrix in each group are estimated individually, followed by testing whether each entry is equal to zero or not, with the p-values of these tests collected.

Then, the entries that are zero in the ground truth of both groups are tested for whether the partial correlation of the same entry is equal across the two groups. For entries for which this hypothesis cannot be rejected, meta-analysis is applied, and the p-values for testing whether the entry after meta-analysis is zero are obtained. Similarly, entries that are non-zero in the ground truth of both groups are collected and tested following the same routine. The individual estimation and inference of the partial correlations shows large power, as the points deviate away from the diagonal line, with the power in group 2 being larger since it has more samples. For the results after meta-analysis, the power exceeds that of a single group, with the enhancement of power being more obvious for group 1.

FLAG-Meta has larger power, better graph recovery, smaller standard errors for each entry, and smaller estimation error for the whole precision matrix. The improvement is more obvious in group 1, which has the smaller sample size.
Figure 4: The comparison of statistical inference of precision matrix and graph recovery between FLAG-based and GLasso-based methods. The sample size is 80 for Group 1 and 120 for Group 2.
### Real Data Analysis
#### 4.2.1 Human Brain Gene Expression Data
We apply the FLAG method to the spatio-temporal gene expression data of the human brain Kang et al. (2011). Seven high-confidence Autism spectrum disorder (ASD) genes (GRIN2B, DYRK1A, ANK2, TBR1, POGZ, CUL3, SCN2A) Willsey et al. (2013) are selected to analyze the co-expression network among ASD-related genes. The data from periods 1 and 2, which correspond to the early stages of brain development, as well as the groups with sample sizes smaller than three, are all excluded Lin et al. (2017).
Data are integrated into several groups over seven time periods and four brain regions. Our aim is to discover how the conditional dependence among ASD-related genes changes over time or across regions. The time periods are as follows: 1) Early fetal: [10PCW, 19PCW); 2) Late fetal: [19PCW, 38PCW); 3) Infancy: [0M, 12M); 4) Childhood and adolescence: [1Y, 20Y]; 5) Young adulthood: [20Y, 40Y); 6) Middle adulthood: [40Y, 60Y); 7) Late adulthood: age\(\geq\)60Y. The brain regions are: 1) Parietal lobe, Occipital lobe, Temporal lobe; 2) Frontal lobe; 3) Striatum, Hippocampus, Amygdala; 4) Thalamus, Cerebellum.
To compare the results of different methods in this dataset, we use the group in period 13 and region 2, which has a relatively large sample size of 85, as shown in Figure 13. As the dimension equals seven and the sample size equals 85, the maximum likelihood estimator, i.e., inverse sample covariance as estimated precision matrix, is a good reference for estimation. The estimation from the CLIME method shows less magnitude than the reference. The magnitude of estimated precision and partial correlation of the gene pair (DYRK1A, TBR1) from the ANT method is about half of the reference's, while the estimation through FLAG method equals the reference's. The reason for such underestimation from the ANT method is similar to what we observe in the simulation, where the large zero proportion (80%) in \(\beta^{(\text{ANT})}\) induces a smaller \(\text{var}(X\beta)\), and a larger \(\text{var}(\epsilon)\) (0.386 by ANT and 0.316 by FLAG), resulting in a smaller estimated precision (0.41 by ANT and 1.01 by FLAG) and a smaller magnitude of partial correlation (-0.14 by ANT and -0.29 by FLAG). In addition, due to the underestimation of precision, the inferred graph from the ANT method omits the edge between DYRK1A and TBR1. The red lines in the graphs from the FLAG and ANT methods indicate edges of great significance, with a p-value of test smaller than 0.05 after Bonferroni correction, and blue lines indicate the significant edges after controlling the False Discovery Rate (FDR) to be smaller than 0.1.
Figure 5 shows the temporally varying pattern of conditional dependence between ASD-related genes in each row, and the spatial variations in each column. The edges inferred with Bonferroni correction are denoted in red, and those with FDR \(\leq\) 0.1 in blue. The thickness of an edge is weighted by the magnitude of its partial correlation. As an example of spatial variation in time period 6-7, the gene pair (DYRK1A, CUL3) has a precision of -1.16, -1.12, -0.96, -2.33, and a partial correlation of 0.37, 0.36, 0.39, 0.57, in regions 1, 2, 3, 4, respectively. In period 6-7, the conditional dependence between this pair exists in all regions, and their partial correlation is consistent across the first three regions, while it is higher in region 4. Moreover, there are many edges involving the gene DYRK1A, which is evident in the graphs of region 2, where the edge of the pair (DYRK1A, ANK2) exists in almost all the periods except period 14. This finding is supported by the evidence that DYRK1A, as a protein kinase, plays an important role in the signaling pathway regulating cell proliferation and may be involved in brain development Di Vona et al. (2015).
As shown in Figure 6, the partial correlations estimated by different methods are compared in the upper subfigure. Using the maximum likelihood estimation with a relatively large sample size as a reference, the estimation by FLAG is quite similar to the reference's, and thus accurate. However, the GLasso method shrinks some precision entries to zero, the estimation from the CLIME method has a smaller magnitude, and in period 10-12 the ANT method gives a non-zero estimate that disagrees with all the other methods. These cases, with the GLasso method having some false negatives, the CLIME method underestimating the magnitude of precision and partial correlation, and the ANT method producing some inaccurately estimated entries, are consistent with what we observed in the simulation studies.
#### 4.2.2 University Webpage Data
The webpage dataset was collected from the computer science departments of four universities by the World Wide Knowledge Base (Web-\(>\)KB) project of the Carnegie Mellon University (CMU) Text Learning Group, with pages manually classified into several categories. The raw data have been pre-processed by Cachopo et al. (2007) with word stemming. The occurrences of terms in 544 student webpages, 374 faculty webpages, 310 course webpages, and 168 project webpages are used in the following analysis.
First, the word count of the \(i\)-th term in the \(j\)-th webpage is denoted as \(f_{i,j}\), which is used to calculate the following relative frequency of terms in each document (webpage). The Document-Term Matrix (DTM) weighting for terms in \(D\) documents is the product of local and global weights, i.e., \(x_{i,j}=L_{i,j}G_{j}\), where the log local weight is \(L_{i,j}=\log(f_{i,j}+1)\), and the entropy global weight is \(G_{j}=1+\frac{\Sigma_{i}p_{i,j}\log p_{i,j}}{\text{D}}\), with \(p_{i,j}=\frac{f_{i,j}}{gf_{j}}\). The 100 terms with the largest entropy \(-\Sigma_{i}p_{i,j}\log p_{i,j}\) are selected for the following analysis.
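A small sketch of this weighting is given below (Python). It follows the usual log-entropy convention, with the global weight computed per term and the entropy sum normalized by \(\log D\); this is an illustrative reading of the formula above rather than the authors' exact preprocessing code.

```python
import numpy as np

def log_entropy_weights(F):
    """F[t, d]: raw count of term t in document d. Returns the weighted DTM and per-term entropy."""
    F = np.asarray(F, dtype=float)
    D = F.shape[1]
    gf = F.sum(axis=1, keepdims=True)                        # global frequency of each term
    P = np.divide(F, gf, out=np.zeros_like(F), where=gf > 0)
    plogp = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    G = 1.0 + plogp.sum(axis=1) / np.log(D)                  # entropy global weight, one per term
    X = np.log(F + 1.0) * G[:, None]                         # x_{t,d} = L_{t,d} * G_t
    entropy = -plogp.sum(axis=1)                             # used to pick the highest-entropy terms
    return X, entropy

F = np.array([[3, 0, 1, 2], [1, 1, 1, 1], [0, 5, 0, 0]])
X, H = log_entropy_weights(F)
print(np.round(H, 3))                                        # the evenly spread term has the largest entropy
```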
Standardization is to scale data with zero mean and unit variance, but different operations may lead to different outcomes. The document-term matrix weighting is denoted as \(X\). Specifically, when the raw count matrix is from all webpages, the weighting matrix is \(X^{(all)}\), and when the webpages of a single category are split to be preprocessed, the weighting matrix is \(X^{(student)},X^{(faculty)},X^{(course)}\), and \(X^{(project)}\). It is obvious that using all webpages or that
Figure 5: Inferred graphs by FLAG, arranged by region 1,2,3,4 in different rows and time periods in different columns.
Figure 6: Partial correlation estimated by different methods between gene pair (GRIN2B, POGZ) using the expression data in brain region 2.
from each category separately will lead to different weighting due to different term frequencies. Thus, standardizing \(X^{(all)}\) and then taking the corresponding lines for each category is different from standardizing the weights of each category separately. Even when the data are on the same scale, methods with parameters to be tuned, such as CLIME, GLasso, Hub GLasso, and Desparsified GLasso, still give unstable results when the data standardization differs, while FLAG preserves stable results, as shown in Figure 14. After this comparison, the data standardization is fixed to centering and scaling the data from each category separately in the following analysis.
When taking single category data as input, four inferred graphs by FLAG can be obtained. The common edges in the graphs of all four categories are standard phrases in computer science websites, such as ('comput','scienc'), ('home', 'page'), ('high', 'perform'), and ('commun', 'network'). The corresponding precision and partial correlation are far away from zero, and the p-values of tests are much smaller than \(1e-4\). Compared with the results obtained by the ANT method, there are some standard phrases that are omitted by ANT but successfully identified by FLAG. For example, the common phrase'related area' links the term pair ('relat', 'area'). However, the result from ANT underestimates its precision and fails to identify this edge in the course category data. More precisely, the estimated precision and partial correlation of this pair by ANT are 0.13 and -0.06, respectively, while the estimates are 0.52 and -0.22 by FLAG. This situation is consistent with our finding in the simulation that the underestimation of precision by ANT comes from a large zero proportion (80.6%, 79.6%) in the \(\beta\), which induces smaller \(\text{var}(X\beta)\) and larger \(\text{var}(\epsilon)\) ((0.46,0.62) by ANT and (0.36,0.53) by FLAG), and thus leads to smaller estimated precision.
The graphs inferred by FLAG can capture different conditional dependencies in different categories. Taking the term 'data' as an example, the edge ('data', 'educ') in the student category is significant, with a precision of -0.23 and a p-value of \(5e-4\) for the corresponding hypothesis test. The edge ('data', 'structur') has a p-value of \(7e-10\) in the faculty category and \(2e-10\) in the course category. The edge ('data', 'model') in the project category has a p-value of \(3e-5\). The estimated precision and partial correlation have relatively large standard errors due to the small sample size in the project category, which can be alleviated by meta-analysis.
In addition to graph recovery in each category, the inference can be extended to test the differences of the partial correlation of the same pair across different categories, as shown in Table 3, with the null hypothesis \(\rho^{(\text{category A})}-\rho^{(\text{category B})}=0\). Specifically, the test of the pair ('data', 'model') between the project category and the faculty category rejects the null hypothesis, indicating that the partial correlation is significantly different in these
Table 3: Edge differences of term pairs (‘data’, ‘structure’) and (‘data’, ‘model’) across categories.
two categories.
The results from the project category, due to its small sample size, have relatively large standard errors in the estimated precision and partial correlation, and thus its inferred graph has few edges due to relatively small power. Since all data are from the terms on the webpages of computer science departments in universities, it is natural to leverage their shared properties to enhance the result in the project category. In order to obtain the result after one-to-one meta-analysis and identify how each category contributes to the enhancement, each category is used for meta-analysis in ascending order of sample size: course, faculty, and student. The whole procedure is shown in Figure 7. In each step, the data from one category is combined with the previous result, shown in grey, and the edges that are only detected after meta-analysis are in red. Blue dotted lines denote the edges that are shown in the previous result but are not significant in the result of the meta-analysis.
The first meta-analysis is between the project and course categories. Compared with the graph inferred based only on project data, 61 edges are added. The pairs ('engin', 'software') and ('language', 'implement'), whose dependencies are supported by the common phrase 'software engineering' in the computer science field and by the concurrence of the related words 'implement' and (programming) 'language', are not found from data in a single category but are discovered by meta-analysis between the project and course data. The next step is the meta-analysis between the result of the meta-analysis of project and course and the result from the faculty category. As shown in Figure 7(b), 46 edges are added in red, while 5 edges are removed as blue dotted lines. The meta-analysis in this stage not only further enlarges the power but also detects some possible false positives like ('high', 'select') and ('area', 'project'). Overall, the meta-analysis provides a result for the project category with smaller standard error and larger power.
For comparison, consider the result shown in Figure 7(c), which achieves many-to-one meta-analysis with respect to the project category: the only edge of the node 'data' based on the single-category project data is with 'model', while the edges in the result of FLAG-Meta are with 'model', 'structur', and 'research'.
From Figure 15, the joint GLasso fails to recover such reasonable edges of 'data' with 'structur' and 'research', and it involves some false positives like edges between 'data' and 'class', 'develop', 'program'.
#### 4.2.3 U.S. Stock Prices
The raw data consists of daily close prices of 99 stocks in the S&P100 Index from 2018-01-02 to 2022-06-30. The stock with the code 'DOW' in the S&P 100 Index is excluded due to its start time being on 2019-03-21, with missing data of more than 14 months.
It is preprocessed by taking the logarithmic difference, \(Z_{i,j}=\log P_{i,j}-\log P_{i-1,j}\), where \(P_{i,j}\) is the close price of the \(j\)-th stock on the \(i\)-th day. The log return is used as the input data in the following analysis, where the outcome is a perceived network Anufriev and Panchenko (2015), and the conditional dependence in the stock network is the return co-movements.
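This preprocessing step is a one-liner on the price matrix, sketched below for completeness (Python; the toy prices are illustrative).

```python
import numpy as np

def log_returns(P):
    """P[i, j]: close price of stock j on day i; returns Z[i, j] = log P[i, j] - log P[i-1, j]."""
    return np.diff(np.log(np.asarray(P, dtype=float)), axis=0)

prices = [[100.0, 50.0], [101.0, 49.5], [99.0, 50.5]]
print(np.round(log_returns(prices), 4))
```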
\begin{table}
\begin{tabular}{l l l l} \hline \hline Pair of Terms & \(\rho^{\text{(project)}}\) & \(\rho^{\text{(course)}}\) & \(\rho^{\text{(meta)}}\) \\ \hline (’engin’, ’software’) & 0.32 (0.09) & 0.19 (0.06) & 0.24 (0.05) \\ (’language’, ’implement’) & 0.30 (0.08) & 0.17 (0.06) & 0.21 (0.05) \\ (’assist’, ’support’) & 0.24 (0.09) & 0.20 (0.06) & 0.21 (0.05) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Example of edges missed by FLAG given the data from individual groups, but unveiled by FLAG-Meta.
Figure 7: A many-to-one meta-analysis using the FLAG-Meta method, with project category as the pivot for progressive analysis.
Due to the small variance of the log return, which is around \(e^{-4}\), the precision is about \(e^{4}\) to \(e^{5}\). Such a large magnitude increases instability in the estimation of precision, as well as the partial correlation. From Figure 16, it can be observed that the estimated partial correlation from FLAG is the least sensitive to data scaling, as the scattered points are most tightly clustered around the diagonal line, indicating that the FLAG method provides more consistent results across different scales of input data. In contrast, the results from regularization-based methods such as CLIME, GLasso, HubGLasso, and DsGLasso heavily rely on the penalty parameter. It is evident that the tuned parameters will vary widely depending on whether the input data is log return or scaled data. On the one hand, the two types of tuned parameters have no correspondence, and thus the results also vary greatly by each penalty-based method. On the other hand, given two results from the same method, when the input data is scaled or not, it is difficult to determine which result to use, even though they are expected to reveal the same underlying structure from the data.
Inspired by Bernardini et al. (2022), we are also interested in whether and how the S&P100 stock co-movement network shows the impact of the Covid-19 pandemic, by using a rolling window of one-year length, shifted by one month in each step, as the input data. Recalling the stock market crash in 2020, there were trading curbs on March \(9^{th}\), March \(12^{th}\), March \(16^{th}\), and March \(18^{th}\), which occurred 25 years after the previous one in 1997. Such stock market crashes imply increased instability in the market due to the Covid-19 pandemic.
A large complex system transitions from stable to unstable once its connectance increases beyond a critical level, as suggested by Gardner and Ashby (1970). It is common knowledge in the financial market that the correlations between securities, whether marginal or partial, increase significantly during market crises, just as the prices of most securities drop together with negative returns. Therefore, it is natural to use the stock network, whose edges are weighted by (partial) correlation, to evaluate the stability of the market.
The stability of a system is quantified using the connectance \(C\) and the average interaction strength \(\alpha\): the system is stable when \(\alpha^{2}nC<1\), where \(n\) is the number of variables in the system, and unstable when \(\alpha^{2}nC>1\), as proposed in May (1972). The May-Wigner stability theorem has been applied to evaluate stock network stability in Heiberger (2014), with the stability condition \(m=\sqrt{nC}\alpha<1\), where \(n\) is the number of stocks, i.e., the size of the network, the connectance \(C\) is the density of connections, and the average interaction strength \(\alpha\) equals the average strength among nodes, with the weighted degree of a node as its strength. For the estimated precision matrices, partial correlations, and inferred graphs of the Gaussian graphical model, the edges are weighted by the magnitude of partial correlation for fair comparison. The stability indicator \(m=\sqrt{nC}\alpha\) is calculated for the different methods given the data in the rolling window along time.
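The indicator is straightforward to compute once a weighted graph is available. The sketch below (Python; the random inputs and the threshold defining the edge set are illustrative) weights edges by the absolute partial correlation and follows the definitions above for the connectance \(C\), the average strength \(\alpha\), and \(m=\sqrt{nC}\,\alpha\).

```python
import numpy as np

def stability_indicator(partial_corr, edge_mask=None):
    """May-Wigner style indicator m = sqrt(n*C) * alpha for a network weighted by |partial correlation|."""
    R = np.abs(np.asarray(partial_corr, dtype=float))
    n = R.shape[0]
    np.fill_diagonal(R, 0.0)
    W = np.where(edge_mask, R, 0.0) if edge_mask is not None else R
    n_edges = np.count_nonzero(np.triu(W, k=1))
    C = n_edges / (n * (n - 1) / 2)             # connectance: density of connections
    strength = W.sum(axis=1)                    # weighted degree of each node
    alpha = strength.mean()                     # average interaction strength
    return np.sqrt(n * C) * alpha

rng = np.random.default_rng(4)
R = rng.uniform(-0.05, 0.05, size=(20, 20)); R = (R + R.T) / 2
print(stability_indicator(R, edge_mask=np.abs(R) > 0.03))
```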
In Figure 8, the stability of the graphs estimated or inferred by different methods is shown as different lines, with each point on a line representing the stability calculated using the most recent one-year market data. For instance, the point at time '2020-04' uses the log-returns from [2019-04-01, 2020-04-01) as input. Recall the May-Wigner stability theorem, which states that the system is stable when \(m<1\) and unstable when \(m>1\). Among the methods shown, FLAG is the only method whose outcome correctly oscillates around the reference line at one, while the result from GLasso is not shown because its magnitude of \(m\) is too large compared with the other methods and the reference value one. The stability evaluated by the FLAG method increases significantly from February 2020 to April 2020, which closely matches the crashes in the U.S. stock market from 20 February 2020 to 7 April 2020. After this period, FLAG detects that the market stabilizes from March 2021. However, the stability calculated from the results of ANT increases dramatically from March 2020 to April 2020 and decreases dramatically from March 2021 to April 2021, indicating that the results are dominated by the data of March 2020 when it is included. This scenario implies the vulnerability of the results from the ANT method. Regarding the point at time '2021-03', the aim is to evaluate the stability of the market in the recent period, but the stability indicator equals 2.36 when the input is the most recent 12 months of data and 1.24 when it is the most recent 11 months, making it difficult to determine which result to trust. The results from BGGM are roughly twice the expected value, although the trend roughly matches that of the real market. According to the simulation studies, BGGM overestimates the magnitude of the precision matrix and the partial correlation, and such overestimation is propagated to the strength of nodes as the sum of weighted edges, to the strength of the network, and to the stability. The results from the CLIME method are too flat to reflect the dynamic pattern of the market. In conclusion, FLAG can successfully detect the impact of the Covid-19 pandemic on the US stock market with the proper magnitude of stability.
The many-to-one meta-analysis is conducted between the results from data in 2021 and that from the other groups (data in 2019 and 2020), compared with the joint group GLasso, with the subgraph of node 'PYPL' as an example. The results from the joint group graphical lasso
Figure 8: The stability \(m\) of the stock network obtained from different methods, using a rolling window of one-year length shifted by one month.
Figure 9: The inferred subgraphs around the stock ’PYPL’ in the year 2021, using the JGL and FLAG-Meta methods.
vary widely depending on the threshold of the estimated precision values, making it difficult to determine the optimal threshold especially in real data. The results from FLAG-Meta have larger power compared with the results estimated from single-year data.
## 5 Discussion
The Flexible and Accurate Gaussian graphical model (FLAG) method aims to estimate precision matrix entries accurately and efficiently, and further quantifies the uncertainty of each entry, which allows better leveraging of the common structure across different groups through meta-analysis. FLAG makes no explicit structural assumptions on the precision matrix or the corresponding graphs, making it tuning-free. Its capability of element-wise inference allows extension to multiple graphs with small computational cost, making it highly flexible.
Simulation studies in three different settings show that FLAG is not sensitive to data scaling, unlike other methods that require tuning parameters. FLAG is particularly suitable for data with a hub structure, where it outperforms other methods, especially for the edges between hubs, even when the non-zero proportion of the underlying coefficients is varied. FLAG can make inferences to test each edge individually and adjust partial correlation and precision values after combining the entries that share a common structure across groups, achieving smaller standard error and larger power. FLAG is accurate, with a small relative error and a large area under the ROC curve in the simulation studies.
FLAG is capable of unveiling the co-expression relationships between genes in the brain across time and region, identifying the associations between terms in the webpage data from different categories, and revealing the relationships between stocks in the S&P100, with the impact of Covid-19 on stability captured well.
2309.12279 | The Broad Impact of Feature Imitation: Neural Enhancements Across
Financial, Speech, and Physiological Domains | Initialization of neural network weights plays a pivotal role in determining
their performance. Feature Imitating Networks (FINs) offer a novel strategy by
initializing weights to approximate specific closed-form statistical features,
setting a promising foundation for deep learning architectures. While the
applicability of FINs has been chiefly tested in biomedical domains, this study
extends its exploration into other time series datasets. Three different
experiments are conducted in this study to test the applicability of imitating
Tsallis entropy for performance enhancement: Bitcoin price prediction, speech
emotion recognition, and chronic neck pain detection. For the Bitcoin price
prediction, models embedded with FINs reduced the root mean square error by
around 1000 compared to the baseline. In the speech emotion recognition task,
the FIN-augmented model increased classification accuracy by over 3 percent.
Lastly, in the CNP detection experiment, an improvement of about 7 percent was
observed compared to established classifiers. These findings validate the broad
utility and potency of FINs in diverse applications. | Reza Khanmohammadi, Tuka Alhanai, Mohammad M. Ghassemi | 2023-09-21T17:40:44Z | http://arxiv.org/abs/2309.12279v1 | The Broad Impact of Feature Imitation: Neural Enhancements Across Financial, Speech, and Physiological Domains
###### Abstract
Initialization of neural network weights plays a pivotal role in determining their performance. Feature Imitating Networks (FINs) offer a novel strategy by initializing weights to approximate specific closed-form statistical features, setting a promising foundation for deep learning architectures. While the applicability of FINs has been chiefly tested in biomedical domains, this study extends its exploration into other time series datasets. Three different experiments are conducted in this study to test the applicability of imitating Tsallis entropy for performance enhancement: Bitcoin price prediction, speech emotion recognition, and chronic neck pain detection. For the Bitcoin price prediction, models embedded with FINs reduced the root mean square error by around 1000 compared to the baseline. In the speech emotion recognition task, the FIN-augmented model increased classification accuracy by over 3 percent. Lastly, in the CNP detection experiment, an improvement of about 7 percent was observed compared to established classifiers. These findings validate the broad utility and potency of FINs in diverse applications.
Reza Khanmohammadi\({}^{\dagger}\), Tuka Alhanai\({}^{\S}\), Mohammad M. Ghassemi\({}^{\dagger}\)
\({}^{\dagger}\)Computer Science and Engineering Department, Michigan State University
\({}^{\S}\)Department of Computer Engineering, New York University Abu Dhabi
Keywords: Feature Imitating Network, Bitcoin Price Prediction, Speech Emotion Recognition, Chronic Neck Pain
## 1 Introduction
Deep learning has established itself as a foundational technique across various applications, primarily due to its capability to learn complex patterns and relationships. One of the crucial aspects influencing the efficacy of deep learning models is the initialization of their weights. Proper weight initialization can lead to faster model convergence and enhanced performance [1]. While the reliance on large datasets and extensive computational resources is vital for determining feature quality and model versatility, correct initialization can offset some of the dependencies on these resources. This offset is especially crucial in domains with limited data and computational capabilities, underlining the importance of leveraging deep learning's potential without a heavy reliance on large datasets and extensive resources. To cater to such scenarios, FINs [2] offer an intuitive approach where neural networks are initialized to imitate specific statistical properties. By doing so, FINs provide a more informed starting point, making neural networks less opaque and offering a hint of interpretability in what is often dubbed a "black box." The beauty of FINs lies in their simplicity, allowing researchers to directly incorporate domain-specific knowledge into the model's architecture, fostering both efficacy and understandability.
### Contributions
While FINs have made significant strides in biomedical signal processing [2, 3, 4], their applicability in broader domains remains a topic of interest. In this work, we delve into the potential of FINs across three distinct areas: financial, speech, and Electromyography (EMG) time series analysis. Our research aims to demonstrate how integrating a lightweight FIN can enhance the performance of different neural network architectures, regardless of the task or network topology. By investigating their effects across different contexts, we offer insights into the adaptability, benefits, and potential boundaries of using FINs.
## 2 Related Work
**The Evolution of Transfer Learning Across Domains:** Transfer learning has emerged as a potent technique in machine learning, reshaping the paradigm by repurposing pre-trained models to tackle different tasks from their original intent [5]. Such a strategy has yielded transformative advancements, especially in computer vision [6], speech analysis [7], and natural language processing (NLP) [8]. Foundational models like ResNet [9], wav2vec [10], and BERT [11] stand as prime examples of this shift, requiring significantly reduced training data when finetuned for new tasks. Transitioning this approach to the biomedical arena presents unique challenges. There is an inherent lack of large and diverse biomedical datasets [10][12], which has led to cross-domain adaptations, such as repurposing computer vision models for audio classification [13]. These adaptations, while novel, often do not achieve the same efficacy as within-domain counterparts, highlighting the pressing need for tailored approaches for biomedical data.
**Statistical Feature Imitation Bridges the Transfer Learning Divide in Diverse Specialized Tasks:** FINs have established a unique role in addressing this particular challenge [2]. FINs offer a distinctive approach to neural learning by initializing weights to simulate distinct statistical features, effectively bridging domain knowledge with machine learning. This method has catalyzed notable progress in many fields by showcasing its effectiveness across various tasks. In the seminal work introducing FINs [2], the authors showcased the efficacy of this novel approach across three experiments. In Electrocardiogram (ECG) artifact detection, a FIN imitating the Kurtosis feature outperforms standard models in both performance and stability. Similarly, for Electroencephalogram (EEG) artifact detection within the same research, FINs imitating Kurtosis and Shannon's Entropy enhanced results. Moreover, when applied to EEG data for fatigue and drowsiness detection, a FIN based on Shannon's entropy consistently outperformed baselines, while certain models like VGG proved ineffective. Additionally, FINs have shown promise in specialized applications. In biomedical image processing, Ming et al (2023) provided state-of-the-art results across tasks including COVID-19 detection from CT scans and brain tumor identification and segmentation from MRI scans [3]. In sports analytics, the hybrid architecture of MambaNet [4] employed FINs to effectively predict NBA playoff outcomes, showcasing the broad versatility of the FIN approach. Although FINs have shown promise in biomedical applications and sports analytics, their potential in financial and speech time series data is yet to be explored.
## 3 Imitating Tsallis Entropy
A FIN is a neural network that is trained to approximate a closed-form statistical feature of choice. In our study, we train a FIN to imitate Tsallis Entropy. Tsallis entropy, a non-extensive generalization of the traditional Shannon entropy, measures the uncertainty or randomness of a system. Uniquely, it takes into account the correlations and higher-order interactions that are often overlooked by the conventional Shannon entropy. This quality makes Tsallis entropy particularly apt for systems exhibiting non-standard statistical behaviors and long-range dependencies.
**The Influence of \(q\) on Tsallis Entropy** The distinguishing characteristic of Tsallis entropy is its reliance on the parameter \(q\). The Shannon entropy becomes a special case of Tsallis entropy when \(q=1\). When \(q>1\), the entropy gives more weight to lower probabilities, making it more sensitive to rare events. Conversely, for \(q<1\), the entropy calculation is dominated by higher probabilities. This variability in weighting is encapsulated by the equation for a discrete probability distribution \(p(i)\) as influenced by the temperature scaling parameter \(\tau\):
\[H_{q}(\tau)=\frac{1}{q-1}\left(1-\sum_{i}\text{softmax}\left(\frac{u(i)}{\tau }\right)^{q}\right) \tag{1}\]
Where \(u(i)\) represents the unscaled probabilities from the normalized input. In our implementation, \(q\) is set to a default value of 1.5 and further treated as a trainable parameter within our FIN, allowing the model to adaptively finetune its value to optimally capture the inherent complexities and nuances of the dataset.
**Temperature Scaling with Parameter \(\tau\)** Another pivotal parameter in our approximation process is \(\tau\). This temperature parameter modulates the entropy's sensitivity by scaling the inputs to the softmax function. Specifically, as \(\tau\) approaches 0, the softmax output mirrors a one-hot encoded distribution, while increasing \(\tau\) causes the resultant distribution to edge towards uniformity. The introduction of \(\tau\) in the Tsallis entropy equation underlines its importance in shaping the final probabilities. In the context of our work, \(\tau\) is initialized with a default value of 1, but like \(q\), it's also trainable within our FIN, allowing the network to adjust it adaptively during the learning phase.
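To make the computation concrete, the following NumPy sketch evaluates Equation (1) on a raw signal segment. The min-max normalization used to obtain the "normalized input", and the function name itself, are illustrative choices rather than a specification of the exact preprocessing used in the paper.

```python
import numpy as np

def tsallis_entropy(signal, q=1.5, tau=1.0):
    """Temperature-scaled Tsallis entropy of a 1-D signal, following Eq. (1)."""
    u = np.asarray(signal, dtype=float)
    u = (u - u.min()) / (u.max() - u.min() + 1e-12)  # normalize input (assumed min-max scaling)
    z = u / tau                                      # temperature scaling
    p = np.exp(z - z.max())                          # numerically stable softmax
    p = p / p.sum()
    return (1.0 - np.sum(p ** q)) / (q - 1.0)        # (1/(q-1)) * (1 - sum_i p_i^q)

# Example: entropy of a synthetic uniform-random segment
print(tsallis_entropy(np.random.rand(256), q=1.5, tau=1.0))
```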
**Training** To approximate the Tsallis entropy using neural networks, we generated synthetic signals with uniform random values between 0 and 1. The output regression values for the FIN were the Tsallis Entropy values, which were computed directly on the synthetic signals using the defined closed-form expression in equation 1. This calculation is fundamentally based on a power-law probability distribution. We utilized a simple gradient descent optimizer along with mean absolute error (L1) loss to train this network. Additionally, early stopping was integrated, and the training was optimized with learning rate modifications facilitated by the ReduceLROnPlateau scheduler.
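A minimal PyTorch training sketch along these lines is shown below; the network width, signal length, batch size, and scheduler settings are illustrative assumptions, and early stopping is omitted for brevity.

```python
import torch
import torch.nn as nn

SIG_LEN = 256

def tsallis_target(x, q=1.5, tau=1.0):
    # Closed-form Tsallis entropy of each row (Eq. 1), used as the regression target.
    lo = x.min(dim=1, keepdim=True).values
    hi = x.max(dim=1, keepdim=True).values
    u = (x - lo) / (hi - lo + 1e-12)
    p = torch.softmax(u / tau, dim=1)
    return (1.0 - (p ** q).sum(dim=1, keepdim=True)) / (q - 1.0)

# A small MLP acting as the FIN that imitates Tsallis entropy.
fin = nn.Sequential(nn.Linear(SIG_LEN, 128), nn.ReLU(),
                    nn.Linear(128, 32), nn.ReLU(),
                    nn.Linear(32, 1))
opt = torch.optim.SGD(fin.parameters(), lr=1e-2)                       # simple gradient descent
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=10)
loss_fn = nn.L1Loss()                                                  # mean absolute error

for step in range(2000):
    x = torch.rand(64, SIG_LEN)        # synthetic signals with uniform values in [0, 1]
    y = tsallis_target(x)              # closed-form targets
    loss = loss_fn(fin(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step(loss.item())            # reduce the learning rate when the loss plateaus
```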
**Baseline Model** In each of our three experiments, we employed a neural network as a comparative baseline. This network had a representational capability (i.e. number of parameters) that was either equal to or exceeded the FIN-embedded networks introduced in that particular experiment. We investigated multiple network topologies, experimenting with as many as ten variations for each baseline. The model that showcased the best performance on the validation set was subsequently chosen for comparison against the Tsallis Entropy FIN-powered networks.
## 4 Experiments & Results
### Experiment I
**Objective** This experiment focuses on predicting the closing price of Bitcoin on a given day, for the subsequent day. We hypothesize that we can achieve enhanced predictive accuracy over traditional baselines by initializing certain neural network weights to imitate Tsallis entropy, followed by fine-tuning during training.
**Data and Preprocessing** Our study leveraged a publicly accessible dataset1 that spanned over seven years, from March
2015 to April 2022. Owing to notable Bitcoin price fluctuations in 2017 and 2021, the dataset was bifurcated into two periods: Period 1, from March 2015 to September 2018, and Period 2, from October 2018 to April 2022. Each period was split into approximately an 85 to 15% ratio for training and testing. The dataset encompassed a total of 47 features clustered into various categories such as Bitcoin price variables, specific technical features of Bitcoin, other cryptocurrencies, commodities, market indexes, and more. In the original study conducted by Chen (2023) [14], ten features were utilized for Period 1: Bitcoin's opening price, highest price of the day, lowest price of the day, closing price, price of one Ethereum in USD, WTI crude oil price, the Standard and Poor's 500, National Association of Securities Dealers Automated Quotations (NASDAQ), Dow Jones Industrial Average (DJI), and the mean difficulty on a given day of finding a hash that meets the protocol-designated requirement. For Period 2, a subset of six features was used: the first four Bitcoin-specific prices, the price of one Ethereum in USD, and the Nikkei 225, determined through feature selection. In contrast, our study consistently employed this subset of six features across both periods, as it led to improved results.
**Methods** While the baseline architectures include a Random Forest (RF) regression and a deep LSTM network [14], our research takes this foundation a step further. We introduce a new model, namely Deep LSTM + Attention, which is inspired by the LSTM's structural elements but incorporates significant advancements. Contrary to the original RF regression and LSTM models, our design integrates the last seven timesteps of each feature, enriching its grasp on historical data and potentially enhancing its predictive prowess. Moreover, we incorporated two distinct attention mechanisms: one at the input level and another within the network layers, aiming for refined data representations. Complementing these improvements, we embedded and fine-tuned the Tsallis entropy FIN within this network (FIN-ENN), serving as a transformative layer to delve deeper into the financial intricacies.
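The sketch below shows one plausible wiring of such a network: a pretrained Tsallis FIN (assumed here to accept the flattened 7-step feature window and return a scalar) is applied to each input window, and its output is appended to every timestep before the LSTM; a single attention pooling then summarizes the sequence. The layer sizes and the exact form of the attention are our simplifying assumptions, not the precise architecture used in the experiment.

```python
import torch
import torch.nn as nn

class FinLstmAttention(nn.Module):
    """Next-day closing-price regressor with a Tsallis-FIN feature added at the input level."""
    def __init__(self, fin, n_features=6, window=7, hidden=64):
        super().__init__()
        self.fin = fin                                     # maps a flattened window to one entropy value
        self.lstm = nn.LSTM(n_features + 1, hidden, num_layers=2, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                                  # x: (batch, window, n_features)
        h = self.fin(x.flatten(1)).unsqueeze(1)            # (batch, 1, 1): entropy of the window
        h = h.expand(-1, x.size(1), -1)                    # broadcast the entropy to every timestep
        seq, _ = self.lstm(torch.cat([x, h], dim=2))
        w = torch.softmax(self.attn(seq), dim=1)           # attention weights over the timesteps
        return self.out((w * seq).sum(dim=1))              # predicted next-day closing price
```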
**Results** The results of our analysis can be found in Table 1. We used Root Mean Square Error (RMSE), which gauges prediction deviations from actual values, and Mean Absolute Percentage Error (MAPE), which quantifies relative error in percentage terms, as the two metrics for evaluating model performance. In our investigation, we found that our Attention-based LSTM network outperformed both the RF regression and LSTM models from the original baseline study. This improvement can be attributed to refined neural modeling, notably the meticulous integration of the attention mechanism and the extended window size, which captures the last seven timesteps as opposed to the 1- and 2-timestep windows in the original work. Our results indicate a clear superiority of the longer window in effectively predicting next-day closing prices. Building on this, the FIN-Embedded Neural Network (FIN-ENN), which embeds Tsallis entropy at the input level, showed even greater performance. Specifically, it further decreased prediction errors by 44.16 RMSE and 0.52 MAPE in Period 1, and 94.79 RMSE and 0.33 MAPE in Period 2 when compared to the baseline. The Tsallis entropy is evidently a significant factor in price prediction, as illustrated by our final model. By leveraging this entropy, we effectively harness the temporal intricacies of the financial dataset, ensuring more precise forecasts.
### Experiment II
**Objective** This experiment aims to enhance speech emotion recognition by leveraging FINs. Unlike the previous experiment, where the input data was fed directly into the FIN, here, we utilize a latent representation of the data--a condensed, yet informative, representation derived from previous layers of a deep neural network. Our hypothesis posits that by feeding this latent representation through the FIN, specifically designed to imitate the Tsallis entropy, and further fine-tuning it during training, we can achieve superior recognition performance. Our target is to surpass the state-of-the-art (SOTA) model, the Acoustic CNN (1D) from the reference study.
**Data and Preprocessing** We used the publicly available modified version2 of the Sharif Emotional Speech Database (ShEMO) [15], which contains 3 hours and 25 minutes of semi-natural speech samples in .wav format. These 3000 samples, recorded by 87 native Farsi speakers, are multi-labeled. The reference study [16] concentrated on emotions like anger, happiness, sadness, surprise, and neutral. Each speech segment, with an average length of 4.11 seconds, was
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Period** & **Model** & **RMSE** & **MAPE** \\ \hline \multirow{3}{*}{1} & RF regression & 321.61 & 3.39\% \\ & Deep LSTM & 330.26 & 3.57\% \\ & Deep LSTM + Attention & 283.83 & 2.97\% \\ & NN-Baseline & 287.47 & 2.97\% \\ & FIN-ENN & **277.45** & **2.87\%** \\ \hline \multirow{3}{*}{2} & RF regression & 2096.24 & 3.29\% \\ & Deep LSTM & 3045.87 & 4.68\% \\ \cline{1-1} & Deep LSTM + Attention & 2014.43 & 2.96\% \\ \cline{1-1} & NN-Baseline & 2127.70 & 3.18\% \\ \cline{1-1} & FIN-ENN & **2001.45** & **2.96\%** \\ \hline \end{tabular}
\end{table}
Table 1: Comparative evaluation of models using RMSE and MAPE metrics over the two periods.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Method** & **Input Feature** & **Accuracy** \\ \hline Acoustic CNN (1D) & emo\_large & 66.12 \\ \hline NN-Baseline & w2v2-persian-v3 & 69.40 \\ FIN-ENN & w2v2-persian-v3 & **72.23** \\ \hline NN-Baseline & w2v2-persian-ser & 94.87 \\ FIN-ENN & w2v2-persian-ser & **95.51** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of emotion recognition accuracy across different models and input features.
embedded using wav2vec2 [17] to enhance its representation in our neural network model.
**Methods** Our method is a deep neural network with a series of fully connected (dense) layers with decreasing units: 512, 256, 128, 64, and 32. Each layer is followed by a ReLU activation function and a dropout layer (rate=0.5) to prevent overfitting. Crucially, after obtaining the 32-unit latent representation from the penultimate layer, the FIN is integrated to compute the Tsallis entropy of this representation. The computed entropy is then concatenated with the 32-unit latent representation and fed into the final fully connected layer to produce the output corresponding to the emotion classes.
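A schematic sketch of this design follows. The 1024-dimensional input (the size of the wav2vec2 embedding), the five emotion classes, and the interface of the pretrained FIN (a module mapping the 32-dimensional latent representation to a scalar) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class FinENN(nn.Module):
    """Emotion classifier whose 32-d latent is augmented with its FIN-estimated Tsallis entropy."""
    def __init__(self, fin, in_dim=1024, n_classes=5):
        super().__init__()
        dims = [in_dim, 512, 256, 128, 64, 32]
        layers = []
        for a, b in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(a, b), nn.ReLU(), nn.Dropout(0.5)]
        self.backbone = nn.Sequential(*layers)
        self.fin = fin                            # pretrained Tsallis-entropy FIN, fine-tuned jointly
        self.head = nn.Linear(32 + 1, n_classes)

    def forward(self, x):                         # x: (batch, in_dim) wav2vec2 embedding
        z = self.backbone(x)                      # 32-d latent representation
        h = self.fin(z)                           # (batch, 1) entropy estimate of the latent
        return self.head(torch.cat([z, h], dim=1))
```

The same concatenation pattern recurs in Experiment III, where the 32-dimensional latent of the EMG classifier is augmented with its FIN output before the final sigmoid layer.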
**Results** Our experiment compared three models: our proposed FIN-ENN, the NN-Baseline, and the Acoustic CNN (1D) from the reference study [16]. The baseline model utilized the emo_large feature set, extracting 6552 high-level acoustic features from each audio file using the openSMILE toolkit [18]. These features arise from high-level statistical functions applied to low-level descriptors. Conversely, our FIN-ENN model adopted two fine-tuned versions of the wav2vec2 model: w2v2-persian-v3 and w2v2-persian-ser. As shown in Table 2, the FIN-ENN model's integration of the Tsallis FIN contributed to an absolute accuracy improvement of 2.83% for w2v2-persian-v3 and 0.64% for w2v2-persian-ser compared to their FIN-less counterparts.
Footnote 3: [https://git.lnyan.comm/m3hrdadifi/wav2vec2-large-xlsr-persian-v3](https://git.lnyan.comm/m3hrdadifi/wav2vec2-large-xlsr-persian-v3)
Footnote 4: [https://git.lnyan.comm/3hrdadifi/wav2vec2-xlsr-persian-speech-emotion-recognition](https://git.lnyan.comm/3hrdadifi/wav2vec2-xlsr-persian-speech-emotion-recognition)
### Experiment III
**Objective** This experiment delves into the detection of Chronic Neck Pain (CNP) through EMG data. We hypothesize that embedding a neural network with the FIN, specifically designed to imitate the Tsallis entropy, will improve CNP detection performance compared to traditional models.
**Data and Preprocessing** Our dataset, sourced from Jiménez-Grande et al [19] and publicly available on Kaggle5, consists of twenty asymptomatic individuals and twenty with Chronic Neck Pain (CNP). Asymptomatic individuals had no significant neck issues in the last two years, while CNP individuals reported notable pain recently. Data was collected as participants walked barefoot along a six-meter rectilinear path, repeated three times at two-minute intervals. Building upon the approach adopted in the original study by Jiménez-Grande et al. [19], we extracted the same four time domain and six frequency domain features from the EMG data. However, instead of analyzing every 500 ms of the signal (as determined by a 1000Hz sampling rate), we segmented the entire signal into five distinct parts, a method inspired by [20]. Similarly to prior studies, our focus centered on four upper back muscle groups: Trapezius, Sternocleidomastoid, C4 Paraspinal, and Latissimus Dorsi, with each muscle group including both left and right muscles, and features were computed for each side.
Footnote 5: [https://www.kaggle.com/datasets/david893/neck-emg-data](https://www.kaggle.com/datasets/david893/neck-emg-data)
**Methods** Jiménez-Grande et al. [19] employed K-NN, SVM, and LDA for classification, processing both raw and Neighbourhood Component Analysis (NCA)-selected features [21]. In contrast, we used the raw extracted features to train a feed-forward neural network comprising two hidden layers with 256 and 32 units. Drawing inspiration from our previous experiment, the 32-dimensional latent representation from the second hidden layer was channeled into the Tsallis FIN. This processed output was then concatenated with the original 32 features, yielding a 33-dimensional vector that was finally directed to a sigmoid activation to perform the binary classification.
**Results** As outlined in Table 3, we compared the performance of our FIN-ENN model against those developed in the original study using accuracy, specificity, and sensitivity. The original study's models, namely K-NN, SVM, and LDA, achieved a maximum accuracy of 55.00% with NCA-selected features. Our NN-Baseline registered an accuracy of 57.50%. However, by leveraging the Tsallis FIN in our architecture, we achieved a superior accuracy of 62.50%. This improvement is also evident in improvements made in both specificity (65.00%) and sensitivity (60.00%). Our results reinforce our initial hypothesis, underscoring the benefits of incorporating the FIN for CNP detection from physiological EMG data.
## 5 Conclusion
In our experiments, integrating a Feature Imitating Network (FIN) designed to imitate Tsallis entropy consistently enhanced predictive model performances across diverse domains. In predicting Bitcoin's subsequent day's closing price, the enhanced neural network outshone traditional models like Random Forest regression and LSTM. Similarly, in speech emotion recognition, the FIN-augmented model excelled at processing latent representations. In detecting Chronic Neck Pain (CNP) through EMG data, it surpassed established classifiers like K-NN, SVM, and LDA. The consistent edge the FIN provides across these areas underscores its broad utility and efficacy. Future studies can more profoundly investigate influential financial, speech, and physiological features to imitate, aiming to amplify the performance of neural predictive models further.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Method** & **Accuracy** & **Specificity** & **Sensitivity** \\ \hline K-NN (raw) & 35.00 & 35.00 & 35.00 \\ SVM (raw) & 32.50 & 31.57 & 33.33 \\ LDA (raw) & 42.50 & 42.85 & 42.10 \\ \hline K-NN (NCA) & 55.00 & 54.54 & 55.55 \\ SVM (NCA) & 55.00 & 60.00 & 54.17 \\ LDA (NCA) & 55.00 & 56.25 & 55.00 \\ \hline NN-Baseline (raw) & 57.50 & 55.00 & 60.00 \\ FIN-ENN (raw) & **62.50** & **65.00** & **60.00** \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of classification performance in CNP detection. |
2303.18069 | Pathologies in satisfaction classes | We study subsets of countable recursively saturated models of $\mathsf{PA}$
which can be defined using pathologies in satisfaction classes. More precisely,
we characterize those subsets $X$ such that there is a satisfaction class $S$
where $S$ behaves correctly on an idempotent disjunction of length $c$ if and
only if $c \in X$. We generalize this result to characterize several types of
pathologies including double negations, blocks of extraneous quantifiers, and
binary disjunctions and conjunctions. We find a surprising relationship between
the cuts which can be defined in this way and arithmetic saturation: namely, a
countable nonstandard model is arithmetically saturated if and only if every
cut can be the "idempotent disjunctively correct cut" in some satisfaction
class. We describe the relationship between types of pathologies and the
closure properties of the cuts defined by these pathologies. | Athar Abdul-Quader, Mateusz Łełyk | 2023-03-31T13:59:12Z | http://arxiv.org/abs/2303.18069v1 | # Pathologies in Satisfaction Classes
###### Abstract.
We study subsets of countable recursively saturated models of \(\mathsf{PA}\) which can be defined using pathologies in satisfaction classes. More precisely, we characterize those subsets \(X\) such that there is a satisfaction class \(S\) where \(S\) behaves correctly on an idempotent disjunction of length \(c\) if and only if \(c\in X\). We generalize this result to characterize several types of pathologies including double negations, blocks of extraneous quantifiers, and binary disjunctions and conjunctions. We find a surprising relationship between the cuts which can be defined in this way and arithmetic saturation: namely, a countable non-standard model is arithmetically saturated if and only if every cut can be the "idempotent disjunctively correct cut" in some satisfaction class. We describe the relationship between types of pathologies and the closure properties of the cuts defined by these pathologies.
**Keywords**: Nonstandard models of Peano Arithmetic, Satisfaction classes, Recursive saturation, Arithmetical saturation, Disjunctive correctness.
**2020 _Mathematics Subject Classification_**: 03C62, 03H15
## 1. Introduction
Kotlarski-Krajewski-Lachlan famously showed [7] that every countable, recursively saturated model of \(\mathsf{PA}\) has a full satisfaction class. Enayat-Visser [3] strengthened this result using more typically model-theoretic tools. These results show the conservativity of the theory \(\mathsf{CT}^{-}\) of compositional truth over the base arithmetic theory \(\mathsf{PA}\). Both proofs illustrate the weakness of \(\mathsf{CT}^{-}\): not only the theory is a conservative extension of the base theory \(\mathsf{PA}\), but also it is consistent with failure of some very basic truth principles such as _disjunctive correctness_ (DC): "A disjunction is true if and only if it has a true disjunct". In particular one can construct models of \(\mathsf{CT}^{-}\) in which for a nonstandard number \(a\) the disjunction
\[\underbrace{0\neq 0\lor 0\neq 0\vee\ldots\lor 0\neq 0}_{a\text{ times}}\]
is in the extension of the truth predicate, i.e. it is evaluated as true. Thus it is well known how to construct _pathological_ satisfaction classes.
One can easily exclude such pathological behaviour by adding to the theory \(\mathsf{CT}^{-}\) induction axioms for the extended language. It is well known that the theory \(\mathsf{CT}\) of an _inductive_ truth predicate is not conservative over \(\mathsf{PA}\); indeed, \(\mathsf{CT}\) proves the Global Reflection Principle for \(\mathsf{PA}\), that is the statement
(GRP) \[\forall\phi\big{(}\mathrm{Prov}_{\mathsf{PA}}(\phi)\to T(\phi)\big{)}.\]
In fact, \(\mathsf{CT}_{0}\), the theory \(\mathsf{CT}^{-}\) augmented by \(\Delta_{0}\)-induction for formulas in the language including the truth predicate, is equivalent to (GRP).
Recent work by Enayat and Pakhomov [2] pointed to a deeper connection between non-conservativity and disjunctive correctness. The natural-looking extension of \(\mathsf{CT}^{-}\) with DC turns out to be equivalent to \(\mathsf{CT}_{0}\). Ali Enayat (unpublished) separated DC into two principles: DC-out, stating that every true disjunction has a true disjunct, and DC-in, stating that a disjunction with a true disjunct is true. Cieslinski, Lelyk, and Wcislo [1] show that already \(\mathsf{CT}^{-}+\text{DC-out}\) is equivalent to \(\mathsf{CT}_{0}\), while \(\mathsf{CT}^{-}+\text{DC-in}\) is conservative over \(\mathsf{PA}\). Conservativity of DC-in is shown by proving that every countable model of \(\mathsf{PA}\) has an elementary extension which is "disjunctively trivial": that is, one in which every disjunction of nonstandard length is evaluated as true. In such disjunctively trivial models of \(\mathsf{CT}^{-}\), \(\omega\) is definable as the cut for which the truth predicate \(T\) is "disjunctively correct."
In this article, we aim at deepening our understanding of the phenomenon of disjunctive correctness: we consider related questions around which sets can be definable by exploiting pathologies in the satisfaction class. We analyze "local pathologies", along the lines of repeated (idempotent) disjunctions of a single, fixed sentence \(\theta\), and non-local pathologies, where, for example, we consider idempotent disjunctions of all sentences. We completely classify the subsets of a model which are definable using local pathologies, and use this to conclude that a countable model of \(\mathsf{PA}\) is arithmetically saturated if and only if it carries a satisfaction class which makes all disjunctions of nonstandard length true. We also classify the cuts in a model which can be definable using non-local pathologies.
From the definability perspective, our work complements that of [10], where it was shown that for every subset \(A\) of a countable recursively saturated model \(\mathcal{M}\) there is a satisfaction class \(S\) such that \(A\) is definable in \((\mathcal{M},S)\) as (roughly speaking) the set of those numbers \(x\) such that quantifier correctness fails on the \(x\)-th formula (in a suitably chosen enumeration). We go in the reverse direction: starting from an idempotent sentential operation \(F\) we ask when a set \(A\) can be characterized as the set of those numbers \(x\) for which the satisfaction class behaves correctly when \(F\) is iterated \(x\)-times. Unlike in the case of [10] it turns out that in some countable recursively saturated models, not every cut can be defined in this way.
We conclude the paper with several properties about the full disjunctively correct cut.
### Preliminaries
We formulate \(\mathsf{PA}\) in the usual language \(\mathcal{L}_{\mathsf{PA}}=\{+,\times,<,0,1\}\). We use script letters \(\mathcal{M},\mathcal{N}\), etc to denote models of \(\mathsf{PA}\) and Roman letters \(M,N\), etc to denote their universes. \(\mathrm{ElDiag}(\mathcal{M})\) denotes the elementary diagram of the model \(\mathcal{M}\). We follow standard definitions and conventions used in the study of models of \(\mathsf{PA}\): see [6, Chapter 1]. We recall some of these conventions here.
We fix standard coding for finite sets and sequences: for a model \(\mathcal{M}\models\mathsf{PA}\), \(a,b\in M\),
* \(\mathrm{len}(a)\) denotes the length of the sequence coded by \(a\),
* \((a)_{b}\) denotes the \(b\)-th element of the sequence coded by \(a\), and
* we write \(a\in b\) if \(a\) is in the set coded by \(b\).
**Definition 1**.: A model \(\mathcal{M}\models\mathsf{PA}\) is arithmetically saturated iff for every \(a\in M\) for every type \(p(x,a)\) which is arithmetically definable in the type of \(a\), \(p(x,a)\) is realized in \(\mathcal{M}\).
We note for the reader the equivalence between _countable recursively saturated models_ and _countable resplendent models_, as well as the equivalence between _arithmetically saturated models_ and recursively saturated models in which \(\omega\) is a strong cut. The interested reader is again directed to [6] for definitions and other references.
Let \(\mathcal{M}\models\mathsf{PA}\). By \(\operatorname{Form}^{\mathcal{M}}\) and \(\operatorname{Sent}^{\mathcal{M}}\) we refer to the (definable) sets of (Godel codes of) formulas and sentences, respectively, in the sense of \(\mathcal{M}\). For the rest of this article, we will not distinguish between a formula \(\phi\) and its Godel code \(\lceil\phi\rceil\). We use the following standard abbreviations:
* \(\operatorname{Asn}(x,y)\) is an \(\mathcal{L}_{\mathsf{PA}}\) formula which asserts that \(y\) is an assignment for \(x\), which means that it assigns values to all and only those variables which have free occurrences in \(x\) (\(x\) can be a term or a formula).
* \(s^{\alpha}\) denotes the value of the term \(s\) under the assignment \(\alpha\).
* \(\dot{\exists}\) denotes the arithmetical operation which given a variable \(v\) and a formula \(\phi\) returns \(\exists v\phi\). \(\dot{\vee}\), \(\dot{\neg}\) and \(\dot{=}\) have analogous meanings.
* for any two assignments \(\alpha\), \(\beta\), we write \(\beta\sim_{v}\alpha\) iff \(\beta\) differs from \(\alpha\) at most on a variable \(v\) and the domain of \(\beta\) extends the domain of \(\alpha\) at most with \(v\).
* for \(\phi\in\operatorname{Form}_{\mathcal{L}_{\mathsf{PA}}}\) and an assignment \(\beta\), \(\beta\lfloor_{\phi}\) denotes the restriction of \(\beta\) to the variables with free occurrences in \(\phi\).
A satisfaction class need not behave correctly with respect to substitutions. For example, it is an exercise to use an Enayat-Visser construction to show that in every countable and recursively saturated model \(\mathcal{M}\) there is a satisfaction class \(S\) such that for some formula \(\phi\) and assignment \(\alpha\), \((\exists v\phi,\alpha)\in S\) but for no closed term \(t\), \(\langle\phi[t/v],\alpha\rangle\in S\) (here \(\phi[t/v]\) denotes the result of substituting the closed term \(t\) for all free occurrences of the variable \(v\)).
Because of these and similar problems, it is not known whether in an arbitrary model of \((\mathcal{M},S)\models\mathsf{CS}^{-}\) one can define a compositional truth predicate \(T\) for the language of arithmetic satisfying the natural axiom
\[\forall\phi(v)\big{(}T(\forall v\phi(v))\equiv\forall xT(\phi[\underline{x}/v ])\big{)},\]
where \(\underline{x}\) denotes the canonical numeral naming \(x\). It is known that each standard definition of truth from satisfaction (e.g. "being satisfied by all assignments" or "being satisfied by an empty assignment") might fail to define a truth predicate in a model of \(\mathsf{CS}^{-}\).
To overcome these problems it is customary to extend the above list of axioms of \(\mathsf{CS}^{-}\) with the regularity axiom (compare [10]). Its full-blown definition is rather involved and we will give it in the Appendix. A satisfaction class which satisfies the regularity axiom is called a _regular_ satisfaction class. Importantly, if \(S\) is a regular satisfaction class in \(\mathcal{M}\), then terms with the same values can be substituted for free variables in a formula _salva veritate_, i.e. for every formula \(\phi\in\mathrm{Form}^{\mathcal{M}}\), every variable \(v\in\mathrm{Var}^{\mathcal{M}}\), all terms \(s,t\in\mathrm{Term}^{\mathcal{M}}\) and all assignments \(\alpha\) it holds in \((\mathcal{M},S)\) that
\[\mathrm{Asn}(\phi[t/v],\alpha)\wedge\mathrm{Asn}(\phi[s/v],\alpha)\wedge s^{\alpha}=t^{\alpha}\to S(\phi[s/v],\alpha)\equiv S(\phi[t/v],\alpha).\]
One can check that if \(S\) is a regular satisfaction class in \(\mathcal{M}\), then the formula \(\mathrm{Sent}(x)\wedge S(x,\emptyset)\) defines in \((\mathcal{M},S)\) a truth predicate which satisfies the above natural axiom for the universal quantifier.
In the Appendix we show how to improve one of our constructions in order to obtain regular satisfaction classes. As a consequence we will be able to construct many pathological _truth_ classes. However, we decided to leave the regularization of all our constructions for further research.
Another basic property of satisfaction classes is internal induction. Before introducing it, let us define one handy abbreviation: if \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(\psi\) is a formula in the sense of \(\mathcal{M}\) with exactly one free variable, then \(T*\psi(x)\) denotes an \(\mathcal{L}_{\mathsf{PA}}\cup\{S\}\)-formula with one free variable \(x\) which naturally expresses "The result of substituting the numeral naming \(x\) for the unique free variable of \(\psi\) is satisfied by the empty assignment" (see [8], Lemma 3.6). We say that in \((\mathcal{M},S)\models\mathsf{CS}^{-}\), \(S\) satisfies internal induction iff for every \(\psi\in\mathrm{Form}^{\mathcal{M}}\) with a unique free variable, the formula \(T*\psi(x)\) satisfies the induction axiom, i.e.
\[(\mathcal{M},S)\models T*\psi(0)\wedge\forall x\big{(}T*\psi(x)\to T*\psi(x+1) \big{)}\to\forall xT*\psi(x).\]
We conjecture that all our constructions can be fine-tuned to yield regular satisfaction classes satisfying internal induction; however, we leave this problem for another occasion.
**Remark 3**.: As shown in [8], Lemma 3.7, if \((\mathcal{M},S)\models\mathsf{CS}^{-}\), \(S\) is regular and satisfies internal induction, then for every \(\psi\in\mathrm{Form}^{\mathcal{M}}\) with exactly one free variable, if \(X_{\psi}=\{x\in M\quad:\quad(\mathcal{M},S)\models T*\psi(x)\}\), then \((\mathcal{M},X_{\psi})\models\mathsf{PA}^{*}\). That is, \((\mathcal{M},X_{\psi})\) satisfies the full induction schema in the language \(\mathcal{L}_{\mathsf{PA}}\cup\{X\}\), where \(X\) is interpreted as \(X_{\psi}\).
**Definition 4** (Local compositional conditions).: Let \(\operatorname{Comp}(x,y,z)\) be the disjunction of the following \(\mathcal{L}_{\mathsf{PA}}\cup\{S\}\) formulae
1. \(\exists s,t\in\operatorname{Term}(x=(s\dot{=}t)\wedge\forall\alpha(\operatorname{Asn}(x,\alpha)\to S(x,\alpha)\equiv s^{\alpha}=t^{\alpha}))\).
2. \(x=(y\dot{\lor}z)\wedge\forall\alpha(\operatorname{Asn}(x,\alpha)\to S(x, \alpha)\equiv(S(y,\alpha\lfloor_{y})\lor S(z,\alpha\lfloor_{z})))\).
3. \(x=(\dot{\neg}y)\wedge\forall\alpha(\operatorname{Asn}(x,\alpha)\to S(x, \alpha)\equiv\neg S(y,\alpha))\).
4. \(\exists v\in\operatorname{Var}(x=(\dot{\exists}vy)\wedge\forall\alpha(\operatorname{Asn}(x,\alpha)\to S(x,\alpha)\equiv\exists\beta\sim_{v}\alpha\,S(y,\beta\lfloor_{y})))\).
Suppose \(\langle\phi_{i}:i\leq c\rangle\) is a coded sequence of elements of \(\operatorname{Sent}^{\mathcal{M}}\) and suppose \(\theta\in\operatorname{Sent}^{\mathcal{M}}\).
* \(\bigvee\limits_{i\leq c}\phi_{i}\) is defined, inductively, so that \(\bigvee\limits_{i\leq 0}\phi_{i}=\phi_{0}\), and \(\bigvee\limits_{i\leq n+1}\phi_{i}=(\bigvee\limits_{i\leq n}\phi_{i})\lor \phi_{n+1}\).
* \(\bigwedge\limits_{i\leq c}\phi_{i}\) is defined similarly.
Given an \(\mathcal{M}\)-definable function \(F:\operatorname{Sent}^{\mathcal{M}}\to\operatorname{Sent}^{\mathcal{M}}\), we define \(F^{c}(x)\) by induction on \(c\) as follows: \(F^{0}(x)=x\), \(F^{c+1}(x)=F(F^{c}(x))\).
* \(\bigvee\limits_{i\leq c}^{\operatorname{bin}}\theta\) is defined as \(F^{c}_{\vee}(\theta)\), where \(F_{\vee}(\phi)=\phi\vee\phi\). These are "binary idempotent disjunctions." Similarly, one can define "binary idempotent conjunctions."
* \((\neg\neg)^{c}\theta\) is defined as \(F^{c}_{\neg\neg}(\theta)\), where \(F_{\neg\neg}(\phi)=\neg\neg\phi\).
* \((\forall x)^{c}\theta\) is defined as \(F^{c}_{\forall}(\theta)\), where \(F_{\forall}(\phi)=\forall x\phi\).
In a model \((\mathcal{M},S)\models\mathsf{CS}^{-}\), we can define the following sets:
* for a given \(\theta\), the "idempotent disjunctively correct set for \(\theta\)", \[\operatorname{IDC}^{\theta}_{S}=\{c:T(\bigvee\limits_{i<c}\theta)\equiv T( \theta)\},\]
* the "idempotent disjunctively correct set": \[\operatorname{IDC}_{S}=\{c:\forall\phi T(\bigvee\limits_{i<c}\phi)\equiv T( \phi)\}.\]
* the "disjunctively correct set": \[\operatorname{DC}_{S}=\{c\in M:(\mathcal{M},S)\models\forall\langle\phi_{i}:i \leq c\rangle\big{(}T(\bigvee\limits_{i\leq c}\phi_{i})\equiv\exists i\leq cT (\phi_{i})\big{)}\}.\]
We can similarly define the "conjunctively correct set" for a given \(\theta\) (\(\operatorname{QC}^{\theta}_{S}\)), the "double negations correct set" for a given \(\theta\) (\(\operatorname{DNC}^{\theta}_{S}\)), the "binary idempotent disjunctively/conjunctively correct set" (\(\operatorname{IDC}^{\operatorname{bin},\theta}_{S}\)), or their respective non-local versions (\(\operatorname{QC}_{S},\operatorname{DNC}_{S},\operatorname{IDC}^{ \operatorname{bin}}_{S}\)).
Given a set \(X\) (often one of the above pathologically definable sets), we introduce the following notation for _the longest initial segment of \(X\)_:
\[I(X)=\{x\in X:\forall y\leq x(y\in X)\}.\]
This allows us to denote, for example, the idempotent disjunctively correct _cut_, \(I(\operatorname{IDC}_{S})\).
## 2. Separability
In this part, we classify which sets can be \(\operatorname{IDC}^{0=1}_{S}\) for some \(S\). Rather than simply looking at disjunctions, however, we generalize the setting to draw similar conclusions about the conjunctively correct set for \(0=0\), the double negations correct set for any atomic sentence \(\phi\), or the binary idempotent disjunctively / conjunctively correct set for \(\phi\) and much more.
**Definition 5**.: Let \(X\subseteq\operatorname{Form}^{\mathcal{M}}\).
1. If \(x,y\in\operatorname{Form}^{\mathcal{M}}\), we say \(x\triangleleft y\) if \(x\) is an immediate subformula of \(y\).
2. \(X\) is _closed_ if whenever \(x\triangleleft y\in X\), then \(x\in X\).
3. \(\operatorname{Cl}(X)\) is the smallest closed set containing \(X\).
4. \(F\subseteq X\)_generates_\(X\) if \(X=\operatorname{Cl}(F)\).
5. \(X\) is _finitely generated_ if there is a finite \(F\subseteq X\) that generates it.
We describe a generalization of the idempotent disjunction operation \(c\mapsto\bigvee\limits_{i<c}\theta\).
**Definition 6**.: Fix a standard sentence \(\theta\). Let \(\Phi(p,q)\) be a (finite) propositional template, over propositional variables \(p\) and \(q\). By this we mean that in \(\Phi\) we allow all propositional connectives, along with quantifiers (over dummy variables). We insist that \(\Phi\) has non-zero complexity (that is, \(\Phi(p,q)\) has at least one propositional connective or quantifier), along with the following properties:
* \(q\) appears in \(\Phi(p,q)\),
* if \(\mathcal{M}\models\theta\), then \(\Phi(\top,q)\) is equivalent to \(q\), and
* if \(\mathcal{M}\models\neg\theta\), then \(\Phi(\bot,q)\) is equivalent to \(q\).
Define \(F:M\to\operatorname{Sent}^{\mathcal{M}}\) as follows:
* \(F(0)=\theta\), and
* \(F(x+1)=\Phi(\theta,F(x))\).
We say such an \(F\) is a _local idempotent sentential operator for \(\theta\)_, and \(\Phi(p,q)\) is a _template_ for \(F\).
We emphasize here that \(\Phi\) is finite, so that if \(\phi\) and \(\psi\) are sentences, then \(\psi\in\operatorname{Cl}(\Phi(\phi,\psi))\). In addition, if \(p\) appears in \(\Phi(p,q)\), then \(\phi\in\operatorname{Cl}(\Phi(\phi,\psi))\) as well.
Note that for any \(n\in\omega\) and atomic sentence \(\theta\), if \(F\) is a local idempotent sentential operator for \(\theta\) and \((\mathcal{M},S)\models\mathsf{CS}^{-}\), then \((\mathcal{M},S)\models T(\theta)\equiv T(F(n))\). In fact, \((\mathcal{M},S)\models T(F(x))\equiv T(F(x+n))\), for each \(x\in M\).
This approach allows us to generalize several examples of local pathologies, for example:
\[\left\{\bigvee\limits_{c}(0\neq 0):c\in M\right\}, \left\{\bigwedge\limits_{c}(0=0):c\in M\right\},\] \[\{(\forall x)^{c}(0=0):c\in M\}, \{(\neg\neg)^{c}(0=0):c\in M\}\]
can all appear as \(\{F(c):c\in M\}\) for various \(\theta\) and \(\Phi\). We study the question of when, given such a function \(F\), a set \(X\) can be the set \(\{x:T(F(x))\equiv T(\theta)\}\) in a model \((\mathcal{M},S)\models\mathsf{CS}^{-}\). We will see that such sets \(X\) will require the following property.
**Definition 7**.: Let \(\mathcal{M}\models\mathsf{PA}\), and \(A\subseteq D\subseteq M\). \(A\) is _separable from \(D\)_ if for each \(a\) such that for every \(n\in\omega\), \((a)_{n}\in D\), there is \(c\) such that for each \(n\in\omega\), \((a)_{n}\in A\) if and only if \(\mathcal{M}\models n\in c\). We say a set \(X\) is _separable_ if it is separable from \(M\).
In Propositions 8, 9, and 10, we refer to definable sets and functions. Here we insist that these are definable in the arithmetic structure of \(\mathcal{M}\): that is, they are definable (possibly using parameters) using formulas from \(\mathcal{L}_{\mathsf{PA}}\). First we notice some basic properties of separability.
**Proposition 8**.: _Let \(\mathcal{M}\models\mathsf{PA}\). Suppose \(D_{1},D_{2}\) are \(\mathcal{M}\)-definable, \(A\subseteq D_{1}\cap D_{2}\), and \(A\neq D_{1},D_{2}\). Then \(A\) is separable from \(D_{1}\) iff \(A\) is separable from \(D_{2}\)._
Proof.: Fix \(d\in D_{1}\setminus A\). Assume \(A\) is separable from \(D_{1}\) and fix any \(a\) such that for every \(n\), \((a)_{n}\in D_{2}\). Let \(b\) be defined by
\[(b)_{i}=\left\{\begin{array}{l}(a)_{i}\text{ if }(a)_{i}\in D_{1}\\ d\text{ otherwise}\end{array}\right.\]
Then for every \(i\in\omega\), \((b)_{i}\in D_{1}\), so there is \(c\) such that for every \(i\in\omega\), \((b)_{i}\in A\) iff \(i\in c\). Then it follows that also for every \(i\in\omega\), \((a)_{i}\in A\) iff \(i\in c\).
**Proposition 9**.: _Let \(\mathcal{M}\models\mathsf{PA}\). Suppose \(A\subseteq D\) and \(f\) is an \(\mathcal{M}\)-definable function such that \(D\subseteq\text{im}(f)\). If \(A\) is separable from \(D\), then \(f^{-1}[A]\) is separable from \(f^{-1}[D]\)._
Proof.: Easy exercise.
**Proposition 10**.: _Let \(\mathcal{M}\models\mathsf{PA}\). Let \(I\subseteq_{e}\mathcal{M}\) and \(A\subseteq\mathcal{M}\) be an \(\mathcal{M}\)-definable set such that \(\sup(A\cap I)=I\) and \(A\cap I\) is separable. Then \(I\) is separable._
Proof.: Define the function \(f\) by
\[f(x)=\left\{\begin{array}{l}\mu y.\{y\in A:x\leq y\}\text{ if such $y$ exists}\\ 0\text{ otherwise}\end{array}\right.\]
Then, by the assumptions, \(I=f^{-1}[A\cap I]\). The result follows by Proposition 9.
As stated before, given \(\theta\), a local idempotent sentential operator \(F\) for \(\theta\), and \(D=\{F(x):x\in M\}\), we wish to classify the subsets \(A\subseteq D\) which can be the sets of true sentences in \(D\) (equivalently, we wish to classify the sets \(X\) such that \(\{F(x):x\in X\}\) is the set of true sentences in \(D\)). First we need the following Lemma.
**Lemma 11**.: _Let \(\mathcal{M}\models\mathsf{PA}\). Let \(\theta\) be an atomic sentence and \(F\) a local idempotent sentential operator for \(\theta\). Let \(J_{0},J_{1}\subseteq M\) be closed under predecessors, disjoint, and \(J_{i}\cap\omega=\emptyset\) for \(i=0,1\). Let \(X=\operatorname{Cl}(\{F(x):x\in J_{0}\cup J_{1}\})\). Then there is a unique \(X\)-satisfaction class \(S\) such that for each \(i\) and \(x\in J_{i}\), \((F(x),\emptyset)\in S\) if and only if \(i=0\)._
Proof.: Let \(S_{0}=\{(F(x),\emptyset):x\in J_{0}\}\). We extend \(S_{0}\) to an \(X\)-satisfaction class \(S\). Take any \(\phi\in X\). Then, since \(J_{i}\) are closed under predecessors and disjoint, then there is a unique \(i\) and minimal \(x\) such that \(\phi\in\operatorname{Cl}(F(x))\) and \(x\in J_{i}\). Recall that \(F(x)=\Phi(\theta,F(x-1))\), and \(\theta\) is atomic. One notices that the subformulas of \(\Phi(\theta,q)\) must be equivalent to one of \(q\), \(\neg q\), \(\top\), or \(\bot\). Let \(\Psi(p,q)\) be the subformula of \(\Phi(p,q)\) such that \(\Psi(\theta,F(x-1))=\phi\). Again, the presentation of \(\phi\) as \(\Psi(\theta,F(x-1))\) is unique by induction in \(\mathcal{M}\). We put \(\langle\phi,\emptyset\rangle\in S\) if any of the following hold:
* \(\Psi(\theta,q)\) is equivalent to \(q\) and \(i=0\),
* \(\Psi(\theta,q)\) is equivalent to \(\neg q\) and \(i=1\), or
* \(\Psi(\theta,q)\) is equivalent to \(\top\).
One checks that \(S\) is an \(X\)-satisfaction class.
Theorems 12 and 13 are generalizations of unpublished work by Jim Schmerl1.
Footnote 1: Private communication to Ali Enayat.
**Theorem 12**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be countable and recursively saturated. Let \(\theta\) be an atomic sentence and \(F\) a local idempotent sentential operator for \(\theta\). Let \(X\subseteq M\) be separable, closed under successors and predecessors, and for each \(n\in\omega\), \(n\in X\) if and only if \(\mathcal{M}\models\theta\). Then \(\mathcal{M}\) has an expansion \((\mathcal{M},S)\models\mathsf{CS}^{-}\) such that \(X=\{x\in M:(\mathcal{M},S)\models T(F(x))\equiv T(\theta)\}\)._
Notice that \(X\) is separable if and only if \(M\setminus X\) is separable. This means that there is some flexibility in building such satisfaction classes \(S\).
Proof.: Let \(D=\{F(x):x\in M\}\) and \(A=\{F(x):x\in X\}\). Note that \(A\) is separable from \(D\). We build sequences \(F_{0}\subseteq F_{1}\subseteq\ldots\) and \(S_{0},S_{1},\ldots\) such that:
* each \(F_{i}\) is a finitely generated set of formulas such that \(\cup F_{i}=\operatorname{Form}^{\mathcal{M}}\),
* each \(S_{i}\) is a full satisfaction class and \((\mathcal{M},S_{i})\) is recursively saturated,
* \(S_{i+1}\upharpoonright F_{i}=S_{i}\upharpoonright F_{i}\), and
* for each \(\phi\in D\cap F_{i}\), \((\phi,\emptyset)\in S_{i}\) if and only if \(\phi\in A\).
Given such a sequence, \(S=\cup S_{i}\upharpoonright F_{i}\) would be the required full satisfaction class on \(\mathcal{M}\).
Externally fix an enumeration of \(\operatorname{Form}^{\mathcal{M}}\) in order type \(\omega\). We can assume, without loss of generality, that \(\theta\) appears first in this enumeration. Suppose \(F_{i}\) and \(S_{i}\) have been constructed. Let \(F_{i+1}\) be generated by \(F_{i}\) and the least \(x\in\operatorname{Form}^{\mathcal{M}}\setminus F_{i}\) in the aforementioned enumeration. Let \(F^{\prime}=F_{i}\cup(F_{i+1}\cap D)\). Let \(a\) be a sequence such that \(\{F((a)_{n}):n\in\omega\}=F_{i+1}\cap D\). Note that this is possible since \(F_{i+1}\) is finitely generated. Let \(c\) be as in the definition of separability for \(a\).
Since \(X\) is closed under successors and predecessors, if \((a)_{n}\) and \((a)_{m}\) are in the same \(\mathbb{Z}\)-gap (that is, there is some \(k\in\omega\) such that \((a)_{n}\) and \((a)_{m}\) differ by \(k\)), then \((a)_{n}\in X\) if and only if \((a)_{m}\in X\). Since \(X\) is separable, this means that, if \((a)_{n}\) and \((a)_{m}\) are in the same \(\mathbb{Z}\)-gap, then \(n\in c\) if and only if \(m\in c\). Let \(J_{0}\) be the closure under successors and predecessors of \(\{(a)_{n}:n\in\omega,n\in c\), and \((a)_{n}>\omega\}\), and \(J_{1}\) be the closure under successors and predecessors of \(\{(a)_{n}:n\in\omega,n\notin c\), and \((a)_{n}>\omega\}\). By Lemma 11, there is a \(\operatorname{Cl}(F_{i+1}\cap D)\)-satisfaction class \(S^{\prime}\) such that for each \(\phi=F((a)_{n})\in F_{i+1}\cap D\), \(S^{\prime}(F((a)_{n}),\emptyset)\) if and only if \((a)_{n}\in X\). That is, \(S^{\prime}(\phi,\emptyset)\) if and only if \(\phi\in A\).
Notice that \(\operatorname{Cl}(F^{\prime})=F_{i}\cup\operatorname{Cl}(F_{i+1}\cap D)\). We extend \(S^{\prime}\) to a \(\operatorname{Cl}(F^{\prime})\) satisfaction class simply by preserving \(S_{i}\) on \(F_{i}\). One notices that if \(\phi\in F_{i}\cap D\), then by induction \(\langle\phi,\emptyset\rangle\in S_{i}\) if and only if \(\phi\in A\).
Then \(S^{\prime}\) is a \(\operatorname{Cl}(F^{\prime})\)-satisfaction class, so by [3, Lemma 3.1], \(\mathcal{M}\) has an elementary extension \(\mathcal{N}\) carrying a \(\operatorname{Form}^{\mathcal{M}}\)-satisfaction class \(S\) agreeing with \(S^{\prime}\) on \(\operatorname{Cl}(F^{\prime})\). In particular, this shows the consistency of the recursive theory \(Th\) consisting of the following:
* \(S\) is a full satisfaction class,
* \(\{S(\phi,\alpha)\equiv S_{i}(\phi,\alpha):\phi\in F_{i}\}\), and
* \(\{S(F((a)_{n}),\emptyset)\equiv n\in c:n\in\omega\}\).
Since \((\mathcal{M},S_{i})\) is recursively saturated, by resplendency \((\mathcal{M},S_{i})\) has an expansion to \(Th\), and such an expansion is a full satisfaction class agreeing with \(S^{\prime}\) on formulas
from \(\operatorname{Cl}(F^{\prime})\). Recall that countable recursively saturated models are _chronically resplendent_ ([6, Theorem 1.9.3]): by this we mean that such expansions can, themselves, be taken to be resplendent. That is, we can assume that \((\mathcal{M},S_{i},S)\) is recursively saturated. Let \(S_{i+1}=S\) and continue.
In the above result, notice that if \(n\in\omega\), then clearly \(\mathcal{M}\models F(n)\) if and only if \(\mathcal{M}\models\theta\). Therefore, \(\omega\subseteq X\) if and only if \(\mathcal{M}\models\theta\), and \(\omega\cap X=\emptyset\) if and only if \(\mathcal{M}\models\neg\theta\). Moreover, if \(X=\{x:(\mathcal{M},S)\models T(F(x))\}\) then \(X\) is necessarily closed under successors and predecessors. The next result shows that separability of \(X\) is also necessary.
**Theorem 13**.: _Suppose \((\mathcal{M},S)\models\mathsf{CS}^{-}\), \(D\) is any set of sentences (not necessarily of the form \(\{F(x):x\in M\}\)), and \(A=\{\phi\in D:(\mathcal{M},S)\models T(\phi)\}\). Then \(A\) is separable from \(D\)._
Note that by Proposition 9, if \(D=\{F(x):x\in M\}\), \(A=\{F(x):(\mathcal{M},S)\models T(F(x))\}\), and \(X=\{x:F(x)\in A\}\), this is equivalent to stating that \(X=\{x:F(x)\in A\}\) is separable (from \(M\)).
Proof.: Let \(a\in M\) be such that for each \(n\in\omega\), \((a)_{n}\in D\). We show that there is a \(c\) such that for all \(i\in\omega\), \((a)_{i}\in A\) iff \(i\in c\).
By a result of Stuart Smith, [9, Theorem 2.19], \((\mathcal{M},S)\) is _definably \(S\)-saturated_. This means that for any coded sequence \(\langle\phi_{i}(x):i\in\omega\rangle\) such that each \(\phi_{i}\in\operatorname{Form}^{\mathcal{M}}\), if for each \(i\in\omega\) there is \(m\in M\) such that \((\mathcal{M},S)\models\forall j<i(T(\phi_{j}(m)))\), then there is \(m\in M\) such that for all \(i\in\omega\), \((\mathcal{M},S)\models T(\phi_{i}(m))\).
Let \(\phi_{j}(x)\) be the formula given by \((a)_{j}\equiv(j\in x)\). That is, since \((a)_{j}\) is the code of a sentence, \(\phi_{j}(m)\) is evaluated as true in a satisfaction class \(S\) if the sentence \((a)_{j}\) is evaluated as true and \(j\in m\), or \((a)_{j}\) is evaluated as false and \(j\not\in m\).
Let \(i\in\omega\), and let \(m\in M\) be such that for all \(j<i\), \((a)_{j}\in A\) if and only if \(j\in m\). Then,
\[(\mathcal{M},S)\models\forall j\leq i\,(T(\phi_{j}(m))).\]
Therefore there is \(m\) such that for all \(i\in\omega\), \((\mathcal{M},S)\models T(\phi_{i}(m))\). In particular, for each \(n\in\omega\), if \((a)_{n}\in A\), then \(T((a)_{n})\) and therefore \(n\in m\). Moreover, if \(n\not\in m\), then \((\mathcal{M},S)\models\neg T((a)_{n})\). By assumption this means \((a)_{n}\not\in A\).
## 3. Separable Cuts
In this section, we wish to examine the results of the previous section in case where we have \(I\subseteq_{\operatorname{end}}M\) a cut. We examine some properties of separable cuts. We conclude this section by showing that a countable model is arithmetically saturated if and only if it has a disjunctively trivial expansion to a model of \(\mathsf{CS}^{-}\).
**Proposition 14**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be nonstandard and \(I\subseteq_{\operatorname{end}}M\). The following are equivalent._
1. \(I\) _is separable._
2. _There is no_ \(a\in M\) _such that_ \(I=\sup(\{(a)_{i}:i\in\omega\}\cap I)=\inf(\{(a)_{i}:i\in\omega\}\setminus I)\)_._
3. _For every_ \(a\in M\)_, there is_ \(d\) _such that for all_ \(i\in\omega\)_,_ \((a)_{i}\in I\) _if and only if_ \((a)_{i}<d\)
Compare (3) to the notion of _strength_: a cut \(I\subseteq_{\operatorname{end}}M\) is strong if for each \(a\) there is \(c>I\) such that whenever \(i\in I\), \((a)_{i}\in I\) if and only if \((a)_{i}<c\). Clearly, condition (3) is equivalent to strength if \(I=\omega\).
Proof.: \((2)\iff(3)\) follows immediately from definitions.
We show \((1)\implies(3)\): Suppose \(I\) is separable and let \(a\in M\). We show that there is \(c\in M\) such that for each \(n\in\omega\), \((a)_{n}\in I\) if and only if \((a)_{n}<c\). Since \(I\) is separable, there is \(c\) such that for each \(n\in\omega\), \((a)_{n}\in I\) if and only if \(n\in c\). Consider the type
\[p(x)=\{(a)_{n}<x\equiv n\in c:n\in\omega\}.\]
This type is finitely satisfiable, so (by restricted saturation of nonstandard models, see [6, Corollary 1.11.4]) there is \(c^{\prime}\) which satisfies \(p(x)\).
Now we show \((3)\implies(1)\). Let \(a\in M\). There is \(c\) such that \((a)_{n}\in I\) if and only if \((a)_{n}<c\). Consider the type
\[p(x)=\{(a)_{n}<c\equiv n\in x:n\in\omega\}.\]
This type is finitely satisfiable and therefore satisfied by some \(c^{\prime}\in M\). Such a \(c^{\prime}\) witnesses separability of \(I\).
By Theorem 12, Theorem 13, and Proposition 14, then \(I\) is separable if and only if there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and
\[I=\{c:(\mathcal{M},S)\models\neg T(\bigvee_{c}(0=1))\}.\]
Suppose \((\mathcal{M},S)\models\mathsf{CS}^{-}\). By Theorem 13 one has that \(X=\{c:(\mathcal{M},S)\models\neg T(\bigvee_{c}(0=1))\}\) is separable. Is it the case that \(I(\operatorname{IDC}_{S}^{0=1})=\{x:\forall c<x\neg T(\bigvee_{c}(0=1))\}\) is also separable? Our next result shows that it is not always the case: if \(I\subseteq_{\operatorname{end}}M\) has no least \(\mathbb{Z}\)-gap above it, then there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I(\operatorname{IDC}_{S}^{0=1})=I\). Later, in Corollary 19, we see that if \(\mathcal{M}\) is not arithmetically saturated, then such an \(I\) need not be separable.
**Proposition 15**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be countable and recursively saturated. Suppose \(I\subseteq_{\operatorname{end}}M\) has no least \(\mathbb{Z}\)-gap. Then there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and_
\[I=\{x:\forall c<x\neg T(\bigvee_{c}(0=1))\}.\]
Proof.: First notice that for any \(c<d\) in different \(\mathbb{Z}\)-gaps, for any \(a\in M\), there is \(b\) such that \(c<b<d\) and \(b\not\in\{(a)_{i}:i\in\omega\}\). To see this, notice that if \(a,c\), and \(d\) are as above, by recursive saturation the type
\[p(x)=\{c<x<d\}\cup\{(a)_{i}\neq x:i\in\omega\}\]
is realized in \(M\). In fact, one can ensure that the \(\mathbb{Z}\)-gap of such a \(b\) is disjoint from \(c\), \(d\), and \(\{(a)_{i}:i\in\omega\}\).
Now we show how to construct the required satisfaction class. Fix a sequence \(d_{0}>d_{1}>\ldots\) such that \(d_{i+1}\) is not in the same \(\mathbb{Z}\)-gap as \(d_{i}\) and \(\inf(\{d_{i}:i\in\omega\})=I\). We proceed similarly to Theorem 12: we build sequences \(b_{0}>b_{1}>\ldots\), \(F_{0}\subseteq F_{1}\subseteq\ldots\) and \(S_{0},S_{1},\ldots\) such that:
* for each \(i\in\omega\), \(d_{i+1}<b_{i}<d_{i}\) and \(b_{i}\) is in a different \(\mathbb{Z}\)-gap from \(d_{i}\) and \(d_{i+1}\),
* each \(F_{i}\) is a finitely generated set of formulas such that \(\cup F_{i}=\operatorname{Form}^{\mathcal{M}}\),
* each \(S_{i}\) is a full satisfaction class and \((\mathcal{M},S_{i})\) is recursively saturated,
* \(S_{i+1}\upharpoonright F_{i}=S_{i}\upharpoonright F_{i}\),
* \(\bigvee_{d_{i}}(0=1)\in F_{i}\) and whenever \(\bigvee_{c}(0=1)\in F_{i}\) and \(c\leq d_{i}\), \(\langle\bigvee_{c}(0=1),\emptyset\rangle\not\in S_{i}\), and
* \(\bigvee_{b_{i}}(0=1)\in F_{i+1}\setminus F_{i}\) and \(\langle\bigvee_{b_{i}}(0=1),\emptyset\rangle\in S_{i+1}\).
Given such a sequence, let \(S=\cup(S_{i}\upharpoonright F_{i})\). Then \(S\) is the required full satisfaction class. To see this, let \(J=\{x:\forall c<x\neg T(\bigvee_{c}(0=1))\}\). Notice that \((\mathcal{M},S)\models T(\bigvee_{b_{i}}(0=1))\), so for each \(x\in J\) and \(i\in\omega\), \(x<b_{i}\); since \(\inf(\{b_{i}:i\in\omega\})=I\), we have \(J\subseteq I\). Conversely, let \(d\in I\). For each \(c<d\), there is \(i\) such that \(\bigvee_{c}(0=1)\in F_{i}\). Then \(c<d_{i}\), so \(\langle\bigvee_{c}(0=1),\emptyset\rangle\not\in S_{i}\), and hence \(\langle\bigvee_{c}(0=1),\emptyset\rangle\not\in S\). Therefore \(d\in J\), and so \(I\subseteq J\).
We proceed to the construction. Suppose \(F_{i}\) and \(S_{i}\) have been constructed satisfying the above. Since \(F_{i}\) is finitely generated, there is \(a\) coding the lengths of disjunctions of \((0=1)\) in \(F_{i}\). By recursive saturation, there is \(b_{i}\) such that \(d_{i+1}<b_{i}<d_{i}\) and \(b_{i}\not\in\{(a)_{j}:j\in\omega\}\); moreover, we ensure that the \(\mathbb{Z}\)-gap of \(b_{i}\) is disjoint from \(d_{i}\), \(d_{i+1}\), and \(\{(a)_{j}:j\in\omega\}\). Let \(F_{i+1}\) be generated by \(F_{i}\), \(\bigvee_{b_{i}}(0=1)\), \(\bigvee_{d_{i+1}}(0=1)\), and the first formula \(\phi\not\in F_{i}\) in some externally fixed enumeration of Form\({}^{\mathcal{M}}\). Let
\[F^{\prime}=F_{i}\cup(F_{i+1}\cap\{\bigvee_{c}(0=1):c\in M\}).\]
Then \(F^{\prime}\) is a closed set of formulas. Let \(S^{\prime}=S_{i}\upharpoonright F_{i}\cup\{(\bigvee_{b_{i}-n}(0=1),\emptyset): n\in\omega\}\). In particular, \(\langle\bigvee_{d_{i+1}}(0=1),\emptyset\rangle\not\in S^{\prime}\). \(S^{\prime}\) is an \(F^{\prime}\)-satisfaction class, so by [3, Lemma 3.1], \(\mathcal{M}\) has an elementary extension \(\mathcal{N}\) carrying a Form\({}^{\mathcal{M}}\)-satisfaction class \(S\) agreeing with \(S^{\prime}\) on \(F^{\prime}\). Therefore, the theory \(Th\) asserting the following is consistent:
* \(S\) is a full satisfaction class,
* \(S\) agrees with \(S_{i}\) on formulas from \(F_{i}\),
* \(\{S(\bigvee_{b_{i}-n}(0=1),\emptyset):n\in\omega\}\), and
* \(\{\neg S(\bigvee_{c}(0=1),\emptyset):c<d_{i+1},\bigvee_{c}(0=1)\in F_{i+1}\}\).
By resplendency, \(\mathcal{M}\) has a full satisfaction class \(S\) satisfying \(Th\); by chronic resplendency, we can assume \((\mathcal{M},S)\) is recursively saturated. Let \(S_{i+1}=S\) and continue.
To find some examples of separable cuts, we recall some definitions from [5]. Below, we let \(\operatorname{Def}_{0}(a)\) be the set of elements of \(\mathcal{M}\) which are \(\Delta_{0}\)-definable from \(a\) in \(\mathcal{M}\).
**Definition 16** ([5]).: Let \(\mathcal{M}\models\mathsf{PA}\) and let \(I\subseteq_{\operatorname{end}}M\).
1. \(I\) is _coded by \(\omega\) from below_ if there is \(a\in M\) such that \(I=\sup(\{(a)_{i}:i\in\omega\})\). \(I\) is _coded by \(\omega\) from above_ if there is \(a\in M\) such that \(I=\inf(\{(a)_{i}:i\in\omega\})\). \(I\) is \(\omega\)_-coded_ if it is either coded by \(\omega\) from below or from above.
2. \(I\) is \(0\)_-superrational_ if there is \(a\in M\) such that either \(\operatorname{Def}_{0}(a)\cap I\) is cofinal in \(I\) and for all \(b\in M\), \(\operatorname{Def}_{0}(b)\setminus I\) is not coinitial in \(M\setminus I\), or \(\operatorname{Def}_{0}(a)\setminus I\) is coinitial in \(M\setminus I\) and for all \(b\in M\), \(\operatorname{Def}_{0}(b)\cap I\) is not cofinal in \(I\).
**Theorem 17**.: _Let \(\mathcal{M}\models\mathsf{PA}\) and \(I\subseteq_{\text{end}}M\). Then the following are equivalent:_
1. \(I\) _is_ \(\omega\)_-coded and separable._
2. \(I\) _is_ \(0\)_-superrational._
Proof.: \((1)\implies(2)\): Suppose \(I\) is \(\omega\)-coded, and let \(a\) be such that \(\sup(\{(a)_{i}:i\in\omega\})=I\) (the case in which \(I\) is coded by \(\omega\) from above is similar). Suppose also that \(b\in M\) is such that \(\operatorname{Def}_{0}(b)\setminus I\) is coinitial in \(M\setminus I\). Then the following type is realized in \(M\):
\[p(x)=\{(x)_{2n}=(a)_{n}:n\in\omega\}\cup\{(x)_{2n+1}=t_{n}(b):n\in\omega\},\]
where \(\langle t_{n}:n\in\omega\rangle\) is a recursive enumeration of all \(\Delta_{0}\)-definable Skolem functions. If \(c\) realizes this type, then \(\sup(\{(c)_{i}:i\in\omega\}\cap I)=\inf(\{(c)_{i}:i\in\omega\}\setminus I)=I\), contradicting (1).
\((2)\implies(1)\): [5, Proposition 6.2] implies that if \(I\) is \(0\)-superrational, then \(I\) is \(\omega\)-coded. To see separability, notice that by \(0\)-superrationality, if \(\operatorname{Def}_{0}(a)\cap I\) is cofinal in \(I\), then \(\operatorname{Def}_{0}(a)\setminus I\) is not coinitial in \(M\setminus I\) (and vice versa). In particular, there can be no \(b\in M\) with \(\sup(\{(b)_{i}:i\in\omega\}\cap I)=\inf(\{(b)_{i}:i\in\omega\}\setminus I)=I\), so \(I\) is separable by Proposition 14.
[5, Theorem 6.5] states that \(\omega\) is a strong cut if and only if every \(\omega\)-coded cut is \(0\)-superrational. Taken together with the above result, we see that if \(\omega\) is not strong, then separable cuts are never \(\omega\)-coded.
**Proposition 18**.: _For any \(\mathcal{M}\models\mathsf{PA}\):_
1. _If_ \(\omega\) _is a strong cut, then every cut_ \(I\) _which is_ \(\omega\)_-coded is separable._
2. _If_ \(\omega\) _is not a strong cut, then every cut_ \(I\) _which is_ \(\omega\)_-coded is not separable._
Proof.: \((1)\) is due to [5, Theorem 6.5\((a)\implies(c)\)]. We show (2). Suppose \(\omega\) is not strong. There is \(a\) such that \(\inf(\{(a)_{i}:i\in\omega\}\setminus\omega)=\sup(\{(a)_{i}:i\in\omega\}\cap \omega)=\omega\).
If \(I\subseteq_{\text{end}}M\) is a cut which is coded by \(\omega\) from above, then there is \(c>I\) such that \(I=\inf(\{(c)_{n}:n\in\omega\})\). For simplicity assume that the sequence coded by \(c\) is strictly decreasing and that its domain consists of all elements smaller than a nonstandard element \(d\). Let \(b\) code the sequence defined by \((b)_{i}=(c)_{(a)_{i}}\). We claim that \(b\) witnesses the failure of separability of \(I\).
Indeed, \((c)_{(a)_{i}}\in I\) if and only if \((c)_{(a)_{i}}<(c)_{n}\) for each standard \(n\), if and only if \((a)_{i}>\omega\). Since the set \(\{(a)_{i}:i\in\omega\}\setminus\omega\) is coinitial in \(M\setminus\omega\), the set \(\{(c)_{(a)_{i}}:i\in\omega\}\cap I\) is cofinal in \(I\). Indeed, for any \(x\in I\) there is a nonstandard number \(y<d\) such that \(x<(c)_{y}\in I\). However, by the properties of \(a\) there is also a standard number \(i\in\omega\) such that \(\omega<(a)_{i}<y\). Since \(c\) is strictly decreasing, it follows that for any such \(i\), \(x<(c)_{(a)_{i}}\in I\). Similarly, since \(\{(a)_{i}:i\in\omega\}\cap\omega\) is cofinal in \(\omega\), the set \(\{(c)_{(a)_{i}}:i\in\omega\}\setminus I\) is coinitial in \(M\setminus I\).
The case when \(I\) is coded by \(\omega\) from below is treated similarly.
**Corollary 19**.: _Suppose \(\mathcal{M}\models\mathsf{PA}\) is countable, recursively saturated but not arithmetically saturated. Then there are separable sets \(X\) such that \(I(X)\) is not separable._
Proof.: Let \(c\) be nonstandard, and \(I=\sup(\{c+n:n\in\omega\})\). Then \(I\) has no least \(\mathbb{Z}\)-gap above it (for any \(x>I\), the \(\mathbb{Z}\)-gap of \(\lfloor(c+x)/2\rfloor\) lies strictly between \(I\) and that of \(x\)), and so by Proposition 15, there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I=I(\operatorname{IDC}_{S}^{0=1})\). Let \(X=\operatorname{IDC}_{S}^{0=1}\). Then \(X\) is separable by Theorem 13 and \(I=I(X)\). Finally, since \(\mathcal{M}\) is recursively saturated but not arithmetically saturated, \(\omega\) is not a strong cut; since \(I\) is \(\omega\)-coded, Proposition 18 (2) shows that \(I\) is not separable.
Separable cuts always exist in recursively saturated models. We can in fact see more: every recursively saturated model \(\mathcal{M}\) has a separable cut \(I\subseteq_{\operatorname{end}}M\) which is not closed under addition. Moreover, \(\mathcal{M}\) has separable cuts \(I\subseteq_{\operatorname{end}}M\) that are closed under addition but not multiplication, and ones closed under multiplication but not exponentiation.
To see this, first notice that if \((\mathcal{M},I)\) is recursively saturated and \(I\subseteq_{\operatorname{end}}M\), then \(I\) is separable. This follows directly from the equivalent definition of separability that says that for each \(a\) there is \(d\) such that for all \(i\in\omega\), \((a)_{i}\in I\) iff \((a)_{i}<d\): the relevant type is recursive in the expanded language and finitely satisfiable because \(I\) is a cut. Now let \(I\subseteq_{\operatorname{end}}M\) be any cut not closed under addition. By resplendence, there is \(J\subseteq_{\operatorname{end}}M\) such that \((\mathcal{M},J)\) is recursively saturated and \(J\) is not closed under addition; such a \(J\) is then separable.
Again, notice that this proof generalizes to show that if \(f\) and \(g\) are increasing definable functions such that there is any cut \(I\subseteq_{\operatorname{end}}M\) closed under \(f\) but not \(g\), then there is a separable cut \(J\subseteq_{\operatorname{end}}M\) closed under \(f\) but not \(g\). Hence there are separable cuts which are closed under addition but not multiplication, and cuts which are closed under multiplication but not exponentiation.
### Arithmetic Saturation
In [1, Lemma 26], we see that there exist _disjunctively trivial_ models: models \((\mathcal{M},T)\models\mathsf{CT}^{-}\) such that for all sequences \(\langle\phi_{i}:i<c\rangle\) of sentences such that \(c>\omega\), \((\mathcal{M},T)\models T(\bigvee\limits_{i<c}\phi_{i})\). That is, models such that all disjunctions of nonstandard length are evaluated as true. In this part we see that disjunctive triviality implies arithmetic saturation.
**Definition 20**.: Let \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I\subseteq_{\operatorname{end}}M\).
1. If, for every \(c>I\) and every sequence of sentences (in the sense of \(\mathcal{M}\)) \(\langle\phi_{i}:i<c\rangle\), \((\mathcal{M},S)\models T(\bigvee\limits_{i<c}\phi_{i})\), then we say \((\mathcal{M},S)\) is _disjunctively trivial above_\(I\). If \((\mathcal{M},S)\) is disjunctively trivial above \(\omega\), we simply say \((\mathcal{M},S)\) is _disjunctively trivial_.
2. If, for every \(c\in I\) and every sequence of sentences (in the sense of \(\mathcal{M}\)) \(\langle\phi_{i}:i<c\rangle\), \((\mathcal{M},S)\models T(\bigvee\limits_{i<c}\phi_{i})\equiv\exists i<c\;T( \phi_{i})\), we say that \((\mathcal{M},S)\) is _disjunctively correct on \(I\)_.
**Corollary 21**.: _Suppose \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I\subseteq_{\operatorname{end}}M\). If \((\mathcal{M},S)\) is disjunctively trivial above \(I\) and disjunctively correct on \(I\), then \(I\) is separable. In particular, if \((\mathcal{M},S)\) is disjunctively trivial above \(\omega\), then \(\mathcal{M}\) is arithmetically saturated. Conversely, if \(\mathcal{M}\) is arithmetically saturated, there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \((\mathcal{M},S)\) is disjunctively trivial above \(\omega\)._
Proof.: If \((\mathcal{M},S)\models\mathsf{CS}^{-}\) is disjunctively trivial above \(I\) and disjunctively correct on \(I\), then \(I=\{c:(\mathcal{M},S)\models\neg T(\bigvee_{c}(0=1))\}\). Therefore \(I\) is separable by Theorem 13. Since every model of \(\mathsf{CS}^{-}\) is disjunctively correct on \(\omega\) (by external induction on the length of the disjunction), the first claim applies with \(I=\omega\): if \((\mathcal{M},S)\) is disjunctively trivial above \(\omega\), then \(\omega\) is separable, so (by Proposition 14) \(\omega\) is a strong cut in \(\mathcal{M}\) and therefore \(\mathcal{M}\) is arithmetically saturated. Conversely, suppose \(\mathcal{M}\) is arithmetically saturated. We construct sequences \(F_{0}\subseteq F_{1}\subseteq\dots\) of finitely generated sets of formulas such that \(\cup F_{i}=\operatorname{Form}^{\mathcal{M}}\) and full satisfaction classes \(S_{0},S_{1},\dots\). Suppose \(S_{i}\) is a full satisfaction class such that \((\mathcal{M},S_{i})\) is recursively saturated and such that if \(\phi\in F_{i}\cap\operatorname{Sent}^{\mathcal{M}}\) is a disjunction of nonstandard length, then \(S_{i}(\phi,\emptyset)\).
Let \(F_{i+1}\) be generated by \(F_{i}\) together with the first formula not in \(F_{i}\) in some externally fixed enumeration of \(\operatorname{Form}^{\mathcal{M}}\), and let \(a\) code the lengths of all disjunctions in \(F_{i+1}\). That is, suppose \((b)_{n}\) is the \(n\)-th element of \(F_{i+1}\), and \((a)_{n}\) is the maximum \(c\) such that there is a sequence \(\langle\phi_{j}:j<c\rangle\) such that \((b)_{n}=\bigvee\limits_{j<c}\phi_{j}\). Since \(\omega\) is strong, there is \(d>\omega\) such that
for each \(n\in\omega\), \((a)_{n}\in\omega\) if and only if \((a)_{n}<d\). By [1, Lemma 26], the theory \(Th\) asserting the following is consistent:
* \(\operatorname{ElDiag}(\mathcal{M})\),
* \(S_{i+1}\) is compositional for each \(\phi\in F_{i+1}\),
* \(\{S_{i}(\phi,\alpha)\equiv S_{i+1}(\phi,\alpha):\phi\in F_{i}\}\) for all assignments \(\alpha\) of \(\phi\), and
* \(\{S_{i+1}(\bigvee\limits_{j<c}\phi_{j},\alpha):\bigvee\limits_{j<c}\phi_{j} \in F_{i+1}\text{ and }c>d\}\) for all assignments \(\alpha\) of \(\phi\).
Since \(Th\) is a consistent, recursive theory and \((\mathcal{M},S_{i})\) is recursively saturated, by resplendence, \((\mathcal{M},S_{i})\) has an expansion \((\mathcal{M},S_{i},S_{i+1})\models Th\). Continue as before, obtaining \(S=\cup S_{i}\upharpoonright F_{i}\), a full satisfaction class which is disjunctively trivial.
We observe that, for each \(n\), there is an arithmetical sentence \(\theta_{n}:=\) "There exists a \(\Delta_{n}\) full model of \(\mathsf{CS}^{-}\) which is disjunctively trivial above \(\omega\)". Here by "\(\omega\)" we mean the image of the canonical embedding of the ground model onto an initial segment of the model, and a "full model" means a model with a satisfaction relation satisfying the usual Tarskian truth conditions. The corollary below shows that each such sentence is false.
**Corollary 22**.: _For every \(n\), \(\mathbb{N}\models\neg\theta_{n}\)._
Proof.: Assume to the contrary and fix \(n\) such that \(\mathbb{N}\models\theta_{n}\). Fix a \(\Delta_{n}\)-definable model \(\mathcal{M}:=(M,+,\cdot,S)\models\mathsf{CS}^{-}\) such that \(\mathbb{N}\subseteq(M,+,\cdot)\) and \(\mathcal{M}\) is disjunctively trivial above \(\omega\). Then \((M,+,\cdot)\) is arithmetically saturated and consequently \((\mathbb{N},\operatorname{SSy}(\mathcal{M}))\models\mathsf{ACA}_{0}\). However, each set in \(\operatorname{SSy}(\mathcal{M})\) is \(\Delta_{n}\)-definable in \(\mathbb{N}\), which is not possible, since an \(\omega\)-model of \(\mathsf{ACA}_{0}\) is closed under the Turing jump and hence contains sets of arbitrarily high arithmetical complexity.
It follows from the above corollary that the construction of the disjunctively trivial model of \(\mathsf{CT}^{-}\) cannot be formalized in any true arithmetical theory; in particular, it cannot be formalized in \(\mathsf{PA}\). Hence one cannot hope to interpret \(\mathsf{CT}^{-}+\mathrm{DC\text{-}in}\) in \(\mathsf{PA}\) by carrying out the construction of a disjunctively trivial model internally in \(\mathsf{PA}\). This is unlike the case of the standard Enayat-Visser construction: [4] shows how to formalize the model-theoretic argument from [3] in \(\mathsf{PA}\) in order to conclude that \(\mathsf{CT}^{-}\) is feasibly reducible to \(\mathsf{PA}\) and, in consequence, does not have speed-up over \(\mathsf{PA}\).
## 4. Non-local Pathologies
In previous sections, we have considered a single, fixed \(\theta\) and functions \(F\) such that \(F(x)\) is the \(x\)-th iterate of \(\theta\) in some sense. We described sets defined by certain correctness properties with respect to this \(\theta\). In other words, we explored "local" pathologies (pathologies that are local to a fixed \(\theta\)). In this section we address several sets defined using non-local pathologies: for example, instead of fixing a \(\theta\) and looking at the idempotent disjunctions of \(\theta\), we look at all idempotent disjunctions (of any sentence). These sets can include \(\operatorname{IDC}_{S}\), \(\operatorname{QC}_{S}\), \(\operatorname{IDC}_{S}^{\operatorname{bin}}\), \(\operatorname{DNC}_{S}\), among others.
**Remark 23**.: Let us fix a model \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and consider
\[\operatorname{QC}_{S}=\{c:\forall\phi\in\operatorname{Sent}^{\mathcal{M}}T( (\forall x)^{c}\phi)\equiv T(\phi)\}.\]
Notice that \(\operatorname{QC}_{S}\) is necessarily closed under addition, since if, for each \(\phi\),
\[T((\forall x)^{c}\phi)\equiv T(\phi),\]
then let \(\theta=(\forall x)^{c}\phi\), and so
\[T((\forall x)^{c}\theta)\equiv T(\theta)=T((\forall x)^{c}\phi)\equiv T(\phi).\]
Since \((\forall x)^{c}\theta=(\forall x)^{2c}\phi\), we conclude that if \(c\in\operatorname{QC}_{S}\), then \(2c\in\operatorname{QC}_{S}\); more generally, if \(a,b\in\operatorname{QC}_{S}\), then for every \(\phi\), \(T((\forall x)^{a+b}\phi)\equiv T((\forall x)^{b}\phi)\equiv T(\phi)\) (applying \(a\)-correctness to \((\forall x)^{b}\phi\)), so \(a+b\in\operatorname{QC}_{S}\). Suppose that \(\operatorname{QC}_{S}\) is not a cut, and let \(c_{0}<c_{1}\) be such that \(c_{0}\notin\operatorname{QC}_{S}\) and \(c_{1}\in\operatorname{QC}_{S}\). Then there is \(\phi\) such that \(\neg[T((\forall x)^{c_{0}}\phi)\equiv T(\phi)]\), but \(T((\forall x)^{c_{1}}\phi)\equiv T(\phi)\). Then \(c_{1}\in\operatorname{QC}_{S}\) and \(2c_{1}\in\operatorname{QC}_{S}\), but \(c_{0}+c_{1}\notin\operatorname{QC}_{S}\), since \(T((\forall x)^{c_{0}+c_{1}}\phi)\equiv T((\forall x)^{c_{0}}\phi)\).
Let \(I\subseteq_{\operatorname{end}}J_{0}\subseteq_{\operatorname{end}}J_{1}\subseteq_{\operatorname{end}}M\) be separable cuts closed under addition, and let \(c_{0}\in J_{0}\setminus I\) and \(c_{1}\in J_{1}\setminus J_{0}\). Then \(X=I\cup(J_{1}\setminus J_{0})\) is separable, but by the above argument there can be no \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(\operatorname{QC}_{S}=X\): we would have \(c_{0}\notin\operatorname{QC}_{S}\) and \(c_{1}\in\operatorname{QC}_{S}\), yet \(c_{0}+c_{1}\in J_{1}\setminus J_{0}\subseteq X=\operatorname{QC}_{S}\).
This remark shows that there are complications that occur with sets defined by these non-local pathologies. For the remainder of this section, we look instead at the _cuts_ defined by these pathologies.
We again generalize the setting to draw conclusions about \(I(\operatorname{IDC}_{S}),I(\operatorname{QC}_{S})\) and \(I(\operatorname{IDC}_{S}^{\operatorname{bin}})\). To formalize this notion, we again look at finite propositional templates \(\Phi(p,q)\) (recall this notion from the beginning of Section 2). We restrict our attention to \(\Phi\) with the following properties:
* \(\Phi(p,q)\) is not equivalent to \(p\),
* the complexity of \(\Phi(p,q)\) is non-zero,
* \(q\)**must** appear in \(\Phi(p,q)\),
* \(p\wedge q\vdash\Phi(p,q)\), and
* \((\neg p\wedge\neg q)\vdash\neg\Phi(p,q)\).
**Definition 24**.: Suppose \(\Phi\) has the above properties. Then \(F:M\times\operatorname{Sent}\to\operatorname{Sent}\) defined as follows:
* \(F(0,\phi)=\phi\) for all \(\phi\in\operatorname{Sent}^{\mathcal{M}}\), and
* \(F(x+1,\phi)=\Phi(\phi,F(x,\phi))\).
is called an _idempotent sentential operator_. We say that \(\Phi\) is a _template for \(F\)_.
Notice that for any \(\theta\), the function \(F(\cdot,\theta)\) is one to one.
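To fix intuitions, here is how the first few values of \(F\) unfold for the template \(\Phi(p,q)=q\lor p\) considered in the examples below, so that \(F(x+1,\phi)=F(x,\phi)\lor\phi\):

\[F(0,\phi)=\phi,\qquad F(1,\phi)=\phi\lor\phi,\qquad F(2,\phi)=(\phi\lor\phi)\lor\phi,\qquad F(3,\phi)=((\phi\lor\phi)\lor\phi)\lor\phi.\]

Each step adds one more disjunct; this is the operator written \(\bigvee_{x}\phi\) in the examples below.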
**Lemma 25**.: _Let \(\Phi\) be a template for \(F\), and \(F\) an idempotent sentential operator. If \(p\) does not appear in \(\Phi(p,q)\), then for all \(x,y\in M\) and \(\phi\in\operatorname{Sent}^{\mathcal{M}}\), \(\mathcal{M}\models F(x+y,\phi)=F(x,F(y,\phi))\)._
Proof.: If \(p\) does not appear in \(\Phi(p,q)\), then there is a propositional function \(\Psi(q)\) such that \(\Phi(p,q)=\Psi(q)\). Let \(G:\operatorname{Sent}^{\mathcal{M}}\to\operatorname{Sent}^{\mathcal{M}}\) be defined by \(G(\phi)=\Psi(\phi)\). Then,
\[F(x+1,\phi)=\Psi(F(x,\phi))=G(F(x,\phi)).\]
Since \(F\) and \(G\) are \(\mathcal{M}\)-definable, by induction, one observes that for all \(x\), \(F(x,\phi)=G^{x}(\phi)\), the \(x\)-th iterate of \(G\). Therefore,
\[F(x+y,\phi)=G^{x+y}(\phi)=G^{x}(G^{y}(\phi))=G^{x}(F(y,\phi))=F(x,F(y,\phi)).\qed\]
As before, notice that if \(p\) appears in \(\Phi(p,q)\), then for each \(\phi\) and \(x\), \(\phi\in\operatorname{Cl}(F(x,\phi))\). For this reason, if \(p\) appears in \(\Phi(p,q)\), we refer to \(F\) as _accessible_. If not, then because of Lemma 25, we say \(F\) is _additive_.
**Definition 26**.: Let \(F\) be an idempotent sentential operator.
* \(\theta\) is \(F\)_-irreducible_ if whenever \(F(x,\phi)=\theta\), then \(\phi=\theta\) and \(x=0\).
* The \(F\)_-length_ of \(\phi\) is the maximum \(x\) such that there is \(\theta\) such that \(F(x,\theta)=\phi\).
* The \(F\)_-root_ of \(\phi\) is the unique \(\theta\) such that \(F(x,\theta)=\phi\), where \(x\) is the \(F\)-length of \(\phi\).
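For example, take the template \(\Phi(p,q)=\neg\neg q\) from the examples below, so that \(F(x,\phi)=(\neg\neg)^{x}\phi\). Then \(\neg\neg\neg\neg\neg(0=0)\) is \(F(2,\neg(0=0))\); its \(F\)-length is \(2\) and its \(F\)-root is \(\neg(0=0)\), which is \(F\)-irreducible since it does not begin with a double negation.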
**Remark 27**.: By working through the possible truth tables for \(\Phi(p,q)\), one notices that if \(\Phi(p,q)\) has the required properties, then it is logically equivalent to one of the following propositional formulae:
* \(p\lor q\),
* \(p\wedge q\), or
* \(q\).
We say that \(\Phi(p,q)\) is \(q\)_-monotone_ if it is logically equivalent to either \(p\lor q\) or to \(q\).
Notice that if \(\phi\in\operatorname{Sent}^{\mathcal{M}}\), then in each of these cases one can show that if \((\mathcal{M},S)\models\mathsf{CS}^{-}\), then \((\mathcal{M},S)\models\forall x\,\big(T(F(x,\phi))\equiv T(F(x+1,\phi))\big)\).
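For instance, for the template \(\Phi(p,q)=q\lor p\) (so that \(F(x+1,\phi)=F(x,\phi)\lor\phi\)) this can be verified directly from compositionality: for \(x\geq 1\),

\[T(F(x+1,\phi))\equiv T(F(x,\phi))\lor T(\phi)\equiv\big(T(F(x-1,\phi))\lor T(\phi)\big)\lor T(\phi)\equiv T(F(x,\phi)),\]

while for \(x=0\) we have \(T(F(1,\phi))\equiv T(\phi)\lor T(\phi)\equiv T(\phi)\). The remaining cases of Remark 27 are handled in the same way.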
**Lemma 28**.: _Let \(F\) be an idempotent sentential operator._
1. _If_ \(F\) _is accessible, then for all_ \(x,y>0\)_,_ \(\phi,\psi\in\operatorname{Sent}^{\mathcal{M}}\)_, if_ \(F(x,\phi)=F(y,\psi)\)_, then_ \(x=y\) _and_ \(\phi=\psi\)_. In other words, when_ \(x>0\)_, the_ \(F\)_-root of_ \(F(x,\phi)\) _is_ \(\phi\)_._
2. _If_ \(F\) _is additive, then the_ \(F\)_-root of_ \(\phi\) _is_ \(F\)_-irreducible. Moreover, for all_ \(x,y>0\)_,_ \(\phi,\psi\in\operatorname{Sent}^{\mathcal{M}}\)_, if_ \(F(x,\phi)=F(y,\psi)\)_, then the_ \(F\)_-root of_ \(\phi\) _and_ \(F\)_-root of_ \(\psi\) _are the same._
Proof.: First we show (1). Suppose \(F\) is accessible and \(F(x,\phi)=F(y,\psi)\). If \(x,y>0\), then \(F(x,\phi)=\Phi(\phi,F(x-1,\phi))\), and \(F(y,\psi)=\Phi(\psi,F(y-1,\psi))\). Since \(F\) is accessible, then \(p\) appears as a leaf of the syntax tree of \(\Phi(p,q)\). Since \(\Phi(\phi,F(x-1,\phi))=\Phi(\psi,F(y-1,\psi))\), we see that \(\phi=\psi\). One shows by induction (in \(\mathcal{M}\), since \(F\) is \(\mathcal{M}\)-definable) that if \(F(x,\phi)=F(y,\phi)\), then \(x=y\).
Next we show (2). Suppose \(F\) is additive and \(\theta\) is the \(F\)-root of \(\phi\). Then \(F(x,\theta)=\phi\) and \(x\) is the \(F\)-length of \(\phi\). If \(\theta\) is not \(F\)-irreducible, then there is \(y>0\) and \(\psi\) such that \(F(y,\psi)=\theta\). Then
\[\phi=F(x,\theta)=F(x,F(y,\psi))=F(x+y,\psi),\]
the last equality holding by additivity. Since \(x+y>x\), this contradicts that \(x\) is the \(F\)-length of \(\phi\).
To show the "moreover" part of (2), let \(x,y>0\), \(\phi,\psi\in\operatorname{Sent}^{\mathcal{M}}\), and \(F(x,\phi)=F(y,\psi)\). Define \(G:\operatorname{Sent}^{\mathcal{M}}\to\operatorname{Sent}^{\mathcal{M}}\) by \(G(\phi)=\Phi(\phi,\phi)\), so that \(F(x,\phi)=G^{x}(\phi)\). Notice that \(G\) is one to one. Since \(G\) is one to one, then if \(x=y\), \(G^{x}(\phi)=G^{y}(\psi)\) implies, by induction in \(\mathcal{M}\), that \(\phi=\psi\). Suppose \(x>y\). Then again by induction in \(\mathcal{M}\), we have that \(\mathcal{M}\models G^{x-y}(\phi)=\psi\). Let \(\theta\) be the \(F\)-root of \(\phi\), so that there is \(a\) such that \(G^{a}(\theta)=\phi\). Then \(G^{a+(x-y)}(\theta)=\psi\), so \(\theta\) is the \(F\)-root of \(\psi\).
Consider the following examples of \(\Phi(p,q)\):
* \(\Phi(p,q)=q\lor p\). In this case, \(F(x,\phi)=\bigvee\limits_{x}\phi\).
* \(\Phi(p,q)=q\wedge p\). In this case, \(F(x,\phi)=\bigwedge\limits_{x}\phi\).
* \(\Phi(p,q)=(\forall y)q\). Then \(F(x,\phi)=\underbrace{\forall y\ldots\forall y}_{x\text{ times}}\phi\).
* \(\Phi(p,q)=q\lor q\). Then \(F(x,\phi)=\bigvee\limits_{x}^{\operatorname{bin}}\phi\).
* \(\Phi(p,q)=\neg\neg q\). Then \(F(x,\phi)=(\neg\neg)^{x}\phi\).
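In the terminology introduced above, the first two templates are accessible (the variable \(p\) occurs in them), while \((\forall y)q\), \(q\lor q\) and \(\neg\neg q\) are additive; moreover, all of them except \(q\wedge p\) are \(q\)-monotone.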
The goal of this section is to characterize those cuts \(I\) such that
\[I=\{c:\forall x\leq c\forall\phi\in\operatorname{Sent}^{\mathcal{M}}T(\phi) \equiv T(F(x,\phi))\}.\]
This would allow us to characterize \(I(\operatorname{IDC}_{S})\), \(I(\operatorname{IDC}_{S}^{\operatorname{bin}})\), and \(I(\operatorname{QC}_{S})\), among others. For \(\operatorname{IDC}_{S}^{\operatorname{bin}}\) and \(\operatorname{QC}_{S}\) the relevant \(F\) functions are additive, while for \(\operatorname{IDC}_{S}\), \(F\) is accessible. For the most part we will restrict our attention to \(\Phi\) with syntactic depth \(1\). This covers most of the above cases, with the notable exception of \(\neg\neg q\); we treat this case separately.
**Theorem 29**.: _Let \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and suppose \(F\) is an additive idempotent sentential operator. If_
\[I=\{c:\forall x\leq c\forall\phi\in\operatorname{Sent}^{\mathcal{M}}T(\phi) \equiv T(F(x,\phi))\},\]
_then \(I\) is closed under addition._
Proof.: Let \(a\in I\). We show \(2a\in I\). To see this, let \(x\leq 2a\). If \(x\leq a\), we are done. Otherwise, let \(b=x-a\), so \(x=a+b\) and \(0<b\leq a\), whence \(b\in I\). Notice that for \(\phi\in\operatorname{Sent}^{\mathcal{M}}\), we have \((\mathcal{M},S)\models T(\phi)\equiv T(F(a,\phi))\) and \((\mathcal{M},S)\models T(F(a,\phi))\equiv T(F(b,F(a,\phi)))\). By additivity, \(F(b,F(a,\phi))=F(a+b,\phi)\), and \(x=a+b\), so we have
\[(\mathcal{M},S)\models T(\phi)\equiv T(F(x,\phi)).\]
Hence \(2a\in I\). Since \(I\) is a cut, closure under addition follows: if \(a,b\in I\) with \(b\leq a\), then \(a+b\leq 2a\in I\). \(\qed\)
Given a cut \(I\operatorname{\subseteq_{\operatorname{end}}}M\), we say \(I\) is _\(F\)-closed_ if either \(F\) is accessible or \(F\) is additive and \(I\) is closed under addition. We say _\(I\) has no least \(F\)-gap_ if one of the following holds:
* \(F\) is accessible and if \(x>I\), then there is a \(y\) such that for each \(n\in\omega\), \(x-n>y>I\), or
* \(F\) is additive and if \(x>I\), there is a \(y\) such that for each \(n\in\omega\), \(\lfloor\frac{x}{n}\rfloor>y>I\).
Our next main result shows that if \(I\) is \(F\)-closed and either separable or has no least \(F\)-gap, then there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and
\[I=\{c:\forall x\leq c\forall\phi\in\operatorname{Sent}^{\mathcal{M}}T(\phi) \equiv T(F(x,\phi))\}.\]
Our method of proof will be similar to our previous results: we build sequences of finitely generated sets \(F_{0}\subseteq F_{1}\subseteq\ldots\) and full satisfaction classes \(S_{0},S_{1},\ldots\) with particular properties. We first prove two important lemmas which we use in the inductive step of our construction.
For the rest of this section, we modify Definition 5 so that we say \(\phi\triangleleft\psi\) if either \(\phi\) is an immediate subformula of \(\psi\) or \(\phi\) is the \(F\)-root of \(\psi\). Similarly modify the definitions of closed sets and finitely generated sets so that such sets are closed under \(F\)-roots. Note that by Lemma 28, if \(F\) is accessible, this changes nothing about finitely generated and/or closed sets, but this does have an effect for additive \(F\).
**Definition 30**.: Let \(F\) be an idempotent sentential operator with template \(\Phi(p,q)\). Let \(I\operatorname{\subseteq_{\operatorname{end}}}M\), \(X\subseteq\operatorname{Form}^{\mathcal{M}}\) closed, and \(S\) a full satisfaction class.
1. \(S\) is _\(F\)-correct on \(I\)_ for formulae in \(X\) if for each \(\phi\in X\) and \(x\in M\), whenever \(F(x,\phi)\in X\) and \(x\in I\), then \(S(F(x,\phi),\alpha)\) if and only if \(S(\phi,\alpha)\) for all assignments \(\alpha\) of \(\phi\).
2. \(S\) is \(F\)_-trivial above_ \(I\) for formulae in \(X\) if for each \(\phi\in X\) and \(x\in M\), whenever \(F(x,\phi)\in X\) and \(x>I\), then either \(\Phi(p,q)\) is \(q\)-monotone and \(S(F(x,\phi),\alpha)\) for all assignments \(\alpha\), or \(\Phi(p,q)\) is not \(q\)-monotone and \(\neg S(F(x,\phi),\alpha)\) for all assignments \(\alpha\) of \(\phi\).
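To illustrate Definition 30 with \(F(x,\phi)=\bigvee_{x}\phi\) (template \(q\lor p\), which is \(q\)-monotone): \(S\) is \(F\)-trivial above \(I\) for formulae in \(X\) exactly when every \(\bigvee_{x}\phi\in X\) with \(\phi\in X\) and \(x>I\) is satisfied under every assignment of \(\phi\), and \(S\) is \(F\)-correct on \(I\) for formulae in \(X\) when each such disjunction with \(x\in I\) is satisfied under exactly the same assignments as \(\phi\).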
**Lemma 31**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be countable and recursively saturated. Let \(F\) be an idempotent sentential operator with template \(\Phi(p,q)\), and assume \(\Phi(p,q)\) has syntactic depth 1. Let \(I\subseteq_{\operatorname{end}}M\) be \(F\)-closed and separable. Suppose \(S\) is a full satisfaction class, \((\mathcal{M},S)\) is recursively saturated, \(X\subseteq\operatorname{Form}^{\mathcal{M}}\) is finitely generated, and \(S\) is \(F\)-correct on \(I\) and \(F\)-trivial above \(I\) for sentences in \(X\). Then for any finitely generated \(X^{\prime}\supseteq X\) there is a full satisfaction class \(S^{\prime}\) such that \((\mathcal{M},S^{\prime})\) is recursively saturated, \(S^{\prime}\upharpoonright X=S\upharpoonright X\), and \(S^{\prime}\) is \(F\)-correct on \(I\) and \(F\)-trivial above \(I\) for sentences in \(X^{\prime}\)._

Proof.: Rather than construct a full satisfaction class with these properties directly, it suffices to find an \(X^{\prime}\)-satisfaction class \(S_{1}\) with these properties. Let \(a\), \(b\), and \(c\) code enumerations such that \(\{(c)_{n}:n\in\omega\}=X^{\prime}\cap\operatorname{Sent}^{\mathcal{M}}\), \((b)_{n}\) is the \(F\)-root of \((c)_{n}\), and \((c)_{n}=F((a)_{n},(b)_{n})\). By separability, there is \(d\) such that for each \(n\in\omega\), \((a)_{n}\in I\) if and only if \((a)_{n}<d\). We show that the theory \(Th\) consisting of the following is consistent:

* \(\operatorname{ElDiag}(\mathcal{M})\),
* \(\{\operatorname{Comp}(\phi,\psi,\theta):\phi,\psi,\theta\in X^{\prime}\}\),
* \(\{S_{1}(\phi,\alpha)\equiv S(\phi,\alpha):\phi\in X\}\) for all assignments \(\alpha\) of \(\phi\) (preservation),
* \(\{S_{1}(F((a)_{n},(b)_{n}),\emptyset)\equiv S_{1}((b)_{n},\emptyset):n\in\omega,(a)_{n}<d\}\) (\(F\)-correctness), and
* \(\{S_{1}(F((a)_{n},(b)_{n}),\emptyset):n\in\omega,(a)_{n}>d\}\) if \(\Phi(p,q)\) is \(q\)-monotone, and \(\{\neg S_{1}(F((a)_{n},(b)_{n}),\emptyset):n\in\omega,(a)_{n}>d\}\) otherwise (\(F\)-triviality).

Suppose first that \((\mathcal{M},S,S_{1})\models Th\); we check that \(S_{1}\) is then \(F\)-correct on \(I\) and \(F\)-trivial above \(I\) for sentences in \(X^{\prime}\). Suppose \(\theta,\phi\in X^{\prime}\cap\operatorname{Sent}^{\mathcal{M}}\), \(x\in M\), and \(\theta=F(x,\phi)\). For some \(n,m\in\omega\),
\(\theta=F((a)_{n},(b)_{n})\) and \(\phi=F((a)_{m},(b)_{m})\). By Lemma 28, if \(F\) is accessible, then either \(x=0\) and \(\theta=\phi\), or \(x=(a)_{n}\) and \(\phi=(b)_{n}\); so if \(F\) is accessible, there is nothing to show. Suppose \(F\) is additive. Therefore (by our hypothesis) \(I\) is closed under addition. By Lemma 28, \((b)_{n}=(b)_{m}\) and \((a)_{n}=(a)_{m}+x\). There are two cases to consider, corresponding to the \(F\)-correctness and \(F\)-triviality properties of \(\theta\):
Case 1: \(x\in I\) (\(F\)-correctness): Since \(I\) is closed under addition, \((a)_{n}\in I\) if and only if \((a)_{m}\in I\). By separability, therefore, \((a)_{n}<d\) if and only if \((a)_{m}<d\). If \((a)_{n}<d\), then by \(F\)-correctness we have \((\theta,\emptyset)\in S_{1}\) if and only if \(((b)_{n},\emptyset)\in S_{1}\) and \((\phi,\emptyset)\in S_{1}\) if and only if \(((b)_{n},\emptyset)\in S_{1}\). Therefore, \((\theta,\emptyset)\in S_{1}\) if and only if \((\phi,\emptyset)\in S_{1}\). If \((a)_{n}>d\), then by \(F\)-triviality we have either \((\theta,\emptyset)\in S_{1}\) and \((\phi,\emptyset)\in S_{1}\), or \((\theta,\emptyset)\notin S_{1}\) and \((\phi,\emptyset)\notin S_{1}\). Again we have \((\theta,\emptyset)\in S_{1}\) if and only if \((\phi,\emptyset)\in S_{1}\).
Case 2: \(x>I\) (\(F\)-triviality): In this case, \((a)_{n}>I\), and therefore \((a)_{n}>d\). By \(F\)-triviality, if \(\Phi\) is \(q\)-monotone, we have \((\theta,\emptyset)\in S_{1}\), and if \(\Phi\) is not \(q\)-monotone, we have \((\theta,\emptyset)\notin S_{1}\).
Now we return to showing that \(Th\) is consistent. Let \(T_{0}\subseteq Th\) be a finite subtheory. Let \(C\) be the set of formulas such that the instances of their compositionality, preservation, \(F\)-correctness and \(F\)-triviality appear in \(T_{0}\). Then \(C\) is finite, so the modified subformula relation, \(\triangleleft\), is well-founded on \(C\). We define \(S\) inductively on this relation:
Suppose \(\phi\) is minimal in \(C\). If \(\alpha\) is an assignment for \(\phi\), we put \((\phi,\alpha)\in S_{1}\) if any of the following hold:
1. \(\phi\in X\) and \((\phi,\alpha)\in S\),
2. \(\phi\) is atomic, \(\alpha\) is an assignment for \(\phi\) and \(\mathcal{M}\models\phi[\alpha]\), or
3. \(\Phi(p,q)\) is \(q\)-monotone, \(\phi=F((a)_{n},(b)_{n})\), \(\alpha=\emptyset\) and \((a)_{n}>d\).
Define \(\phi\) of higher rank using compositionality if possible. If it is not possible, meaning that no immediate subformula of \(\phi\) is in \(C\), then there must be \(\psi\in C\) such that \(\psi\) is the \(F\)-root of \(\phi\). Let \(\phi=F((a)_{n},(b)_{n})\), where \((b)_{n}=\psi\). In this case, put \((\phi,\alpha)\in S_{1}\) if either \((\psi,\alpha)\in S_{1}\) or \((a)_{n}>d\) and \(\Phi\) is \(q\)-monotone.
We show that \((\mathcal{M},S,S_{1})\models T_{0}\). Clearly, \((\mathcal{M},S,S_{1})\) satisfies the elementary diagram of \(\mathcal{M}\), and by definition, \((\mathcal{M},S,S_{1})\) satisfies all compositional axioms in \(T_{0}\).
We show that \((\mathcal{M},S,S_{1})\) satisfies the preservation scheme. Suppose \(\phi\in X\). Then if \(\phi\) is minimal in \(C\) in the subformula relation, then \(S_{1}(\phi,\alpha)\) if and only if \(S(\phi,\alpha)\) by construction. If \(\phi\) is not minimal, then \(S_{1}(\phi,\alpha)\) if and only if \(S(\phi,\alpha)\) follows by compositionality along with \(F\)-correctness and \(F\)-triviality of \(S\) on sentences from \(X\).
Next we show \(F\)-triviality. Suppose \(\phi=F((a)_{n},(b)_{n})\in C\) and \((a)_{n}>d\). We assume \(\Phi(p,q)\) is \(q\)-monotone; the other case is similar. If \(\phi\) is minimal in \(C\), then by construction, \((\phi,\emptyset)\in S_{1}\). If \(\phi\) is not minimal, then let \(\psi=F((a)_{n}-1,(b)_{n})\). As \((a)_{n}>I\), \((a)_{n}-1>I\) as well, so \((a)_{n}-1>d\). If \(\psi\in C\), then by induction, we have \((\psi,\emptyset)\in S_{1}\). By compositionality, \((\psi,\emptyset)\in S_{1}\) if and only if \((\phi,\emptyset)\in S_{1}\), so \((\phi,\emptyset)\in S_{1}\). If \(\psi\not\in C\), then it must be the case that \((b)_{n}\in C\), and by construction, \((\phi,\emptyset)\in S_{1}\) since \((a)_{n}>d\).
Lastly, we show the \(F\)-correctness scheme. Suppose \(\phi=F((a)_{n},(b)_{n})\in C\), \((a)_{n}<d\), and \(S_{1}(\phi,\emptyset)\equiv S_{1}((b)_{n},\emptyset)\in T_{0}\). If \(\phi\in X\), then \((b)_{n}\in X\), and \((\phi,\emptyset)\in S\) if and only if \(((b)_{n},\emptyset)\in S\). By preservation, the same holds with \(S_{1}\) replacing \(S\).
Suppose \(\phi\not\in X\). Let \(\psi=F((a)_{n}-1,(b)_{n})\). If \(\psi\in C\), then as \(\psi\) and \((b)_{n}\) each have lower rank than \(\phi\), we can assume \(((b)_{n},\emptyset)\in S_{1}\) if and only if \((\psi,\emptyset)\in S_{1}\). Then, by compositionality and the properties of \(\Phi\), \(S_{1}(\phi,\emptyset)\equiv S_{1}(\psi,\emptyset)\), so
\[(\phi,\emptyset)\in S_{1}\iff(\psi,\emptyset)\in S_{1}\iff((b)_{n},\emptyset )\in S_{1}.\]
If \(\psi\not\in C\), then by our construction, \((\phi,\emptyset)\in S_{1}\) if and only if either \(((b)_{n},\emptyset)\in S_{1}\) or \((a)_{n}>d\) (and \(\Phi\) is \(q\)-monotone). Since \((a)_{n}<d\), then \((\phi,\emptyset)\in S_{1}\) if and only if \(((b)_{n},\emptyset)\in S_{1}\).
Since \(Th\) is consistent, there is a model \((\mathcal{M}^{\prime},S^{\prime},S^{\prime}_{1})\models Th\). By resplendency of \((\mathcal{M},S)\), \((\mathcal{M},S)\) has an expansion \((\mathcal{M},S,S_{1})\models Th\). This \(S_{1}\) is the required \(X^{\prime}\)-satisfaction class.
We shall now prove an analogous lemma with a different assumption about \(I\): instead of separability we shall require that there is no least \(F\)-gap above \(I\). In the proof we shall need one more notion, which we shall now define:
**Definition 32**.: Let \(\mathcal{M}\models\mathsf{PA}\), and let \(F\) be an idempotent sentential operator. Assume that \(F\) is additive. For \(Z\subseteq\operatorname{Form}^{\mathcal{M}}\) and \(d\in M\), let \(Z_{d}\) be the set of those formulae of the form \(F(c,\phi)\), for which there are \(n\in\mathbb{N}\), \(a\in M\), such that
* \(0<a-c<n\cdot d\),
* \(F(a,\phi)\in Z\),
* \(\phi\) is the root of \(F(a,\phi)\).
For uniformity of our proofs, when \(F\) is accessible, we take \(Z_{d}\) to be just the closure of \(Z\) (under immediate subformulae and taking \(F\)-roots).
**Proposition 33**.: _Let \(\mathcal{M}\models\mathsf{PA}\), \(F\) an idempotent sentential operator, and \(Z\subseteq\operatorname{Form}^{\mathcal{M}}\). Then, for every \(d\in M\)\((Z_{d})_{d}\subseteq Z_{d}\)._
Proof.: This is clear if \(F\) is accessible, so assume \(F\) is additive. Fix an arbitrary \(c,\phi\) such that \(F(c,\phi)\in(Z_{d})_{d}\). Choose \(a,n\) such that \(F(a,\phi)\in Z_{d}\) and \(0<a-c<n\cdot d\). By definition it follows that for some \(c^{\prime}\), \(n^{\prime}\), \(\phi^{\prime}\) and \(a^{\prime}\), \(F(a,\phi)=F(c^{\prime},\phi^{\prime})\), \(F(a^{\prime},\phi^{\prime})\in Z\) and \(0<a^{\prime}-c^{\prime}<n^{\prime}\cdot d\). Since \(F\) is additive this means that \(\phi=\phi^{\prime}\) (since both of them are roots) and \(a=c^{\prime}\), hence
\[0<a^{\prime}-c=a^{\prime}-a+a-c=a^{\prime}-c^{\prime}+a-c<(n+n^{\prime})\cdot d.\]
**Lemma 34**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be countable and recursively saturated. Let \(F\) be an idempotent sentential operator with template \(\Phi(p,q)\), and assume \(\Phi(p,q)\) has syntactic depth 1. Let \(I\subseteq_{\mathit{end}}M\) be \(F\)-closed with no least \(F\)-gap. Suppose \(S\) is a full satisfaction class, \((\mathcal{M},S)\) is recursively saturated, \(d>I\) and \(S\) is \(F\)-correct on \([0,d)\). Suppose further that \(X\subseteq\operatorname{Form}^{\mathcal{M}}\) is finitely generated. Then for any formula \(\tilde{\phi}\in\operatorname{Form}^{\mathcal{M}}\), there are \(I<d_{0}<d_{1}<d\), a finitely generated \(X^{\prime}\supseteq X\) and a full satisfaction class \(S^{\prime}\) such that \(\tilde{\phi}\in X^{\prime}\), \((\mathcal{M},S^{\prime})\) is recursively saturated, \(S^{\prime}\upharpoonright X=S\upharpoonright X\), \(S^{\prime}\) is \(F\)-correct on \([0,d_{0})\) and, for some \(\theta\in X^{\prime}\), \(F(d_{1},\theta)\in X^{\prime}\) and \((\mathcal{M},S^{\prime})\models\neg(S^{\prime}(\theta,\emptyset)\equiv S^{\prime}(F(d_{1},\theta),\emptyset))\)._
Proof.: Fix \(\mathcal{M},I,S,X,d\) and \(\tilde{\phi}\) as in the assumptions. Let \(\odot\) denote \(+\) if \(F\) is accessible and \(\cdot\) if \(F\) is additive. Let \(d_{1}\), \(d_{0}\) be any numbers above \(I\) such that for every \(n,k\in\omega\), \(d_{0}\odot n<d_{1}\odot k<d\). Suppose that every formula in \(X\cup\{\tilde{\phi}\}\) has complexity smaller than \(r\in M\). Let \(\theta:=(\neg)^{2r}0=0\) if \(F\) is not \(q\)-monotone and
\(\theta:=\neg(\neg)^{2r}0=0\) in the other case. We note that \(\theta\) is the \(F\)-root of \(F(d_{1},\theta)\) and \(\operatorname{Cl}(F(d_{1},\theta))\) is disjoint from \(X\). We put \(Y:=\operatorname{Cl}(X\cup\{F(d_{1},\theta)\})\). Observe that if \(\phi\in Y\) is an \(F\)-root, then either \(\phi\in X\) or \(\phi=\theta\). Hence \(Y\) is closed under \(F\)-roots.
We shall start our construction by extending \(S\) to a \(Y\cup Y_{d_{0}}\)-satisfaction class on \(\mathcal{M}\) which is \(F\)-correct on \([0,d_{0})\). Proposition 33 implies that \((Y_{d_{0}})_{d_{0}}\subseteq Y_{d_{0}}\). Since obviously, for any \(Z,Z^{\prime}\), \((Z\cup Z^{\prime})_{d_{0}}=Z_{d_{0}}\cup Z^{\prime}_{d_{0}}\), it follows that \((Y\cup Y_{d_{0}})_{d_{0}}=Y_{d_{0}}\cup(Y_{d_{0}})_{d_{0}}\subseteq Y\cup Y_{d_{0}}.\) Additionally, \(Y\cup Y_{d_{0}}\) is closed under roots and under immediate subformulae. We argue that \(X_{d_{0}}\cap\operatorname{Cl}(F(d_{1},\theta))_{d_{0}}=\emptyset.\) To this end observe that if \(\psi\in\operatorname{Cl}(F(d_{1},\theta))_{d_{0}}\), then either \(\psi\) is in \(\operatorname{Cl}(\theta)\), and hence the complexity of \(\psi\) is greater than \(2r-n\) for some standard \(n\), or \(\psi\) contains \(\theta\) as a subformula. In both cases the complexity of \(\psi\) is at least \(2r-n\) for some standard \(n\). Consequently, if \(\psi\in\operatorname{Cl}(F(d_{1},\theta))_{d_{0}}\), then \(\psi\) does not belong to \(X_{d_{0}}\), because each formula in \(X_{d_{0}}\) is a subformula of a formula in \(X\), and hence its complexity is not greater than \(r\). Moreover, if \(\phi\), \(F(b,\phi)\) are both in \(Y\cup Y_{d_{0}}\) and \(b<d_{0}\), then \(\phi\in X_{d_{0}}\iff F(b,\phi)\in X_{d_{0}}\). Indeed, from right to left this follows since \((X_{d_{0}})_{d_{0}}\subseteq X_{d_{0}}\). From left to right this is so, since if \(F(b,\phi)\notin X_{d_{0}}\), then either \(F(b,\phi)\in\operatorname{Cl}(\theta)_{d_{0}}\) or \(F(b,\phi)=F(b^{\prime},\theta)\). The first case is impossible since each formula in \(\operatorname{Cl}(\theta)_{d_{0}}\) starts with a negation which does not occur in \(\Phi\). In the latter case it follows that \(\theta\) is a subformula of \(\phi\) (because \(\theta\) is \(F\)-irreducible) and hence \(\phi\notin X_{d_{0}}\).
Let us put \(Y^{\prime}=Y\cup Y_{d_{0}}\). We extend \(S\mathord{\restriction}_{X}\) to a \(Y^{\prime}\)-satisfaction class \(S_{0}\), which is compositional and \(d_{0}\)-correct for formulae in \(Y^{\prime}\). For every \(\phi\in Y^{\prime}\) and every \(\alpha\):
* if \(\phi\in X\), then \(S_{0}(\phi,\alpha)\) iff \(S(\phi,\alpha)\);
* if \(\phi\in\operatorname{Cl}(\{\theta\})\), then (\(\phi\) is a sentence and) \(S_{0}(\phi,\alpha)\) iff \(\alpha=\emptyset\) and \(\phi\) is of the form \((\neg)^{2b}0=0\).
* if \(\phi=F(d_{1},\theta)\), then (\(\phi\) is a sentence and) \(S_{0}(\phi,\alpha)\) iff \(\alpha=\emptyset\) and \(F\) is \(q\)-monotone.
* if \(\phi\) is in the closure of \(F(d_{1},\theta)\), then, since \(\Phi(p,q)\) has syntactic depth 1, \(\phi\) is either in \(\operatorname{Cl}(\{\theta\})\) or \(\phi=F(d_{1}-n,\theta)\) for some \(n\in\omega\). We have already taken care of the former case. In the latter case we let the value of \(\phi\) on \(\alpha\) be the same as that of \(F(d_{1},\theta)\) on \(\alpha\).
* otherwise \(\phi=F(a-b,\psi)\) for some \(k\in\mathbb{N}\), \(a\in M\), \(b<k\cdot d_{0}\) and \(\psi\), \(F(a,\psi)\) such that \(F(a,\psi)\in Y\), \(\psi\) is an \(F\)-root of \(F(a,\psi)\). This can happen only if \(F\) is additive. Since \(Y\) is closed under roots, \(\psi\in Y\), hence for each \(\alpha\) the value of \(\psi\) on \(\alpha\) has already been defined. We stipulate that the value of \(F(a-b,\psi)\) on \(\alpha\) is the same as that of \(F(a,\psi)\) on \(\alpha\). We observe that this is independent of the choice of \(F(a,\psi)\in Y\): if \(F(a,\psi)\) and \(F(a^{\prime},\psi^{\prime})\) both satisfy the above conditions, then either both \(F(a,\psi),F(a^{\prime},\psi^{\prime})\) belong to \(X\) or both of them belong to \(\operatorname{Cl}(F(d_{1},\theta))\). If the former holds our claim follows because \(S\) is \(F\)-correct on \([0,d)\). If the latter holds, it must be the case that \(\psi=\psi^{\prime}=\theta\) and \(|a-a^{\prime}|\) is standard, so our claim follows by construction.
We check that \(S_{0}\) is \(F\)-correct on \([0,d_{0})\) for sentences in \(Y^{\prime}\). If \(F\) is accessible, this easily follows from our construction. Assume that \(F\) is additive. Assume \(0<b<d_{0}\) and fix an arbitrary \(\phi\) such that \(\phi,F(b,\phi)\in Y^{\prime}\). By the previous considerations, either both \(\phi,F(b,\phi)\) belong to \(X_{d_{0}}\) or they both belong to \(\operatorname{Cl}(F(d_{1},\theta))_{d_{0}}\). In the latter case both \(\phi\) and \(F(b,\phi)\) are of the form \(F(d_{1}-b^{\prime},\theta)\) for some standard \(n\) and \(b^{\prime}<n\cdot d_{0}\). In particular, for an arbitrary \(\alpha\), \(\phi\) and \(F(b,\phi)\) get the same value on \(\alpha\) (by construction).
Suppose then that \(\phi,F(b,\phi)\in X_{d_{0}}\) and fix \(a_{0},a_{1},b_{0},b_{1},n_{0},n_{1},\psi_{0},\psi_{1}\) such that \(\phi=F(a_{0}-b_{0},\psi_{0})\), \(F(b,\phi)=F(a_{1}-b_{1},\psi_{1})\) and \(F(a_{i},\psi_{i})\in X\), \(b_{i}<n_{i}\cdot d_{0}\) and \(\psi_{i}\) is the root of \(F(a_{i},\psi_{i})\). It follows that \(\phi\) and \(F(b,\phi)\) have the same root, so \(\psi_{0}=\psi_{1}\). In particular \(F(b,\phi)=F(a_{1}-b_{1},\psi_{0})=F(a_{0}-b_{0}+b,\psi_{0})\). Hence \(a_{1}-b_{1}=a_{0}-b_{0}+b\), so \(|a_{1}-a_{0}|=|b_{1}+b-b_{0}|<(n_{0}+n_{1}+1)\cdot d_{0}<d\). In particular, since \(S\) is \(F\)-correct on \([0,d)\), \(F(a_{0},\psi_{0})\) and \(F(a_{1},\psi_{0})\) are assigned by \(S\) the same values on each \(\alpha\), and hence, by construction, \(\phi\) and \(F(b,\phi)\) are assigned the same values by \(S_{0}\).
Now we show how to find \(S^{\prime}\) and \(X^{\prime}\) as in the statement of the lemma. We let \(X^{\prime}=X\cup\operatorname{Cl}(\{\tilde{\phi},\theta,F(d_{1},\theta)\})\). For \(S^{\prime}\), by an easy resplendency argument, it is enough to build an extension \(\mathcal{N}\succeq\mathcal{M}\) and a satisfaction class \(S_{N}\) such that
1. \(S_{N}\) is an \(\mathcal{N}\) satisfaction class which is \(F\)-correct on \([0,d_{0})\).
2. \(S_{N}\) makes \(F(d_{1},\theta)\equiv\theta\) false.
3. \(S_{N}\) agrees with \(S\) on \(X\).
We observe that, since \(X\) is finitely generated, condition 3 is expressible in the language of arithmetic augmented with \(S_{N}\) and \(S\). In the construction we shall heavily rely on the extension of \(S\) to \(Y^{\prime}\) given by \(S_{0}\). We build \(\mathcal{N}\) and \(S_{N}\) in stages following the idea of [3]. Let \(\mathcal{M}_{0}=\mathcal{M}\); we construct a chain of pairs \((\mathcal{M}_{0},S_{0}),(\mathcal{M}_{1},S_{1}),\ldots\) which satisfy the following conditions
* for each \(n\), \(\mathcal{M}_{n}\preceq\mathcal{M}_{n+1}\).
* for each \(n\), \(S_{n+1}\) is a Form\({}^{\mathcal{M}_{n}}\)-satisfaction class.
* \(S_{1}\) agrees with \(S_{0}\) on \(Y^{\prime}\) and for each \(n>1\), \(S_{n+1}\) agrees with \(S_{n}\) on Form\({}^{\mathcal{M}_{n}}\).
* for each \(n\), \(S_{n+1}\) is \(F\)-correct on \([0,d_{0})\) with respect to formulae from Form\({}^{\mathcal{M}_{n}}\).
We show how to construct \(\mathcal{M}_{1},S_{1}\) and the construction of \(\mathcal{M}_{n+1},S_{n+1}\) given \(\mathcal{M}_{n},S_{n}\) for \(n\geq 1\) is fully analogous. We consider the theory given as the union of the following sets of sentences:
1. \(\operatorname{ElDiag}(\mathcal{M}_{0})\);
2. \(\{S(\phi,\alpha):\phi\in Y^{\prime},(\phi,\alpha)\in S_{0}\}\)
3. \(\{\operatorname{Comp}(\phi,\psi,\theta):\phi\in\operatorname{Form}^{\mathcal{ M}_{0}}\}\).
4. \(\{\forall\alpha S(F(a,\phi)\equiv\phi,\alpha):a<d_{0},\phi\in\operatorname{ Form}^{\mathcal{M}_{0}}\}\).
Fix a finite portion \(T_{0}\) of this theory and let \(E\) be the set of those formulae which occur in one of the axioms in \(T_{0}\).
Let us observe that the relation \(\phi\sqsubset\psi:=\mathcal{M}_{0}\models"\phi\) is a subformula of \(\psi"\) is well-founded on \(E\), since \(E\) is finite. By this we mean that \(\phi\sqsubset\psi\) if \(\mathcal{M}_{0}\) sees that \(\phi\) is a subformula (not necessarily direct) of \(\psi\). We define \(S\subseteq M_{0}^{2}\) by induction on the ranking function \(\operatorname{rk}(\cdot)\) given by \(\sqsubset\). For an arbitrary \(\psi\) of rank \(0\) we put
* if \(\psi\) is standard, then we know what to do.
* if \(\psi\in Y^{\prime}\), then \((\psi,\alpha)\in S\) iff \((\psi,\alpha)\in S_{0}\)
* if \(\psi\notin Y^{\prime}\), then for no \(\alpha\), \((\psi,\alpha)\in S\).
If \(\phi\) has positive rank, then
* if all immediate subformulae are in \(E\), then the immediate subformulae of \(\phi\) have lower ranks, so we know what to do.
* if the above does not hold and \(\phi=F(a,\psi)\) for some \(\psi\in E\) and \(0<a<d_{0}\), then \(\psi\) has lower rank, so for an arbitrary \(\alpha\) we put \((\phi,\alpha)\in S\) iff \((\psi,\alpha)\in S\).
* if \(\phi\in Y^{\prime}\), then \((\phi,\alpha)\in S\) iff \((\phi,\alpha)\in S_{0}\).
* otherwise, for every \(\alpha,(\phi,\alpha)\notin S\).
We check that, with \(S\) so defined, \((\mathcal{M},S)\models T_{0}\). That the compositional clauses hold is clear from the construction. We check that \(S\) is \(F\)-correct on \([0,d_{0})\) for sentences in \(E\). By induction on \(n\) we prove that for all \(\phi,F(a,\phi)\in E\) with \(\operatorname{rk}(\phi)+\operatorname{rk}(F(a,\phi))=n\) and \(a<d_{0}\), and for every \(\alpha\), \(S(\phi,\alpha)\iff S(F(a,\phi),\alpha)\). Since \(\operatorname{rk}(\phi)+\operatorname{rk}(F(a,\phi))=0\) only if \(a=0\), the base step is trivial. Assume \(\operatorname{rk}(\phi)+\operatorname{rk}(F(a,\phi))\) is positive. Then certainly \(\operatorname{rk}(F(a,\phi))\) is positive. If all immediate subformulae of \(F(a,\phi)\) belong to \(E\), then at least one of them is of the form \(F(a-1,\phi)\) and the thesis follows by the inductive hypothesis and idempotency of \(\Phi\), since \(F(a-1,\phi)\) has lower rank than \(F(a,\phi)\). Otherwise, there are \(\psi\in E\) and \(b<d_{0}\) such that \(F(a,\phi)=F(b,\psi)\), and we decided that for every \(\alpha\) the values of \(F(b,\psi)\) and \(\psi\) are the same. By Lemma 28, for some \(b^{\prime}\), either \(\phi=F(b^{\prime},\psi)\) or \(\psi=F(b^{\prime},\phi)\). Hence the thesis follows by the inductive assumption.
Now we argue for the preservation axioms. By induction on the rank of \(\phi\) we prove that if \(\phi\in Y^{\prime}\), then for every \(\alpha\), \(S(\phi,\alpha)\) iff \(S_{0}(\phi,\alpha)\). This is immediate for formulae of rank \(0\). In the induction step we use the definition of \(S\) and the closure properties of \(Y^{\prime}\).
For the induction step we first extend \(S_{n}\restriction_{M_{n}}\) to the set \(\operatorname{Form}^{\mathcal{M}}\cup(\operatorname{Form}^{\mathcal{M}})_{d_{0}}\subseteq\operatorname{Form}^{\mathcal{M}_{n+1}}\). Then we argue as in the first step.
**Theorem 35**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be countable and recursively saturated and \(I\subseteq_{\mathit{end}}M\). Let \(F\) be an idempotent sentential operator with template \(\Phi(p,q)\), and assume \(\Phi(p,q)\) has syntactic depth 1. Suppose \(I\) is \(F\)-closed. Then if \(I\) is separable or has no least \(F\)-gap above it, there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and_
\[I=\{x:\forall y\leq x\forall\phi\in\operatorname{Sent}^{\mathcal{M}}(T(\phi) \equiv T(F(y,\phi)))\}.\]
Proof.: We construct sequences \(F_{0}\subseteq F_{1}\subseteq\ldots\) and \(S_{0},S_{1},\ldots\) of sets such that:
1. \(F_{i}\) is finitely generated and \(\cup F_{i}=\operatorname{Form}^{\mathcal{M}}\),
2. \(S_{i}\) is a full satisfaction class and \((\mathcal{M},S_{i})\) is recursively saturated
3. \(S_{i+1}\upharpoonright F_{i}=S_{i}\upharpoonright F_{i}\),
4. \(S_{i}\) is \(F\)-correct on \(I\) for sentences from \(F_{i}\), and
5. for each \(x>I\), there is \(I<y<x\), \(i\in\omega\) and \(\phi\in F_{i}\) such that \(F(y,\phi)\in F_{i}\) and \(\neg(S_{i}(F(y,\phi),\alpha)\equiv S_{i}(\phi,\alpha))\) for all assignments \(\alpha\).
If \(I\) is separable, we also ensure that \(S_{i}\) is \(F\)-trivial above \(I\) for sentences in \(F_{i}\).
Prior to starting the construction, if \(I\) has no least \(F\)-gap above it, we externally fix a sequence \(d_{0}>d_{1}>\ldots\) such that \(\inf\{d_{i}:i\in\omega\}=I\) and for each \(i\), \(d_{i}\) and \(d_{i+1}\) are in different \(F\)-gaps. Additionally, we externally fix an enumeration of \(\operatorname{Form}^{\mathcal{M}}\) (in order type \(\omega\)).
Suppose we have constructed \(F_{i}\) and \(S_{i}\). Let \(\phi\) be the least formula in our enumeration that is not in \(F_{i}\). If \(I\) is separable, let \(F_{i+1}\) be generated by \(F_{i}\) and \(\phi\), and apply Lemma 31 to obtain \(S_{i+1}\). Otherwise, we suppose \(S_{i}\) is \(F\)-correct on \([0,d_{i})\) and apply Lemma 34 to obtain \(F_{i+1}\), \(S_{i+1}\), and \(I<c_{0}<c_{1}<d_{i}\) such that \(S_{i+1}\) is \(F\)-correct on \([0,c_{0})\) but not on \([0,c_{1})\). (In fact, there is \(\theta\in F_{i+1}\) that witnesses the failure of \(F\)-correctness on \([0,c_{1})\).) Without loss of generality, we can replace \(d_{i+1}\) with the minimum of \(\{c_{0},d_{i+1}\}\), so that we can assume \(S_{i+1}\) is \(F\)-correct on \([0,d_{i+1})\) and continue.
Having constructed these sequences, let \(S=\cup S_{i}\upharpoonright F_{i}\). Then it follows that \(S\) is \(F\)-correct on \(I\) and for each \(x>I\), there is \(\phi\) such that \(\neg(T(\phi)\equiv T(F(x,\phi)))\).
**Remark 36**.: It is easy to see that in fact a tiny modification of our proof of Theorem 35 shows something more: we can perform our construction in such a way that \(S\) is \(F\) correct on \(I\) not only on all sentences but on _all formulae_. Hence, given \(\mathcal{M},I\) and \(F\) as in the assumptions of Theorem 35 we can find a satisfaction class \(S\) such that
\[I =\{x:\forall y\leq x\forall\phi\in\operatorname{Sent}^{\mathcal{M }}(T(\phi)\equiv T(F(y,\phi)))\}\] \[=\{x:\forall y\leq x\forall\phi\in\operatorname{Form}^{\mathcal{ M}}\forall\alpha\big{(}S(\phi,\alpha)\equiv S(F(y,\phi),\alpha)\big{)}\}.\]
We assume that \(\Phi\) has depth \(1\) in the previous results because the more general case is quite complicated. In particular, if \(\Phi\) has depth at least \(2\), then it might not be possible to ensure that \(S\) is \(F\)-trivial above \(I\) as we do in Lemma 31. For example, suppose \(\Phi(p,q)=(\neg\neg)q\), \(\phi=(0=0)\) and \(\psi=\neg(0=0)\). Then, for any \(x\) and any satisfaction class \(S\), \(T((\neg\neg)^{x}\phi)\equiv\neg T((\neg\neg)^{x}\psi)\). However, we show in our next result that we can still ensure that, if \(I\) is separable and closed under addition, there is \(S\) such that \(I\) is the \((\neg\neg)\)-correct cut.
**Proposition 37**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be countable and recursively saturated and \(I\subseteq_{\text{end}}M\) separable and closed under addition. Then there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and_
\[I=\{x:\forall y\leq x\forall\phi\in\operatorname{Sent}^{\mathcal{M}}(T(\phi) \equiv T((\neg\neg)^{y}(\phi)))\}.\]
Proof.: Modify the definition of \(\triangleleft\) so that \(\phi\triangleleft\psi\) if either \(\phi\) is an immediate subformula of \(\psi\) or \(\phi\) does not start with a double negation and there is \(x\) such that \((\neg\neg)^{x}\phi=\psi\). (That is, \(\phi\) is the \(F\)-root of \(\psi\) where \(F(x,\theta)=(\neg\neg)^{x}\theta\).) By similar techniques to the proof of Theorem 35, it suffices to show the following: given any finitely generated \(X\) and full satisfaction class \(S\) such that
* \((\mathcal{M},S)\) is recursively saturated,
* if \(x\in I\), \(\phi\in X\), and \((\neg\neg)^{x}\phi\in X\), then for each assignment \(\alpha\) of \(\phi\), \((\phi,\alpha)\in S\) iff \(((\neg\neg)^{x}\phi,\alpha)\in S\), and,
* if \(x>I\), \((\neg\neg)^{x}\phi\in X\), and \(\phi\in X\), then for each assignment \(\alpha\) of \(\phi\), \((\neg\phi,\alpha)\in S\) iff \(((\neg\neg)^{x}\phi,\alpha)\in S\),
then for any finitely generated \(X^{\prime}\supseteq X\), there is a full satisfaction class \(S^{\prime}\) such that
* \((\mathcal{M},S^{\prime})\) is recursively saturated,
* \(S^{\prime}\upharpoonright X=S\upharpoonright X\),
* if \(x\in I\), \(\phi\in X^{\prime}\), and \((\neg\neg)^{x}\phi\in X^{\prime}\), then for each assignment \(\alpha\) of \(\phi\), \((\phi,\alpha)\in S^{\prime}\) iff \(((\neg\neg)^{x}\phi,\alpha)\in S^{\prime}\), and,
* if \(x>I\), \((\neg\neg)^{x}\phi\in X^{\prime}\), and \(\phi\in X^{\prime}\), then for each assignment \(\alpha\) of \(\phi\), \((\neg\phi,\alpha)\in S^{\prime}\) iff \(((\neg\neg)^{x}\phi,\alpha)\in S^{\prime}\).
Moreover, rather than find a full satisfaction class satisfying the above, we simply need to find an \(X^{\prime}\)-satisfaction class \(S^{\prime}\) satisfying the above. To do so, let \(a,b\), and \(c\) code enumerations such that \(\{(c)_{n}:n\in\omega\}=X^{\prime}\cap\operatorname{Sent}^{\mathcal{M}}\), \((b)_{n}\) is the root of \((c)_{n}\), and \((c)_{n}=(\neg\neg)^{(a)_{n}}((b)_{n})\). By separability, there is \(d\) such that for each \(n\in\omega\), \((a)_{n}\in I\) if and only if \((a)_{n}<d\). We show that the theory \(Th\) consisting of the following is consistent:
* \(\operatorname{ElDiag}(\mathcal{M})\),
* \(\{\operatorname{Comp}(\phi,\psi,\theta):\phi,\psi,\theta\in X^{\prime}\}\),
* \(\{S^{\prime}(\phi,\alpha)\equiv S(\phi,\alpha):\phi\in X\}\) (preservation),
* \(\{S^{\prime}((\neg\neg)^{(a)_{n}}((b)_{n}),\alpha)\equiv S^{\prime}((b)_{n}, \alpha):n\in\omega,(a)_{n}<d\}\) (\(F\)-correctness), and
* \(\{S^{\prime}((\neg\neg)^{(a)_{n}}((b)_{n}),\alpha)\equiv S^{\prime}(\neg(b)_{n},\alpha):n\in\omega,(a)_{n}>d\}\) (\(F\)-incorrectness).
Again, one can show that if \((\mathcal{M},S,S^{\prime})\models Th\), then \(S^{\prime}\) is an \(X^{\prime}\)-satisfaction class satisfying the required properties.
To show that \(Th\) is consistent, let \(T_{0}\subseteq Th\) be finite, and let \(C\) be the set of formulas whose instances of compositionality, preservation, double negation correctness and double negation incorrectness are in \(T_{0}\). Since \(C\) is finite, then the modified subformula relation \(\triangleleft\) is well-founded on \(C\), and we define \(S^{\prime}\) inductively on this relation.
Suppose \(\phi\) is minimal in \(C\). If \(\alpha\) is an assignment for \(\phi\), we put \((\phi,\alpha)\in S^{\prime}\) if either \(\phi\) is atomic and \(\mathcal{M}\models\phi[\alpha]\), or \(\phi\in X\) and \((\phi,\alpha)\in S\). We define \(\phi\) of higher rank using compositionality if possible. If this is not possible, then it must be the case that there is \(n\in\omega\) such that \(\phi=(\neg\neg)^{(a)_{n}}((b)_{n})\) and \((b)_{n}\in C\) has lower rank than \(\phi\). We put \((\phi,\alpha)\in S^{\prime}\) if either \((a)_{n}<d\) and at an earlier stage we decided \(((b)_{n},\alpha)\in S^{\prime}\), or \((a)_{n}>d\) and at an earlier stage we decided \(((b)_{n},\alpha)\not\in S^{\prime}\).
We verify that \((\mathcal{M},S,S^{\prime})\models T_{0}\). Clearly it satisfies the diagram and compositionality axioms by construction. Suppose \(\phi\in X\) is such that \(\forall\alpha(S^{\prime}(\phi,\alpha)\equiv S(\phi,\alpha))\in T_{0}\). If \(\phi\) is of minimal rank, then this is true by construction. If not, we can assume, by induction, that whenever \(\psi\triangleleft\phi\) is such that \(\psi\in C\), then \(\forall\alpha(S^{\prime}(\psi,\alpha)\equiv S(\psi,\alpha))\). If \(\phi\) is determined via compositionality, then the result for \(\phi\) follows from the fact that both \(S\) and \(S^{\prime}\) are compositional for formulas in \(X\). Otherwise, the result for \(\phi\) follows from either double negation correctness up to \(I\), or double negation incorrectness above \(I\).
Now let \(\theta=(\neg\neg)^{(a)_{n}}((b)_{n})\), and suppose \(\forall\alpha S^{\prime}(\theta,\alpha)\equiv S^{\prime}((b)_{n},\alpha)\in T_{0}\), where \((a)_{n}<d\). By construction, \(\theta\) is not minimal in \(C\). The immediate subformula of \(\theta\) is \(\psi=\neg(\neg\neg)^{(a)_{n}-1}((b)_{n})\). If \(\psi\in C\), then by construction we have that \(S^{\prime}(\theta,\alpha)\equiv\neg S^{\prime}(\psi,\alpha)\). By induction, we can assume we have \(S^{\prime}(\psi,\alpha)\equiv\neg S^{\prime}((b)_{n},\alpha)\). If \(\psi\not\in C\), then by construction we put \(S^{\prime}(\theta,\alpha)\equiv S^{\prime}((b)_{n},\alpha)\).
A similar argument shows double negation incorrectness in the case that \((a)_{n}>d\).
By Theorem 35, if \(I\) is either separable or has no least \(\mathbb{Z}\)-gap above it, there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I(\mathrm{IDC}_{S})=I\). In fact, if \(\omega\) is a strong cut, then by Proposition 18 every cut \(I\) is either separable or has no least \(\mathbb{Z}\)-gap, and therefore every cut \(I\) can be \(I(\mathrm{IDC}_{S})\) for some satisfaction class \(S\). Similarly, if \(\omega\) is strong, then every additively closed cut \(I\) is either separable or has no least additive gap above it, and therefore each additively closed cut can be \(I(\mathrm{IDC}_{S}^{\mathrm{bin}})\).
To complete the picture, we can show that if \(F\) is an idempotent sentential operator and \(I\) is the \(F\)-correct cut, then either \(I\) has no least \(F\)-gap above it or \(I\) is separable. Therefore, if \(\mathcal{M}\) is not arithmetically saturated, then there are cuts \(I\) which cannot be realized as \(I(\mathrm{IDC}_{S})\) for any \(S\).
**Proposition 38**.: _Let \(F\) be an accessible idempotent sentential operator. Suppose \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I\subseteq_{\text{end}}\mathcal{M}\) is such that_
\[I=\{x:\forall y\leq x\forall\phi\big{(}T(F(y,\phi))\equiv T(\phi)\big{)}\}.\]
_Then either there is no least \(\mathbb{Z}\)-gap above \(I\) or \(I\) is separable._
Proof.: Assume that there is a least \(\mathbb{Z}\)-gap above \(I\) and fix \(a\) coding a sequence such that \((a)_{n+1}=(a)_{n}-1\) and \(\inf_{n\in\omega}\{(a)_{n}\}=I\). Since \((a)_{0}\notin I\) there is \(\phi\) such that \((\mathcal{M},S)\models\neg T(F((a)_{0},\phi)\equiv\phi)\). By the properties of \(F\) it follows that for every \(n\in\omega\), \((\mathcal{M},S)\models\neg T(F((a)_{n},\phi)\equiv\phi)\). Let \(D=\{F(a,\phi)\equiv\phi:a<(a)_{0}\}\) and
let \(A=\{F(a,\phi)\equiv\phi:a\in I\}\). It follows that for every \(c<(a)_{0}\), \((\mathcal{M},S)\models T(F(c,\phi)\equiv\phi)\) iff \(F(c,\phi)\equiv\phi\in A.\) So by Theorem 13, \(A\) is separable from \(D\); therefore \(I\) is separable.
This completes the picture for accessible \(F\). In particular, we have a complete picture of which cuts can be \(I(\mathrm{IDC}_{S})\). If \(\omega\) is strong, then every cut can be \(I(\mathrm{IDC}_{S})\) for some \(S\), and if \(\omega\) is not strong, then only those cuts which have no least \(\mathbb{Z}\)-gap above them can be \(I(\mathrm{IDC}_{S})\). What about cuts which are \(F\)-correct for additive \(F\), like \(I(\mathrm{QC}_{S})\)?
**Lemma 39**.: _Suppose \((\mathcal{M},S)\models\mathsf{CS}^{-}\), \(t\in M\) is a full binary tree of height \(c\) labelled with sentences, such that \(t\upharpoonright_{T}:=\{s\in\{0,1\}^{<\omega}\ \mid\ T(t(s))\}\) has arbitrarily long branches. Then \(t\upharpoonright_{T}\) has an infinite coded branch._
Proof.: Consider the following sequence of formulae
\[\phi_{n}(x):=\bigwedge_{s:\mathrm{len}(s)\leq n}\bigl{(}t(s)\equiv s\in x \bigr{)}.\]
The above conjunction is of the form \((\phi_{s_{0}}\wedge(\phi_{s_{1}}\wedge(\ldots)\ldots)\) where \(\{s_{i}\}_{i<2^{n}}\) is an enumeration of all binary sequences of length \(\leq n\) according to the length-first lexicographic ordering. By Smith's result [9, Theorem 2.19] there is \(a\in M\) such that for all \(n\in\omega\), \(T(\phi_{n}(a))\) holds. Hence \(\{s\in\{0,1\}^{<\omega}\ \mid\ s\in a\}\) is an infinite finitely branching tree, so it has a coded infinite branch, \(b\). Since \(b\subseteq a\), for every \(i\in\omega\) we have \((\mathcal{M},S)\models T(b(i))\).
**Proposition 40**.: _Let \(F\) be an additive idempotent sentential operator. Suppose \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I\subseteq_{end}\mathcal{M}\) is such that_
\[I=\{x:\forall y\leq x\forall\phi\bigl{(}T(F(y,\phi))\equiv T(\phi)\bigr{)}\}.\]
_Then either there is no least \(+\)-closed gap above \(I\) or \(I\) is separable._
Proof.: Suppose there is a least \(+\)-closed gap above \(I\) and let \(a\) code a sequence such that \((a)_{n+1}=\lfloor\frac{(a)_{n}}{2}\rfloor\) and \(\inf_{n\in\omega}(a)_{n}=I.\) Let \(c\) be the length of \(a.\) Observe that \(\sup(I\cap im(a))=I\), so by Proposition 10 it is sufficient to show that \(I\cap im(a)\) is separable. Fix \(\phi\) such that \((\mathcal{M},S)\models\neg T(F((a)_{0},\phi)\equiv\phi)\). Then for every \(n\) it holds that
\[(\mathcal{M},S)\models\neg T(F((a)_{n+1},\phi)\equiv\phi)\vee\neg T\bigl{(}F ((a)_{n+1},F((a)_{n+1},\phi))\equiv F((a)_{n+1},\phi)\bigr{)}.\]
Define the labelling \(t\) of a full binary tree of height \(c\) by recursion as follows:
\[t_{\varepsilon}= \neg(F((a)_{0},\phi)\equiv\phi)\] \[t_{s^{\frown}0}= \neg\bigl(F((a)_{n+1},t_{s}^{*})\equiv t_{s}^{*}\bigr)\quad\text{if }\operatorname{len}(s)=n\] \[t_{s^{\frown}1}= \neg\bigl(F((a)_{n+1},F((a)_{n+1},t_{s}^{*}))\equiv F((a)_{n+1},t_{s}^{*})\bigr)\quad\text{if }\operatorname{len}(s)=n\]
In the above, \(x^{*}\) is the unique sentence \(\psi\) such that there is \(\theta\) such that \(x=\neg(\theta\equiv\psi)\). By our assumption, \(t\upharpoonright_{T}\) has arbitrarily long branches, so there is an infinite coded branch \(b\) of \(t\) such that for every \(i\in\omega\)\((\mathcal{M},S)\models T(b(i))\). Moreover, by the construction of \(t\), for every \(i\in\mathrm{dom}(b)\),
\[(\mathcal{M},S)\models T(b(i))\text{ iff }i\in\omega.\]
It follows that the set \(A=\{\psi\in im(b):T(\neg\psi)\}\) is separable. Observe that for every \(i<\operatorname{len}(b)\) we have
\[(a)_{i}\in I\iff T(\neg b(i))\iff b(i)\in A.\]
Hence \(im(a)\cap I=G^{-1}[A]\), where \(G\) is the definable function \((a)_{i}\mapsto b(i)\). By Proposition 9 this ends the proof.
**Corollary 41**.: _For a countable, recursively saturated \(\mathcal{M}\models\mathsf{PA}\), the following are equivalent:_
1. \(\mathcal{M}\) _is arithmetically saturated, and_
2. _For every idempotent sentential operator_ \(F\) _with template_ \(\Phi(p,q)\) _of depth 1, and every_ \(F\)_-closed cut_ \(I\!\subseteq_{\text{\emph{end}}}M\)_, there is_ \(S\) _such that_ \((\mathcal{M},S)\models\mathsf{CS}^{-}\) _and_ \[I=\{x:\forall y\leq x\forall\phi\big{(}T(F(y,\phi))\equiv T(\phi)\big{)}\}.\]
Note that the implication (2) \(\implies\) (1) holds in more generality: it does not rely on \(\Phi\) having syntactic depth 1.
Proof.: We show (1) \(\implies\) (2). Suppose \(\omega\) is a strong cut. Let \(a\odot n\) be \(a-n\), if \(F\) is accessible, and \(\lfloor\frac{a}{n}\rfloor\), if \(F\) is additive. By Proposition 18, if \(I\) is not separable, then \(I\) is not \(\omega\)-coded, and so there is no \(a>I\) such that \(\inf(\{a\odot n:n\in\omega\})=I\). Therefore, every \(F\)-closed cut \(I\) is either separable or has no least \(F\)-gap above it. The result follows from Theorem 35.
Conversely, if \(\mathcal{M}\) is not arithmetically saturated, let \(I\!\subseteq_{\text{\emph{end}}}M\) be any cut with a least \(F\)-gap above it. For example, fix a nonstandard \(c\) and let \(I=\inf(\{c\odot n:n\in\omega\})\). Since \(\omega\) is not strong, by Proposition 18, \(I\) is not separable. It follows by Proposition 38 for accessible \(F\), and by Proposition 40 for additive \(F\), that there is no \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and
\[I=\{x:\forall y\leq x\forall\phi(T(F(y,\phi))\equiv T(\phi))\}.\]
## 5. Disjunctively Correct Cut
We proceed to the strongest correctness property, that of full disjunctive correctness (\(\operatorname{DC}_{S}\)). As usual we shall focus on \(I(\operatorname{DC}_{S})\). The first proposition states that the intuitive strength of full disjunctive correctness is reflected in the closure properties of \(I(\operatorname{DC}_{S})\):
**Proposition 42**.: _For every \((\mathcal{M},S)\), \(I(\operatorname{DC}_{S})\) is closed under multiplication._
Proof.: We shall use a result from [1]: define the sequential induction cut \(\operatorname{SInd}_{S}\) to be the set of those \(c\) such that the following is true in \((\mathcal{M},S):\)
\[\forall x\leq c\forall\langle\phi_{i}:i\leq x\rangle\big{(}T(\phi_{0})\wedge \forall y<x(T(\phi_{y})\to T(\phi_{y+1}))\big{)}\to\forall i\leq xT(\phi_{i}).\]
Then the proof of [1, Theorem 8] directly shows that \(\operatorname{DC}_{S}\subseteq\operatorname{SInd}_{S}\). Now we proceed to the main argument: fix any \(c\in\operatorname{DC}_{S}\) and let \(b\leq c^{2}\). Fix any \(d,r\) such that \(b=dc+r\) and \(r<c\). Fix any \(\langle\phi_{i}:i\leq b\rangle\) and assume first that \(T(\bigvee_{i\leq b}\phi_{i})\) and, aiming at a contradiction, that for every \(i\leq b\), \(T(\neg\phi_{i})\). Define the auxiliary sequence: for each \(i\leq d\) let \(\theta_{i}=\bigvee_{j\leq ic}\phi_{j}\) and let \(\theta_{d+1}=\bigvee_{j\leq b}\phi_{j}.\) We show that for every \(i<d+1\), \(T(\neg\theta_{i})\to T(\neg\theta_{i+1}).\) Fix any \(i\) and assume \(T(\neg\theta_{i})\). Let \(c^{\prime}\) be \(c\) if \(i<d\) and \(r\) if \(i=d\). Consider the sequence \(\psi_{k}=\bigvee_{j\leq ic+k}\phi_{j}\). We claim that
for any \(k<c^{\prime}\), \(T(\neg\psi_{k})\to T(\neg\psi_{k+1}).\) Indeed, fix any \(k<c^{\prime}\) and assume \(T(\neg\psi_{k})\). Observe that by the definition of \(T\), the definition of \(\psi_{k+1}\) and the compositional axioms we have
\[T(\neg\psi_{k+1})\equiv S(\neg\psi_{k+1},\emptyset)\equiv S(\neg (\psi_{k}\vee\phi_{ic+k+1}),\emptyset)\equiv S(\neg\psi_{k},\emptyset)\wedge S (\neg\phi_{ic+k+1},\emptyset)\\ \equiv T(\neg\psi_{k})\wedge T(\neg\phi_{ic+k+1}).\]
The last sentence is clearly true by our assumptions. Hence, since \(c^{\prime}\in\mathrm{SInd}_{S}\), we conclude that \(T(\neg\psi_{c^{\prime}})\). Since by definition \(\psi_{c^{\prime}}=\theta_{i+1}\), we have established that for any \(i<d+1\), \(T(\neg\theta_{i})\to T(\neg\theta_{i+1}).\) Since \(d+1\in\mathrm{SInd}_{S}\), we conclude that \(T(\neg\theta_{d+1})\). By definition, we obtain that \(T(\neg\bigvee_{i\leq b}\phi_{i})\), which contradicts our assumption.
Now assume that for some \(e\leq b\), \(T(\phi_{e})\) holds. In particular, it holds that \(T(\bigvee_{i\leq e}\phi_{i})\). Let us fix \(d^{\prime},r^{\prime}\) such that \(b-e=d^{\prime}c+r^{\prime}\) and for \(j\leq d^{\prime}\) define \(\theta_{j}=\bigvee_{i\leq e+jc}\phi_{i}\) and \(\theta_{d^{\prime}+1}=\bigvee_{i\leq b}\phi_{i}\). As in the above proof we show that for each \(j\leq d^{\prime}\), \(T(\theta_{j})\to T(\theta_{j+1})\) and obtain \(T(\bigvee_{i\leq b}\phi_{i})\), which concludes the proof of the reverse implication and the whole argument.
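For illustration, take \(c=3\) and \(b=8=2\cdot 3+2\), so that \(d=2\) and \(r=2\). The auxiliary sequence of the first part of the proof is then
\[\theta_{0}=\phi_{0},\qquad\theta_{1}=\bigvee_{j\leq 3}\phi_{j},\qquad\theta_{2}=\bigvee_{j\leq 6}\phi_{j},\qquad\theta_{3}=\bigvee_{j\leq 8}\phi_{j},\]
and each passage from \(\theta_{i}\) to \(\theta_{i+1}\) is bridged by the sequence \(\psi_{k}\), which adds one disjunct at a time. In this way only inductions of length at most \(c^{\prime}\leq c\) (for the \(\psi_{k}\)) and \(d+1\) (for the \(\theta_{i}\)) are used, and these are available since the corresponding parameters lie in \(\mathrm{SInd}_{S}\).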
We conclude with a limitative result which shows that the methods used to prove the main results of previous sections are insufficient for obtaining the analogous results in the context of \(\mathrm{DC}_{S}\). This is because, as conjectured, our methods show that, in an arithmetically saturated model, any cut can be characterized as \(I(\mathrm{IDC}_{S})\) for some regular satisfaction class which satisfies the internal induction axiom. For such a satisfaction class \(S\), \(S(\phi,\emptyset)\) behaves like a truth predicate satisfying the axioms of \(\mathrm{CT}^{-}\) and we have the following small insight. Below \(\mathrm{Con}_{\mathsf{PA}}(x)\) is a formula with a free variable \(x\) which canonically expresses that there is no proof of \(0=1\) in \(\mathsf{PA}\) whose code is smaller than \(x\).
**Proposition 43**.: _Suppose that \((\mathcal{M},S)\models\mathsf{CS}^{-}\), \(S\) is regular and satisfies the internal induction axiom. Then, for every \(a\in\mathrm{DC}_{S}\), \(\mathcal{M}\models\mathrm{Con}_{\mathsf{PA}}(a).\)_
Sketch.: Let \(\mathrm{CT}^{-}(x)\) denote a formula of the language \(\mathcal{L}_{\mathsf{PA}}\cup\{T\}\) with a free variable \(x\) which expresses "\(T(x)\) satisfies Tarski's inductive truth conditions for sentences of logical depth at most \(x\)". By inspection of the proof of Theorem 3.1 from [11] one sees that if \(a\in\mathrm{DC}_{S}\), then there is (typically nonstandard) \(\psi\in\mathcal{M}\) with a unique free variable such that \(\mathcal{M}\models\mathrm{Form}_{\mathcal{L}_{\mathsf{PA}}}(\psi(x))\) and \((\mathcal{M},S)\models\mathrm{CT}^{-}(a)[T*\psi(x)/T(x)]\). \(T*\psi(x)\) denotes a formula with a free variable \(x\) which expresses "The result of substituting the numeral of \(x\) for the unique free variable in \(\psi\) is true" (we use the notation from [8], Lemma 3.6) and \(\mathrm{CT}^{-}(a)[T*\psi(x)/T(x)]\) is the formula obtained by substituting \(T*\psi(x)\) for \(T(x)\). As in [8], Lemma 3.7 we conclude that \(T*\psi(x)\) satisfies full induction scheme in \((\mathcal{M},S)\). It follows that no proof with a code less than \(a\) can be the proof of \(0=1\) from the axioms of \(\mathsf{PA}\), because each formula in this proof is of complexity at most \(a\) and all the premises are made true by \(T*\psi(x)\).
## 6. Appendix
In this Appendix we indicate how to modify the proof of Theorem 12 in order to obtain a much better-behaved satisfaction class. In particular we would like the constructed satisfaction classes to define a truth predicate. We start with introducing the notion of _regularity_. The definition is taken from [10]:
**Definition 44**.: For every formula \(\phi\) and a term substitution \(\gamma\), \(\phi[\gamma]\) denotes the result of substituting \(\gamma(v)\) for every free occurrence of \(v\) in \(\phi\), for every \(v\) in the domain of \(\gamma\).
We shall treat assignments as substitution of numerals: if \(\alpha\) is an assignment, then by writing \(\phi[\alpha]\) we treat \(\alpha\) as a substitution which to every \(v\) assigns the canonical numeral naming \(\alpha(v)\) (i.e. the term expressing the sum of \(0\) and \(\alpha(v)\)-many \(1\)'s).
For example, if \(\alpha(v_{0})=3\) and \(\alpha(v_{1})=1\), then \((\exists v_{0}(v_{0}=v_{1})\lor v_{0}+1=v_{2})[\alpha]=\exists v_{0}(v_{0}=0+1 )\lor 0+1+1+1=v_{2}\).
**Definition 45** (\(\mathsf{PA}\)).: If \(\phi\in\operatorname{Form}_{\mathcal{L}_{\mathsf{PA}}}\), we say that \(\widehat{\phi}\) is its _structural template_ iff
* No constant symbol occurs in \(\widehat{\phi}\).
* No free variable occurs in \(\widehat{\phi}\) twice.
* No complex term containing only free variables occurs in \(\widehat{\phi}\).
* No variable occurs in \(\widehat{\phi}\) both as a bound and as a free variable.
* The formula \(\phi\) can be obtained from \(\widehat{\phi}\) by renaming bound variables and substituting terms for free variables in such a way that no variable appearing in those terms becomes bound.
* \(\widehat{\phi}\) is the smallest formula with those properties (recall that we identify formulae with their Gödel codes).
We say that formulae \(\phi,\psi\) are _structurally similar_, \(\phi\sim\psi\) iff \(\widehat{\phi}=\widehat{\psi}\).
Suppose that \(\kappa\) is an _occurrence of a subformula_ of \(\phi\) (not necessarily direct). With \(\kappa_{\widehat{\phi}}\) we denote the subformula of \(\widehat{\phi}\) whose occurrence in \(\widehat{\phi}\) corresponds to \(\kappa\) (recall that \(\phi\) and \(\widehat{\phi}\) have the same syntactic structure). For a formula \(\psi\), \([\psi]_{\widehat{\phi}}\) denotes the set \(\{\kappa_{\widehat{\phi}}\ \ :\ \ \kappa\text{ is an occurrence of }\psi\text{ in }\phi\}\).
Note that the definition of structural similarity formalizes in \(\mathsf{PA}\) and the relation is an equivalence relation, provably in \(\mathsf{PA}\). Moreover we can assume that if \(\phi\) is of standard complexity, then \(\widehat{\phi}\) is a standard formula.
**Example 46**.: The structural template of \(0=0\lor 0=0\) is \(v_{0}=v_{1}\lor v_{2}=v_{3}\), while the structural template of \(\exists v_{2}(v_{2}+1=v_{1}+1+1)\) is \(\exists v_{0}(v_{0}+v_{1}=v_{2})\), where in both cases \(v_{i}\) are chosen in such a way as to minimize the formula. \([0=0]_{\widehat{0=0\lor 0=0}}=\{v_{0}=v_{1},v_{2}=v_{3}\}\).
Formulae \(\forall v_{0}(v_{0}=v_{1}+1)\vee\neg(v_{1}=v_{0}+1)\) and \(\forall v_{3}(v_{3}=v_{2}+1+1)\vee\neg(v_{2}+1=v_{0})\) are structurally similar.
**Remark 47** (\(\mathsf{PA}\)).: For every two formulae \(\psi,\phi\) such that \(\psi\) is a subformula of \(\phi\) (not necessarily direct), \(\widehat{\psi}\) differs from every formula from the set \([\psi]_{\widehat{\phi}}\) at most by a permutation of free variables and renaming bound variables. For every \(\theta\in[\psi]_{\widehat{\phi}}\) we shall denote with \(\sigma_{\theta,\widehat{\psi}}\) the permutation of free variables such that \(\sigma_{\theta,\widehat{\psi}}[\theta]=\widehat{\psi}\).
**Definition 48** (\(\mathsf{PA}\)).: Let \(\phi\) be any formula and \(\gamma\) be a term substitution such that \(\widehat{\phi}[\gamma]\) differs from \(\phi\) only modulo renaming the bound variables. Then for every assignment \(\alpha\) for \(\phi\) let \(\widehat{\alpha}_{\phi}\) be the assignment for \(\widehat{\phi}\) given by \(\widehat{\alpha_{\phi}}(v)=\gamma(v)^{\alpha}\). We recall that for a term \(t\) and assignment \(\alpha\), \(t^{\alpha}\) denotes the value of term \(t\) under assignment \(\alpha\).
In other words, \(\widehat{\alpha}_{\phi}\) assigns to a variable \(v\), the value of the term \(\gamma(v)\) under the assignment \(\alpha\). For illustration assume that \(\theta\) is either a true atomic sentence or the negation of a true atomic sentence and \(F\) is a local idempotent operator for \(\theta\) with a template \(\Phi(p,q)\) (as in Definition 6). Then for any \(x\), \(\widehat{F(x)}\) can differ from \(F(x)\) only in that
* \(\widehat{F(x)}\) may use different free and bound variables;
* each element of \([\theta]_{\widehat{F(x)}}\) is of the form \(v_{i}=v_{j}\) for some variables \(v_{i}\) and \(v_{j}\) (if \(\theta\) is a true atomic sentence) or each element of \([\theta]_{\widehat{F(x)}}\) is of the form \(\neg v_{i}=v_{j}\) for some variables \(v_{i}\) and \(v_{j}\) (if \(\theta\) is the negation of a true atomic sentence). Moreover all the variables in \(\widehat{F(x)}\) occur only in formulae from \([\theta]_{\widehat{F(x)}}\). In particular \(\widehat{F(x)}\) is not a sentence.
Moreover, observe that, since \(F(x)\) is a sentence, \(\emptyset\) is the unique assignment for \(F(x)\). Hence, if \(\theta\) is either \(s=t\) or \(\neg s=t\), where \(s\) and \(t\) are closed terms whose value is \(a\), then \(\widehat{\emptyset}_{F(x)}\) is constantly equal to \(a\).
The above described situation of a local idempotent operator for \(\theta\) will be the only one which we shall consider in this section.
**Definition 49**.: An \(X\)-satisfaction class \(S\) is _regular_ if for every formula \(\phi\) such that \(\phi,\widehat{\phi}\in X\) and every assignment \(\alpha\) for \(\phi\), \((\phi,\alpha)\in S\) iff \((\widehat{\phi},\widehat{\alpha}_{\phi})\in S\).
We now proceed to strengthen Theorem 12. For notational reasons we write \(\mathcal{M},\alpha\models\phi\) instead of \(\mathcal{M}\models\phi[\alpha]\) to mean that a formula \(\phi\) is satisfied in \(\mathcal{M}\) by an assignment \(\alpha\).
**Definition 50**.: Fix \(\mathcal{M}\models\mathsf{PA}\), \(X\subseteq M\), \(\theta\), \(F\) and \(\Phi\) such that \(F\) is a local idempotent sentential operator for \(\theta\) with syntactic template \(\Phi(p,q)\).
1. We say that a formula \(\phi\) is an \(F\)-_intermediate formula_ if for some \(x\), \(F(x)\) is a subformula of \(\phi\) (not necessarily direct or proper) and \(\phi\) is a subformula (not necessarily direct or proper) of \(F(x+1)\).
2. For an intermediate formula \(\phi\), the \(F\)-length of \(\phi\) is the maximal \(x\) such that \(F(x)\) is a subformula of \(\phi\).
3. Recall that \(\operatorname{compl}(\phi)\) denotes the complexity of a formula \(\phi\) (defined in Preliminaries). For an \(F\)-intermediate formula \(\phi\), assignment \(\alpha\) for \(\widehat{\phi}\) and \(x\) such that for some \(n\in\omega\), \(\operatorname{compl}(\phi)=\operatorname{compl}(F(x))+n\), we say that \(\alpha\)_\((X,x)\)-satisfies_\(\widehat{\phi}\) if \(\mathcal{M},\alpha\models\widehat{\phi}[A/F(x)]\) where \(A\) is \(0=0\) if \(x\in X\) and \(0=1\) otherwise and \(\widehat{\phi}[A/F(x)]\) denotes the result of replacing in \(\widehat{\phi}\) every occurrence of \(F(x)_{\widehat{\phi}}\) with \(A\). We say that \(\alpha\)_\(X\)-satisfies_ \(\widehat{\phi}\) if \(\alpha\)\((X,x)\)-satisfies \(\widehat{\phi}\), where \(x\) is the \(F\)-length of \(\phi\).
We note that the above definition makes sense, since \(\widehat{\phi}[A/F(x)]\) is a formula of standard complexity (possibly with variables with nonstandard indices).
**Proposition 51**.: _Fix any \(\mathcal{M}\models\mathsf{PA}\) and \(X\subseteq M\) which is closed under predecessors. For an arbitrary intermediate formula \(\phi\) of nonstandard complexity and assignment \(\alpha\) for \(\widehat{\phi}\) the following are equivalent:_
1. \(\alpha\)__\(X\)_-satisfies_ \(\widehat{\phi}\)_._
2. _For every_ \(x\) _such that_ \(\operatorname{compl}(\phi)-\operatorname{compl}(F(x))\in\omega\)_,_ \(\alpha\)__\((X,x)\)_-satisfies_ \(\widehat{\phi}\)_._
3. _For some_ \(x\) _such that_ \(\operatorname{compl}(\phi)-\operatorname{compl}(F(x))\in\omega\)_,_ \(\alpha\)__\((X,x)\)_-satisfies_ \(\widehat{\phi}\)_._
Proof.: Follows immediately from the definition of \(F\) and the fact that \(\theta\), \(\Phi\) are chosen so that \(\Phi(\theta,q)\) is equivalent to \(q\).
**Theorem 52**.: _Let \(\theta\) be either a true atomic sentence or a negation of a true atomic sentence and \(F\) a local idempotent sentential operator for \(\theta\) with template \(\Phi(p,q)\). Let \(X\subseteq M\) be separable, closed under successors and predecessors, and for each \(n\in\omega\), \(n\in X\) if and only if \(\mathcal{M}\models\theta\). Then \(\mathcal{M}\) has an expansion \((\mathcal{M},S)\models\mathsf{CS}^{-}\) such that \(X=\{x\in M:(\mathcal{M},S)\models T(F(x))\equiv T(\theta)\}\) and \(S\) is a regular satisfaction class._
Proof.: The initial structure of the argument is very similar to that used in proving Theorem 12. Let \(D=\{F(x):x\in M\}\) and \(A=\{F(x):x\in X\}\). Note that \(A\) is separable from \(D\). We build sequences \(F_{0}\subseteq F_{1}\subseteq\ldots\) and \(S_{0},S_{1},\ldots\) such that:
* each \(F_{i}\) is a finitely generated set of formulas such that \(\cup F_{i}=\operatorname{Form}^{\mathcal{M}}\),
* each \(S_{i}\) is a regular full satisfaction class and \((\mathcal{M},S_{i})\) is recursively saturated,
* \(S_{i+1}\upharpoonright F_{i}=S_{i}\upharpoonright F_{i}\), and
* for each \(\phi\in D\cap F_{i}\), \((\phi,\emptyset)\in S_{i}\) if and only if \(\phi\in A\).
Given such a sequence, \(S=\cup(S_{i}\cap F_{i}\times M)\) would be the required full satisfaction class on \(\mathcal{M}\).
Externally fix an enumeration of \(\operatorname{Form}^{\mathcal{M}}\) in order type \(\omega\). We can assume, without loss of generality, that \(\theta\) appears first in this enumeration. We let \(F_{0}\) be \(\{\theta\}\) and \(S_{0}\) be any regular full satisfaction class which satisfies internal induction. Let \(F_{i+1}\) be generated by \(F_{i}\) and the least \(x\in\operatorname{Form}^{\mathcal{M}}\setminus F_{i}\) in the aforementioned enumeration. Let \(F^{\prime}=F_{i}\cup(F_{i+1}\cap D)\). Let \(a\) be a sequence such that \(\{F((a)_{n}):n\in\omega\}=F_{i+1}\cap D\). Note that such a sequence exists since \(F_{i+1}\) is finitely generated. Let \(c\) be as in the definition of separability for \(a\).
We shall now show how to construct an elementary extension \(\mathcal{N}\) of \(\mathcal{M}\) and a full regular satisfaction class \(S^{\prime}\) on \(\mathcal{N}\) such that
* \(S^{\prime}\cap F_{i}\times M=S_{i}\cap(F_{i}\times M)\).
* For each \(n\in\omega\), \((F((a)_{n}),\emptyset)\in S^{\prime}\iff\mathcal{M}\models n\in c\).
By a straightforward resplendence argument one can then copy \(S^{\prime}\) to \(\mathcal{M}\). This last step crucially uses the fact that \((\mathcal{M},S_{i})\) is recursively saturated and the facts that (1) \(F_{i+1}\) is finitely generated and (2) that we can code the membership in \(F_{i+1}\cap X\) via the parameter \(c\). The construction of \(S^{\prime}\) follows the lines of a standard Enayat-Visser construction (as presented in [3]): we build a sequence of models \(\mathcal{M}=\mathcal{M}_{0}\preceq\mathcal{M}_{1}\preceq\mathcal{M}_{2},\ldots\) and sets \(S^{\prime}_{1},S^{\prime}_{2},\ldots\) such that
1. \(S^{\prime}_{i}\subseteq M^{2}_{i}\), \(S_{i}\cap(F_{i}\times M)=S^{\prime}_{1}\cap(F_{i}\times M)\) and for all \(i>0\), \(S^{\prime}_{i+1}\cap M^{2}_{i-1}=S^{\prime}_{i}\cap M^{2}_{i-1}\)
2. \((\mathcal{M}_{i+1},S^{\prime}_{i+1})\models\mathsf{CS}^{-}\upharpoonright_{ \operatorname{Form}^{\mathcal{M}_{i}}}\);
3. for each \(n\in\omega\), \((F((a)_{n}),\emptyset)\in S^{\prime}_{i}\iff\mathcal{M}\models n\in c\)
4. for every \(\phi\in\operatorname{Form}^{\mathcal{M}_{i}}\) and \(\alpha\in M_{i+1}\), if \(\alpha\) is an assignment for \(\phi\), then \[(\phi,\alpha)\in S^{\prime}_{i+1}\iff(\widehat{\phi},\widehat{\alpha}_{\phi}) \in S^{\prime}_{i+1}.\]
Then one easily checks that for \(\mathcal{N}=\bigcup_{i}\mathcal{M}_{i}\) and \(S^{\prime}=\bigcup_{i}S^{\prime}_{i+1}\cap(\operatorname{Form}^{\mathcal{M}_{i} }\times M_{i+1})\), \((\mathcal{N},S^{\prime})\) satisfy the conditions A,B,C above and \(S^{\prime}\) is a full regular satisfaction class. We note that condition 4 does not contradict the fact that \(S^{\prime}_{i+1}\) is defined only for
formulae in \(M_{i}\), because the operations \(\phi\mapsto\widehat{\phi}\) and \(\alpha\mapsto\widehat{\alpha}_{\phi}\) are \(\mathcal{L}_{\mathsf{PA}}\)-definable, so if \(\phi\in M_{i}\) then \(\widehat{\phi}\in M_{i}\).
We show how to construct \(\mathcal{M}_{1}\) and \(S^{\prime}_{1}\); the rest of the cases are fully analogous (but simpler, because we do not have to care about condition (3) from the above list). Consider the theory in the language \(\mathcal{L}_{\mathcal{M}}\cup\{S^{\prime}_{1}\}\) which is given as the union of the following sets:
1. \(\mathrm{ElDiag}(\mathcal{M})\)
2. \(\{\mathrm{Comp}(\phi,\psi,\theta)\ \ :\phi,\psi,\theta\in\mathrm{Form}^{\mathcal{M}}\}\)
3. \(\{\forall\alpha\big{(}S^{\prime}_{1}(\phi,\alpha)\equiv S^{\prime}_{1}(\psi, \widehat{\alpha}_{\phi})\big{)}\ \ :\ \ \phi,\psi\in\mathrm{Form}^{\mathcal{M}},\psi=\widehat{\phi}\}\).
4. \(\{S^{\prime}_{1}(F((a)_{n}),\emptyset)\equiv n\in c\ \ :n\in\omega\}\).
5. \(\{S^{\prime}_{1}(\phi,\alpha)\ \ :\phi\in F_{i},(\phi,\alpha)\in S_{i}\}\)
We argue that the above theory is consistent, which is enough to obtain \(\mathcal{M}_{1}\) and \(S^{\prime}_{1}\). So fix a finite portion \(A\) of the above theory. Let \(B\) consist of all \(\phi\in\mathrm{Form}^{\mathcal{M}}\) which occur in one of the axioms in \(A.\) We build \(S^{\prime}_{1}\subset M^{2}\) such that \((\mathcal{M},S^{\prime}_{1})\models A\) by induction on the complexity of \(\phi\in B\). We note that this is meaningful, since \(B\) is finite. Moreover we always define \(S^{\prime}_{1}\) on \(\widehat{\phi}\) and then extend \(S^{\prime}_{1}\) canonically to all formulae in its \(\sim\)-equivalence class. In the construction we shall not refer to the fragment of \(X\) given by \(c\) and \(a\), but rather to the whole of \(X\); \(c\) and \(a\) were introduced to enable the resplendency argument.
Assume \(\phi\) has the least possible complexity among formulae in \(B\). We put \((\widehat{\phi},\alpha)\in S^{\prime}_{1}\) iff \(\alpha\) is an assignment for \(\widehat{\phi}\) and one of the following holds:
1. \(\widehat{\phi}\) is standard and \(\mathcal{M},\alpha\models\widehat{\phi}\).
2. \((\widehat{\phi},\alpha)\in S_{i}\) and \(\phi\in F_{i}\).
3. \(\alpha\) is a constant function, \(\phi\) is an \(F\)-intermediate formula and \(\alpha\)\(X\)-satisfies \(\widehat{\phi}\).
Then, for every formula \(\psi\in B\) which has the least possible complexity, we put \((\psi,\alpha)\in S^{\prime}_{1}\) iff \((\widehat{\psi},\widehat{\alpha}_{\psi})\in S^{\prime}_{1}.\) The base step of our induction process is finished.
Now for \(\psi\in B\) we assume that for every \(\phi\in B\) of complexity lower than the complexity of \(\psi\) and every \(\psi^{\prime}\) such that \(\psi^{\prime}\sim\psi\), \(S^{\prime}_{1}\) has been defined. If all immediate subformulae of \(\psi\) are in \(B\), then by induction we can assume that \(S^{\prime}_{1}\) is defined for their templates and so we can extend \(S^{\prime}_{1}\) to \(\widehat{\psi}\) using the compositional clauses. Otherwise, we put \((\widehat{\psi},\alpha)\in S^{\prime}_{1}\) iff \(\alpha\) is an assignment for \(\widehat{\psi}\) and one of the conditions 1, 2 or 3 above holds. This concludes the inductive step.
It remains to check that \(S^{\prime}_{1}\) so defined satisfies the chosen finite portion \(A\) of the theory. Conditions 1, 2, 3 and 5 follow easily by construction. To verify condition 4 we first observe that for every \(x\), every subformula \(\psi\) of \(F(x)\) is a sentence, \(\emptyset\) is the unique assignment for \(\psi\) and \(\widehat{\emptyset}_{\psi}\) is constant. By induction on the complexity of \(\phi\in B\) we check that whenever \(\phi\) is an \(F\)-intermediate formula, then
\[(*)\qquad(\phi,\emptyset)\in S^{\prime}_{1}\iff\widehat{\phi}\text{ is }X\text{-satisfied by }\widehat{\emptyset}_{\phi}.\]
This is clearly the case for formulae of minimal complexity. We consider the induction step for \(\phi=\psi_{0}\vee\psi_{1}\). If it is not the case that both \(\psi_{0},\psi_{1}\) are in \(B\), then the claim follows by definition. So assume \(\psi_{0}\) and \(\psi_{1}\) are both in \(B\). Hence
\[(\phi,\emptyset)\in S^{\prime}_{1}\iff(\psi_{0},\emptyset)\in S^{\prime}_{1} \text{ or }(\psi_{1},\emptyset)\in S^{\prime}_{1}.\]
By the inductive assumption, the last condition is equivalent to:
\[(**)\qquad\widehat{\emptyset}_{\psi_{0}}\ X\text{-satisfies }\widehat{\psi_{0}}\text{ or }\widehat{\emptyset}_{\psi_{1}}\ X\text{-satisfies }\widehat{\psi_{1}}.\]
Let \(\kappa^{0}\) be the occurrence of \(\psi_{0}\) in \(\phi\) as the left disjunct, and \(\kappa^{1}\) be the occurrence of \(\psi_{1}\) in \(\phi\) as the right disjunct. Then \((\kappa^{0})_{\widehat{\phi}}\) differs from \(\widehat{\psi_{0}}\) only up to renaming of bound variables and a permutation of free variables. Let \(\sigma\) be the permutation of free variables such that \(\sigma[(\kappa^{0})_{\widehat{\phi}}]\) is (up to renaming of bound variables) the same as \(\widehat{\psi_{0}}\). By unraveling the definitions it follows that \(\widehat{\emptyset}_{\phi}\!\restriction_{(\kappa^{0})_{\widehat{\phi}}}=\widehat{\emptyset}_{\psi_{0}}\circ\sigma\). The same holds for the pair \((\kappa^{1})_{\widehat{\phi}}\) and \(\widehat{\psi_{1}}\). So we conclude that \((**)\) is equivalent to
\[\widehat{\emptyset}_{\phi}\!\restriction_{(\kappa^{0})_{\widehat{\phi}}}X- \text{satisfies }(\kappa^{0})_{\widehat{\phi}}\text{ or }\widehat{\emptyset}_{\phi}\! \restriction_{(\kappa^{1})_{\widehat{\phi}}}X-\text{satisfies }(\kappa^{1})_{\widehat{\phi}}.\]
The above however is clearly equivalent to the right-hand side of \((*)\).
|
2309.03374 | Physics Informed Neural Networks for Modeling of 3D Flow-Thermal
Problems with Sparse Domain Data | Successfully training Physics Informed Neural Networks (PINNs) for highly
nonlinear PDEs on complex 3D domains remains a challenging task. In this paper,
PINNs are employed to solve the 3D incompressible Navier-Stokes (NS) equations
at moderate to high Reynolds numbers for complex geometries. The presented
method utilizes very sparsely distributed solution data in the domain. A
detailed investigation on the effect of the amount of supplied data and the
PDE-based regularizers is presented. Additionally, a hybrid data-PINNs approach
is used to generate a surrogate model of a realistic flow-thermal electronics
design problem. This surrogate model provides near real-time sampling and was
found to outperform standard data-driven neural networks when tested on unseen
query points. The findings of the paper show how PINNs can be effective when
used in conjunction with sparse data for solving 3D nonlinear PDEs or for
surrogate modeling of design spaces governed by them. | Saakaar Bhatnagar, Andrew Comerford, Araz Banaeizadeh | 2023-09-06T21:52:14Z | http://arxiv.org/abs/2309.03374v3 | # Physics Informed Neural Networks for Modeling of 3D Flow-Thermal Problems with Sparse Domain Data
###### Abstract
Successfully training Physics Informed Neural Networks (PINNs) for highly nonlinear PDEs on complex 3D domains remains a challenging task. In this paper, PINNs are employed to solve the 3D incompressible Navier-Stokes (NS) equations at moderate to high Reynolds numbers for complex geometries. The presented method utilizes very sparsely distributed solution data in the domain. A detailed investigation on the effect of the amount of supplied data and the PDE-based regularizers is presented. Additionally, a hybrid data-PINNs approach is used to generate a surrogate model of a realistic flow-thermal electronics design problem. This surrogate model provides near real-time sampling and was found to outperform standard data-driven neural networks when tested on unseen query points. The findings of the paper show how PINNs can be effective when used in conjunction with sparse data for solving 3D nonlinear PDEs or for surrogate modeling of design spaces governed by them.
**Keywords:** Physics Informed Neural Networks; Navier-Stokes Equations; Surrogate Modeling; Design Optimization
## 1 Introduction
Over the last few years, there has been significant growth in the popularity of machine learning algorithms to solve partial differential equations (PDE) or assist PDE solvers, such as computational fluid dynamics (CFD) solvers [1, 2]. A particular application where CFD solvers struggle, due to the computational cost, is iterative design optimization. This is the process of continually updating a design (e.g. an electronics assembly layout) and computing the solution (e.g. flow or thermal fields) to optimize the performance (e.g. constrain the temperatures or reduce the pressure drop). The challenge for CFD is that the input-output relationship is one-to-one. Therefore, any changes to the input vector (e.g. geometric variations) need to be re-simulated, leading to high costs when iterating on different design scenarios [3]. Overall, high-fidelity iterative design requires a prohibitive level of resources, both computationally and monetarily, and often leads to a sub-optimal outcome. The attraction of Machine Learning (ML) algorithms in these scenarios is the ability to rapidly find solutions for such problem setups that are challenging in conventional CFD, such as large design space explorations [4], turbulence model closure [5] or solving incomplete/ill-posed problems [6].
Conventional ML algorithms usually require large amounts of data to train. This represents a challenge when using ML in engineering applications such as CFD, since experimental data can be difficult and expensive to obtain and may suffer from measurement noise. Furthermore, in many engineering experiments, field data such as temperature and velocity fields can sometimes only be captured at specific locations, and it is difficult to get full field solution results from physical experiments. Research has turned to using simulation data for training ML models, but the computational cost of generating large amounts of data to train models is a major bottleneck.
Physics Informed Neural Networks (PINNs) [7] represent an advance in scientific machine learning that has the potential to solve many of the aforementioned issues. By adding the physics that governs the problem into the loss function, and optimizing the loss, it is possible to have the network learn the
solution of the problem represented by that equation in a data-free manner. PINNs can be used in cases where sporadic experimental field data is available [8, 9] to calculate the rest of the field variable and can be used to solve problems with incomplete or missing physics [10, 11].
Another application area, in which PINNs could be very beneficial is machine learning-based surrogate modeling. Although a relatively new field, several ML architectures and methods have been utilized in the literature. These include: Proper Orthogonal Decomposition (POD) [12], Gappy POD [13] and Manifold Learning [14]. More recently, increased attention has been given to statistical methods like Gaussian processes and neural networks that incorporate Machine Learning (ML) to create surrogate models. Bhatnagar et al. [15] used a CNN architecture to predict aerodynamic flow fields over airfoils and created a surrogate model that generalized between flow conditions and airfoil geometries. Guo et al. [16] also used a Convolutional Neural Network (CNN) architecture to predict steady flows over automotive vehicles. Lee and You [17] used Generative Adversarial Networks (GANs) coupled with physical laws to predict unsteady flow around a cylinder, demonstrating the benefits of using embedded physics. Raissi and Karniadakis [18] use Gaussian processes to model and identify several complex PDEs.
Several of the aforementioned studies used purely data-driven models and required the creation of large amounts of training data to generate accurate and generalizable models. PINNs have the capability to greatly reduce these data generation costs, and it has been shown that training surrogates using the physics embedded in the loss function greatly improves predictive accuracy, across a wide range of applications [17, 19, 20, 21].
However, there is currently a lack of research articles applying PINNs to 3-dimensional (3D) problems, particularly for highly nonlinear PDEs like the Navier-Stokes equations. These problems are challenging for PINNs due to a variety of reasons that are discussed later in this paper. Yet, these problems are the most lucrative to solve, as most industrial applications of CFD are done in 3D. This paper provides results that aim to address this gap, by solving several problems with realistic physical parameters, over complex geometries in a data-assisted manner, using very sparse domain data. Further, this paper solves a realistic flow-thermal design optimization problem using a hybrid data-PINN surrogate model and shows how PINN models outperform standard data-driven neural network (NN) surrogates for every test point queried in the Design of experiments (DoE) space for the surrogate modeling problem.
The paper is divided as follows; Section 2 introduces PINNs in more detail and discusses some of the technical challenges with training PINNs. Section 3 outlines some of the important features the authors incorporate in the creation and training of PINNs to enable accurate and fast convergence. Section 4 demonstrates several problems solved using PINNs, and showcases a design optimization problem using PINN-based surrogates. Section 5 discusses how the work shown in this paper can be improved upon.
## 2 Physics Informed Neural Networks (PINNs)
### Setting up a PINN Training
Physics-informed neural networks (PINNs) leverage automatic differentiation to obtain an analytical representation of an output variable and its derivatives, given a parametrization using the trainable weights of the network. By employing the underlying static graph, it is possible to construct the residuals of the differential equations that govern physical phenomena.
A PDE problem in the general form reads:
\[\mathcal{N}_{\mathbf{x}}[u]=0,\mathbf{x}\in\Omega, \tag{1}\]
\[\Phi(u(\mathbf{x}))=\mathbf{g}(\mathbf{x}),\mathbf{x}\in\partial\Omega \tag{2}\]
where \(\Phi\) can be the identity operator (Dirichlet B.C) or a derivative operator (Neumann/Robin B.C). In order to solve the PDE using the PINN method, the residual of the governing PDE is minimized, which is defined by
\[r_{\theta}(\mathbf{x})=\mathcal{N}_{\mathbf{x}}[f_{\theta}(\mathbf{x})], \tag{3}\]
where \(f_{\theta}\) is the value predicted by the network. The residual, along with the deviation of the prediction from boundary/initial conditions, is used to construct the loss, which takes the form:
\[L(\theta)=L_{r}(\theta)+\sum_{i=1}^{M}\lambda_{i}L_{i}(\theta), \tag{4}\]
where the index i refers to different components of the loss function, relating to initial conditions, boundary conditions, and measurement/simulation data. \(\lambda_{i}\) refers to the weight coefficient of each loss term. The individual loss terms are constituted as follows:
\[L_{r}=\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}[r(\mathbf{x}_{r}^{i})]^{2},\ L_{b}=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}[\Phi(\hat{u}(\mathbf{x}_{b}^{i}))-g_{b}^{i}]^{2},\ L_{d}=\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}[u(\mathbf{x}_{d}^{i})-\hat{u}(\mathbf{x}_{d}^{i})]^{2}, \tag{5}\]
where the subscripts r, b, and d refer to collocation, boundary, and data points, respectively. The loss term \(L(\theta)\) can then be minimized to have the network learn the solution to the PDE described by Equations 1 and 2. A popular method is to use gradient-based optimizers like Adam [22] and L-BFGS to optimize the network weights.
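To make the above concrete, a minimal sketch of such a training setup is given below for a toy 1D Poisson problem \(u''(x)=f(x)\) with homogeneous Dirichlet boundary conditions. PyTorch is assumed, and the network size, sampling, point counts and loss weights are placeholder choices rather than the settings used later in this paper; the sketch only illustrates how the residual of Equation 3 and the loss terms of Equations 4 and 5 are assembled through automatic differentiation.

```
import torch

# Minimal illustrative sketch (assumptions: PyTorch, toy 1D Poisson problem u''(x) = f(x),
# u(0) = u(1) = 0). The loss is the weighted sum of Equation 4: PDE residual + boundary + data.

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def source(x):                          # placeholder source term f(x)
    return torch.sin(torch.pi * x)

def pde_residual(x):                    # r_theta(x) = N_x[f_theta](x), Equation 3
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u - source(x)

x_r = torch.rand(1000, 1)               # collocation points
x_b = torch.tensor([[0.0], [1.0]])      # boundary points (Dirichlet u = 0)
x_d, u_d = torch.rand(16, 1), torch.zeros(16, 1)   # sparse "measurement" data (placeholders)

lambda_b, lambda_d = 1.0, 1.0           # loss weights of Equation 4
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    L_r = pde_residual(x_r).pow(2).mean()          # PDE loss
    L_b = net(x_b).pow(2).mean()                   # boundary loss
    L_d = (net(x_d) - u_d).pow(2).mean()           # data loss
    loss = L_r + lambda_b * L_b + lambda_d * L_d
    loss.backward()
    opt.step()
```

The same structure carries over to the 3D problems considered later, with the residual replaced by the continuity and momentum residuals and the data term populated by sparse solution samples.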
### Current Challenges with PINNs
Although the PINN method shows great promise, it still has a number of unresolved issues. The biggest challenges with PINNs currently lie in the scalability of the algorithms to large 3D problems as well as problems with complex nonlinearities, and unsteady problems. Some of the issues described henceforth are tackled by methods described in Section 3.
#### 2.2.1 Weak imposition of Boundary Conditions
The solution of a PDE problem must obey all initial and boundary conditions imposed on it while minimizing the residual of the governing equation. However, for neural network based solvers it is difficult to impose boundary and initial conditions in an exact manner. This is because the standard way to impose B.C in PINNs is to create a linear combination of loss functions (as described mathematically in the previous section). Each loss either describes the deviation of the network output from a specific boundary condition, or the magnitude of the residual of the governing equations. Therefore, boundary conditions are only satisfied in a weak manner. There has been research demonstrating the utility of exact imposition of boundary conditions [23, 24, 25] or creative multi-network approaches [26]; however, such implementations are mostly problem-specific and do not generalize well.
Weak imposition of boundary conditions also creates another issue, one that is fairly common in multi-task learning and multi-objective optimization: choosing the values of loss term coefficients that make up the linear combination. Choosing these weights is a nontrivial exercise that would require calibration via hyper-parameter search, which is not feasible. Wang et al. [27] introduced a heuristic dynamic weighting algorithm to update and select these weights automatically and continuously during the training, to enable convergence to the correct answer. Additionally, there have been several other algorithms proposed to choose the correct scheme for weighting the losses [28, 29, 30]. This continues to be an active area of research in the PINNs community. Finally, methods have been proposed to impose the boundary conditions in a strong manner by manipulating the output formulations [23] or by utilizing operator networks [31].
#### 2.2.2 Difficult Optimization Problem
A second problem is the nature of the loss landscape itself, in which a reasonable local minimum is required to be found. As seen in Krishnapriyan et al. [32], Gopakumar et al. [33], Subramanian et al. [34] and Basir and Senocak [35], as well as the authors' own experiments, different non-dimensional quantities (e.g. Reynolds number) in the governing equations, the number of dimensions of the problem, the point cloud/discretization, the boundary conditions and the complexity of the solution to be predicted can adversely affect the loss landscape of the neural network training. This makes the optimization challenging, and a gradient descent-based algorithm can fail to find an adequate local minimum. Recently, methods borrowing concepts from optimization theory have shown that alternate formulations (e.g. the augmented Lagrangian method for the loss functions) can aid the convergence properties of the training problem [35, 36]. There have also been efforts towards imposing physical constraints in an integral form [37].
#### 2.2.3 Cost of training
Constructing the PDE loss functions involves several backward passes through the network, which is a costly operation. PINNs on average take longer to train than their data-driven counterparts for exactly this reason; the computation graph of a PINN training is much more complex. Moreover, for the Navier-Stokes equations, it has been seen that although the stream function formulation provides better results (due to exact enforcement of continuity), it is costlier in terms of training time. As seen in NVIDIA's experiments [38], it can take several million iterations for the more complex problems to be solved via PINNs. To reduce the cost of training, approaches such as automatic differentiation for finite difference formulations [39], or using first-order formulations [40], have been proposed. However, these solutions tend to be mostly problem-specific and do not necessarily generalize well to increased problem complexity and grid definitions. Meta-learning algorithms [41] have also recently gained significance as an effective way to reduce the cost of training neural networks on new tasks, and some of this work has been extended to PINNs [42] as well.
## 3 Important Features for Creating PINN Models
In this section, the important techniques used to create PINN-based models cost-effectively are outlined. The PINN models in subsequent sections are created by combining these features that have been found to have an effect on the accuracy of the model and the speed of training.
### Hybrid Data-Physics Training
Since the original PINNs method was proposed by Raissi et al. [7], a plethora of research has been undertaken to improve and expand on it [43, 44]. From these developments, the PINNs method has been applied to solve PDE-based problems of increasing complexity and dimensionality. However, the PINNs method is currently not suited for solving engineering problems often encountered in industry in a data-free manner. The optimization issues and cost of model training outlined above make the method, presently, unsuitable for use as a forward solver. To get the best of both worlds, the PINNs method can be augmented with data. Figure 1 depicts the tradeoff between using only data or only physics, and shows that the sweet spot lies in using both. In addition to the discussed benefit of hybrid data-physics training reducing the cost of generating data, there have been several examples showing that the inclusion of sparse solution data in the training loss function significantly improves the convergence capabilities of the PINNs method [33, 43, 45].
In this paper, we take inspiration from this and use very sparse solution data to solve 3D flow-thermal problems and inform our surrogate models with physics while creating them.
Figure 1: The spectrum of data-driven versus physics-informed models. Incorporating governing physics information into the models during creation serves as an effective form of regularization and often helps reduce the amount of data required to achieve the same accuracy levels.
### Modified Learning Rate Annealing
As described in Section 2.2.1, the learning rate annealing algorithm has proved to be very effective in mitigating the stiffness of the PINN training problem. However, utilizing this method over a broader spectrum of problems highlighted an issue with stability. The following outlines this issue:
As shown in Equation 4 the PINN loss function being optimized takes the form:
\[L(\theta)=L_{r}(\theta)+\sum_{i=1}^{M}\lambda_{i}L_{i}(\theta) \tag{6}\]
At any training step, the update to the loss coefficient is calculated [27] as
\[\hat{\lambda}_{i}=\frac{\max_{\theta}|\nabla_{\theta}L_{r}(\theta)|}{|\nabla_{\theta}L_{i}(\theta)|},\quad i=1,\ldots,M\]
It can be seen that if the loss \(L_{i}\) decreases much faster than \(L_{r}\) during the training, the value of \(\hat{\lambda}_{i}\) increases. This then leads to a larger coefficient for that loss term and an associated faster decay of the loss.
This instability has the unintended consequence of the optimizer getting stuck in minima where it minimizes the loss \(L_{i}\) very well but is unable to optimize for the loss of the other constraints. The proposed updated algorithm to mitigate this issue is shown in Algorithm 1. The values of thresholds are hyper-parameters, but if the inputs and outputs of the network have been normalized (using standard score normalization, for example), then selecting values between \(10^{-3}\) and \(10^{-5}\) works well in practice.
```
for update step = 1 to \(N\) do
    if \(L_{i}(\theta)\leq(threshold)_{i}\) then
        \(\hat{\lambda}_{i}=0\)
    else
        Compute \(\hat{\lambda}_{i}\) by
            \(\hat{\lambda}_{i}=\frac{\max_{\theta}|\nabla_{\theta}L_{r}(\theta)|}{|\nabla_{\theta}L_{i}(\theta)|},\quad i=1,\ldots,M\)
    end if
    Update weights \(\lambda_{i}\) as
        \(\lambda_{i}=(1-\alpha)\lambda_{i}+\alpha\hat{\lambda}_{i}\)
    Update network parameters via gradient descent:
        \(\theta_{n+1}=\theta_{n}-\eta\nabla_{\theta}L_{r}(\theta)-\eta\sum_{i=1}^{M}\lambda_{i}\nabla_{\theta}L_{i}(\theta)\)
end for

We set the hyper-parameter \(\alpha=0.1\) and \(\eta=10^{-3}\). Threshold values are chosen somewhere between \(10^{-3}\) and \(10^{-5}\).
```
**Algorithm 1** Modified Learning Rate Annealing
For a problem with the loss function
\[L(\theta)=L_{r}(\theta)+\lambda_{neu}L_{neu}(\theta)+\lambda_{dir}L_{dir}(\theta) \tag{7}\]
where \(L_{r}(\theta)\), \(L_{neu}(\theta)\) and \(L_{dir}(\theta)\) correspond to the PDE, Neumann and Dirichlet losses respectively, Figure 2 shows the training curves for the individual losses, and the value of the adaptive coefficients when they are calculated using Algorithm 1. It can be seen that when the boundary loss terms in Figures 2(c) and 2(d) go below their thresholds (set to \(10^{-5}\)), the associated coefficients shown in Figures 2(a) and 2(b) start decaying. Following this, the PDE loss starts improving much faster. If the term \(L_{i}(\theta)\) goes above its threshold, it leads to a spike in the adaptive constant \(\lambda_{i}\) which brings it down again.
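A minimal sketch of how the coefficient update of Algorithm 1 can be realized is given below. It is illustrative only: PyTorch is assumed, the network and the individual loss terms are placeholders standing in for \(L_{r}\), \(L_{dir}\) and \(L_{neu}\), and the gradient statistic in the denominator is taken as the mean absolute gradient, in the spirit of the annealing scheme of [27].

```
import torch

# Illustrative sketch of the coefficient update in Algorithm 1 (assumptions: PyTorch;
# the network and the individual loss terms below are placeholders).
alpha, eta = 0.1, 1e-3
thresholds = {"dir": 1e-5, "neu": 1e-5}
lambdas = {"dir": 1.0, "neu": 1.0}

net = torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.Tanh(), torch.nn.Linear(32, 4))
params = list(net.parameters())
opt = torch.optim.Adam(params, lr=eta)

def flat_grad(loss):
    g = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    return torch.cat([t.reshape(-1) for t in g if t is not None])

def losses(x):
    # Placeholder terms standing in for L_r, L_dir and L_neu of Equation 7.
    out = net(x)
    return out.pow(2).mean(), {"dir": out[:, 0].pow(2).mean(), "neu": out[:, 1].pow(2).mean()}

x = torch.rand(256, 3)
for step in range(100):
    L_r, L_terms = losses(x)
    g_r_max = flat_grad(L_r).abs().max()
    for name, L_i in L_terms.items():
        if L_i.item() <= thresholds[name]:
            lam_hat = 0.0                     # loss already below its threshold: stop inflating it
        else:
            # denominator read as the mean absolute gradient of L_i, following [27]
            lam_hat = (g_r_max / (flat_grad(L_i).abs().mean() + 1e-12)).item()
        lambdas[name] = (1 - alpha) * lambdas[name] + alpha * lam_hat
    loss = L_r + sum(lambdas[k] * L_terms[k] for k in L_terms)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The update need not be applied at every iteration; computing it only every few steps reduces the overhead of the additional gradient evaluations.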
### Fourier Feature Embeddings
As described in Tancik et al. [46], Artificial Neural Networks suffer from a spectral bias problem. To overcome this, they introduced a Fourier feature embedding that allows models to capture high-frequency components of the solution effectively. This has the effect of markedly improving the ability of the networks to capture sharp gradients in the solutions, which requires the network to be able to learn high-frequency components of the solution quickly.
Figure 2: Adaptive coefficients and loss terms from Equation 7 during training. (a) Evolution of the Dirichlet loss adaptive constant during training. (b) Evolution of the Neumann loss adaptive constant during training. (c) Dirichlet B.C loss term \(L_{dir}(\theta)\) (d) Neumann B.C loss term \(L_{neu}(\theta)\) (e) The PDE loss during training. Once the values of both the adaptive constants start dropping, the PDE loss improves much more rapidly.
Following the implementation in Tancik et al. [46], for an input vector
\[\mathbf{v}=\left[\begin{array}{c}x\\ y\\ z\end{array}\right]\]
instead of using \(\mathbf{v}\) as the input we compute the Fourier feature mapping:
\[\gamma(\mathbf{v})=[\cos(2\pi\mathbf{b}_{1}^{T}\mathbf{v}),\sin(2\pi\mathbf{b}_ {1}^{T}\mathbf{v}),.....,\cos(2\pi\mathbf{b}_{m}^{T}\mathbf{v}),\sin(2\pi \mathbf{b}_{m}^{T}\mathbf{v})] \tag{8}\]
where \(m\) is a hyper-parameter and the frequencies \(\mathbf{b}_{j}\) are selected randomly from an isotropic distribution. Then \(\gamma(\mathbf{v})\) is passed into the network.
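A minimal sketch of this embedding is given below; PyTorch is assumed, the frequencies are drawn from a Gaussian (one common realization of an isotropic distribution), and the values of \(m\) and the scale are placeholders rather than the settings used in this work.

```
import torch

class FourierFeatures(torch.nn.Module):
    # Maps v -> [cos(2*pi*B v), sin(2*pi*B v)] as in Equation 8. The frequency
    # vectors b_j (rows of B) are drawn once from an isotropic Gaussian and kept fixed.
    def __init__(self, in_dim=3, m=64, scale=1.0):
        super().__init__()
        self.register_buffer("B", torch.randn(m, in_dim) * scale)

    def forward(self, v):
        proj = 2.0 * torch.pi * v @ self.B.T          # shape (..., m)
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)

# The embedding replaces the raw coordinates at the input of the fully connected
# network, so the first linear layer takes 2*m features instead of (x, y, z).
embed = FourierFeatures(in_dim=3, m=64, scale=1.0)
net = torch.nn.Sequential(embed,
                          torch.nn.Linear(128, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 4))
out = net(torch.rand(10, 3))                          # -> (10, 4)
```

The scale of the frequency distribution controls how high-frequency the resulting features are, and is typically treated as an additional hyper-parameter.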
The Fourier feature embedding was shown to be highly effective in training PINN models by Wang et al. [47], and several results were shown for 1D and 2D problems. We extend this implementation to solve 3D flow problems via PINNs and use it to create our hybrid data-PINN surrogate for flow-thermal problems.
In addition, there have been other proposed solutions for the spectral bias problem for applications to PDE problems, such as the Siren activation [48], Fourier Neural Operators [49], and weighting schemes derived from the theory of Neural Tangent Kernels (NTK) [28].
## 4 Experiments and Results
In this section, some example problems are solved using PINNs. Sections 4.1 and 4.2 solve the 3D incompressible Navier-Stokes equations through a data-assisted approach, where very sparse solution data is provided in the domain.
Section 4.3 uses a hybrid data-PINN approach to generate a surrogate model for a given design space of a heat sink with a chip underneath it, undergoing cooling via forced convection. Then, given certain constraints on the running metrics of the chip-sink setup (like the maximum temperature in the chip), the optimal set of parameters in the Design of experiments (DoE) space that satisfies the constraints while maximizing an objective is obtained via rapid design optimization using the created surrogate.
Details on hyper-parameters used in the model training for each experiment that follows can be found in Appendix Section A.1.
### Forward Solve of 3D Stenosis Problem
Flow through an idealized 3D stenosis geometry at a physiologically relevant Reynolds number is demonstrated; see Figure 3 for details about the geometry. To the authors' best knowledge, flow through a stenosis has been solved using PINNs only at a low Reynolds number of approximately 6 (based on inlet diameter) [23]. Flow through irregular geometries has been solved at a higher Re (500), but in 2D [50]. In this paper, the stenosis problem is solved at Re 150, and in 3 dimensions.
As discussed in Section 2.2, at higher Reynolds numbers the standard PINN implementation struggles to achieve a good local minimum. This was confirmed in the authors' own tests with the standard implementation. To alleviate this issue, a data-assisted approach was used, in which sporadic solution data is added throughout the domain of interest (depicted on a slice in Figure 4). The data was given in the form of concentric rings at the radii depicted on the cut plane.
#### 4.1.1 Problem Setup
The flow through the stenosis is obtained by solving the steady-state incompressible Navier-Stokes equations:
\[\nabla\cdot\mathbf{u}=0, \tag{9}\]
\[(\mathbf{u}\cdot\nabla)\mathbf{u}=-\frac{1}{\rho}\nabla\mathbf{p}+\nu\nabla \cdot(\nabla\mathbf{u}), \tag{10}\]
subject to
\[\mathbf{u}(x_{b1})=g(x_{b1}),\,x_{b1}\in\partial\Omega_{1},\]
\[\mathbf{u}(x_{b2})=0,\,x_{b2}\in\partial\Omega_{2},\]
\[\nabla u_{i}(x_{b3})\cdot\mathbf{n}=0,\,x_{b3}\in\partial\Omega_{3},i=1,2,3\]
\[p(x_{b3})=0,\,x_{b3}\in\partial\Omega_{3}\]
where \(g(x_{b1})\) represents a profiled input to the stenosis. \(\rho\) and \(\nu\) are the density and kinematic viscosity of the fluid (air) respectively, and \(\mathbf{u}\) and \(p\) are the velocity vector and pressure respectively.
In the present problem, a parabolic inlet profile is prescribed with a peak inflow value of 0.15 m/s. The ratio of the area of the throat to that of the inlet is 0.36.
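For reference, assuming a circular inlet of radius \(R\) (the precise analytic form of the profile is an assumption here, not taken from the setup above), a parabolic profile with this peak value can be written as
\[g(r)=u_{peak}\left(1-\frac{r^{2}}{R^{2}}\right),\qquad u_{peak}=0.15\ \mathrm{m/s},\]
where \(r\) is the radial distance from the inlet axis, so that the velocity vanishes at the wall and attains its peak on the centerline.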
The output of the network is approximated as \(G_{\theta}\), which has four components:
\[G_{\theta}=\left[\begin{array}{c}u\\ v\\ w\\ p\end{array}\right]\]
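For reference, the sketch below shows one way the residuals of Eqs. (9)-(10) can be assembled from \(G_{\theta}\) with automatic differentiation, so that their mean squares can serve as the physics loss. It is a minimal PyTorch illustration only: the function and variable names, and the use of a generic `model` returning \([u,v,w,p]\), are assumptions rather than the implementation used here.

```python
import torch

def ns_residuals(model, xyz, rho=1.2, nu=1.5e-5):
    """Continuity and momentum residuals of Eqs. (9)-(10) at collocation points xyz.

    `model` maps (N, 3) coordinates to (N, 4) outputs [u, v, w, p];
    rho and nu are placeholder fluid properties, not the paper's values.
    """
    xyz = xyz.clone().requires_grad_(True)
    u, v, w, p = model(xyz).unbind(dim=-1)

    def grad(f):
        # Per-point gradient of a scalar field f w.r.t. the coordinates.
        return torch.autograd.grad(f, xyz, torch.ones_like(f), create_graph=True)[0]

    gu, gv, gw, gp = grad(u), grad(v), grad(w), grad(p)
    continuity = gu[:, 0] + gv[:, 1] + gw[:, 2]              # div(u) = 0

    def laplacian(g):
        # Sum of second derivatives d^2(.)/dx_i^2 from the stored first derivatives.
        return sum(grad(g[:, i])[:, i] for i in range(3))

    vel = torch.stack([u, v, w], dim=-1)
    mom_u = (vel * gu).sum(-1) + gp[:, 0] / rho - nu * laplacian(gu)
    mom_v = (vel * gv).sum(-1) + gp[:, 1] / rho - nu * laplacian(gv)
    mom_w = (vel * gw).sum(-1) + gp[:, 2] / rho - nu * laplacian(gw)
    return continuity, mom_u, mom_v, mom_w
```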
#### 4.1.2 Results
Figure 5 compares the velocity magnitude returned by the trained PINN model and Altair AcuSolve® through a 2D slice of the stenosis. As can be seen, the essential features of the flow are captured. Figures 6(a) and 6(b) compare the velocity and pressure profiles through the center of the stenosis. The differences between the line plots are attributed to differences in mesh density between the two cases. The CFD mesh was an unstructured mesh of around 63,000 nodes with a boundary layer, while the
Figure 4: Stenosis diagram (not to scale) showing planes where solution data is provided randomly.
Figure 3: Visual description of stenosis problem
point cloud used with the PINN consisted of around 87,000 randomly distributed points, except near the boundary where the sampling was finer.
Another approach that was investigated to solve the 3D stenosis problem was that of using "continuity planes" as defined by Hennigh et al. [38] in their experiments solving 3D flow problems using PINNs. In this approach, the authors added constraints on the mass flow through a plane and added these constraints to the loss function. While this approach was found to aid the convergence of the PINN model to the correct solution, several issues were found with this method:
1. It is difficult to generate continuity planes for complex geometries such as those shown in Sections 4.2 and 4.3.
2. The quality of the solution from the PINN depends heavily on the integration scheme used to calculate the mass flow rate, and the fineness of the points on the continuity plane.
Figure 5: Solution Comparison. (a) Altair AcuSolve® Solution to stenosis problem (b) PINN forward solve to stenosis problem.
Figure 6: Centerline solution comparisons: PINN versus Altair AcuSolve® (a) Total Velocity Comparison (b) Pressure Comparison
Hence, in the next section, random and sparsely distributed data was used in the domain to aid convergence.
### Flow over a Printed Circuit Board (PCB)
#### 4.2.1 Problem Setup
Flow over a PCB consisting of a heat sink, chip, and capacitor is solved at a Reynolds number of approximately 1500, based on the length of the PCB board and air as the fluid. The geometry and flow orientation are shown in Figure 7. This represents a forced convection problem common in electronics design and is a challenging problem for PINNs because it is in 3D, with a complex geometry and large gradients involved.
Let \(D\) represent the set of all nodes in the domain. To train the PINN model, the CFD solution was first computed. Next, 1% of the nodes in the solution domain were randomly selected (call this set \(D_{1}\subset D\)). This is a selection of roughly 2,300 node points (from a mesh of roughly 230,000 nodes). The experiment was then divided into three parts:
1. **Case A**: A network was trained on the CFD solution at all points in \(D_{1}\) (i.e., \(\forall\mathbf{x}\in D_{1}\)), following which the physics of the problem was **enforced at every node location** in \(D\) (i.e., \(\forall\mathbf{x}\in D\)) by including the physics-based loss in the training, and then the network was asked to predict the solution in the entire domain \(D\).
2. **Case B**: A network was trained on the CFD solution at the points contained in \(D_{1}\) (i.e., \(\forall\mathbf{x}\in D_{1}\)) **without any physics enforcement** and then asked to predict the solution in the entire domain (i.e., \(\forall\mathbf{x}\in D\)).
3. **Case C**: Finally, the same experiment as Case A was repeated but with a new set \(D_{2}\) consisting of only 0.2% of the nodes in \(D\), which were again randomly selected.
The governing equations for this problem are the Reynolds Averaged Navier-Stokes Equations:
\[\nabla\cdot\mathbf{u}=0, \tag{11}\]
\[(\mathbf{u}\cdot\nabla)\mathbf{u}=-\frac{1}{\rho}\nabla\mathbf{p}+(\nu+\nu_{t })\nabla\cdot(\nabla\mathbf{u}), \tag{12}\]
\(\rho\), \(\nu\), and \(\nu_{t}\) represent the density, kinematic viscosity, and eddy viscosity of the system. The inflow is set to a constant velocity of 0.15 m/s and the outflow is set to the stress-free condition. It should be noted that in the current study, eddy viscosity is obtained directly from the CFD solver using the Spalart-Allmaras turbulence model. Turbulence modeling in PINNs is a field of active research with a few articles investigating it [38, 51, 52], and effectively incorporating turbulence models into PINN-based models is left as future work.
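The sketch below illustrates how the Case A objective can be assembled: a data loss on the sparse subset \(D_{1}\) plus a physics residual loss enforced on every node in \(D\). It is a hedged illustration only; the equal weighting, the generic `residual_fn` argument (e.g., a RANS residual routine analogous to the Navier-Stokes sketch in Section 4.1.1, with \(\nu+\nu_{t}\) supplied per node from the CFD solution), and all names are assumptions rather than the exact training setup.

```python
import torch

def case_a_loss(model, residual_fn, x_d1, u_d1, x_all, lam_phys=1.0):
    """Composite loss for Case A: CFD data on the 1% subset D1 + physics on all of D.

    x_d1, u_d1 : coordinates and CFD targets on D1.
    x_all      : coordinates of every node in D.
    residual_fn: callable returning a tuple of PDE residual tensors at x_all.
    """
    data_loss = torch.mean((model(x_d1) - u_d1) ** 2)
    phys_loss = sum(r.pow(2).mean() for r in residual_fn(model, x_all))
    return data_loss + lam_phys * phys_loss

# D1 itself can be drawn as a random 1% subset of the CFD nodes, e.g.:
# idx = torch.randperm(n_nodes)[: int(0.01 * n_nodes)]
```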
Figure 7: Geometry of a PCB with a chip, sink, and capacitor assembly.
#### 4.2.2 Results
Figure 8 shows the ANN predictions for the different cases. It is evident that by using sparse data, the network is able to better converge toward the CFD solution (shown in Figure 8d) using the physics-based regularizer. However, as evident in Figure 8c, the network failed to converge to a physical solution when the amount of data provided was insufficient, highlighting the importance of a certain amount and fineness of the required data. Table 1 shows the Mean Squared Errors (MSE) for each experiment, for the velocity and the pressure, taking the CFD solution as the ground truth. The MSE is calculated as
\[\text{MSE}=\sqrt{\frac{\sum_{i=1}^{N_{nodes}}(x_{i,pred}-x_{i,truth})^{2}}{N_{ nodes}}} \tag{13}\]
Figure 9 shows the fraction of node points for each case that are above a certain Mean Absolute Error (MAE) value. The lower the fraction, the better the solution. We note from Figure 9 that even for Case A, there are outliers to the solution where the MAE is relatively high, indicating poor convergence to the solution at those nodes. The convergence of PINNs toward the correct solution for highly nonlinear systems is an open and challenging problem, especially in 3 dimensions. Nonetheless, these results open exciting possibilities about using physics-based regularizers in the future and represent a step forward for solving the 3D Navier-Stokes Equations at high Reynolds Numbers using PINNs. Furthermore, data generation costs to create surrogate models using PINNs can be greatly reduced by providing solution data on a coarser grid and solving the physics on a finer grid.
### Surrogate Modeling and Design Optimization of a Heat Sink
In this section, the PINNs surrogate modeling technique is demonstrated for rapid design optimization of a heat sink assembly. The assembly utilizes a chip that generates heat and a fin-type heatsink on top to dissipate heat into the surrounding fluid. The chip-heatsink assembly is cooled by forced convection of air. The geometry and setup are shown in Figure 10.
The goal is to optimize the heat sink design and the running conditions of the assembly, subject to feasibility constraints placed on chip temperature and channel pressure drop. This represents a common design optimization problem in electronics cooling. More specifically, if \(\dot{Q}_{src}\) is the total power being generated by the chip, the optimization problem can be framed as follows:
\[\text{Maximize }\dot{Q}_{src}\text{ s.t} \tag{14}\]
\[\text{Pressure drop across the heat sink channel ($\Delta$P)}\leq 11\text{ Pa} \tag{15}\]
\[\text{Maximum temperature anywhere on the chip }\leq 350\text{ K} \tag{16}\]
The pressure at the outflow is fixed to 0 Pa, and the pressure drop across the heat sink channel is hence calculated as the average pressure over the inflow of the channel:
\[\Delta\text{P}=\overline{\text{P}_{\text{inlet}}} \tag{17}\]
The term to be maximized, \(\dot{Q}_{src}\), is also one of the design axes and an input parameter (P3) to the network.
The design variables that can be altered for this present optimization are:
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Case & Description & MSE (Velocity) & MSE (Pressure) \\ \hline Case A & 1\% domain data + physics & **0.0135** & **0.0037** \\ \hline Case B & 1\% domain data only & 0.0222 & 0.00472 \\ \hline Case C & 0.2\% domain data + physics & 0.0245 & 0.00545 \\ \hline \end{tabular}
\end{table}
Table 1: Mean Squared Errors (MSE) for velocity and pressure for cases A,B, and C
* Inflow Velocity
* Fin height
* Source term in the chip (has to be maximized)
The upper and lower limits of each of the design variables mentioned above are summarized in Table 2. The inlet velocity is set based on typical values found in literature [53] and corresponds to a Reynolds number range of Re 10,300 to Re 24,000.
The governing equations solved for this conjugate heat transfer problem are the same as in Section 4.2 for the flow problem, subject to no-slip boundary conditions on the chip-heatsink assembly with a
Figure 8: Neural Network (NN) prediction with and without physics, for very coarse data supplied on a plane through the domain. (a) **Case A:** Trained on 1% data and physics (b) **Case B:** Trained on 1% solution data only (c) **Case C:** Trained on 0.2% data and physics (d) True Solution from CFD solver
Figure 9: Node fractions of points above a certain MAE value, for each case. (a) MAE of Velocity (b) MAE of Pressure
variable freestream inflow velocity, causing forced convection. As in Section 4.2, the eddy viscosities are taken from the CFD solutions.
The energy equation in both fluid and solid reads:
\[k\nabla^{2}T+\dot{q}_{src}-\rho s\mathbf{u}\cdot\nabla T=0, \tag{18}\]
where \(T\) represents the temperature, \(\dot{q}_{src}\) represents the volumetric source term, and \(k\) and \(s\) are the conductivity and specific heat of the material respectively. At the interface between the fluid and solid domains (fluid-sink, sink-chip, and fluid-chip), the interface condition is applied by minimizing the following loss terms, as shown in [54]:
\[L_{flux}=\frac{1}{N_{int}}\sum_{i=1}^{N_{int}}(f_{d_{1}}(\mathbf{u}(x_{i})) \cdot\mathbf{n}_{d_{1}}+f_{d_{2}}(\mathbf{u}(x_{i}))\cdot\mathbf{n}_{d_{2}})^ {2}, \tag{19}\]
\[L_{val}=\frac{1}{N_{int}}\sum_{i=1}^{N_{int}}(\mathbf{u}_{d_{j}}(x_{i})- \overline{\mathbf{u}_{d_{j}}(x_{i})})^{2}, \tag{20}\]
where \(\mathbf{n}_{d_{1}}=-\mathbf{n}_{d_{2}}\) and \(j=1,2\). The average is taken over \(j\). \(d_{1}\) and \(d_{2}\) refer to the domains on both sides of the interface, and \(N_{int}\) is the number of node points on the interface.
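A minimal sketch of how the interface terms of Eqs. (19)-(20) can be evaluated for the thermal field is given below. Identifying the flux \(f_{d}(\cdot)\cdot\mathbf{n}_{d}\) with the conductive flux \(k\,\nabla T\cdot\mathbf{n}\) on each side, as well as the tensor shapes and names, are assumptions made for illustration.

```python
import torch

def interface_losses(T_d1, T_d2, grad_T_d1, grad_T_d2, n_d1, k_d1, k_d2):
    """Flux- and value-continuity losses of Eqs. (19)-(20) at N_int interface points.

    grad_T_* : (N_int, 3) predicted temperature gradients on each side,
    n_d1     : (N_int, 3) interface normals (n_d2 = -n_d1),
    k_d1/k_d2: thermal conductivities of the two adjoining domains.
    """
    n_d2 = -n_d1
    flux_d1 = k_d1 * (grad_T_d1 * n_d1).sum(-1)
    flux_d2 = k_d2 * (grad_T_d2 * n_d2).sum(-1)
    loss_flux = ((flux_d1 + flux_d2) ** 2).mean()                           # Eq. (19)

    T_mean = 0.5 * (T_d1 + T_d2)                                            # average over j = 1, 2
    loss_val = 0.5 * ((T_d1 - T_mean) ** 2 + (T_d2 - T_mean) ** 2).mean()   # Eq. (20)
    return loss_flux, loss_val
```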
#### Model Creation and Evaluation
The Design of Experiments (DoE) space described above is sampled via an efficient space-filling method [55]. The sampled DoE space for training is shown in Figure 11, along with the location of points at which the surrogate model is tested. The reader is referred to Section A.3.1 for a complete tabular description of the DoE space. Note that for this
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Parameter No. & Parameter Name & Lower Value & Upper Value \\ \hline P1 & Inflow Velocity (\(m/s\)) & 3 & 7 \\ \hline P2 & Fin Height (\(mm\)) & 15 & 23 \\ \hline P3 & Source Term (\(W\)) & 30 & 60 \\ \hline \end{tabular}
\end{table}
Table 2: Design of Experiments space axes ranges for the heat sink design optimization
Figure 10: Basic problem geometry and flow depiction
example, we use full-field data at each DoE point to train the surrogate, as opposed to a small fraction of it (as in Sections 4.1 and 4.2), since the objective is to get a surrogate that is as accurate as possible. Table 3 shows the MSE for the predictions by the hybrid data-PINN model at the test points, calculated w.r.t. the CFD solution at the same mesh resolution. Also shown is the MSE for predictions by a standard data-driven NN that does not leverage the key features described in Section 3; such networks are used extensively in industry for surrogate modeling applications. The hybrid data-PINN model outperforms the standard data-driven NN for all predictions. Section A.4 shows some more qualitative comparisons between test point results from the PINN model versus standard data-driven NNs.
#### Solving the Design Optimization Problem
The surrogate model is used to solve the design optimization problem described in Equations 14-16. The goal is to show that the surrogate model can accurately predict the solution in the entire DoE space by returning a solution that satisfies all applied constraints while maximizing the objective. The created surrogate models are interfaced with an optimizer that solves a generic constrained optimization problem via an iterative process, described thoroughly in Appendix Section A.2.
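To make the use of the surrogate concrete, the sketch below shows a simple feasibility search over the DoE box of Table 2 against the constraints of Eqs. (14)-(16). It is purely illustrative: the actual optimizer of Appendix A.2 is an iterative, particle-based procedure, and the assumed `surrogate` interface (returning the maximum chip temperature and channel pressure drop for a batch of design points) is a simplification.

```python
import torch

def search_design(surrogate, n_candidates=10_000, seed=0):
    """Illustrative constrained search: maximize chip power subject to Eqs. (15)-(16)."""
    g = torch.Generator().manual_seed(seed)
    lo = torch.tensor([3.0, 15.0, 30.0])     # inflow velocity [m/s], fin height [mm], power [W]
    hi = torch.tensor([7.0, 23.0, 60.0])
    params = lo + (hi - lo) * torch.rand(n_candidates, 3, generator=g)

    max_T, delta_P = surrogate(params)       # assumed surrogate outputs per design point
    feasible = (max_T <= 350.0) & (delta_P <= 11.0)
    if not feasible.any():
        return None
    cand = params[feasible]
    return cand[cand[:, 2].argmax()]         # feasible point with the largest source term
```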
Each snapshot in Figure 12 represents a design iteration, and each particle represents a point in the DoE space. Each axis of a plot represents a parameter axis.
Figure 11: Training and testing points in the 3D DoE space
Figure 12: Design optimization iterations of the heat sink problem (a) Iteration 0 (b) Iteration 5 (c) Iteration 10
For the given constraints, the particles converge to a much smaller region of the DoE space. The design point returned by the optimizer in this case is:
**Inflow Velocity**: 6 m/s
**Chip Power**: 50W
**Fin Height**: 17mm
To test that the result satisfies the constraints, the returned design point is solved with Altair AcuSolve®, at the same mesh fineness and on another mesh with 10x fineness, keeping all essential mesh features such as boundary layers and refinement zones. As shown in Figures 13 and 14, not only does the given design point satisfy the design constraints, but the finer mesh solution is very close to the coarser solution, and a little tweaking of the design point using CFD with the higher-resolution mesh will yield a highly optimized solution for the designer. This optimization is done several orders of magnitude faster than if using traditional CFD, and the reader is referred to Appendix Section A.3.2 for a quantitative description of the same.
## 5 Conclusions and Future Work
In this paper, Physics Informed Neural Networks were used to solve the 3D Navier-Stokes equations in a data-assisted setting, for complex geometries with realistic physical parameters. It was shown that even for problems being solved at high Reynolds Numbers in 3D, PINNs can be trained to produce a
Figure 13: Temperature plot through a slice at the rear of the sink (from bottom to top).
The comparison between the high-fidelity solution on the fine mesh and the PINN prediction on a coarser mesh shows good agreement.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & **Velocity MSE** & **Pressure MSE** & **Temperature MSE** \\ \hline
**Test Point 1** & & & \\ Hybrid data-PINN Model & **0.65** & **2.62** & **1.81** \\ Standard Data-driven NN & 0.93 & 2.83 & 2.05 \\ \hline
**Test Point 2** & & & \\ Hybrid data-PINN Model & **0.39** & **1.19** & **2.67** \\ Standard Data-driven NN & 0.58 & 1.42 & 2.97 \\ \hline
**Test Point 3** & & & \\ Hybrid data-PINN Model & **0.76** & **3.31** & **1.86** \\ Standard Data-driven NN & 1.10 & 3.51 & 2.18 \\ \hline
**Test Point 4** & & & \\ Hybrid data-PINN Model & **0.33** & **0.99** & **2.87** \\ Standard Data-driven NN & 0.52 & 1.19 & 3.15 \\ \hline \hline \end{tabular}
\end{table}
Table 3: MSE for 4 test points shown in Table 5. The PINN-based model consistently outperforms the standard data-driven NN on all test points.
good solution in the presence of very sparse solution data randomly scattered in the solution domain. However, using too little solution data causes the model to converge to an unphysical solution. PINNs were also demonstrated for 3D flow-thermal surrogate modeling, and the PINN-based surrogates consistently outperformed standard data-driven NNs on test case examples. The PINN surrogates were also interfaced with a design optimization algorithm to solve a constrained optimization problem. This optimization returned a design point that, when solved with high-fidelity CFD, was consistent with the requirements of the design constraints, highlighting the suitability of the method for 3D flow-thermal surrogate modeling problems.
There are multiple avenues through which the work shown in this paper can be improved. Research has to be done to improve convergence and offer guarantees that PINN training reaches the local minima that represent physical solutions, in a data-free manner. This will further reduce data requirements for the creation of physically consistent PINN models, which can greatly improve their surrogate modeling capabilities by reducing the cost of training and improving predictive accuracy. Further work needs to be done to investigate turbulence modeling in PINNs so that high Reynolds number problems can be solved in a data-free manner. There are also many themes like uncertainty quantification [56, 57, 58] of surrogates and effective surrogate modeling of different geometries [59, 60, 61] that are active fields of research in PINNs, which could be included in future works that build on these results.
## Acknowledgements
This research did not receive any specific grant from funding agencies in the public or not-for-profit sectors, or from any external commercial entities. The authors gratefully acknowledge the use of Altair Engineering Inc.'s computing facilities for running experiments.
## CRediT authorship contribution statement
**Saakaar Bhatnagar:** Formal Analysis, Investigation, Methodology, Software, Validation, Writing-original draft. **Andrew Comerford:** Conceptualization, Investigation, Project Administration, Supervision, Writing- review and editing **Araz Banaeizadeh:** Conceptualization, Project Administration, Supervision, Writing- review and editing
|
2309.13167 | Flow Factorized Representation Learning | A prominent goal of representation learning research is to achieve
representations which are factorized in a useful manner with respect to the
ground truth factors of variation. The fields of disentangled and equivariant
representation learning have approached this ideal from a range of
complimentary perspectives; however, to date, most approaches have proven to
either be ill-specified or insufficiently flexible to effectively separate all
realistic factors of interest in a learned latent space. In this work, we
propose an alternative viewpoint on such structured representation learning
which we call Flow Factorized Representation Learning, and demonstrate it to
learn both more efficient and more usefully structured representations than
existing frameworks. Specifically, we introduce a generative model which
specifies a distinct set of latent probability paths that define different
input transformations. Each latent flow is generated by the gradient field of a
learned potential following dynamic optimal transport. Our novel setup brings
new understandings to both \textit{disentanglement} and \textit{equivariance}.
We show that our model achieves higher likelihoods on standard representation
learning benchmarks while simultaneously being closer to approximately
equivariant models. Furthermore, we demonstrate that the transformations
learned by our model are flexibly composable and can also extrapolate to new
data, implying a degree of robustness and generalizability approaching the
ultimate goal of usefully factorized representation learning. | Yue Song, T. Anderson Keller, Nicu Sebe, Max Welling | 2023-09-22T20:15:37Z | http://arxiv.org/abs/2309.13167v1 | # Flow Factorized Representation Learning
###### Abstract
A prominent goal of representation learning research is to achieve representations which are factorized in a useful manner with respect to the ground truth factors of variation. The fields of disentangled and equivariant representation learning have approached this ideal from a range of complementary perspectives; however, to date, most approaches have proven to either be ill-specified or insufficiently flexible to effectively separate all realistic factors of interest in a learned latent space. In this work, we propose an alternative viewpoint on such structured representation learning which we call Flow Factorized Representation Learning, and demonstrate it to learn both more efficient and more usefully structured representations than existing frameworks. Specifically, we introduce a generative model which specifies a distinct set of latent probability paths that define different input transformations. Each latent flow is generated by the gradient field of a learned potential following dynamic optimal transport. Our novel setup brings new understandings to both _disentanglement_ and _equivariance_. We show that our model achieves higher likelihoods on standard representation learning benchmarks while simultaneously being closer to approximately equivariant models. Furthermore, we demonstrate that the transformations learned by our model are flexibly composable and can also extrapolate to new data, implying a degree of robustness and generalizability approaching the ultimate goal of usefully factorized representation learning. Code is publicly available at [https://github.com/KingJamesSong/latent-flow](https://github.com/KingJamesSong/latent-flow).
## 1 Introduction
Developing models which learn useful representations of data has become an increasingly important focus in the machine learning community [5; 55]. For example, Large Language Models such as GPT [9] rely on an extensive pre-training phase to learn valuable representations, enabling quick finetuning on a diversity of tasks. However, a precise definition of what makes an ideal representation is still debated. One line of work has focused on 'disentanglement' of the underlying ground truth generative factors [5; 35; 13]. In general, the definition of 'disentanglement' often refers to learning and controlling statistically independent factors of variation [5; 36]. Over the years, many disentanglement methods have been proposed, including axis-aligned single-dimensional manipulation [35; 13], linear multi-dimensional traversals [78; 77; 90; 66], and, more recently, dynamic non-linear vector-based traversals [84; 79]. Although these methods have been met with significant success (and even linked to single-neuron brain activity [37; 91]), there are known theoretical limitations which make them ill-specified, including the presence of topological defects [7]. This has limited their deployment beyond toy settings.
Another line of work has focused on developing representations which respect symmetries of the underlying data in their output space [15; 36]. Specifically, equivariant representations are those for which the output transforms in a known predictable way for a given input transformation.
They can be seen to share many similarities with disentangled representations since an object undergoing a transformation which preserves its identity can be called a symmetry transformation [36]. Compared with disentanglement methods, equivariant networks are much more strictly defined, allowing for significantly greater control and theoretical guarantees with respect to the learned transformation [16; 50; 73; 20; 39]. However, this restriction also limits the types of transformations to which they may be applied. For example, currently only group transformations are supported, limiting real-world applicability. To avoid this caveat, some recent attempts propose to learn general but approximate equivariance from disentangled representations [49; 45; 79].
In this work, we provide an alternative viewpoint at the intersection of these two fields of work which we call Flow Factorized Representation Learning. Fig. 1 depicts the high-level illustration of our method. Given \(k\) different transformations \(p_{k}(\mathbf{x}_{t}|\mathbf{x}_{0})\) in the input space, we have the corresponding latent probabilistic path \(\int_{\mathbf{z}_{0},\mathbf{z}_{t}}q(\mathbf{z}_{0}|\mathbf{x}_{0})q_{k}(\mathbf{z}_{t}|\mathbf{z}_{0 })p(\mathbf{x}_{t}|\mathbf{z}_{t})\) for each of the transformations. Each latent flow path \(q_{k}(\mathbf{z}_{t}|\mathbf{z}_{0})\) is generated by the gradient field of some learned potentials \(\nabla u^{k}\) following fluid mechanical dynamic Optimal Transport (OT) [4]. Our framework allows for novel understandings of both _disentanglement_ and _equivariance_. The definition of disentanglement refers to the distinct set of tangent directions \(\nabla u^{k}\) that follow the OT paths to generate latent flows for modeling different factors of variation. The concept of equivariance in our case means that the two probabilistic paths, _i.e.,_\(p_{k}(\mathbf{x}_{t}|\mathbf{x}_{0})\) in the image space and \(\int_{\mathbf{z}_{0},\mathbf{z}_{t}}q(\mathbf{z}_{0}|\mathbf{x}_{0})q_{k}(\mathbf{z}_{t}|\mathbf{z}_{0 })p(\mathbf{x}_{t}|\mathbf{z}_{t})\) in the latent space, would eventually result in the same distribution of transformed data.
We build a formal generative model of sequences and integrate the above latent probability evolution as condition updates of the factorized sequence distribution. Based on the continuity equation, we derive a proper flow of probability density for the time evolution of both the prior and posterior. To perform inference, we approximate the true posterior of latent variables and train the parameters as a Variational Autoencoder (VAE) [47]. When the transformation type \(k\) is not observed (_i.e.,_ available as a label), we treat \(k\) as another latent variable and incorporate its posterior into our framework by learning it from sequences. Extensive experiments and thorough analyses have been conducted to show the effectiveness of our method. For example, we demonstrate empirically that our representations are usefully factorized, allowing flexible composability and generalization to new datasets. Furthermore, we show that our methods are also approximately equivariant by demonstrating that they commute with input transformations through the learned latent flows. Ultimately, we see these factors combine to yield the highest likelihood on the test set in each setting. Code is publicly available at [https://github.com/KingJamesSong/latent-flow](https://github.com/KingJamesSong/latent-flow).
## 2 The generative model
In this section, we first introduce our generative model of sequences and then describe how we perform inference over the latent variables of this model in the next section.
### Flow factorized sequence distributions
The model in this work defines a distribution over sequences of observed variables. We further factorize this distribution into \(k\) distinct components by assuming that each observed sequence is generated by one of the \(k\) separate flows of probability mass in latent space. Since in this work we
Figure 1: Illustration of our flow factorized representation learning: at each point in the latent space we have a distinct set of tangent directions \(\nabla u^{k}\) which define different transformations we would like to model in the image space. For each path, the latent sample evolves to the target on the potential landscape following dynamic optimal transport.
model discrete sequences of observations \(\bar{\mathbf{x}}=\{\mathbf{x}_{0},\mathbf{x}_{1}\ldots,\mathbf{x}_{T}\}\), we aim to define a joint distribution with a similarly discrete sequence of latent variables \(\bar{\mathbf{z}}=\{\mathbf{z}_{0},\mathbf{z}_{1}\ldots,\mathbf{z}_{T}\}\), and a categorical random variable \(k\) describing the sequence type (observed or unobserved). Explicitly, we assert the following factorization of the joint distribution over \(T\) timesteps:
\[p(\bar{\mathbf{x}},\bar{\mathbf{z}},k)=p(k)p(\mathbf{z}_{0})p(\mathbf{x}_{0}|\mathbf{z}_{0})\prod_{ t=1}^{T}p(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)p(\mathbf{x}_{t}|\mathbf{z}_{t}). \tag{1}\]
Here \(p(k)\) is a categorical distribution defining the transformation type, \(p(\mathbf{x}_{t}|\mathbf{z}_{t})\) asserts a mapping from latents to observations with Gaussian noise, and \(p(\mathbf{z}_{0})=\mathcal{N}(0,1)\). A plate diagram of this model is depicted through the solid lines in Fig. 2.
### Prior time evolution
To enforce that the time dynamics of the sequence define a proper flow of probability density, we compute the conditional update \(p(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)\) from the continuous form of the continuity equation: \(\partial_{t}p(\mathbf{z})=-\nabla\cdot(p(\mathbf{z})\nabla\psi^{k}(\mathbf{z}))\), where \(\psi^{k}(\mathbf{z})\) is the \(k\)'th potential function which advects the density \(p(\mathbf{z})\) through the induced velocity field \(\nabla\psi^{k}(\mathbf{z})\). Considering the discrete particle evolution corresponding to this density evolution, \(\mathbf{z}_{t}=f(\mathbf{z}_{t-1},k)=\mathbf{z}_{t-1}+\nabla_{z}\psi^{k}(\mathbf{z}_{t-1})\), we see that we can derive the conditional update from the continuous change of variables formula [69; 11]:
\[p(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)=p(\mathbf{z}_{t-1})\Big{|}\frac{df(\mathbf{z}_{t-1},k)}{d \mathbf{z}_{t-1}}\Big{|}^{-1} \tag{2}\]
In this setting, we see that the choice of \(\psi\) ultimately determines the prior on the transition probability in our model. As a minimally informative prior for random trajectories, we use a diffusion equation achieved by simply taking \(\psi^{k}=-D_{k}\log p(\mathbf{z}_{t})\). Then according to the continuity equation, the prior evolves as:
\[\partial_{t}p(\mathbf{z}_{t})=-\nabla\cdot\Big{(}p(\mathbf{z}_{t})\nabla\psi\Big{)}=D _{k}\nabla^{2}p(\mathbf{z}_{t}) \tag{3}\]
where \(D_{k}\) is a constant coefficient that does not change over time. The density evolution of the prior distribution thus follows a constant diffusion process. We set \(D_{k}\) as a learnable parameter which is distinct for each \(k\).
## 3 Flow factorized variational autoencoders
To perform inference over the unobserved variables in our model, we propose to use a variational approximation to the true posterior, and train the parameters of the model as a VAE. To do this, we parameterize an approximate posterior for \(p(\mathbf{z}_{0}|\mathbf{x}_{0})\), and additionally parameterize a set of
Figure 2: Depiction of our model in plate notation. (Left) Supervised, (Right) Weakly-supervised. White nodes denote latent variables, shaded nodes denote observed variables, solid lines denote the generative model, and dashed lines denote the approximate posterior. We see, as in a standard VAE framework, our model approximates the initial one-step posterior \(p(\mathbf{z}_{0}|\mathbf{x}_{0})\), but additionally approximates the conditional transition distribution \(p(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)\) through dynamic optimal transport over a potential landscape.
functions \(u^{k}(\mathbf{z})\) to approximate the true latent potentials \(\psi^{*}\). First, we will describe how we do this in the setting where the categorical random variable \(k\) is observed (which we call the supervised setting), then we will describe the model when \(k\) is also latent and thus additionally inferred (which we call the weakly supervised setting).
### Inference with observed \(k\) (supervised)
When \(k\) is observed, we define our approximate posterior to factorize as follows:
\[q(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)=q(\mathbf{z}_{0}|\mathbf{x}_{0})\prod_{t=1}^{T}q(\mathbf{z}_ {t}|\mathbf{z}_{t-1},k) \tag{4}\]
We see that, in effect, our approximate posterior only considers information from element \(\mathbf{x}_{0}\); however, combined with supervision in the form of \(k\), we find this is sufficient for the posterior to be able to accurately model full latent sequences. In the limitations section we discuss how the posterior could be changed to include all elements \(\{\mathbf{x}_{t}\}_{0}^{T}\) in future work.
Combining Eq. (4) with Eq. (1), we can derive the following lower bound on the model evidence (ELBO):
\[\begin{split}\log p(\bar{\mathbf{x}}|k)&=\mathbb{E}_{q_ {\theta}(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}\left[\log\frac{p(\bar{\mathbf{x}},\bar{\bm {z}}|k)}{q(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}\frac{q(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}{ p(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}\right]\\ &\geq\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}\left[ \log\frac{p(\bar{\mathbf{x}}|\bar{\mathbf{z}},k)p(\bar{\mathbf{z}}|k)}{q(\bar{\mathbf{z}}|\bar {\mathbf{x}},k)}\right]\\ &=\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}\left[\log p (\bar{\mathbf{x}}|\bar{\mathbf{z}},k)\right]+\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}}|\bar {\mathbf{x}},k)}\left[\log\frac{p(\bar{\mathbf{z}}|k)}{q(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}\right] \end{split} \tag{5}\]
Substituting and simplifying, Eq. (5) can be re-written as
\[\begin{split}\log p(\bar{\mathbf{x}}|k)&\geq\sum_{t=0}^ {T}\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}}|k)}\big{[}\log p(\mathbf{x}_{t}|\mathbf{z}_{t},k )\big{]}-\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}}|k)}\big{[}\mathrm{D}_{\text{KL} }\left[q_{\theta}(\mathbf{z}_{0}|\mathbf{x}_{0})||p(\mathbf{z}_{0})\right]\big{]}\\ &\quad-\sum_{t=1}^{T}\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}}|k)}\big{[} \mathrm{D}_{\text{KL}}\left[q_{\theta}(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)||p(\mathbf{z}_{t }|\mathbf{z}_{t-1},k)\right]\big{]}\end{split} \tag{6}\]
We thus see that we have an objective very similar to that of a traditional VAE, except that our posterior and our prior now both have a time evolution defined by the conditional distributions.
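A minimal sketch of how the objective in Eq. (6) can be assembled for a mini-batch is shown below. How each ingredient is estimated (a closed-form Gaussian KL at \(t=0\), single posterior samples for the reconstruction and transition terms) is an assumption for illustration, not a prescription from the text.

```python
import torch

def sequence_elbo(recon_log_probs, kl_z0, kl_transitions):
    """Assembles the ELBO of Eq. (6) from its three ingredients.

    recon_log_probs: list of T+1 tensors, log p(x_t | z_t, k) per batch element.
    kl_z0          : tensor, KL[q(z_0 | x_0) || p(z_0)] per batch element.
    kl_transitions : list of T tensors, KL[q(z_t | z_{t-1}, k) || p(z_t | z_{t-1}, k)].
    """
    recon = torch.stack(recon_log_probs, dim=0).sum(dim=0)
    trans = torch.stack(kl_transitions, dim=0).sum(dim=0)
    elbo = recon - kl_z0 - trans
    return elbo.mean()        # training maximizes this (i.e., minimizes -elbo)
```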
### Inference with latent \(k\) (weakly supervised)
When \(k\) is not observed, we can treat it as another latent variable, and simultaneously perform inference over it in addition to the sequential latent \(\bar{\mathbf{z}}\). To achieve this, we define our approximate posterior and instead factorize it as
\[q(\bar{\mathbf{z}},k|\bar{\mathbf{x}})=q(k|\bar{\mathbf{x}})q(\mathbf{z}_{0}|\mathbf{x}_{0})\prod_ {t=1}^{T}q(\mathbf{z}_{t}|\mathbf{z}_{t-1},k) \tag{7}\]
Following a similar procedure as in the supervised setting, we derive the new ELBO as
\[\begin{split}\log p(\bar{\mathbf{x}})&=\mathbb{E}_{q_ {\theta}(\bar{\mathbf{z}},k|\bar{\mathbf{x}})}\left[\log\frac{p(\bar{\mathbf{x}},\bar{\mathbf{ z}},k)}{q(\bar{\mathbf{z}},k|\bar{\mathbf{x}})}\frac{q(\bar{\mathbf{z}},k|\bar{\mathbf{x}})}{p( \bar{\mathbf{z}},k|\bar{\mathbf{x}})}\right]\\ &\geq\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}},k|\bar{\mathbf{z}})}\left[ \log\frac{p(\bar{\mathbf{x}}|\bar{\mathbf{z}},k)p(\bar{\mathbf{z}}|k)}{q(\bar{\mathbf{z}}|\bar {\mathbf{x}},k)}\frac{p(k)}{q(k|\bar{\mathbf{x}})}\right]\\ &=\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}},k|\bar{\mathbf{x}})}\left[\log p (\bar{\mathbf{x}}|\bar{\mathbf{z}},k)\right]+\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}},k| \bar{\mathbf{x}})}\left[\log\frac{p(\bar{\mathbf{z}}|k)}{q(\bar{\mathbf{z}}|\bar{\mathbf{x}}, k)}\right]+\mathbb{E}_{q_{\gamma}(k|\bar{\mathbf{x}})}\left[\log\frac{p(k)}{q(k|\bar{\mathbf{x}})}\right] \end{split} \tag{8}\]
We see that, compared with Eq. (5), only one additional KL divergence term \(\mathrm{D}_{\text{KL}}\left[q_{\gamma}(k|\bar{\mathbf{x}})||p(k)\right]\) is added. The prior \(p(k)\) is set to follow a categorical distribution, and we apply the Gumbel-SoftMax trick [43] to allow for categorical re-parameterization and sampling of \(q_{\gamma}(k|\bar{\mathbf{x}})\).
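For the weakly supervised case, the categorical posterior \(q_{\gamma}(k|\bar{\mathbf{x}})\) can be sampled with the Gumbel-Softmax estimator as sketched below. The sequence encoder producing the logits, and the uniform categorical prior used for the KL term, are assumptions for illustration.

```python
import math
import torch
import torch.nn.functional as F

def sample_k(logits, tau=1.0, hard=True):
    """Reparameterized sample of the transformation type k from q(k | x_bar).

    logits: (batch, K) unnormalized scores from a sequence encoder (assumed).
    Returns a (straight-through one-hot) sample and KL[q(k | x_bar) || p(k)]
    for a uniform prior p(k) = 1/K.
    """
    k_sample = F.gumbel_softmax(logits, tau=tau, hard=hard, dim=-1)
    log_q = F.log_softmax(logits, dim=-1)
    kl_k = (log_q.exp() * (log_q + math.log(logits.shape[-1]))).sum(dim=-1)
    return k_sample, kl_k
```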
### Posterior time evolution
As noted, to approximate the true generative model which has some unknown latent potentials \(\psi^{k}\), we propose to parameterize a set of potentials as \(u^{k}(\mathbf{z},t)=\texttt{MLP}([\mathbf{z};t])\) and train them through the ELBOs above. Again, we use the continuity equation to define the time evolution of the posterior, and thus we can derive the conditional time update \(q(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)\) through the change of variables formula. Given the function of the sample evolution \(\mathbf{z}_{t}=g(\mathbf{z}_{t-1},k)=\mathbf{z}_{t-1}+\nabla_{\mathbf{z}}u^{k}\), we have:
\[q(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)=q(\mathbf{z}_{t-1})\Big{|}\frac{dg(\mathbf{z}_{t-1},k)}{d \mathbf{z}_{t-1}}\Big{|}^{-1} \tag{9}\]
Converting the above continuous equation to the discrete setting and taking the logarithm of both sides gives the normalizing-flow-like density evolution of our posterior:
\[\log q(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)=\log q(\mathbf{z}_{t-1})-\log|1+\nabla_{\mathbf{z}}^ {2}u^{k}| \tag{10}\]
The above relation can be equivalently derived from the continuity equation (_i.e.,_\(\partial_{t}q(\mathbf{z})=-\nabla\cdot\big{(}q(\mathbf{z})\nabla u^{k}\big{)}\)). Notice that we only assume the initial posterior \(q(\mathbf{z}_{0}|\mathbf{x}_{0})\) follows a Gaussian distribution. For future timesteps, we do not pose any further assumptions and just let the density evolve according to the sample motion.
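Concretely, a single posterior flow step and the density update of Eq. (10) can be computed as in the sketch below (written for one latent sample). Evaluating the Jacobian of the update as the identity plus the Hessian of \(u^{k}\), and the exact log-determinant route via `torch.logdet`, are implementation choices assumed here for illustration.

```python
import torch

def flow_step(u_k, z, t):
    """One step z_t = z_{t-1} + grad_z u^k and the log-density change of Eq. (10).

    u_k(z, t) must return a scalar potential value for a 1-D latent vector z.
    """
    z = z.detach().requires_grad_(True)
    grad_u = torch.autograd.grad(u_k(z, t), z, create_graph=True)[0]
    z_next = z + grad_u

    # Jacobian of the update is I + Hessian(u^k); its log-determinant gives
    # log q(z_t | z_{t-1}, k) - log q(z_{t-1}).
    hess = torch.autograd.functional.hessian(lambda zz: u_k(zz, t), z)
    delta_logq = -torch.logdet(torch.eye(z.shape[-1]) + hess)
    return z_next, delta_logq
```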
### Ensuring optimal transport of the posterior flow
As an inductive bias, we would like each latent posterior flow to follow the OT path. To accomplish this, it is known that when the gradient \(\nabla u^{k}\) satisfies certain PDEs, the evolution of the probability density can be seen to minimize the \(L_{2}\) Wasserstein distance between the source distribution and the distribution of the target transformation. Specifically, we have:
**Theorem 1** (Benamou-Brenier Formula [4]).: _For probability measures \(\mu_{0}\) and \(\mu_{1}\), the \(L_{2}\) Wasserstein distance can be defined as_
\[W_{2}(\mu_{0},\mu_{1})^{2}=\min_{\rho,v}\bigg{\{}\int\int\frac{1}{2}\rho(x,t) |v(x,t)|^{2}\,dx\,dt\bigg{\}} \tag{11}\]
_where the density \(\rho\) and the velocity \(v\) satisfy:_
\[\frac{d\,\rho(x,t)}{dt}=-\nabla\cdot(v(x,t)\rho(x,t)),\;v(x,t)=\nabla u(x,t) \tag{12}\]
The optimality condition of the velocity is given by the generalized Hamilton-Jacobi (HJ) equation (_i.e.,_\(\partial_{t}u+\nicefrac{{1}}{{2}}||\nabla u||^{2}\leq 0\)). The detailed derivation is deferred to the supplementary. We thus encourage our potential to satisfy the HJ equation with an external driving force as
\[\frac{\partial}{\partial t}u^{k}(\mathbf{z},t)+\frac{1}{2}||\nabla_{\mathbf{z}}u^{k}( \mathbf{z},t)||^{2}=f(\mathbf{z},t)\;\;\;\text{subject to}\;\;\;f(\mathbf{z},t)\leq 0 \tag{13}\]
Here we use another MLP to parameterize the external force \(f(\mathbf{z},t)\) and realize the negativity constraint by setting \(f(\mathbf{z},t)=-\texttt{MLP}([\mathbf{z};t])^{2}\). Notice that here we take the external force as learnable MLPs simply because we would like to obtain a flexible negativity constraint. The MLP architecture is set the same for both \(u(\mathbf{z},t)\) and \(f(\mathbf{z},t)\). To achieve the PDE constraint, we impose a Physics-Informed Neural Network (PINN) [67] loss as
\[\mathcal{L}_{HJ}=\frac{1}{T}\sum_{t=1}^{T}\Big{(}\frac{\partial}{\partial t}u^ {k}(\mathbf{z},t)+\frac{1}{2}||\nabla_{\mathbf{z}}u^{k}(\mathbf{z},t)||^{2}-f(\mathbf{z},t) \Big{)}^{2}+||\nabla u^{k}(\mathbf{z}_{0},0)||^{2} \tag{14}\]
where the first term restricts the potential to obey the HJ equation, and the second term limits \(u(\mathbf{z}_{t},t)\) to return no update at \(t{=}0\), therefore matching the initial condition.
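The PINN-style objective of Eq. (14) can be evaluated along a latent trajectory as sketched below. Evaluating the residual at the trajectory points themselves, the scalar-time encoding passed to the networks, and the variable names are assumptions; the sketch only mirrors the two terms of the equation.

```python
import torch

def hj_loss(u_k, f_k, z_traj, T):
    """HJ residual loss of Eq. (14) for one latent trajectory [z_0, ..., z_T].

    u_k(z, t) and f_k(z, t) are the scalar potential and (non-positive) force networks.
    """
    residuals = []
    for t in range(1, T + 1):
        z = z_traj[t - 1].detach().requires_grad_(True)
        ts = torch.tensor(float(t), requires_grad=True)
        du_dz, du_dt = torch.autograd.grad(u_k(z, ts), (z, ts), create_graph=True)
        residuals.append((du_dt + 0.5 * du_dz.pow(2).sum() - f_k(z, ts)) ** 2)

    # Initial-condition term: the potential should produce no update at t = 0.
    z0 = z_traj[0].detach().requires_grad_(True)
    t0 = torch.tensor(0.0, requires_grad=True)
    grad0 = torch.autograd.grad(u_k(z0, t0), z0, create_graph=True)[0]
    return torch.stack(residuals).mean() + grad0.pow(2).sum()
```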
## 4 Experiments
This section starts with the experimental setup, followed by the main qualitative and quantitative results, then proceeds to discussions about the generalization ability to different composability and unseen data, and ends with the results on complex real-world datasets.
### Setup
**Datasets.** We evaluate our method on two widely-used datasets in generative modeling, namely MNIST [54] and Shapes3D [10]. For MNIST [54], we manually construct three simple transformations including Scaling, Rotation, and Coloring. For Shapes3D [10], we use the self-contained four transformations that consist of Floor Hue, Wall Hue, Object Hue, and Scale.
Besides these two common benchmarks, we take a step further to apply our method on Falcon3D and Isaac3D [61], two complex _large-scale_ and _real-world_ datasets that contain sequences of different transformations. Falcon3D consists of indoor 3D scenes in different lighting conditions and viewpoints, while Isaac3D is a dataset of various robot arm movements in dynamic environments.
**Baselines.** We mainly compare our method with SlowVAE [49] and Topographic VAE (TVAE) [45]. These two baselines can both achieve approximate equivariance. Specifically, TVAE introduces some learned latent operators, while SlowVAE enforces the sparse Laplacian prior \(p(\mathbf{z}_{t}|\mathbf{z}_{t-1})=\prod_{i}\nicefrac{{\alpha}}{{2}}\exp(-\alpha|z_{t,i}-z_{t-1,i}|)\) over consecutive latent codes. We additionally report results for VAE [47], \(\beta\)-VAE [35], FactorVAE [46], and PoFlow [79].
**Metrics.** We use the approximate equivariance error \(\mathcal{E}_{k}\) and the log-likelihood of transformed data \(\log p(\mathbf{x}_{t})\) as the evaluation protocols. The equivariance error is defined as \(\mathcal{E}_{k}=\sum_{t=1}^{T}|\mathbf{x}_{t}-\texttt{Decode}(\mathbf{z}_{t})|\) where \(\mathbf{z}_{t}=\mathbf{z}_{0}+\sum_{t=1}^{T}\nabla_{\mathbf{z}}u^{k}\). For TVAE, the latent operator is changed to \(\texttt{Roll}(\mathbf{z}_{0},t)\). For unsupervised disentanglement baselines [35; 46] and SlowVAE [49], we carefully select the latent dimension and tune the interpolation range to attain the traversal direction and range that correspond to the smallest equivariance error. Since the vanilla VAE does not have the corresponding learned transformation in the latent space, we simply set \(\nabla_{\mathbf{z}}u^{k}=0\) and take it as a lower-bound baseline. For all the methods, the results are reported based on \(5\) runs.
Notice that the above equivariance error is defined in the output space. Another reasonable evaluation metric is instead measuring error in the latent space as \(\mathcal{E}_{k}=\sum_{t=1}^{T}|\texttt{Encode}(\mathbf{x}_{t})-\mathbf{z}_{t}|\). We see the first evaluation method is more comprehensive as it further involves the decoder in the evaluation.
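For clarity, the output-space metric described above can be computed as in the sketch below: the latent code of \(\mathbf{x}_{0}\) is advanced with the learned flow and decoded at every step. Using the posterior mean of \(q(\mathbf{z}_{0}|\mathbf{x}_{0})\) as the starting latent, and the function names, are assumptions for illustration.

```python
import torch

def equivariance_error(encoder, decoder, u_k, x_seq, T):
    """Output-space equivariance error E_k = sum_t |x_t - Decode(z_t)|."""
    z = encoder(x_seq[0])                        # assumed posterior mean for x_0
    err = torch.tensor(0.0)
    for t in range(1, T + 1):
        z = z.detach().requires_grad_(True)
        grad_u = torch.autograd.grad(u_k(z, t), z)[0]
        z = z + grad_u                           # z_t = z_{t-1} + grad_z u^k
        err = err + (x_seq[t] - decoder(z)).abs().sum()
    return err
```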
### Main Results
**Qualitative results.** Fig. 3 and 4 display decoded images of the latent evolution on MNIST [54] and Shapes3D [10], respectively. On both datasets, our latent flow can perform the target transformation precisely during evolution while leaving other traits of the image unaffected. In particular, for the weakly-supervised setting, the decoded images (_i.e.,_ the bottom rows of Fig. 3 and 4) can still reproduce the given transformations well and it is even hard to visually tell them apart from the generated images under the supervised setting. This demonstrates the effectiveness of the weakly-supervised setting of our method, and implies that qualitatively our latent flow is able to learn the sequence transformations well under both supervised and weakly-supervised settings.
**Quantitative results.** Tables 1 and 2 compare the equivariance error and the log-likelihood on MNIST [54] and Shapes3D [10], respectively. Our method learns the latent flows which model the transformations precisely, achieving the best performance across datasets under different supervision settings. Specifically, our method outperforms the previous best baseline by \(69.74\) on average in the equivariance error and by \(32.58\) in the log-likelihood on MNIST. The performance gain is also consistent on Shapes3D: our method surpasses the second-best baseline by \(291.70\) in the average equivariance error and by \(120.42\) in the log-likelihood. In the weakly-supervised setting, our method also achieves very competitive performance, falling behind that of the supervised setting in the average equivariance error slightly by \(6.22\) on MNIST and by \(67.88\) on Shapes3D.
### Discussion
**Extrapolation: switching transformations.** In Fig. 5 we demonstrate that, empowered by our method, it is possible to switch latent transformation categories mid-way through the latent evolution
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Supervision?**} & \multicolumn{3}{c|}{**Equivariance Error (\(\downarrow\))**} & \multirow{2}{*}{**Log-likelihood (\(\uparrow\))**} \\ \cline{3-5} & & **Scaling** & **Rotation** & **Coloring** & \\ \hline VAE [47] & No (✗) & 1275.31\(\pm\)1.89 & 1310.72\(\pm\)2.19 & 1368.92\(\pm\)2.33 & -2206.17\(\pm\)1.83 \\ \(\beta\)-**VAE**[35] & No (✗) & 741.58\(\pm\)4.57 & 751.32\(\pm\)5.22 & 808.16\(\pm\)5.03 & -2224.67\(\pm\)2.35 \\
**FactorVAE**[46] & No (✗) & 659.71\(\pm\)4.89 & 632.44\(\pm\)5.76 & 662.18\(\pm\)5.26 & -2209.33\(\pm\)2.47 \\
**SlowVAE**[49] & Weak (✗) & 461.59\(\pm\)5.37 & 447.46\(\pm\)5.46 & 398.12\(\pm\)4.83 & -2197.68\(\pm\)2.39 \\
**TVAE**[45] & Yes (✗) & 505.19\(\pm\)2.77 & 493.28\(\pm\)3.37 & 451.25\(\pm\)2.76 & -2181.13\(\pm\)1.87 \\
**PoFlow**[79] & Yes (✗) & 234.78\(\pm\)2.91 & 231.42\(\pm\)2.98 & 240.57\(\pm\)2.58 & -2145.03\(\pm\)2.01 \\
**Ours** & Yes (✗) & **185.42\(\pm\)2.35** & **153.54\(\pm\)3.10** & **158.57\(\pm\)2.95** & **-2112.45\(\pm\)1.57** \\
**Ours** & Weak (✗) & 193.84\(\pm\)2.47 & 157.16\(\pm\)3.24 & 165.19\(\pm\)2.78 & -2119.94\(\pm\)1.76 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Equivariance error \(\mathcal{E}_{k}\) and log-likelihood \(\log p(\mathbf{x}_{t})\) on MNIST [54].
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Supervision?**} & \multicolumn{4}{c|}{**Equivariance Error (\(\downarrow\))**} & \multirow{2}{*}{**Log-likelihood (\(\uparrow\))**} \\ \cline{3-6} & & **Floor Hue** & **Wall Hue** & **Object Hue** & **Scale** & \\ \hline VAE [47] & No (✗) & 6924.63\(\pm\)8.92 & 7746.37\(\pm\)8.77 & 4383.54\(\pm\)9.26 & 2609.59\(\pm\)7.41 & -11784.69\(\pm\)4.87 \\ \(\beta\)-**VAE**[35] & No (✗) & 2243.95\(\pm\)12.48 & 22279.23\(\pm\)13.97 & 2188.73\(\pm\)12.61 & 2037.94\(\pm\)11.72 & -11924.83\(\pm\)5.64 \\
**FactorVAE**[46] & No (✗) & 1985.75\(\pm\)13.26 & 1876.41\(\pm\)11.93 & 1902.83\(\pm\)12.27 & 1657.32\(\pm\)11.05 & -11802.17\(\pm\)5.69 \\
**SlowVAE**[49] & Weak (✗) & 1247.36\(\pm\)12.49 & 1314.86\(\pm\)11.41 & 11022.28\(\pm\)12.17 & 1058.74\(\pm\)10.96 & -11674.89\(\pm\)5.74 \\
**TVAE**[45] & Yes (✗) & 1225.47\(\pm\)9.82 & 126.23\(\pm\)9.54 & 261.79\(\pm\)9.86 & 11424.01\(\pm\)9.37 & -11475.48\(\pm\)5.18 \\
**PoFlow**[79] & Yes (✗) & 885.46\(\pm\)10.37 & 916.71\(\pm\)10.49 & 912.48\(\pm\)9.66 & 924.39\(\pm\)10.05 & -11335.84\(\pm\)4.95 \\
**Ours** & Yes (✗) & **613.29\(\pm\)8.93** & **653.45\(\pm\)9.48** & **605.79\(\pm\)8.63** & **599.71\(\pm\)9.34** & **-11215.42\(\pm\)5.71** \\
**Ours** & Weak (✗) & 690.84\(\pm\)9.57 & 717.74\(\pm\)10.65 & 681.59\(\pm\)9.02 & 665.85\(\pm\)9.57 & -11279.61\(\pm\)5.89 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Equivariance error \(\mathcal{E}_{k}\) and log-likelihood \(\log p(\mathbf{x}_{t})\) on Shapes3D [10].
and maintain coherence. That is, we perform \(\mathbf{z}_{t}=\mathbf{z}_{t-1}+\nabla_{\mathbf{z}}u^{k}\) for \(t\leq\nicefrac{{T}}{{2}}\) and then change to \(\mathbf{z}_{t}=\mathbf{z}_{t-1}+\nabla_{\mathbf{z}}u^{j}\) where \(j\neq k\) for \(t>\nicefrac{{T}}{{2}}\). As can be seen, the factor of variation immediately changes after the transformation type is switched. Moreover, the transition phase is smooth and no other attributes of the image are influenced.
**Extrapolation: superposing transformations.** Besides switching transformations, our method also supports applying different transformations simultaneously, _i.e.,_ consistently performing \(\mathbf{z}_{t}=\mathbf{z}_{t-1}+\sum_{k}^{K}\nabla_{\mathbf{z}}u^{k}\) during the latent flow process. Fig. 6 presents such exemplary visualizations of superposing two and all transformations simultaneously. In each case, the latent evolution corresponds to simultaneous smooth variations of multiple image attributes. This indicates that our method also generalizes well to superposing different transformations.
Notice that we only apply single and separate transformations in the training stage. Switching or superposing transformations in the test phase can be thus understood as an extrapolation test to measure the generalization ability of the learned equivariance to novel compositions.
**Equivariance generalization to new data.** We also test whether the learned equivariance holds for Out-of-Distribution (OoD) data. To verify this, we validate our method on a test dataset that is different from the training set and therefore unseen to the model. Fig. 7 displays the exemplary visualization results of the VAE trained on MNIST [54] but evaluated on dSprites [59]. Although the reconstruction quality is poor, the learned equivariance is still clearly effective as each transformation still operates as expected: scaling, rotation, and coloring transformations from top to bottom respectively.
### Results on Complex Real-world and Large-scale Datasets
Tables 3 and 4 compare the equivariance error of our method and the representative baselines on Falcon3D and Isaac3D, respectively. Notice that the values are much larger than on the previous datasets due to the increased image resolution. Our method still outperforms other baselines by a large margin
Figure 5: Exemplary visualization of switching transformations during the latent sample evolution.
Figure 6: Examples of combining different transformations simultaneously during the latent evolution.
and achieves reasonable equivariance error. Fig. 8 displays qualitative comparisons of our method against other baselines. Our method can precisely control the image transformations through the learned latent flows. _Overall, the above results demonstrate that our method can go beyond the toy setting and can be further applied to more complex real-world scenarios._
More visualization results of exemplary latent flows are provided in the supplementary material.
## 5 Related work
**Disentangled representation learning.** The idea of learning disentangled representations dates back to factorizing non-redundant input patterns [74], but it was first studied in its modern form by InfoGAN [13] and \(\beta\)-VAE [35]. InfoGAN [13] achieves disentanglement by maximizing the mutual information between a subset of latent dimensions and observations, while \(\beta\)-VAE [35] induces the factorized posterior \(q(\mathbf{z})\) by penalizing the Total Correlation (TC) through an extra hyper-parameter \(\beta>1\) controlling the strength of the KL divergence. Following InfoGAN, many attempts have been made to facilitate the discovery of semantically meaningful traversal directions through regularization [33; 42; 89; 34; 100; 66; 77; 90; 98; 84; 99; 78; 62]. The follow-up research on \(\beta\)-VAE mainly explored different methods to factorize the aggregated posterior [22; 25; 52; 46; 12; 44; 96; 23; 76; 58; 80; 28]. More recently, some works proposed to discover meaningful directions of diffusion models in the bottleneck of denoising networks [53; 64; 95; 41]. The previous literature mainly considers disentanglement as learning different transformations per dimension or per linear direction. Our method generalizes this concept to learning a distinct tangent bundle \(\nabla u^{k}\) that moves every latent sample via dynamic OT.
We see the most similar method to ours is the work of [79]. In [79], the authors also apply the gradient of a potential function to move the latent code. However, their potentials are restricted to obey wave equations, which do not really correspond to OT theory. Also, they do not consider the posterior evolution but instead use the loss \(||\mathbf{z}_{t}-\mathtt{Encode}(\mathbf{x}_{t})||^{2}\) to match the latent codes. By contrast, we propose a unified probabilistic generative model that encompasses the posterior flow that follows dynamic OT, the flow-like time evolution, and different supervision settings.
**Equivariant neural networks.** A function is said to be an equivariant map if it commutes with a given transformation, _i.e.,_\(T^{\prime}[f(x)]=f(T[x])\) where \(T\) and \(T^{\prime}\) represent operators in different domains. Equivariance has been considered a desired inductive bias for deep neural networks as this property can preserve geometric symmetries of the input space [38; 75; 56; 57; 1]. Analytically equivariant
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline
**Methods** & **Robot X-move** & **Robot Y-move** & **Camera Height** & **Object Scale** & **Lighting Intensity** & **Lighting Y-dir** & **Object Color** & **Wall Color** \\ \hline
**TVAE**[45] & 8441.65 & 8348.23 & 8495.31 & 8251.34 & 8291.70 & 8741.07 & 8456.78 & 8512.09 \\
**PoFlow**[79] & 6572.19 & 6489.35 & 6319.82 & 6188.59 & 6517.40 & 6712.06 & 7056.98 & 6343.76 \\
**Ours** & **8605.72** & **3999.33** & **4719.27** & **4809.78** & **4225.34** & **4998.84** & **5814.97** & **3870.601** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Equivariance error (\(\downarrow\)) on Isaac3D [61].
Figure 7: Equivariance generalization to unseen OoD input data. Here the model is trained on MNIST [54] but the latent flow is tested on dSprites [59].
networks typically enforce explicit symmetry to group transformations in neural networks [16; 17; 68; 93; 92; 85; 31; 39]. Another line of research proposed to directly learn approximate equivariance from data [21; 18; 49; 20; 45]. Our framework re-defines approximate equivariance by matching the latent probabilistic flow to the actual path of the given transformation in the image space.
**Optimal transport in deep learning.** There is a vast literature on OT theory and applications in various fields [87; 88]. Here we mainly highlight the relevant applications in deep learning. The pioneering work of [19] proposed a light-speed implementation of the Sinkhorn algorithm for fast computation of entropy-regularized Wasserstein distances, which opened the way for many differentiable Sinkhorn algorithm-based applications [32; 29; 14; 27; 51]. In generative modeling, the Wasserstein distance is often used to minimize the discrepancy between the data distribution and the model distribution [2; 81; 72; 65]. Inspired by the fluid mechanical interpretation of OT [4], some normalizing flow methods [69; 24; 48] considered regularizing the velocity fields to satisfy the HJ equation, thus matching the dynamic OT plan [94; 30; 83; 63; 60]. Our method applies PINNs [67] to directly model generalized HJ equations in the latent space and uses the gradient fields of learned potentials to generate latent flows, which also aligns to the theory of dynamic fluid mechanical OT.
## 6 Conclusion
In this paper, we introduce Flow Factorized Representation Learning which defines a set of latent flow paths that correspond to sequences of different input transformations. The latent evolution is generated by the gradient flow of learned potentials following dynamic optimal transport. Our setup re-interprets the concepts of both _disentanglement_ and _equivariance_. Extensive experiments demonstrate that our model achieves higher likelihoods on standard representation learning benchmarks while simultaneously achieving smaller equivariance error. Furthermore, we show that the learned latent transformations generalize well, allowing for flexible composition and extrapolation to new data.
## 7 Limitations
For flexibility and efficiency, we use PINN [67] constraints to model the HJ equation. However, such PDE constraints are approximate and not strictly enforced. Other PDE modeling approaches include accurate neural PDE solvers [40; 8; 70] or other improved PINN variants such as competitive PINNs [97] and robust PINNs [3]. Also, when inferring with observed \(k\), we change the posterior from \(q(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)\) to \(q(\bar{\mathbf{z}}|\mathbf{x}_{0},k)\) because we assume \(k\) contains sufficient information about the whole sequence. To keep the posterior definition of \(q(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)\), we need to make \(q(\mathbf{z}_{t})\) also a function of \(\mathbf{x}_{t}\). This can be achieved either by changing the potential to \(u(\mathbf{z}_{t-1},\mathbf{x}_{t},t-1)\) or modifying the external driving force to \(f(\mathbf{z}_{t-1},\mathbf{x}_{t},t-1)\). Nonetheless, we see these modifications would make the model less flexible than our current formulations as the element \(\mathbf{x}_{t}\) might be needed during inference.
## Acknowledgments and Disclosure of Funding
This work was supported by the MUR PNRR project FAIR (PE00000013) funded by the NextGenerationEU, by the PRIN project CREATIVE (Prot. 2020ZSL9F9), by the EU H2020 project AI4Media (No. 951911), and by the Bosch Center for Artificial Intelligence.
Figure 8: Qualitative comparison of our method against TVAE and PoFlow on Falcon3D and Isaac3D. |
2309.13173 | BenLLMEval: A Comprehensive Evaluation into the Potentials and Pitfalls
of Large Language Models on Bengali NLP | Large Language Models (LLMs) have emerged as one of the most important
breakthroughs in NLP for their impressive skills in language generation and
other language-specific tasks. Though LLMs have been evaluated in various
tasks, mostly in English, they have not yet undergone thorough evaluation in
under-resourced languages such as Bengali (Bangla). To this end, this paper
introduces BenLLM-Eval, which consists of a comprehensive evaluation of LLMs to
benchmark their performance in the Bengali language that has modest resources.
In this regard, we select various important and diverse Bengali NLP tasks, such
as text summarization, question answering, paraphrasing, natural language
inference, transliteration, text classification, and sentiment analysis for
zero-shot evaluation of popular LLMs, namely, GPT-3.5, LLaMA-2-13b-chat, and
Claude-2. Our experimental results demonstrate that while in some Bengali NLP
tasks, zero-shot LLMs could achieve performance on par, or even better than
current SOTA fine-tuned models; in most tasks, their performance is quite poor
(with the performance of open-source LLMs like LLaMA-2-13b-chat being
significantly bad) in comparison to the current SOTA results. Therefore, it
calls for further efforts to develop a better understanding of LLMs in
modest-resourced languages like Bengali. | Mohsinul Kabir, Mohammed Saidul Islam, Md Tahmid Rahman Laskar, Mir Tafseer Nayeem, M Saiful Bari, Enamul Hoque | 2023-09-22T20:29:34Z | http://arxiv.org/abs/2309.13173v2 | BenLLMEval: A Comprehensive Evaluation into the Potentials and Pitfalls of Large Language Models on Bengali NLP
###### Abstract
Large Language Models (LLMs) have emerged as one of the most important breakthroughs in natural language processing (NLP) for their impressive skills in language generation and other language-specific tasks. Though LLMs have been evaluated in various tasks, mostly in English, they have not yet undergone thorough evaluation in under-resourced languages such as Bengali (Bangla). In this paper, we evaluate the performance of LLMs for the low-resourced Bangla language. We select various important and diverse Bangla NLP tasks, such as abstractive summarization, question answering, paraphrasing, natural language inference, text classification, and sentiment analysis for zero-shot evaluation with ChatGPT, LLAMA-2, and Claude-2 and compare the performance with state-of-the-art fine-tuned models. Our experimental results demonstrate an inferior performance of LLMs for different Bangla NLP tasks, calling for further effort to develop better understanding of LLMs in low-resource languages like Bangla.
## 1 Introduction
From the introduction of word embeddings (Bengio et al., 2003) to the growth of language models (Rogers et al., 2020), Natural Language Processing (NLP) has witnessed revolutionary advancements over the decades. Particularly since the advent of pretrained language models (Devlin et al., 2019; Liu et al., 2019), these models have produced state-of-the-art results on a variety of NLP tasks with little task-specific fine-tuning (Laskar et al., 2020, 2022). Specialized Bangla pretrained models like BanglaBERT (Bhattacharjee et al., 2022) and BanglaT5 (Bhattacharjee et al., 2023) have demonstrated exciting progress in many of the downstream Bangla NLP tasks like natural language inference (NLI), question answering (Ekram et al., 2022), natural language generation (Akash et al., 2023) etc. However, one concern for these pretrained models is that they require fine-tuning using domain-specific large annotated datasets and Bangla has remained an underrepresented language in NLP literature (Joshi et al., 2020; Chakraborty et al., 2021; Chowdhury et al., 2021) despite being the sixth most spoken language in the world with over \(300\) million native speakers (Wikipedia, 2023).
Recent developments in large language models (LLMs), such as GPT-3 (Brown et al., 2020), Megatron (Shoeybi et al., 2019), Gopher (Rae et al., 2022), and OPT-175B (Zhang et al., 2022), have transformed the landscape and practices in NLP. These LLMs, with parameter sizes exceeding a hundred billion, are pre-trained on vast amounts of data and demonstrate strong generalization capability in few-shot and zero-shot learning which involves prompt-based learning avoiding parameter update for underlying architectures. A model with excellent zero-shot learning capabilities may reduce the need for huge annotated datasets by allowing the model to perform well on tasks that it was not trained on. However, because of their auto-regressive training objective, LLMs may frequently generate untruthful facts/toxic attitudes that diverge from the original input, preventing them from reaching widespread appeal (Ouyang et al., 2022).
To this end, ChatGPT is one of the latest developments in the GPT-3.5 series models from OpenAI1 that has mitigated limitations of the previous LLMs and gained widespread popularity. ChatGPT and other LLMs (Touvron et al., 2023; Anil et al., 2023) are trained in multiple languages although English possesses the majority of the training data. The combination of multilingual training data has enabled the LLMs to accept inputs and
generate responses in different languages. Though ChatGPT like LLMs has demonstrated strong zero-shot performance in various NLP tasks in English Laskar et al. (2023) and some other languages Lai et al. (2023) and domains Jahan et al. (2023), it has yet to be extensively investigated in the low-resourced Bangla language domain.
This paper presents a comprehensive evaluation of LLMs' performance on various NLP tasks in the Bangla language, including abstractive summarization, question answering (QA), paraphrasing, natural language inference (NLI), text classification, and sentiment analysis. The evaluation incorporates meticulously crafted prompts to ensure rigorous assessment and accurate analysis. For the purpose of evaluating the zero-shot performance of the LLMs (i.e., ChatGPT, LLaMA-2, and Claude-2) on benchmark Bangla NLP datasets for the aforementioned tasks, we conducted a comparative analysis with state-of-the-art models. To the best of our knowledge, this study represents the first attempt to assess the performance of the LLMs in the Bangla language for these specific tasks. Despite some exceptional cases, our experimental results can be summarized as follows:
* The zero-shot learning performance of LLMs is inferior compared to the state-of-the-art supervised models across the majority of the evaluated tasks in Bangla language. Given the substantial performance disparities observed, it is reasonable to deduce that LLMs, in their current form, are not suitable for serving as a comprehensive solution for diverse NLP tasks in Bangla.
* Considering LLMs like ChatGPT's remarkable proficiency in zero-shot learning within the English language and its subpar performance in low-resource languages like Bangla, this paper emphasizes the significance of investigating the intricate reasoning capabilities of LLMs in low-resource language contexts. Furthermore, it suggests the potential for the development of LLMs tailored to diverse low-resource language groups, thereby addressing the challenges associated with linguistic scarcity and paving the way for improved language understanding and generation models.
## 2 Methodology
The objective of our study is to assess the efficacy of LLMs in the context of NLP tasks specific to the Bangla language. We cover \(6\) diverse and important Bangla NLP tasks, i.e., Abstractive Summarization, Question Answering (QA), Paraphrasing, Natural Language Inference (NLI), Text Classification, and Sentiment Analysis, over \(7\) benchmark datasets.
Table 1: Summary of the evaluated Bangla NLP tasks, the corresponding datasets with their train/valid/test splits, and the zero-shot prompt used for each task: XL-Sum (Hasan et al., 2021) for abstractive summarization, SQuAD_Bangla (Bhattacharjee et al., 2022) for question answering, IndicParaphrase (Kumar et al., 2022) for paraphrasing, BNLI (Bhattacharjee et al., 2022) for natural language inference, the Soham Bengali News Classification dataset for text classification, and IndicSentiment (Doddapaneni et al., 2022) and SentNoB (Islam et al., 2021) for sentiment analysis.
For this purpose, we evaluate the ChatGPT (GPT-3.5), Claude-2, and LLaMA-2-13b-Chat (Touvron et al., 2023) models. As it is not necessary to fine-tune the base architecture for inference, we focus on designing a zero-shot learning setting for the models. As a reference for comparison, we also report the state-of-the-art (SOTA) performance of the supervised models for each task. We prepare a task instruction \(T\) for a given test sample \(X\) and concatenate the text in the test sample with the task instruction to construct the prompt \(P\). The prompt \(P\) is then passed as input to the LLM, which generates the response \(R\). A comprehensive description of the tasks, datasets, and prompts devised for evaluating each specific task is presented below and also summarized in Table 1. Prompts with sample inputs and outputs for each task are depicted in Table 3.
Footnote 2: [https://www.anthropic.com/index/claude-2](https://www.anthropic.com/index/claude-2)
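A minimal sketch of this zero-shot querying loop is given below, using the abstractive-summarization instruction as an example. The client call and helper name are illustrative assumptions (shown for the legacy `openai` Python interface); the exact prompts we used are those listed in Table 1.

```python
import openai  # assumes the legacy (<1.0) OpenAI Python client; set openai.api_key beforehand

# Illustrative task instruction T for Bangla abstractive summarization
TASK_INSTRUCTION = (
    "Please provide a one-sentence summary of the following Bangla text input. "
    "Note: Please do not provide anything other than the summarized Bangla output."
)

def zero_shot_response(sample_text: str, model: str = "gpt-3.5-turbo") -> str:
    """Build prompt P = T + X and return the model response R (no fine-tuning)."""
    prompt = f"{TASK_INSTRUCTION}\nInput: {sample_text}"
    completion = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # deterministic decoding for reproducible evaluation
    )
    return completion["choices"][0]["message"]["content"].strip()
```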
**Abstractive Summarization:** Summarization is the process of automatically generating a concise and coherent summary of a longer text document (Nayeem and Chali, 2017, 2018; Laskar et al., 2020), preserving the most important information while reducing the length (Nayeem et al., 2018). Given a text sequence \(S\), the summarization task aims to generate a concise summary of \(S\). In this paper, we evaluate the XL-Sum dataset (Hasan et al., 2021) that consists of 1 million manually annotated data samples from \(44\) languages. We only took the Bangla samples for evaluation.
**Question Answering:** For the question answering task, we evaluate the performance of LLMs on the SQuAD_Bangla dataset (Bhattacharjee et al., 2022). This dataset was constituted using two benchmark English datasets: SQuAD 2.0 (Williams et al., 2018) and TyDi QA (Clark et al., 2020). The objective of this task is to determine whether the answer to a given question \(Q\) can be inferred from the reference context \(C\). We provide the reference context along with the question, and ask LLMs whether they can infer the answer of the question from the given reference context.
**Paraphrasing:** Given a Bangla text sequence \(S\), the paraphrasing task aims to generate a paraphrase of input \(S\). To evaluate this task, we choose the Bangla samples from the IndicParaphrase dataset (Kumar et al., 2022), the largest Indic paraphrasing dataset across \(11\) Indic languages (Wikipedia, 2023) with around \(5.5\) million samples.
**Natural Language Inference:** Natural Language Inference (NLI) aims to predict the entailment/contradiction relations between two input sentences, i.e., a premise and a hypothesis. To evaluate LLMs for Bangla NLI, we utilize the BNLI dataset (Bhattacharjee et al., 2022) that provides annotated data with three categories, i.e., _Entailment, Contradiction,_ and _Neutral_, originally curated from the benchmark XNLI dataset (Conneau et al., 2018).
**Text Classification:** Text classification in NLP refers to the task of specifying a label or category for a given input text. We experimented with the _Soham Bengali News Classification_ dataset that is included in the IndicGLUE (Kakwani et al., 2020) benchmark, to evaluate the text classification capability of the LLMs in Bangla. The dataset contains six news categories i.e., _kolkata, state, national, international, sports,_ and _entertainment_. This dataset is part of the News Category Classification of the IndicGLUE benchmark.
**Sentiment Analysis:** We evaluated the sentiment analysis capability of the LLMs with two datasets, i.e., SentNoB (Islam et al., 2021) and IndicSentiment (Doddapaneni et al., 2022). The SentNoB dataset comprises informally written Bangla texts collected from public comments on news and videos on social media, covering a wide range of domains such as _politics_, _agriculture_, and _education_. The second dataset, IndicSentiment, was manually curated for the IndicXTREME benchmark and contains product reviews of multiple categories. We only used the Bangla samples from the dataset, out of the \(12\) Indic languages.
## 3 Results and Discussion
Based on our prompt-based zero-shot learning experiment, we report the performance of LLMs for different tasks and compare their performance with the current SOTA supervised fine-tuned models (see Table 2).
**Abstractive Summarization Evaluation:** In the XL-Sum dataset, we find that ChatGPT performs the best across all LLMs. We also find a strong similarity in the performance of ChatGPT and the fine-tuned mT5 model on the 'Rouge-1' and 'Rouge-L' metrics. However, there is a noticeable decline in ChatGPT's performance when considering 'Rouge-2'. A manual review of the ChatGPT responses demonstrates that ChatGPT typically produces output that is longer on average and has a higher word count than the gold label, containing more details about the input text. Moreover, we find that LLaMA-2-13b performs very poorly in abstractive summarization, as it ended up generating the summaries in English, resulting in very low ROUGE scores.
**Question Answering Evaluation:** Since LLMs frequently generate responses that are not exactly like the gold label but are nonetheless correct, our QA evaluation requires human intervention and considers two metrics: Exact Match (**EM**) and F1 score. We find that among the LLMs, zero-shot Claude-2 performs the best on this dataset, coming close to the SOTA result (79.34%) achieved by BanglaBERT. We also find that ChatGPT performs reasonably well on the F1 metric for this task on the SQuAD_Bangla dataset: it reaches a \(78.67\)% F1 score without any fine-tuning on the roughly \(120\)k training samples. However, LLM performance on the EM metric is below par, which is expected given that these are open-ended generative models that typically produce a wide variety of phrasings.
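For reference, the two QA metrics can be computed as in the following sketch (standard SQuAD-style scoring over whitespace tokens; the exact answer normalization used in our evaluation may differ):

```python
from collections import Counter

def exact_match(prediction: str, gold: str) -> int:
    # 1 if the stripped strings are identical, else 0
    return int(prediction.strip() == gold.strip())

def token_f1(prediction: str, gold: str) -> float:
    # Token-level F1 between the predicted and gold answers (whitespace tokens)
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```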
**Paraphrasing Evaluation:** For the paraphrasing task, we consider the BLEU score as the evaluation metric, which compares the \(n\)-grams of the LLM's paraphrased sentence to the \(n\)-grams of the five sentences used as references in the IndicParaphrase dataset. The paraphrase generation task also yielded a very low BLEU score, a phenomenon similar to what happened with the **EM** metric for the QA task.
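As an illustration, multi-reference BLEU against the five IndicParaphrase references can be computed with the `sacrebleu` package as sketched below (the placeholder strings are illustrative, not actual data):

```python
import sacrebleu

# One hypothesis per test sample; each reference stream holds one reference per sample.
hypotheses = ["generated Bangla paraphrase for sample 1", "generated Bangla paraphrase for sample 2"]
references = [
    ["reference 1 for sample 1", "reference 1 for sample 2"],  # first reference stream
    ["reference 2 for sample 1", "reference 2 for sample 2"],  # second reference stream
    # ... up to five reference streams in IndicParaphrase
]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```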
**Natural Language Inference Evaluation:** We observe that while ChatGPT achieves the best performance among the LLMs, it reaches only \(52.71\)% accuracy in the NLI task on the BNLI dataset, which is inferior to the SOTA BanglaBERT model's 83% accuracy. For this task, prompt tuning was carried out by phrasing the task description in several ways, explaining how to categorize a sample into _Entailment_, _Contradiction_, and _Neutral_. We observed that adding examples to the task description may marginally improve ChatGPT's performance. This indicates that LLMs' logical reasoning capabilities in Bangla are still lacking.
**Text Classification Evaluation:** LLMs performed poorly on the test set of the Soham News Article classification dataset from the IndicGLUE benchmark (Kakwani et al., 2020): the best-performing LLM on this dataset, Claude-2, achieved only \(20.76\)% overall accuracy, while LLaMA-2-13b-chat was the worst performer with an accuracy of \(1.06\)%. In contrast, the XLM-R model, which is the SOTA for this task, obtained an accuracy of \(87.60\)% on the test set. This large performance disparity may be caused by the target classes not being specified in the prompt: when the prompt explicitly lists the target classes, i.e., _state_, _kolkata_, _national_, _sports_, _entertainment_, and _international_, the overall classification accuracy increases significantly. For instance, ChatGPT's accuracy increases from \(18.36\)% to \(48.48\)% with such descriptive prompting.
**Sentiment Analysis Evaluation:** In the sentiment classification task, we observe that ChatGPT performed exceptionally well on the test set of the IndicSentiment dataset (Doddapaneni et al., 2022), attaining an impressive accuracy of \(90.20\)% and outperforming the SOTA IndicBERT (which achieved an \(89.3\)% accuracy score) by a small margin. To further evaluate the performance of LLMs in the sentiment analysis task, we utilize the test set of SentNoB (Islam et al., 2021). In this case, we use
| Model | XL-Sum (AS) R-1 | R-2 | R-L | SQuAD_Bangla (QA) EM / F1 | IndicParaphrase (PP) BLEU | BNLI (NLI) Acc. | SNAC (TC) Acc. | IndicSentiment (SA) Acc. | SentNoB (SA) P | R | F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ChatGPT | 27.11 | 8.07 | 20.86 | 44.85 / 87.67 | 2.81 | 52.71 | 18.36 | **90.20** | 57.79 | 54.56 | 53.17 |
| LLaMA-2-13b-chat | 0.51 | 0.17 | 0.42 | 31.73 / 67.95 | 0.01 | 42.37 | 1.06 | 69.16 | 48.39 | 48.49 | 48.43 |
| Claude-2 | 21.97 | 60.6 | 17.55 | 49.92 / 97.04 | 1.89 | 32.20 | 20.76 | 88.48 | 53.25 | 54.38 | 52.79 |
| mT5 | **28.32** | **11.40** | **24.23** | - | 4.45 | - | - | - | - | - | - |
| BanglaBERT | - | - | - | 72.63 / **79.34** | - | **82.8** | - | - | - | - | - |
| BanglaBERT | - | - | - | 72.43 / 78.40 | - | 80.95 | - | - | - | - | - |
| XLM-R (Large) | - | - | - | **73.15** / 79.06 | - | 82.4 | - | - | - | - | - |
| XLM-R | - | - | - | - | - | - | **87.60** | - | - | - | - |
| IndicBART | - | - | - | - | **11.57** | - | - | - | - | - | - |
| IndicBERT | - | - | - | - | - | - | 78.45 | 89.3 | - | - | - |
| mBERT | - | - | - | - | - | - | 80.23 | 72.0 | 49.85 | 56.43 | 52.79 |
| Bi-LSTM + Attn. (w/ FastText embed.) | - | - | - | - | - | - | - | - | 52.24 | 63.09 | 57.15 |
| Bi-LSTM + Attn. (w/ rand. init.) | - | - | - | - | - | - | - | - | 56.16 | **64.97** | **60.25** |

Table 2: Performance comparison between LLMs and SOTA models on Abstractive Summarization (AS), Question Answering (QA), Paraphrasing (PP), Natural Language Inference (NLI), Text Classification (TC), and Sentiment Analysis (SA). EM, Acc., P, R, and F1 denote Exact Match, Accuracy, Precision, Recall, and F1 score, respectively. Best results are **boldfaced**.
the precision score, recall score, and F1 score to evaluate the performance of LLMs. While ChatGPT had the highest precision score of \(57.70\)%, its recall score and F1 score are comparatively lower at \(54.56\)% and \(53.13\)%, respectively. The SOTA model Bi-LSTM with Attention, which utilized randomly initialized text embeddings, obtained a precision score of \(56.16\)%, a recall score of \(64.97\)%, and an F1 score of \(60.25\)%.
### Error Analysis
We found some interesting cases where LLMs failed to generate coherent responses given the input prompt. We discuss these cases for the overall best-performing LLM, ChatGPT, in the following:
#### 3.1.1 Abstractive Summarization
In the summarization task, we notice that ChatGPT's responses are highly inconsistent. While ChatGPT occasionally recognizes the context and provides a good summary of the key points, it frequently misses these details in the reference text and overstuffs the response. One example can be demonstrated as follows:
**Prompt:** Please provide an one-sentence summary of the following Bangla text input. The input will be a long Bangla paragraph, the output should be a short Bangla paragraph summarizing only the vital information of the input text in one sentence. Please make sure that the output contains the most essential statistical data. Note: Please do not provide anything other than the summarized Bangla output.
[MISSING_PAGE_POST]
no news about the fire as there was no untoward incident.] This is a matter of concern and presents additional evidence that ChatGPT is not currently suitable for serving as a universal problem solver in the Bangla language.
#### 3.1.4 Natural Language Inference
The confusion matrix obtained by evaluating ChatGPT for the NLI task is demonstrated in Figure 1. The matrix reveals that ChatGPT demonstrates high accuracy in predicting the _Contradiction_ and _Entailment_ labels, but encounters significant challenges in accurately predicting the _Neutral_ labels. Approximately 49% of the misclassifications arise when attempting to predict the _Neutral_ class. A thorough manual examination uncovers that ChatGPT often exhibits bias towards expressing a particular opinion polarity (_Contradiction_, _Entailment_) when dealing with logical relationships in Bangla, and it fails to appropriately recognize and convey neutrality even in cases where it is evident.
**Prompt:** Please determine the logical relationship between the given hypothesis and premise. The input will consist of two sentences written in the Bangla language. The first sentence represents the premise, while the second sentence represents the hypothesis. Your task is to determine whether the hypothesis is false (contradiction), true (entailment), or inconclusive (neutral) given the premise. Please output a number indicating the logical relationship between them: 0 for false (contradiction), 1 for true (entailment), and 2 for inconclusive (neutral). Note: Please avoid providing any additional information beyond the logical relationship.
**Premise:**
**Hypothesis:**
**Expected Response:**
1 (Entailment)
**ChatGPT Response:**
2 (Neutral)
#### 3.1.5 News Article Classification
Our experimental results show that ChatGPT misclassified the category _kolkata_ the most, failing to generate the correct response in 503 of the test set's 569 examples (88.4%). The subsequent category with the highest frequency of misclassification is _national_. ChatGPT was unable to accurately classify this particular category in 95 out of 175 instances, representing a misclassification rate of 54.20%. On the other hand, ChatGPT effectively identified the category labeled as _entertainment_ in 110 out of 130 occurrences, resulting in a success rate of 92.30%.
#### 3.1.6 Sentiment Analysis
The examples listed in Table 4 illustrate the error cases where ChatGPT misclassifies the sentiment of the given input. The first seven examples are from the IndicSentiment test set and the rest are from the SentNoB dataset. Notably, the _positive_ class exhibits the most frequent misclassification in both test sets, suggesting that ChatGPT still has a considerable way to go toward a complete understanding of the Bangla language. In the instances where the class is _neutral_, the examples are mostly simple statements with no sentiment word associated with them (Table 4, entries 11 and 12), yet ChatGPT classified them as either _negative_ or _positive_. Furthermore, ChatGPT demonstrates difficulties in capturing challenging Bangla sentiment words, e.g., the Bangla equivalents of _accommodating_, _advantage_, and _uncompromising_ (examples 2, 8, and 13 in Table 4).
Figure 1: Confusion matrix obtained by evaluating ChatGPT for the NLI task on the BNLI dataset

**Unexpected, out of range response:** In the tasks of text classification and sentiment analysis, the prompts are designed to include the target classes in order to identify any unexpected or non-class responses. Based on our experimental findings, we observe that the outputs generated by ChatGPT exhibit outcomes that deviate from the expected range of outputs.
**News Article Classification.**
**Prompt:** For the Bengali news article given in the input, identify the appropriate section title for the article from the following classes: kolkata, state, sports, national, entertainment, international. Note: Do not output any unnecessary words other than just the section title. The response should be in English language and should be one word. Input: [Bangla news article]
**Expected Response:** State
**ChatGPT Response:** Development
Our study reveals that ChatGPT, in particular, generated the word _Development_, which deviates from the prompted range of responses. The generated output is considered out of range as the prompt specifically instructs the model to produce text within the given categories of _kolkata_, _state_, _sports_, _national_, _entertainment_, and _international_. In addition, the results of our experiment indicate that ChatGPT produced 12 classes that are outside the target class range, thus demonstrating the model's inability to comply with the specified instructions for this specific task.
**Sentiment Analysis.**
**Prompt:** For the given Input, is the sentiment in the input positive or negative? Note: Please do not output anything other than the sentiment. Exclude any word like, Sentiment in the response.
Input: [Bangla input text]
## Acknowledgements
This research is supported by the Generic research funds of York University. We also thank Anthropic for providing free access to the Claude-2 model and Compute Canada for its computing resources.
|
2309.08235 | PRIEST: Projection Guided Sampling-Based Optimization For Autonomous
Navigation | Efficient navigation in unknown and dynamic environments is crucial for
expanding the application domain of mobile robots. The core challenge stems
from the nonavailability of a feasible global path for guiding
optimization-based local planners. As a result, existing local planners often
get trapped in poor local minima. In this paper, we present a novel optimizer
that can explore multiple homotopies to plan high-quality trajectories over
long horizons while still being fast enough for real-time applications. We
build on the gradient-free paradigm by augmenting the trajectory sampling
strategy with a projection optimization that guides the samples toward a
feasible region. As a result, our approach can recover from the frequently
encountered pathological cases wherein all the sampled trajectories lie in the
high-cost region. Furthermore, we also show that our projection optimization
has a highly parallelizable structure that can be easily accelerated over GPUs.
We push the state-of-the-art in the following respects. Over the navigation
stack of the Robot Operating System (ROS), we show an improvement of 7-13% in
success rate and up to two times in total travel time metric. On the same
benchmarks and metrics, our approach achieves up to 44% improvement over MPPI
and its recent variants. On simple point-to-point navigation tasks, our
optimizer is up to two times more reliable than SOTA gradient-based solvers, as
well as sampling-based approaches such as the Cross-Entropy Method (CEM) and
VPSTO. Codes: https://github.com/fatemeh-rastgar/PRIEST | Fatemeh Rastgar, Houman Masnavi, Basant Sharma, Alvo Aabloo, Jan Swevers, Arun Kumar Singh | 2023-09-15T08:12:48Z | http://arxiv.org/abs/2309.08235v1 | # PRIEST: Projection Guided Sampling-Based Optimization For Autonomous Navigation
###### Abstract
Efficient navigation in unknown and dynamic environments is crucial for expanding the application domain of mobile robots. The core challenge stems from the non-availability of a feasible global path for guiding optimization-based local planners. As a result, existing local planners often get trapped in poor local minima. In this paper, we present a novel optimizer that can explore multiple homotopies to plan high-quality trajectories over long horizons while still being fast enough for real-time applications. We build on the gradient-free paradigm by augmenting the trajectory sampling strategy with a projection optimization that guides the samples toward a feasible region. As a result, our approach can recover from the frequently encountered pathological cases wherein all the sampled trajectories lie in the high-cost region. Furthermore, we also show that our projection optimization has a highly parallelizable structure that can be easily accelerated over GPUs. We push the state-of-the-art in the following respects. Over the navigation stack of the Robot Operating System (ROS), we show an improvement of 7-13% in success rate and up to two times in total travel time metric. On the same benchmarks and metrics, our approach achieves up to 44% improvement over MPPI and its recent variants. On simple point-to-point navigation tasks, our optimizer is up to two times more reliable than SOTA gradient-based solvers, as well as sampling-based approaches such as the Cross-Entropy Method (CEM) and VPSTO. Codes: [https://github.com/fatemeh-rastgar/PRIEST](https://github.com/fatemeh-rastgar/PRIEST)
## I Introduction
Smooth and collision-free navigation in unknown and dynamic environments is crucial for the deployment of mobile robots in places like hospitals, warehouses, airports, etc. In these human-habitable environments, the layout of the static obstacles can change over time. Moreover, human movement can create additional dynamic obstacles obstructing the robot's movements. As a result, the prior computed global plan invariably becomes infeasible during the robot's motion and can no longer guide the local planner toward safe state-space regions. One possible workaround is to make the local planners themselves capable of planning over long horizons while exploring different trajectory homotopies in real time. Our work is geared towards imparting such capabilities to mobile robots.
In this paper, we consider optimization-based local planners because of their ability to satisfy constraints and produce smooth motions. Moreover, this approach also allows us to encode some desired higher-level behaviors through appropriately designed cost functions. There are two broad classes of approaches to solving optimization problems encountered during trajectory planning. On one end of the spectrum, we have gradient-based approaches [1, 2] that require the cost and constraint functions to be differentiable. Typically, these methods depend heavily on the user providing a good guess for the solution to initialize the optimization solver. However, finding good trajectory initializations is challenging in fast-changing environments. Some approaches, e.g., based on integer programming [3], can cope with poor initialization. But they are typically computationally too slow, especially in very cluttered environments with tens of obstacles.
On the other end of the spectrum, we have planners based on sampling-based optimizers such as Cross-Entropy Method (CEM) [4] and Covariance Matrix Adaptation-Evolution Strategy (CMA-ES) [5]. These optimizers perform a random sampling of the state-space trajectories to obtain a locally optimal solution. Due to this exploration property, they can often come up with better solutions than purely gradient-based approaches [6]. Moreover, sampling-based optimizers are easily parallelizable and thus can be accelerated over GPUs. However, one fundamental drawback is that these optimizers typically fail when all the sampled trajectories lie in the high-cost region (refer Fig.1(a-c)).
Our main motivation in this paper is to combine the benefits of both sampling-based and gradient-based approaches. Existing efforts in this direction are mostly restricted to using sampling-based optimizers for computing a good initialization trajectory, which is then subsequently fed to the gradient-based solver [7]. Our experiments in Section V-C3 show that such approaches do not work reliably in difficult benchmarks since they still inherit the issues prevalent with
Fig. 1: A comparison of CEM and our proposed approach. Fig.(a)-(c) shows how typical CEM (or any sampling-based optimizer) struggles when all the sampled initial trajectories lie in the high-cost/infeasible region. Our approach, PRIEST, embeds a projection optimizer within any standard sampling-based approach that pushes the samples toward feasible regions before evaluating their cost.
both individual classes of approaches. Moreover, at a more fundamental level, the sampling optimizer is unaware of the capabilities of the downstream gradient-based solver or how it refines the initial guess.
Our main idea in this paper is to use gradient-based approaches to improve the inner working of sampling-based optimization. In particular, we formulate a projection optimization to guide the sampling process toward low-cost regions at each iteration. As a result, our approach can recover from the pathological cases where all sampled trajectories are infeasible, e.g., due to violation of collision constraints(see Fig.1(d-f)). Our core innovations and their benefits are summarized below.
**Algorithmic Contribution:** We present Projection Guided Sampling Based Optimization (PRIEST). The key building block is a novel optimizer that can take a set of trajectories and project each of them onto the feasible set. This allows us to guide sampled trajectories toward feasible regions before evaluating their cost and subsequent refinement of the sampling distribution. We show how our projection optimizer can be effectively parallelized and accelerated over GPUs by reformulating the underlying collision and kinematic constraints into polar form and using an Alternating Minimization (AM) approach to the resulting problem. Finally, we show how our projection optimizer naturally integrates with decentralized variants of sampling-based optimizers [8], wherein multiple sampling distributions are refined in parallel to improve the optimality of the solution. See Section IV for a summary of contributions over authors' prior works.
**Improvement over the State-of-the-art (SOTA):** We show that PRIEST outperforms existing approaches in terms of success rate, time-to-reach the goal, and computation time, etc. In particular, we show at least 7% improvement over the ROS Navigation stack in success rate on the BARN dataset [9], while reducing the travel time by a factor of two. On the same benchmarks, our success rate is at least 35% better than SOTA local sampling-based optimizers like MPPI [10] and log-MPPI [11]. Additionally, we consider a point-to-point navigation task and compare PRIEST with the SOTA gradient-based solvers, ROCKIT [1](a collection of optimizers like IPOPT, ACADO, etc) and FATROP [2], and sampling-based methods CEM and VPSTO [5]. We show up to \(2\mathrm{x}\) improvement in success rate over these baselines. Furthermore, we show that PRIEST respectively has 17% and 23% higher success rates than the ROS Navigation stack and other SOTA approaches in dynamic environments.
## II Mathematical Preliminaries
_Symbols and Notations:_ Lower-case letters in regular and bold font represent scalars and vectors, respectively. Matrices are denoted by upper-case bold fonts. The symbol \(t\) denotes a time stamp and the superscript \(T\) denotes the transpose. The numbers of planning steps, obstacles, decision variables, and samples are denoted by \(n_{p},n_{o},n_{v}\) and \(N_{b}\), respectively. The left superscript \(k\) denotes the trajectory optimizer's iteration. The remaining symbols are defined at their first place of use.
### _Problem Formulation_
#### II-A1 Differential Flatness
We leverage differential flatness to make our approach applicable to a large class of systems such as wheeled mobile robots, quadrotors, etc. Specifically, we assume \(\mathbf{u}\!=\!\mathbf{\Phi}(x^{(q)}(t),y^{(q)}(t),z^{(q)}(t))\): the control inputs can be obtained through some analytical mapping \(\mathbf{\Phi}\) of \(q^{th}\) level derivatives of the position-level trajectory. For example, for a quadrotor, the pitch, roll, yaw angles, and thrust can be analytically expressed in terms of axis-wise accelerations.
#### II-A2 Trajectory Optimization
We are interested in solving the following 3D trajectory optimization:
\[\min_{x(t),y(t),z(t)}\ c_{1}\big(x^{(q)}(t),y^{(q)}(t),z^{(q)}(t)\big), \tag{1a}\]
\[\big(x^{(q)}(t),y^{(q)}(t),z^{(q)}(t)\big)\big|_{t=t_{0}}=\mathbf{b}_{0},\qquad \big(x^{(q)}(t),y^{(q)}(t),z^{(q)}(t)\big)\big|_{t=t_{f}}=\mathbf{b}_{f}, \tag{1b}\]
\[\dot{x}^{2}(t)+\dot{y}^{2}(t)+\dot{z}^{2}(t)\leq v_{max}^{2},\qquad \ddot{x}^{2}(t)+\ddot{y}^{2}(t)+\ddot{z}^{2}(t)\leq a_{max}^{2}, \tag{1c}\]
\[s_{min}\leq(x(t),y(t),z(t))\leq s_{max}, \tag{1d}\]
\[-\frac{(x(t)-x_{o,j}(t))^{2}}{a^{2}}-\frac{(y(t)-y_{o,j}(t))^{2}}{a^{2}}-\frac{(z(t)-z_{o,j}(t))^{2}}{b^{2}}+1\leq 0, \tag{1e}\]
where \((x(t),y(t),z(t))\) and \((x_{o,j}(t),y_{o,j}(t),z_{o,j}(t))\) respectively denote the robot and the \(j^{th}\) obstacle position at time \(t\). The function \(c_{1}(.)\) is defined in terms of derivatives of the position-level trajectories and can encompass commonly used penalties on accelerations, velocities, curvature, etc. We can also leverage differential flatness to augment control costs in \(c_{1}(.)\) as well. The affine inequalities (1d) model bounds on the robot workspace. The vectors \(\mathbf{b}_{0}\) and \(\mathbf{b}_{f}\) in (1b) represent the initial and final values of boundary condition on the \(q^{th}\) derivative of the position-level trajectory. In our formulation, \(q\!=\!\{0,1,2\}\). Inequalities (1c) denotes the velocity and acceleration bounds with their respective maximum values being \(v_{max}\) and \(a_{max}\). In (1e), we enforce collision avoidance, assuming obstacles are modeled as axis-aligned ellipsoids with dimensions \((a,a,b)\).
**Remark 1**.: _The cost functions \(c_{1}(.)\) need not be convex, smooth or even have an analytical form in our approach._
#### II-A3 Trajectory Parametrization and Finite Dimensional Representation
To ensure smoothness in the trajectories, we parametrize the optimization variables \((x(t),y(t),z(t))\) as
\[\begin{bmatrix}x(t_{1})\\ \vdots\\ x(t_{n_{p}})\end{bmatrix}=\mathbf{P}\,\mathbf{c}_{x},\qquad \begin{bmatrix}y(t_{1})\\ \vdots\\ y(t_{n_{p}})\end{bmatrix}=\mathbf{P}\,\mathbf{c}_{y},\qquad \begin{bmatrix}z(t_{1})\\ \vdots\\ z(t_{n_{p}})\end{bmatrix}=\mathbf{P}\,\mathbf{c}_{z} \tag{2}\]
where \(\mathbf{P}\) is a matrix created using polynomial basis functions that are dependent on time and \(\mathbf{c}_{x},\mathbf{c}_{y},\mathbf{c}_{z}\) represent the coefficients of the polynomial. The expression remains applicable for derivatives by utilizing \(\dot{\mathbf{P}}\) and \(\ddot{\mathbf{P}}\).
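A minimal sketch of this parametrization, using a simple monomial basis purely for illustration (the actual basis functions and their time scaling may differ), is given below:

```python
import numpy as np

n_p, degree = 100, 10                       # planning steps, polynomial degree
t = np.linspace(0.0, 10.0, n_p)             # time stamps t_1 ... t_{n_p}

# Basis matrices: each column is one basis function evaluated at all time stamps
P = np.stack([t**k for k in range(degree + 1)], axis=1)                                # positions
Pdot = np.stack([k * t**max(k - 1, 0) for k in range(degree + 1)], axis=1)             # velocities
Pddot = np.stack([k * (k - 1) * t**max(k - 2, 0) for k in range(degree + 1)], axis=1)  # accelerations

c_x = np.random.randn(degree + 1)           # coefficients for the x-coordinate
x_traj, xdot_traj = P @ c_x, Pdot @ c_x     # sampled trajectory and its first derivative
```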
By incorporating the parametrized optimization variables stated in (2) and compact representation of variables, we can reframe the optimization problem (1a)-(1e) as follows:
\[\min_{\mathbf{\xi}}c_{1}(\mathbf{\xi}) \tag{3a}\] \[\mathbf{A}\mathbf{\xi}=\mathbf{b}_{eq}\] (3b) \[\mathbf{g}(\mathbf{\xi})\leq\mathbf{0}, \tag{3c}\]
where \(\boldsymbol{\xi}=\left[\mathbf{c}_{x}^{T}\,\mathbf{c}_{y}^{T}\,\mathbf{c}_{z}^{T}\right]\). With a slight abuse of notation, we have now used \(c_{1}(.)\) to denote a cost function dependent on \(\boldsymbol{\xi}\). The matrix \(\mathbf{A}\) is block diagonal, where each block on the main diagonal consists of \(\left[\mathbf{P}_{0}\ \dot{\mathbf{P}}_{0}\ \ddot{\mathbf{P}}_{0}\ \mathbf{P}_{-1}\right]\). The subscripts \(0\) and \(-1\) signify the first and last rows of the respective matrices and pertain to the initial and final boundary constraints. The vector \(\mathbf{b}_{eq}\) is simply the stack of \(\mathbf{b}_{0}\) and \(\mathbf{b}_{f}\). The function \(\mathbf{g}\) contains all the inequality constraints (1c)-(1e).
## III Main Algorithmic Results
This section presents our main algorithmic contributions. An overview of our approach is shown in Fig.2. The main differentiating factor from existing baselines lies in the insertion of the projection optimizer between the sampling and cost evaluation block. The projection optimizer aids in constraint handling by pushing the sampled trajectories toward feasible regions. In this sense, our approach combines the benefits of both a gradient-free approach and those based on differentiable cost/constraint functions. As shown in Appendix VII, our projection block, in particular, leverages tools from convex optimization.
We next present our main building block: the projection optimizer, followed by its integration into a sampling-based optimizer.
### _Projection Optimization_
Consider the following optimization problem
\[\min_{\overline{\boldsymbol{\xi}}_{i}}\ \frac{1}{2}\left\|\overline{\boldsymbol{\xi}}_{i}-\boldsymbol{\xi}_{i}\right\|_{2}^{2},\qquad i=1,2,\ldots,N_{b} \tag{4}\]
\[\mathbf{A}\,\overline{\boldsymbol{\xi}}_{i}=\mathbf{b}_{eq},\qquad\mathbf{g}(\overline{\boldsymbol{\xi}}_{i})\leq\mathbf{0} \tag{5}\]
The cost function (4) aims to minimally modify the \(i^{th}\) sampled trajectory \(\boldsymbol{\xi}_{i}\) to \(\boldsymbol{\overline{\xi}}_{i}\) in order to satisfy the equality and inequality constraints. In Appendix VII, we show that for a certain class of constraint functions \(\mathbf{g}\) formed with quadratic and affine constraints, optimization (4)-(5) can be reduced to the fixed-point iteration of the following form:
\[{}^{k+1}\overline{\boldsymbol{\xi}}_{i}=\arg\min_{\overline{\boldsymbol{\xi}}_{i}}\ \frac{1}{2}\left\|\overline{\boldsymbol{\xi}}_{i}-\boldsymbol{\xi}_{i}\right\|_{2}^{2}+\frac{\rho}{2}\left\|\mathbf{F}\,\overline{\boldsymbol{\xi}}_{i}-{}^{k}\mathbf{h}\right\|_{2}^{2}+{}^{k+1}\boldsymbol{\lambda}_{i}^{T}\,\overline{\boldsymbol{\xi}}_{i}, \tag{6a}\]
\[\mathbf{A}\,\overline{\boldsymbol{\xi}}_{i}=\mathbf{b}_{eq} \tag{6b}\]
In (6a)-(6b), \(\mathbf{F}\) represents a constant matrix and \(\mathbf{h}\) is some closed-form analytical function. The vector \({}^{k+1}\boldsymbol{\lambda}_{i}\) is the Lagrange multiplier at iteration \(k+1\) of the projection optimization. We derive these entities in Appendix VII. The main computational burden of projection optimization stems from solving the QP (6a). However, since there are no inequality constraints in (6b), the QP essentially boils down to an affine transformation of the following form:
\[\begin{bmatrix}{}^{k+1}\overline{\boldsymbol{\xi}}_{i}\\ {}^{k+1}\boldsymbol{\nu}_{i}\end{bmatrix}=\begin{bmatrix}\mathbf{I}+\rho\,\mathbf{F}^{T}\mathbf{F}&\mathbf{A}^{T}\\ \mathbf{A}&\mathbf{0}\end{bmatrix}^{-1}\begin{bmatrix}\boldsymbol{\xi}_{i}-{}^{k+1}\boldsymbol{\lambda}_{i}+\rho\,\mathbf{F}^{T}\,{}^{k}\mathbf{h}\\ \mathbf{b}_{eq}\end{bmatrix}, \tag{7}\]
where \({}^{k+1}\boldsymbol{\nu}_{i}\) is the multiplier associated with the equality constraints (6b).
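The following sketch is our own illustration of this step: it solves the equality-constrained QP of (6a)-(6b) through its KKT system for a whole batch of samples at once, assuming \(\mathbf{F}\), \({}^{k}\mathbf{h}\), and \({}^{k+1}\boldsymbol{\lambda}_{i}\) have already been formed as in Appendix VII (the actual implementation is vectorized in JAX on the GPU):

```python
import numpy as np

def project_batch(xi, A, b_eq, F, h, lam, rho):
    """Solve min 0.5*||xi_bar - xi||^2 + 0.5*rho*||F xi_bar - h||^2 + lam^T xi_bar
    s.t. A xi_bar = b_eq, for a batch of samples xi with shape (N_b, n_var)."""
    n_var, n_eq = xi.shape[1], A.shape[0]
    Q = np.eye(n_var) + rho * F.T @ F
    # The KKT matrix is shared across the batch, so it is formed only once
    kkt = np.block([[Q, A.T], [A, np.zeros((n_eq, n_eq))]])
    rhs = np.concatenate(
        [xi - lam + rho * (F.T @ h.T).T,      # (N_b, n_var) primal right-hand side
         np.tile(b_eq, (xi.shape[0], 1))],    # (N_b, n_eq) equality right-hand side
        axis=1,
    )
    sol = np.linalg.solve(kkt, rhs.T).T       # solve all KKT systems in one call
    return sol[:, :n_var]                     # projected trajectories xi_bar
```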
\(c_{aug}\) values. The final output of the optimizer is the sample from the \(ElliteSet\) with the lowest \(c_{aug}\).
#### III-B2 Updating the Sampling Distribution
There are several ways to perform the distribution update in line \(13\) of Alg. 1. We adopt the following MPPI-like update rule.
\[{}^{l+1}\boldsymbol{\mu}=(1-\sigma)\,{}^{l}\boldsymbol{\mu}+\sigma\,\frac{\sum_{m\in C}c_{m}\,\overline{\boldsymbol{\xi}}_{m}}{\sum_{m\in C}c_{m}}, \tag{9a}\]
\[{}^{l+1}\boldsymbol{\Sigma}=(1-\sigma)\,{}^{l}\boldsymbol{\Sigma}+\sigma\,\frac{\sum_{m\in C}c_{m}\,(\overline{\boldsymbol{\xi}}_{m}-{}^{l+1}\boldsymbol{\mu})(\overline{\boldsymbol{\xi}}_{m}-{}^{l+1}\boldsymbol{\mu})^{T}}{\sum_{m\in C}c_{m}}, \tag{9b}\]
\[c_{m}=\exp\big(-\gamma^{-1}(c_{aug}(\overline{\boldsymbol{\xi}}_{m})-\delta)\big), \tag{9c}\]
where the scalar constant \(\sigma\) is the so-called learning rate. The set \(C\) consists of the top \(N_{elite}\) selected trajectories (line 11). The constant \(\gamma\) specifies the sensitivity of the exponentiated cost function \(c_{aug}(\boldsymbol{\bar{\xi}}_{m})\) for top selected trajectories. \(\delta=\min c_{aug}(^{l}\boldsymbol{\bar{\xi}}_{m})\) is defined to prevent numerical instability.
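A plain NumPy illustration of the update (9a)-(9c) over the elite set is sketched below; samples with lower \(c_{aug}\) receive exponentially larger weights:

```python
import numpy as np

def update_distribution(mu, Sigma, elite_xi, elite_cost, sigma_lr, gamma):
    """MPPI-like refit of the sampling distribution from the elite set.

    elite_xi: (N_elite, n_var) projected trajectories; elite_cost: (N_elite,) c_aug values.
    """
    delta = elite_cost.min()                               # shift for numerical stability
    w = np.exp(-(elite_cost - delta) / gamma)              # low cost -> high weight
    w = w / w.sum()
    mu_new = (1 - sigma_lr) * mu + sigma_lr * (w @ elite_xi)
    diff = elite_xi - mu_new
    Sigma_new = (1 - sigma_lr) * Sigma + sigma_lr * (diff.T * w) @ diff
    return mu_new, Sigma_new
```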
**Remark 2**.: _Alg.1 is agnostic to the distribution update rule. For example, (9b) can be replaced with CMA-ES style update and our initial experiments in this regard have shown good results._
### _Decentralized PRIEST (D-PRIEST)_
In this subsection, we build upon [8] and propose a decentralized variant of our projection-guided sampling-based optimizer. As shown in Fig.4, in the decentralized variant, we instantiate several optimizers in parallel and choose the lowest cost-optimal solution among these. As a result, such variants are shown to be more capable of escaping poor local minima. Our key objective in this section is to show that our projection optimizer naturally integrates into decentralized optimizers built along the lines of [8].
Fig.4 shows our proposed approach. We initialize \(M\) different Gaussian distributions \({}^{l}\boldsymbol{\mu}_{j},{}^{l}\boldsymbol{\Sigma}_{j}\) at \(l=0\). We sample \(\frac{N_{v}}{M}\) samples of \(\boldsymbol{\xi}\) from each of these distributions. The sampled \(\boldsymbol{\xi}_{ij}\) (\(i^{th}\) sample from \(j^{th}\) Gaussian ) are then stacked row-wise to form a matrix. Importantly, such a construction allows us to easily track which row of the matrix corresponds to samples from which of the \(M\) Gaussian distributions. The projection optimizer then simultaneously guides all the samples toward feasible regions. The output from the projection is then separated back into \(M\) sets on which the cost functions are evaluated in parallel. We then update the sampling distribution in parallel based on the cost values. Finally, after \(l\) iterations, the optimal trajectory from each optimizer instance is compared and the one with the lowest cost is selected as the optimal solution.
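The decentralized loop can be sketched as follows, where `sample_gaussian`, `project_batch`, and `augmented_cost` are stand-ins for the sampling, projection, and cost-evaluation blocks described above (illustrative only; the per-instance distribution updates of (9a)-(9c) are omitted):

```python
import numpy as np

def d_priest_step(mus, Sigmas, n_total, sample_gaussian, project_batch, augmented_cost):
    """One D-PRIEST iteration over M parallel distributions; returns per-instance bests."""
    M = len(mus)
    # Sample n_total/M trajectories from each Gaussian and stack them row-wise
    xi = np.vstack([sample_gaussian(mus[j], Sigmas[j], n_total // M) for j in range(M)])
    xi_bar = project_batch(xi)                 # guide all samples toward feasibility at once
    costs = augmented_cost(xi_bar)             # evaluate c_aug on the projected samples
    # Split back into M sets and keep the best trajectory of each optimizer instance
    xi_sets = np.split(xi_bar, M)
    cost_sets = np.split(costs, M)
    best_per_instance = [s[c.argmin()] for s, c in zip(xi_sets, cost_sets)]
    return xi_sets, cost_sets, best_per_instance
```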
## IV Connections to Existing Works
**Connection to CEM-GD:** Alternative ways of combining sampling- and gradient-based optimization were recently presented in [4, 13]. In these two works, the projection at line 5 of Alg.1 is replaced with a gradient step of the form \(\boldsymbol{\bar{\xi}}_{i}=\boldsymbol{\xi}_{i}-\sigma\nabla_{\boldsymbol{\xi}}c_{1}\), for some learning rate \(\sigma\). Our approach PRIEST improves upon [4, 13] in two main aspects. First, it can be applied to problems with non-smooth and non-analytical cost functions. Second, the gradient-descent update can be computationally slow as it relies on taking small steps toward the optimal solution. In contrast, the projection optimizer in Alg.1 leverages convex optimization, specifically quadratic programming, to ensure faster convergence.
**Exploration Strategy:** Sampling-based optimizers like MPPI [10] and its variants [11, 14] explore by injecting random perturbations into the control inputs to obtain a trajectory distribution. PRIEST, on the other hand, injects perturbations in the parameter space (polynomial coefficients), leading to one core advantage. We can inject larger perturbations into the parameter space, which helps in better exploration over longer horizons (see Fig.5(a)-(b)). Moreover, the projection optimizer ensures that the trajectory distribution satisfies boundary constraints and is pushed toward the feasible region. In contrast, increasing the covariance of control perturbation has been shown to make MPPI diverge [11].

Fig. 4: Decentralized variant of PRIEST inspired by [8], wherein we instantiate and maintain different \(M\) Gaussian distributions in parallel to counter poor local minima. An important thing to note is that our projection optimizer naturally fits into the decentralized structure.
**Improvement over Author's Prior Work [15]** PRIEST builds on our prior work [15] that used projection-augmented sampling for visibility-aware navigation. Our current work targets a much broader scope of navigation problems with potentially non-smooth costs. On the algorithmic side, we extended the projection optimizer of [15] to 3D (see Appendix VII) and improved the distribution update rule to account for actual cost values. Moreover, the decentralized variant of PRIEST is also a major contribution over [15]. On the experimental side, we present a more elaborate benchmarking with several existing methods on both open-source as well as custom data sets.
## V Validation and Benchmarking
### _Implementation Details_
We developed Alg.1, PRIEST, in Python using the JAX [16] library as our GPU-accelerated algebra backend. The simulation framework for our experiments was built on top of ROS [17] and utilized the Gazebo physics simulator. All benchmarks were executed on a Lenovo Legion 7 laptop equipped with an Intel Core i7 processor and an Nvidia RTX 2070 GPU. Additionally, we used the open3d library to downsample point-cloud data [18]. For all the benchmarks, we chose \(N=13\), \(N_{b}=110\), \(N_{proj}=80\), and \(N_{elite}=20\).
We develop a Model Predictive Control (MPC) pipeline on top of our optimizer. For each MPC iteration, we employ LIDAR scanning to infer obstacle locations. More specifically, we take each LIDAR point as an obstacle and employ voxel-downsampling through the open3d library to keep the number of obstacles tractable.
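This pre-processing step can be sketched as follows, assuming a recent open3d version in which `voxel_down_sample` is a `PointCloud` method; the voxel size shown is only an example value:

```python
import numpy as np
import open3d as o3d

def lidar_to_obstacles(points_xyz: np.ndarray, voxel_size: float = 0.3) -> np.ndarray:
    """Down-sample raw LIDAR returns so that each surviving point is treated as an obstacle."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)       # (N, 3) array of LIDAR returns
    downsampled = pcd.voxel_down_sample(voxel_size=voxel_size)
    return np.asarray(downsampled.points)                     # obstacle centers for the MPC step
```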
#### V-A1 Baselines and Benchmarks
We compared our approach with different baselines in three sets of benchmarks. **Comparison on BARN Dataset [9]:** This benchmark requires a mobile robot to iterative plan in a receding horizon fashion to navigate an obstacle field. We used the BARN dataset that contains 300 environments with varied levels of complexity, specifically designed to create local-minima traps for the robot. In this benchmark, we evaluate our approach against DWA [19], TEB [20] implemented in the ROS navigation stack and MPPI [10], and log-MPPI [11]. The first two baselines are still considered the workhorse of robot navigation in both industry and academia while the latter two are the SOTA gradient-free approaches for planning. No prior map was made available for any of the approaches. As a result, all approaches had to rely on their on-the-fly planning with a local cost map (or point cloud for our approach) for navigation. TEB and DWA used a combination of graph-search and optimization while MPPI and log-MPPI are purely optimization-based approaches. We used a holonomic robot modeled as a double-integrator system for the comparisons. Consequently, the cost function (\(c_{1}\)) used in our approach is given by a combination of penalty on the magnitude of the acceleration, curvature, and path-following error. The first two terms are smooth and differentiable and can be described in terms of axis-wise accelerations and velocities. The last term does not have an analytical form as it required computing the projection of a sampled trajectory way-point onto the straight-line path to the goal.
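As an illustration of the path-following term, the sketch below (our own illustrative implementation, not the exact cost used in the experiments) computes, for each trajectory waypoint, the distance to its projection onto the straight-line path from start to goal:

```python
import numpy as np

def path_following_error(waypoints: np.ndarray, start: np.ndarray, goal: np.ndarray) -> float:
    """Mean distance of (n_p, 2) waypoints to the straight start-goal segment."""
    d = goal - start
    seg_len_sq = float(np.dot(d, d)) + 1e-12
    # Projection parameter of each waypoint onto the segment, clipped to [0, 1]
    t = np.clip((waypoints - start) @ d / seg_len_sq, 0.0, 1.0)
    closest = start + t[:, None] * d
    return float(np.linalg.norm(waypoints - closest, axis=1).mean())
```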
**Point to Point Navigation with Differentiable Cost:** In this benchmark, we considered the task of generating a single trajectory between a start and a goal location. The cost function \(c_{1}\) consisted of smoothness terms penalizing the norm of the acceleration. For comparison, we considered SOTA gradient-based optimizers ROCKIT [1] and FATROP [2] and sampling-based optimizers CEM, and VPSTO [5]. We designed various cluttered environments wherein obstacles are placed randomly in 2D and 3D spaces (see Fig.6). For the 2D comparisons, our experiments involved 117 trials conducted in a cluttered environment featuring 50 randomly placed obstacles, each with a radius of \(0.4\) m. For the 3D comparisons, we conducted \(100\) trials in a cluttered room with dimensions of \(7\times 7\times 3\) units and included \(25\) randomly positioned obstacles, each with a radius of \(0.68\) m.
**Comparison in a Dynamic Environment:** We also compare our approach PRIEST with CEM, log-MPPI, MPPI, TEB, and DWA in dynamic environments. The cost function used was the same as that used for the BARN dataset. In this benchmark, we introduced ten obstacles, each with a velocity of \(0.1\) m/s, moving in the opposite direction of the robot. These obstacles have a radius of \(0.3\) m. We run simulations over 30 different configurations. The start and final points remain constant for all the configurations, while the obstacle positions are varied for each configuration. The maximum velocity for the robot was fixed at \(1\) m/s.
#### V-A2 Metrics
We utilize the following metrics for benchmarking against other baselines:
**Success Rate**: A run is considered successful when the robot approaches the final point within a 0.5m radius without any collision. The success rate is calculated as the ratio of the total number of successful runs to the overall number of runs.
**Travel Time**: This refers to the duration it takes for the robot to reach the vicinity of the goal point.
**Computation Time**: This metric quantifies the time required to calculate a solution trajectory.
### _Qualitative Results_
#### V-B1 A Simple Benchmark
In Fig.1, our objective is to contrast the behavior of CEM (a-c) and our approach PRIEST (d-f) in a scenario wherein all the initial sampled trajectories are placed within a high-cost/infeasible region. To show this, we construct an environment with a large obstacle of radius \(7\) m. The task is to obtain a collision-free trajectory from \((1,7)\) to \((20,13)\) while respecting maximum velocity and acceleration limits of \(2.8\) and \(3.3\), respectively. As observed, our optimizer effectively pushes the sampled trajectories out of the infeasible region. In contrast, the CEM samples persistently remain within the infeasible region: the CEM covariance update alone fails to move the samples out of the infeasible region.
#### V-B2 Receding Horizon Planning on Barn Dataset
In Fig. (5), we show a qualitative comparison between TEB and our approach PRIEST within one of the BARN environments.
Both methods can search over multiple homotopies but differ in their process. While TEB uses graph search, PRIEST relies on the stochasticity of the sampling process guided through the projection optimizer. As a result, the latter can search over a longer horizon and wider state space. It is worth pointing out that increasing the planning horizon of TEB dramatically increases the computation time and thus degrades the overall navigation performance instead of improving it. We present the quantitative comparison with TEB and other baselines in the next subsection.
#### V-B3 Point-to-Point Navigation Benchmark
Fig. 6 shows trajectories generated by PRIEST alongside those generated by the gradient-based optimizers ROCKIT and FATROP, and the sampling-based optimizers CEM and VPSTO. For the particular 2D example shown, both PRIEST and VPSTO successfully generated collision-free trajectories, while the other baselines failed. For the shown 3D environment, only PRIEST and CEM achieved collision-free trajectories. The PRIEST trajectories were also smoother than those of the other methods. In the next subsection, we present the quantitative statistical trends for all the baselines across different randomly generated environments.
#### V-B2 Comparison with Additional Gradient-Based and Sampling-based Optimizers
Table II compares the performance of PRIEST with all the baselines in 2D and 3D cluttered environments (recall Fig. 6). ROCKIT and FATROP were initialized with simple straight-line trajectories between the start and the goal that were typically not collision-free. Due to conflicting gradients from the neighboring obstacles, both these methods often failed to obtain a collision-free trajectory. Interestingly, the sampling-based approaches did not fare much better, as both CEM and VPSTO reported a large number of failures. We attribute the failures of VPSTO and CEM to two reasons. First, most of the sampled trajectories for both CEM and VPSTO fell into the high-cost/infeasible area, and as discussed before, this creates a pathologically difficult case for sampling-based optimizers. Second, both CEM and VPSTO roll constraints into the cost as penalties and can be very sensitive to the tuning of the individual cost terms. In summary, the success-rate trend of Table II shows how both gradient-based and sampling-based approaches struggle in highly cluttered environments. Consequently, a strong potential solution is to use an approach like PRIEST that can guide trajectory sampling with convex optimization toward constraint satisfaction.
Where the reported computation times are high, we note that the original author implementations that we use may not have been optimized for computation speed.
#### V-B3 Combination of Gradient-Based and Sampling-based Optimizers
A simpler alternative to PRIEST is to simply use a sampling-based optimizer to compute a good initial guess for a gradient-based solver [7]. However, such an approach is only suitable for problems with differentiable costs. Nevertheless, we evaluate this alternative for the point-to-point benchmark of Fig. 6. We used CEM to compute an initial guess for ROCKIT and FATROP. The results are summarized in Table III. As can be seen, while the performance of both ROCKIT and FATROP improved in 2D environments, the success rate of the latter decreased substantially in the 3D variant. The main reason for this conflicting trend is that CEM (or any initial-guess generator) is unaware of the exact capabilities of the downstream gradient-based optimizer. It should also be noted that the trend of Table III is not general, and on some other problems FATROP might outperform ROCKIT. This unreliability forms the core motivation behind PRIEST, which outperforms all ablations in Table III. By embedding the projection optimizer within the sampling process itself (refer to Alg. 1) and augmenting the cost function with the projection residual, we ensure that the sampling and projection complement each other.
#### V-B4 Benchmarking in Dynamic Environments
Table IV presents the results obtained from the experiments in dynamic environments. With a success rate of 83%, our method outperforms the other approaches. Furthermore, our method shows competitive efficiency with a mean travel time of 11.95 seconds. Overall, the results show the superiority of our approach in dealing with the complexities of cluttered dynamic environments, making it a promising solution for real-world applications in human-habitable environments.
#### V-B5 Scaling of D-PRIEST
Fig.(8) shows the linear scaling of the per-iteration time of D-PRIEST with respect to the number of distributions. Typically, solutions are obtained in around 20 iterations. Thus, around 4 parallel distributions can be maintained under real-time constraints.
## VI Conclusions and Future Work
We presented PRIEST, an important contribution towards leveraging the benefits of both sampling-based optimization and convex optimization. In particular, we used the latter to derive a GPU-accelerated projection optimizer that guides the trajectory sampling process. We also showed how the same projection set-up can be easily embedded within decentralized variants of sampling-based optimizers wherein multiple parallel distributions are maintained at each iteration. We performed extensive benchmarking showcasing the benefits of PRIEST over SOTA approaches in the context of autonomous navigation in unknown environments. Our future efforts are focused on extending PRIEST to high-dimensional manipulation problems.
## VII Appendix
**Reformulating constraints:** We reformulate collision avoidance inequality constraints (1e) as follows:
\[\mathbf{f}_{o,j}=\begin{cases}x(t)-x_{o,j}(t)-a\,\mathrm{d}_{o,j}(t)\cos\alpha_{o,j}(t)\sin\beta_{o,j}(t)\\ y(t)-y_{o,j}(t)-a\,\mathrm{d}_{o,j}(t)\sin\alpha_{o,j}(t)\sin\beta_{o,j}(t)\\ z(t)-z_{o,j}(t)-b\,\mathrm{d}_{o,j}(t)\cos\beta_{o,j}(t)\end{cases},\qquad\mathrm{d}_{o,j}(t)\geq 1,\]
where \(\alpha_{o,j}(t)\), \(\beta_{o,j}(t)\), and \(\mathrm{d}_{o,j}(t)\) are the polar angles and the normalized length of the line-of-sight vector between the robot and the \(j\)-th obstacle. The velocity and acceleration limits are reformulated in the same polar form through \((\boldsymbol{\alpha}_{v,i},\boldsymbol{\beta}_{v,i},\mathbf{d}_{v,i})\) and \((\boldsymbol{\alpha}_{a,i},\boldsymbol{\beta}_{a,i},\mathbf{d}_{a,i})\), and the vector \(\boldsymbol{\tau}\) stacks \(s_{min}\) and \(s_{max}\) in appropriate form. The matrix \(\mathbf{G}\) is formed by stacking \(-\mathbf{P}\) and \(\mathbf{P}\) vertically. Similarly, \(\mathbf{d}_{min}\), \(\mathbf{d}_{max}\) are formed by stacking the lower (\([1,0,0]\)) and upper (\([\infty,1,1]\)) bounds of \(\mathbf{d}_{o,i},\mathbf{d}_{v,i},\mathbf{d}_{a,i}\). Also, \(\tilde{\mathbf{F}}\) and \(\tilde{\mathbf{e}}\) are formed as
\[\tilde{\mathbf{F}}=\begin{bmatrix}\begin{bmatrix}\mathbf{F}_{o}\\ \dot{\mathbf{P}}\\ \ddot{\mathbf{P}}\end{bmatrix}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\begin{bmatrix}\mathbf{F}_{o}\\ \dot{\mathbf{P}}\\ \ddot{\mathbf{P}}\end{bmatrix}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\begin{bmatrix}\mathbf{F}_{o}\\ \dot{\mathbf{P}}\\ \ddot{\mathbf{P}}\end{bmatrix}\end{bmatrix},\qquad\tilde{\mathbf{e}}=\begin{bmatrix}\mathbf{x}_{o}+a\,\mathbf{d}_{o,i}\cos\boldsymbol{\alpha}_{o,i}\sin\boldsymbol{\beta}_{o,i}\\ \mathbf{d}_{v,i}v_{max}\cos\boldsymbol{\alpha}_{v,i}\sin\boldsymbol{\beta}_{v,i}\\ \mathbf{d}_{a,i}a_{max}\cos\boldsymbol{\alpha}_{a,i}\sin\boldsymbol{\beta}_{a,i}\\ \mathbf{y}_{o}+a\,\mathbf{d}_{o,i}\sin\boldsymbol{\alpha}_{o,i}\sin\boldsymbol{\beta}_{o,i}\\ \mathbf{d}_{v,i}v_{max}\sin\boldsymbol{\alpha}_{v,i}\sin\boldsymbol{\beta}_{v,i}\\ \mathbf{d}_{a,i}a_{max}\sin\boldsymbol{\alpha}_{a,i}\sin\boldsymbol{\beta}_{a,i}\\ \mathbf{z}_{o}+b\,\mathbf{d}_{o,i}\cos\boldsymbol{\beta}_{o,i}\\ \mathbf{d}_{v,i}v_{max}\cos\boldsymbol{\beta}_{v,i}\\ \mathbf{d}_{a,i}a_{max}\cos\boldsymbol{\beta}_{a,i}\end{bmatrix} \tag{13}\]
where \(\mathbf{F}_{o}\) is formed by stacking the matrix \(\mathbf{P}\) as many times as the number of obstacles. Also, \(\mathbf{x}_{o},\mathbf{y}_{o},\mathbf{z}_{o}\) are obtained by stacking \(x_{o,j}(t),y_{o,j}(t),z_{o,j}(t)\) at different time stamps and for all obstacles.
**Solution process** We utilize the augmented Lagrangian method to relax the equality and affine constraints (12c)-(12e) as \(l_{2}\) penalties. Consequently, the projection cost can be rephrased as follows:
\[\begin{split}\mathcal{L}&=\frac{1}{2}\left\|\overline{\boldsymbol{\xi}}_{i}-\boldsymbol{\xi}_{i}\right\|_{2}^{2}-\langle\boldsymbol{\lambda}_{i},\overline{\boldsymbol{\xi}}_{i}\rangle+\frac{\rho}{2}\left\|\tilde{\mathbf{F}}\overline{\boldsymbol{\xi}}_{i}-\tilde{\mathbf{e}}\right\|_{2}^{2}+\frac{\rho}{2}\left\|\mathbf{G}\overline{\boldsymbol{\xi}}_{i}-\boldsymbol{\tau}+\mathbf{s}_{i}\right\|_{2}^{2},\\ &=\frac{1}{2}\left\|\overline{\boldsymbol{\xi}}_{i}-\boldsymbol{\xi}_{i}\right\|_{2}^{2}-\langle\boldsymbol{\lambda}_{i},\overline{\boldsymbol{\xi}}_{i}\rangle+\frac{\rho}{2}\left\|\mathbf{F}\overline{\boldsymbol{\xi}}_{i}-\mathbf{e}\right\|_{2}^{2}\end{split} \tag{14}\]
where \(\mathbf{F}=\begin{bmatrix}\tilde{\mathbf{F}}\\ \mathbf{G}\end{bmatrix}\) and \(\mathbf{e}=\begin{bmatrix}\tilde{\mathbf{e}}\\ \boldsymbol{\tau}-\mathbf{s}_{i}\end{bmatrix}\). Also, \(\boldsymbol{\lambda}_{i}\), \(\rho\), and \(\mathbf{s}_{i}\) are the Lagrange multiplier, a scalar penalty constant, and the slack variable, respectively. We minimize (14) subject to (12b) using AM, which reduces to the following steps.
\[{}^{k+1}\boldsymbol{\alpha}_{i}=\arg\min_{\boldsymbol{\alpha}_{i}}\mathcal{L}({}^{k}\overline{\boldsymbol{\xi}}_{i},\boldsymbol{\alpha}_{i},{}^{k}\boldsymbol{\beta}_{i},{}^{k}\mathbf{d}_{i},{}^{k}\boldsymbol{\lambda}_{i},{}^{k}\mathbf{s}_{i}) \tag{15a}\] \[{}^{k+1}\boldsymbol{\beta}_{i}=\arg\min_{\boldsymbol{\beta}_{i}}\mathcal{L}({}^{k}\overline{\boldsymbol{\xi}}_{i},{}^{k+1}\boldsymbol{\alpha}_{i},\boldsymbol{\beta}_{i},{}^{k}\mathbf{d}_{i},{}^{k}\boldsymbol{\lambda}_{i},{}^{k}\mathbf{s}_{i}) \tag{15b}\] \[{}^{k+1}\mathbf{d}_{i}=\arg\min_{\mathbf{d}_{i}}\mathcal{L}({}^{k}\overline{\boldsymbol{\xi}}_{i},{}^{k+1}\boldsymbol{\alpha}_{i},{}^{k+1}\boldsymbol{\beta}_{i},\mathbf{d}_{i},{}^{k}\boldsymbol{\lambda}_{i},{}^{k}\mathbf{s}_{i}) \tag{15c}\] \[{}^{k+1}\mathbf{s}_{i}=\max(\mathbf{0},-\mathbf{G}\,{}^{k}\overline{\boldsymbol{\xi}}_{i}+\boldsymbol{\tau}) \tag{15d}\] \[{}^{k+1}\boldsymbol{\lambda}_{i}={}^{k}\boldsymbol{\lambda}_{i}-\rho\mathbf{F}^{T}(\mathbf{F}\,{}^{k}\overline{\boldsymbol{\xi}}_{i}-{}^{k}\mathbf{e}) \tag{15e}\] \[{}^{k+1}\mathbf{e}=\begin{bmatrix}\tilde{\mathbf{e}}\left({}^{k+1}\boldsymbol{\alpha}_{i},{}^{k+1}\boldsymbol{\beta}_{i},{}^{k+1}\mathbf{d}_{i}\right)\\ \boldsymbol{\tau}-{}^{k+1}\mathbf{s}_{i}\end{bmatrix} \tag{15f}\] \[{}^{k+1}\overline{\boldsymbol{\xi}}_{i}=\arg\min_{\overline{\boldsymbol{\xi}}_{i}}\mathcal{L}(\overline{\boldsymbol{\xi}}_{i},{}^{k+1}\mathbf{e},{}^{k+1}\boldsymbol{\lambda}_{i}) \tag{15g}\]
For each AM step, we optimize only one group of variables while the others are held fixed. Note that stacking the right-hand sides of (15f) and (15e) provides the function \(\mathbf{h}\) presented in (6a). The steps (15a)-(15c) have closed-form solutions in terms of \({}^{k}\overline{\boldsymbol{\xi}}_{i}\)[22, 21, 15]. Also, (15g) is a representation of (8a)-(8c).
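For illustration, the structure of this AM loop can be sketched as follows. This is only a structural sketch under stated assumptions: the closed-form updates (15a)-(15c) and the stacking of \(\tilde{\mathbf{e}}\) are abstracted as hypothetical callables, and only the slack, multiplier, and \(\overline{\boldsymbol{\xi}}_{i}\) updates are written out explicitly for the cost (14).

```python
import numpy as np

def am_projection(xi, F, G, tau, update_alpha, update_beta, update_d, build_e,
                  rho=1.0, iters=100):
    """Structural sketch of (15a)-(15g) for one sampled trajectory xi.

    update_alpha/update_beta/update_d and build_e are hypothetical placeholders
    for the closed-form updates and the stacking of e-tilde; they are not part
    of the original implementation.
    """
    xi_bar = xi.copy()                          # warm start at the sampled trajectory
    lam = np.zeros(xi.size)                     # Lagrange multiplier
    e = np.zeros(F.shape[0])
    A = np.eye(xi.size) + rho * F.T @ F         # constant matrix of the (15g) solve
    for _ in range(iters):
        alpha = update_alpha(xi_bar)                            # (15a)
        beta = update_beta(xi_bar, alpha)                       # (15b)
        d = update_d(xi_bar, alpha, beta)                       # (15c)
        s = np.maximum(0.0, -G @ xi_bar + tau)                  # (15d) slack update
        e = np.concatenate([build_e(alpha, beta, d), tau - s])  # (15f) stacked e
        lam = lam - rho * F.T @ (F @ xi_bar - e)                # (15e) multiplier update
        # (15g): minimize 0.5||xi_bar - xi||^2 - <lam, xi_bar> + 0.5*rho*||F xi_bar - e||^2
        xi_bar = np.linalg.solve(A, xi + lam + rho * F.T @ e)
    return xi_bar, np.linalg.norm(F @ xi_bar - e)               # solution and residual
```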
**Remark 3**.: _The matrix \(\boldsymbol{F}\) and vector \(\boldsymbol{e}\) (see (13)) have dimensions \(3(n_{o}+2)n_{p}\times 3n_{v}\) and \(3(n_{o}+2)n_{p}\), respectively, and these dimensions grow linearly with the number of obstacles \(n_{o}\) and the planning horizon \(n_{p}\)._
|
2309.17315 | Data-Driven Newton Raphson Controller Based on Koopman Operator Theory | Newton-Raphson controller is a powerful prediction-based variable gain
integral controller. Basically, the classical model-based Newton-Raphson
controller requires two elements: the prediction of the system output and the
derivative of the predicted output with respect to the control input. In real
applications, the model may not be known and it is infeasible to predict the
system sometime ahead and calculate the derivative by finite difference method
as done in simulation. To solve these problems, in this work, we utilize the
Koopman operator framework to reconstruct a linear model of the original
nonlinear dynamical system and then utilize the output of the new linear system
as the predictor of the Newton-Raphson controller. This method is only based on
collected data within some time instant thus more practical. Three examples
related to highly nonlinear systems are provided to verify the effectiveness of
our proposed method. | Mi Zhou | 2023-09-29T15:24:25Z | http://arxiv.org/abs/2309.17315v1 | # Data-Driven Newton Raphson Controller Based on Koopman Operator Theory
###### Abstract
Newton-Raphson controller is a powerful prediction-based variable gain integral controller. Basically, the classical model-based Newton-Raphson controller requires two elements: the prediction of the system output and the derivative of the predicted output with respect to the control input. In real applications, the model may not be known and it is infeasible to predict the system sometime ahead and calculate the derivative by finite difference method as done in simulation. To solve these problems, in this work, we utilize the Koopman operator framework to reconstruct a linear model of the original nonlinear dynamical system and then utilize the output of the new linear system as the predictor of the Newton-Raphson controller. This method is only based on collected data within some time instant thus more practical. Three examples related to highly nonlinear systems are provided to verify the effectiveness of our proposed method.
## I Introduction
Trajectory tracking control is one of the most important topics in the robotics field such as mobile robots [1], self-driving cars [2], quadrotor UAVs [3], underwater robots [4], and so on. Many methods have been proposed to achieve real-time tracking performance. Existing techniques include Proportional-Integral-Derivative (PID) [4], Byrnes-Isidori regulator [5], model predictive control [6, 7], etc.
The Newton-Raphson (NR) controller, first proposed in [8], is a tracking method based on a variable-gain integrator and the Newton-Raphson method for finding zeros of a function. This technique consists of three elements: (i) an output prediction which tracks the reference signal; (ii) an integral controller with variable gain; (iii) a speedup of the control action for enhancing the tracker's accuracy and guaranteeing the stability of the closed-loop system. [9] provided a detailed introduction to this technique, a theoretical derivation of the convergence of the tracking controller, and an error analysis, supported by persuasive illustrative simulations and laboratory experiments. Subsequently, more works appeared that used the Newton-Raphson controller to solve several challenging problems, such as tracking control of leader-follower multi-agent systems [8, 1], distributed formation control of multi-agent mobile systems in swarms and platoons [10], driving an underactuated system in a potentially adversarial environment modeled as a pursuit-evasion game [11], tracking control of nonlinear inverted pendulums and differentially driven cars [12], and so on. All these works showed that the tracking convergence of this method can be quantified for both constant and time-dependent reference signals, and that the convergence is quite fast with a large region of convergence. The NR regulation technique described above is based on a look-ahead simulation of the system, which works as both a predictor and an observer. This mechanism, however, requires a precise model of the system in order to obtain a reliable output prediction and hence effective tracking performance. Therefore, some works started to explore the potential of neural networks in the predictor. [11] formulated a pursuit-evasion game, regarded it as a tracking regulation problem solved using the Newton-Raphson controller, and used a deep neural network to approximate the behavior of the evader from data gathered online during the pursuit. In [12], the authors utilized a feed-forward neural network as an output predictor for the Newton-Raphson controller, thus achieving a model-free realization. However, the training process relies heavily on the accuracy of the data, and thus lacks robustness and real-time applicability.
There is an increasing need for data-driven system identification approaches with the development of more complicated robotic systems. One such data-driven approach is to use neural networks and deep learning to identify a system. However, deep-learning methods suffer from long training times. The Koopman operator framework offers new opportunities for the control of nonlinear systems from the perspective of linearizing systems. It is a powerful tool for identifying and linearizing a nonlinear system with higher accuracy than traditional linearization methods. The computational limitation due to nonlinearity is an essential challenge in robot control. Instead of linearizing systems directly, Koopman analysis achieves linearization by representing the nonlinear dynamics in a globally linear framework. The Koopman-based approach is a practical model-free system identification method with low time complexity. It has thus found wide applications in model predictive control, active learning, and hybrid control of robots.
There is a large body of work on Koopman operator theory, with solid theoretical foundations and applications to real robotic systems. A detailed introduction to Koopman operator theory can be found in [13]. In [14], the authors used model predictive control (MPC) to control soft continuum manipulator arms after using Koopman operator theory to identify the soft manipulator models. In [15], the authors used Koopman eigenfunctions to lift the nonlinear system dynamics to provide a linear approach for the design of MPC with state and input constraints. [16] proposed a high-order optimal control strategy implemented in the Koopman operator framework and tested it on the Duffing oscillator and
the Clohessy-Wiltshire problem, which models the relative motion between two satellites. All these works demonstrate the good and efficient performance of the Koopman operator in system identification.
The objective of this paper is thus to propose a real-time data-driven Newton-Raphson controller by using Koopman linearization. We will test our tracking algorithm on the Van Der Pol system, an overhead crane system, and a differentially driven car system. In all experiments, the system has to track a time-varying reference signal. We then compare the tracking results with the classical model-based Newton-Raphson controller.
This paper is organized as follows: in Section II, we formulate our problem. In Section III, the Koopman operator theory and the proposed controller are introduced in detail. Section IV provides three examples to illustrate the efficiency of our controller by comparing it with the classical model-based Newton-Raphson controller. We finally conclude our article in Section V.
## II Problem statement
Consider the following nonlinear system:
\[\dot{x}(t)=f(x(t),u(t)) \tag{1}\]
with the output equation as
\[y(t)=h(x(t)) \tag{2}\]
where \(x\in\mathbb{R}^{n}\), \(u(t)\in\mathbb{R}^{m}\), \(f(x(t),u(t))\) is continuously differentiable in \(x\) for every \(u\in\mathbb{R}^{m}\), and continuous in \(u\) for every \(x\in\mathbb{R}^{n}\), and \(h(x(t))\) is continuously differentiable. Moreover, to make sure Eqn. (1) has a unique solution on \(t\in[0,\infty)\), we make the following assumptions:
**Assumption 1** ([1]):
1. _For every compact set_ \(\Gamma_{1}\subset\mathbb{R}^{n}\) _and_ \(\Gamma_{2}\subset\mathbb{R}^{m}\)_, the functions_ \(f(x(t),u(t))\) _and_ \(\frac{\partial f}{\partial x}(x(t),u(t))\) _are Lipschitz continuous on_ \(\Gamma_{1}\times\Gamma_{2}\)_._
2. _For very compact set_ \(\Gamma_{2}\subset\mathbb{R}^{m}\)_, there exists_ \(K>0\) _such that, for every_ \(x\in\mathbb{R}^{n}\) _and_ \(u\in\Gamma_{2}\)_,_ \[||f(x,u)||\leq K(||x||+1).\]
Define \(r(t)\in\mathbb{R}^{k}\) as the reference signal. The output tracking control problem is defined as
\[\lim_{t\rightarrow\infty}||r(t)-y(t)||=0 \tag{3}\]
which can also be viewed as finding the root of the time-dependent equation \(r(t)-y(t)=0\). This motivates the idea of designing a controller with the following iterative form:
\[u_{n+1}=u_{n}-\frac{r(t)-y(t)}{(r(t)-y(t))^{\prime}} \tag{4}\]
to find the root (i.e., controller) \(u(t)\).
In the design of the Newton-Raphson controller, the prediction phase is, saying, at time \(t\), we can predict the system from time \(t\) to \(t+T\) by solving the following differential equation
\[\dot{\tilde{x}}(\tau)=f(\tilde{x}(\tau),u(t)),\quad\tau\in[t,t+T], \tag{5}\]
with the initial condition \(\tilde{x}(t)=x(t)\). Then we can define the estimator from \(t\) to \(t+T\) as
\[g(x(t),u(t)):=h(\tilde{x}(t+T)). \tag{6}\]
The Newton-Raphson controller proposed in [9] has the following form:
\[\dot{u}(t)=\alpha\left(\frac{dg}{du}(x(t),u(t))\right)^{-1}(r(t+T)-g(x(t),u(t))) \tag{7}\]
where \(r(t+T)\) is assumed to be known in advance at time \(t\). Please note that whether the system is fully actuated or underactuated does not influence the operation of the controller, but the system should be controllable.
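For intuition, a minimal sketch of a discretized version of (7) is given below; it uses a forward-Euler update of \(u\), a look-ahead simulation as the predictor, and a finite-difference approximation of \(\frac{dg}{du}\) for a scalar input (the options for computing this derivative are discussed next). The function names and step sizes are illustrative only.

```python
import numpy as np

def simulate_ahead(f, h, x, u, T, dt):
    """Predictor g(x, u): integrate dx/dt = f(x, u) with u frozen over the horizon T."""
    x = np.array(x, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * np.asarray(f(x, u))
    return h(x)

def nr_control_step(f, h, x, u, r_future, T, dt, alpha=20.0, eps=1e-4):
    """One forward-Euler step of the controller (7) for a scalar input u."""
    g = simulate_ahead(f, h, x, u, T, dt)
    dg_du = (simulate_ahead(f, h, x, u + eps, T, dt) - g) / eps   # finite difference
    u_dot = alpha * (r_future - g) / dg_du
    return u + dt * u_dot
```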
The calculation of \(\frac{dg(x(t),u(t))}{du}\), however, has a high computational demand if the system is nonlinear. To the best of the authors' knowledge, there are three ways to calculate \(\frac{dg(x(t),u(t))}{du}\):
1. Finite difference method (FDM) where \[\frac{dg(x(t),u(t))}{du}=\frac{g(x,u+\delta u)-g(x,u)}{\delta u}.\] This method is direct but very time-consuming.
2. Without loss of generality, assume \(h(\tilde{x}(t+T))=\tilde{x}(t+T)\). Use Eqn. (5): \[\frac{dg(x(t),u(t))}{du(t)}=\frac{d\tilde{x}(t+T)}{du(t)}\] (8) where \[\frac{d}{d\tau}\left[\frac{d\tilde{x}(\tau)}{du(t)}\right]=\frac{\partial f(\tilde{x}(\tau),u(t))}{\partial x}\frac{d\tilde{x}(\tau)}{du(t)}+\frac{\partial f(\tilde{x}(\tau),u(t))}{\partial u}.\] (9) By defining a new variable \(\tilde{\xi}(\tau)=\frac{d\tilde{x}(\tau)}{du(t)}\), we can obtain (8) by solving the ODE (9).
3. The third method is based on linearized models (\(\dot{x}=Ax+Bu\) and \(y=Cx\)). If the model is linear or we linearize the nonlinear system locally, we have the predicted output \[y(t+T)=C_{t}(e^{A_{t}T}x_{t}+A_{t}^{-1}(e^{A_{t}T}-I_{n})(B_{t}u)),\] (10) where \(I_{n}\) is the \(n\times n\) identity matrix. Thus, \[\frac{\partial y(t+T)}{\partial u}=C_{t}A_{t}^{-1}(e^{A_{t}T}-I_{n})B_{t}.\] (11) This method, however, only works for linear models. If the model is nonlinear and we linearize it locally, the linearized model may not be controllable, which makes this method infeasible in such cases. For example, the Dubins car is controllable, but the linearized model of the Dubins car is not controllable. A small numerical sketch of (10)-(11) is given after this list.
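The following minimal sketch evaluates (10)-(11) numerically; it assumes \(A_{t}\) is invertible and that the input is held constant over the horizon. The function name is illustrative.

```python
import numpy as np
from scipy.linalg import expm

def predicted_output_and_sensitivity(A, B, C, x_t, u, T):
    """Prediction (10) and sensitivity (11) for x' = Ax + Bu, y = Cx.

    A is assumed invertible, u is an input vector held constant over the horizon T.
    """
    n = A.shape[0]
    Phi = expm(A * T)                               # state-transition matrix e^{AT}
    M = np.linalg.solve(A, Phi - np.eye(n))         # A^{-1}(e^{AT} - I_n)
    y_pred = C @ (Phi @ x_t + M @ (B @ u))          # equation (10)
    dy_du = C @ M @ B                               # equation (11)
    return y_pred, dy_du
```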
Therefore, we propose using Koopman operator theory to lift the nonlinear systems to linear systems, thus alleviating the complexity and still keeping the controllability of the original nonlinear systems.
## III Newton-Raphson controller based on Koopman Operator theory
In this section, we will first give a brief introduction to the principle of Koopman operator. Based on this, we propose our controller based on the linearized model obtained by Koopman operator theory.
### _Koopman operator theory [14]_
The Koopman operator provides a linear representation of the flow of a nonlinear system in an infinite-dimensional space of observables. Consider a dynamical system
\[\dot{x}=F(x(t))\]
where \(x(t)\in\mathbb{R}^{n}\) and \(F\) is a continuously differentiable function. The system can be lifted to an infinite-dimensional function space \(\mathcal{F}\) composed of all continuous real-valued functions. In \(\mathcal{F}\), the flow of the system is characterized by the linear Koopman operator \(U_{t}:\mathcal{F}\rightarrow\mathcal{F}\) which describes the evolution of the observables along the trajectories of the system. We seek the projection of the Koopman operator onto a finite-dimensional subspace. Denote \(\tilde{\mathcal{F}}\subset\mathcal{F}\) to be the subspace of \(\mathcal{F}\) spanned by \(N>n\) linearly independent basis function \(\{\phi_{t}:\mathbb{R}^{n}\rightarrow\mathbb{R}\}_{t=1}^{N}\). For convenience, we assume the first \(n\) basis functions are the states, i.e., \(\phi_{t}(x)=x_{i}\). Thus, written as a vector form, we have
\[\phi(x)=[x_{1},x_{2},\cdots x_{n},\phi_{n+1}(x),\cdots,\phi_{N}(x)]. \tag{12}\]
Any observables \(\tilde{f}\in\mathcal{F}\) can be expressed as a linear combination of elements of these basis functions, i.e.,
\[\tilde{f}=w_{1}\phi_{1}+w_{2}\phi_{2}+\cdots+w_{N}\phi_{N}=w^{\top}\phi(x)^{\top}\]
where \(w_{i}\in\mathbb{R}\). The \(\phi(x)\) is called the lifted state and \(w\) is the vector representation of \(\tilde{f}\). Given this representation, we can obtain an approximation of the Koopman operator \(U_{t}\in\mathbb{R}^{N\times N}\) on \(\mathcal{F}\) that satisfies
\[\tilde{U}_{t}w=w^{\prime}\]
The objective is to find the \(\tilde{U}_{t}\) based on observable data in \(\tilde{\mathcal{F}}\).
### _Proposed controller design_
For dynamical systems with inputs Eqn. (1), we aim to build a linear model from the Koopman operator theory aforementioned:
\[z[j+1] =Az[j]+Bu[j] \tag{13}\] \[x[j] =Cz[j] \tag{14}\]
for each \(j\in\mathbb{N}\), \(A\in\mathbb{R}^{N\times N}\) is the state transition matrix, \(z=\phi(x)\) is the lifted state, \(B\in\mathbb{R}^{N\times m}\) is the control matrix, and \(C=\begin{bmatrix}I_{n\times n}&0_{n\times(N-n)}\end{bmatrix}\) is a projection operator from the lifted space onto the state space.
Denote
\[\alpha[k]=[x[k],u[k]]\] \[\beta[k]=[F(x[k],u[k]),u[k]].\]
We then identify a finite-dimensional approximation of the Koopman operator via the Extending Dynamic Mode Decomposition (EDMD) algorithm [17] using observed data. The corresponding Koopman operator is
\[\tilde{U}_{T_{s}}=\Gamma_{c}^{\dagger}\Gamma_{n},\]
where \(\dagger\) means pseudo-inverse, \(K\) is the time horizon for collecting data, \(\beta[k]=F(\alpha[k])\), \(k=1,2,\cdots K\), and
\[\Gamma_{c}=\frac{1}{K}\sum_{k=1}^{K}\phi(\alpha[k])^{\top}\phi( \alpha[k]),\] \[\Gamma_{n}=\frac{1}{K}\sum_{k=1}^{K}\phi(\alpha[k])^{\top}\phi( \beta[k])\]
The continuous Koopman operator can then be written as \(\log(\tilde{U}_{T_{s}})/\Delta t\) where \(\Delta t\) is the sampling time. The \(\tilde{U}_{T_{s}}^{\top}\) is the best approximation of a transition matrix between the elements of snapshot pairs in the \(L^{2}\)-norm sense, i.e.,
\[\min_{U_{T_{s}}^{\top}}\sum_{k=1}^{K}\Big{\|}U_{T_{s}}^{\top}\phi(\alpha[k])- \phi(\beta[k])\Big{\|}_{2}^{2}. \tag{15}\]
The best \(A\) and \(B\) matrices in (13) can be isolated by partitioning the \(\tilde{U}_{T_{s}}^{\top}\):
\[\tilde{U}_{T_{s}}^{\top}=\begin{bmatrix}A_{N\times N}&B_{N\times m}\\ 0_{m\times N}&I_{m\times m}\end{bmatrix},\]
where \(I_{m\times m}\) is the identity matrix.
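For concreteness, a minimal sketch of this identification step is given below; it assembles the snapshot pairs \(\alpha[k]\) and \(\beta[k]\), forms \(\Gamma_{c}\) and \(\Gamma_{n}\), and partitions \(\tilde{U}_{T_{s}}^{\top}\) into \(A\) and \(B\). The function and argument names are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def edmd_lifted_model(X, U, X_next, lift):
    """Identify the lifted linear model (13)-(14) from snapshot pairs.

    X, X_next: (K, n) states at consecutive samples; U: (K, m) inputs.
    lift: callable mapping a state x to phi(x) of length N, whose first n
          entries are assumed to be x itself (cf. (12)).
    Returns A (N x N), B (N x m), and the projection C (n x N).
    """
    K, n = X.shape
    m = U.shape[1]
    Phi = np.hstack([np.array([lift(x) for x in X]), U])            # lifted alpha[k]
    Phi_next = np.hstack([np.array([lift(x) for x in X_next]), U])  # lifted beta[k]
    Gamma_c = Phi.T @ Phi / K
    Gamma_n = Phi.T @ Phi_next / K
    U_Ts = np.linalg.pinv(Gamma_c) @ Gamma_n                        # discrete Koopman approx.
    N = Phi.shape[1] - m
    A = U_Ts.T[:N, :N]                                              # partition of U_Ts^T
    B = U_Ts.T[:N, N:]
    C = np.hstack([np.eye(n), np.zeros((n, N - n))])                # projection (14)
    return A, B, C
```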
Fig. 1 shows the diagram of the proposed data-driven Newton-Raphson tracking scheme. In this algorithm, we first collect data from the nonlinear system and build a lifted linear system from the collected data. After that, the predictor \(y(t+T)\) and the derivative term \(\frac{\partial g(x,u)}{\partial u}\) are obtained from the linearized model. In this way, we avoid the aforementioned problems.

Fig. 1: Diagram of the proposed control framework: A lifted linear system is built based on Koopman operator theory; the derivative and prediction of the Newton-Raphson controller are calculated using the linearized model.

## IV Simulation

In this section, we provide three examples to verify the efficiency of the proposed controller. We then compare the results with those of the classical Newton-Raphson controller with respect to tracking accuracy and time. All the experiments are implemented on a personal computer with MATLAB R2020b. For all the experiments, we use the mean squared error (MSE) as the measure of tracking performance, which is defined as
\[MSE=\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}||y(t_{i})-r(t_{i})||_{2}^{2},\]
where \(N_{d}=\frac{t_{f}}{dt}\) is the number of sampling points. The average MSE and time complexity of 10 experiments are taken for comparison.
### _Example 1: Van Der Pol system_
A typical Van Der Pol system has the following form:
\[\left\{\begin{matrix}\dot{x}_{1}=x_{2}\\ \dot{x}_{2}=-x_{1}+(1-x_{1}^{2})x_{2}+u\end{matrix}\right.. \tag{16}\]
The objective is to make \(y=x_{1}\) track the signal \(r(t)=\frac{\pi}{8}\sin t+\frac{\pi}{6}\). The basis is chosen as \(z=[x_{1},x_{2},x_{1}^{2},x_{1}^{2}x_{2}]^{\top}\)1. Using the EDMD algorithm, we obtain the constant matrices \(A_{4\times 4}\) and \(B_{4\times 1}\), with which we rebuild a linearized system \(\dot{z}=Az+Bu\) from the collected data. The derivative \(\frac{dg(x,u)}{du}\) and the predictor \(y=x_{1}\) can then be obtained by using this linearized system. For the identification using Koopman operator theory, the data-collection horizon is chosen as \(T_{s}=2\) s and the number of trials for data collection is \(N_{s}=10\), with random initialization of the initial state and control input 2. The initial state is set as \([0,0]^{\top}\). The speedup parameter is chosen as \(\alpha=20\) for both the NR controller and the KNR controller. The sampling time is 0.01 s. The prediction horizon is \(T=0.15\) s. The system is simulated for \(t_{f}=20\) s.
Footnote 1: Please note that this choice of basis is not unique.
Footnote 2: Please note that the larger the value of \(N_{b}\) and \(N_{s}\), the better the tracking results. However, to compromise the time complexity, we chose this group of parameters.
Fig. 2 is the trajectory of \(x_{1}(t)\) for both KNR and NR. As we can see, KNR has a smaller oscillation at the beginning than that of NR. Fig. 3 is the control input by using both algorithms. It shows that the KNR reaches zero faster than the NR and has fewer oscillations as well.
Table I summarizes the comparison with the classical Newton-Raphson (NR) controller. All the parameters are kept the same for both algorithms. As we can see, the KNR needs less time to catch up with the reference signal and has higher accuracy. This is easy to explain: when calculating \(\frac{dg(x,u)}{du}\), the linearized system has higher accuracy. Regarding the time complexity of KNR in Table I, the time can be reduced substantially by reducing the number of trials and the size of the prediction window. Thus, the difference in time complexity between NR and KNR may be negligible.
The initial state is \([0,0,0,0]^{\top}\). The objective is to let the observation \(\theta\) track a predefined signal, which is the contour of the obstacles. Let the predefined signal be \(r(t)=\sin(0.1t)\) and the simulation time be \(t_{f}=20\) s. The prediction horizon is \(T=0.15\) s.
The speedup parameter is \(\alpha=20\). The basis for the lifted model is chosen as \([x,\dot{x},\theta,\dot{\theta},\sin(\theta),\cos(\theta)]\). Fig. 5 shows the tracking result of the proposed controller and Fig. 6 shows the corresponding control input. As we can see, both KNR and NR can track the reference signal very well, but the control input of KNR has a smaller magnitude.
Table II summarizes the comparison results for the overhead crane system.
### _Example 3: Differentially driven car_
The vehicle's kinematics in global coordinates is as follows [12]:
\[\begin{bmatrix}\dot{x}(t)\\ \dot{y}(t)\\ \dot{\theta}(t)\end{bmatrix}=\begin{bmatrix}\frac{\rho}{2}\cos\theta(t)&\frac{\rho}{2}\cos\theta(t)\\ \frac{\rho}{2}\sin\theta(t)&\frac{\rho}{2}\sin\theta(t)\\ -\frac{\rho}{D}&\frac{\rho}{D}\end{bmatrix}\begin{bmatrix}\omega_{L}(t)\\ \omega_{R}(t)\end{bmatrix} \tag{18}\]
where the system state is \((x(t),y(t),\theta(t))^{\top}\) and the control input is \(u(t)=(\omega_{L}(t),\omega_{R}(t))^{\top}\). Physically, \(x(t)\) (resp. \(y(t)\)) is the \(x\) (resp. \(y\)) position of the car in the world coordinates. \(\theta(t)\) is the orientation of the car with respect to the global coordinates system as shown in Fig. 7. \(\omega_{R}(t)=v_{R}/\rho\) (resp. \(\omega_{L}(t)=v_{L}/\rho\)) is the angular velocity of the right wheel (resp. left wheel). \(\rho\) is the radius of the wheels. \(D\) is the width of the vehicle. As we can see, this system is highly nonlinear.
Let \(\rho=0.1\) m, \(D=0.4\) m. Similar to [12], we define the following reference trajectory \(r(t)\):
\[r(t)=\begin{cases}(-0.0001t^{3}+0.25t,0.0475t^{3}-0.3601t^{2}+0.3t+3),\\ t<5\\ (5\sin(0.05t),\ 3\sin(0.1t)),\ t>5\end{cases} \tag{19}\]
Fig. 4: Illustration for a planar two-dimensional overhead crane system tracking a pre-defined trajectory.
Fig. 5: Tracking results for the overhead crane system.
Fig. 6: Control input (i.e., \(F(t)\)) of the overhead crane system.
Fig. 7: Illustration of a differentially driven car. |
2309.11764 | Causal inference with outcome dependent sampling and mismeasured outcome | Outcome-dependent sampling designs are extensively utilized in various
scientific disciplines, including epidemiology, ecology, and economics, with
retrospective case-control studies being specific examples of such designs.
Additionally, if the outcome used for sample selection is also mismeasured,
then it is even more challenging to estimate the average treatment effect (ATE)
accurately. To our knowledge, no existing method can address these two issues
simultaneously. In this paper, we establish the identifiability of ATE and
propose a novel method for estimating ATE in the context of generalized linear
model. The estimator is shown to be consistent under some regularity
conditions. To relax the model assumption, we also consider generalized
additive model. We propose to estimate ATE using penalized B-splines and
establish asymptotic properties for the proposed estimator. Our methods are
evaluated through extensive simulation studies and the application to a dataset
from the UK Biobank, with alcohol intake as the treatment and gout as the
outcome. | Min Zeng, Zeyang Jia, Zijian Sui, Jinfeng Xu, Hong Zhang | 2023-09-21T03:58:26Z | http://arxiv.org/abs/2309.11764v1 | # Causal inference with outcome dependent sampling and mismeasured outcome
###### Abstract
Outcome-dependent sampling designs are extensively utilized in various scientific disciplines, including epidemiology, ecology, and economics, with retrospective case-control studies being specific examples of such designs. Additionally, if the outcome used for sample selection is also mismeasured, then it is even more challenging to estimate the average treatment effect (ATE) accurately. To our knowledge, no existing method can address these two issues simultaneously. In this paper, we establish the identifiability of ATE and propose a novel method for estimating ATE in the context of generalized linear model. The estimator is shown to be consistent under some regularity conditions. To relax the model assumption, we also consider generalized additive model. We propose to estimate ATE using penalized B-splines and establish asymptotic properties for the proposed estimator. Our methods are evaluated through extensive simulation studies and the application to a dataset from the UK Biobank, with alcohol intake as the treatment and gout as the outcome.
Average treatment effect; Causal inference; Outcome dependent sampling; Mismeasured outcome
## 1 Introduction
Numerous studies in the fields of biomedical and social sciences are focused on determining the causal impact of a binary treatment on a specific outcome. Although randomized controlled trials (RCTs) serve as the gold standard for establishing causal relationships, they may not always be feasible due to financial, logistical, or ethical constraints. As a result, researchers often rely on observational studies. Various methodologies, such as propensity score techniques (Rosenbaum and Rubin, 1983; Rubin and Thomas, 2000) and instrumental variable estimation methods (Angrist et al., 1996; Angrist and Krueger, 2001), have been developed to estimate average treatment effect (ATE) in observational studies. However, the efficacy of these methodologies relies on sampling randomness. In cases where sample selection is not random, these methods are no longer valid.
Outcome-dependent sampling (ODS) represents a non-random sampling design in which the selection of sample units depends on the outcome of interest. ODS offers some advantages over simple random sampling, such as enhanced statistical power in the situation where the outcome is rare (Schlesselman, 1982). However, ODS greatly complicates the statistical data analysis and result interpretation. The case-control design, along with its variations, is the most prevalent form of ODS design. Ideally, in such designs, the sampling process relies exclusively on the outcome rather than any other variables in the study. If the unique characteristics of ODS designs are not considered, the conventional causal inference methods may be subject to selection bias (Gabriel et al., 2022; Bareinboim and Pearl, 2016). A large quantity of research based on ODS designs has been published(Wacholder et al., 1992; Breslow and Holubkov, 1997), but the majority of these studies focus on association analysis instead of causal inference. Several researchers (L. Penning de Vries and Groenwold, 2022; Mansson et al., 2007; Robins, 1999) have attempted to avoid this issue by focusing on causal risk ratio. Van der Laan (2008) and Van der Laan and Rose (2011), on the other
hand, proposed an ATE estimator based on a weighted targeted maximum likelihood by incorporating information on disease prevalence.
However, implementing an ideal ODS design in practice can often be challenging, as sample selection may be influenced, at least partially, by diagnosis or measurement. As a result, the true outcome of interest may be unobserved, and the measured outcome may differ from the true outcome. Various factors contribute to mismeasurement in outcome variables, such as the unavailability of costly measurements, the propensity to misreport responses to sensitive questions, and the inevitable recall bias. Numerous studies have investigated the impact of mismeasurement of outcome variables, such as bias and efficiency loss (Copeland et al., 1977; Neuhaus, 1999; Shu and Yi, 2019; Fox et al., 2022). Some researchers have opted to develop sensitivity analyses for mismeasured binary outcomes to reduce bias (Lash and Fink, 2003; Fox et al., 2005; Lyles and Lin, 2010). Shu and Yi (2019, 2020, 2019) derived the asymptotic bias of the conventional inverse probability of treatment weighting (IPTW) and doubly robust (DR) estimators that ignore the measurement error, and proposed a modified weighting method by rectifying the mismeasurement bias.
Although research on addressing either selection bias or mismeasurement bias has garnered much attention, very few methods have been developed to deal with these two types of bias simultaneously, except for Beesley and Mukherjee (2022) and Jurek et al. (2013). However, these studies focus on association analysis. To our knowledge, no causal inference method has been developed to simultaneously address both issues. In this paper, we derive a novel generalized linear model (GLM) to establish the relationship between the observed samples and the target population. This allows for an intuitive understanding of the combined effects of ODS and measurement error on ATE estimation. Then we derive estimation equations (EE) to estimate the unknown parameters, through which we obtain an ATE estimator. We call this method GLM-EE. The GLM-EE estimator is proven to be consistent and asymptotically
normal. Furthermore, to relax the model assumption, we introduce a generalized additive model (GAM) based estimator (Hastie and Tibshirani, 1987; Yoshida and Naito, 2014; Marx and Eilers, 1998), which employs penalized splines as a smoothing technique. Unknown parameters can again be estimated by solving a set of estimation equations, and we refer to this method as GAM-EE. Asymptotic properties of the GAM-EE estimator are also established. Through simulation studies, we demonstrate that both proposed estimators effectively address selection bias and mismeasurement bias. Moreover, the GAM-EE method is shown to be more robust to model misspecification with little sacrifice of efficiency.
We further applied our method to a real-world dataset from the UK Biobank, which aims to investigate the ATE of alcohol consumption (treatment) on gout disease (outcome) among male individuals aged 40 to 80. Gout is a form of arthritis that arises when uric acid crystals accumulate in the joints, causing inflammation and pain. However, this outcome measurement suffers from misdiagnoses, with a high false negative rate of 10-30% and a low false positive rate of about 5% (Vazquez-Mellado et al., 2012; Kiefer et al., 2016). We applied our methods to this dataset and conducted a sensitivity analysis to evaluate their performance in real-world research.
The remainder of this article is structured as follows. In Section 2, we introduce our model and establish the identifiability of ATE under some appropriate assumptions. In Section 3, we describe our GLM-EE and GAM-EE methods and establish their theoretical properties. In Section 4 and Section 5, we evaluate the performance of the two methods through extensive simulation studies and a real data application, respectively. In Section 6, we conclude the article and discuss future research directions. All technical details, along with supplementary information for the numerical studies in Sections 4 and 5, are provided in Supplementary Materials.
## 2 Identifiability of average treatment effect
### Ordinary scenario
We start by reviewing the identifiability of ATE in ordinary scenarios where samples are selected randomly and measurement error is absent. Let \(T\) and \(Y\) denote the binary treatment and true outcome of interest, respectively. Let \(Y(t)\) denote the potential or counterfactual outcome for a given subject with exposure level \(t\)(Rubin, 2005). Suppose \(Y\), \(T\), and \(Y(t)\) take values in the binary set \(\{0,1\}\). Let \(X\) denote a vector of covariates or confounding variables. Our target parameter ATE, denoted as \(\tau\), is defined as the expected difference between the potential outcomes:
\[\tau=\mathbb{E}[Y(1)-Y(0)],\]
where the expectation is evaluated in the target population.
In the standard causal inference framework, the identifiability of \(\tau\) hinges on three fundamental assumptions stated as follows:
Assumption 1: (consistency)
\[Y=TY(1)+(1-T)Y(0).\]
Assumption 2: (positivity)
\[1>\mathbb{P}(T=1|X)>0.\]
Assumption 3: (unconfoundness)
\[(Y(1),Y(0))\perp T\mid X.\]
Under Assumptions 1-3, \(\tau\) is identifiable, as demonstrated by the following formula:
\[\tau=\mathbb{E}\{g_{1}(X)-g_{0}(X)\}, \tag{1}\]
where \(g_{i}(x)=\mathbb{E}[Y\mid X=x,T=i],\ i=1,0\).
As evident from Equation (1), the identifiability of \(\tau\) relies not only on \(g_{1}\) and \(g_{0}\) but also on
the distribution of covariates \(X\) in the target population. In scenarios involving ODS designs and measurement error, both \(g_{i}(x)\) and the distribution of \(X\) are ambiguous. Consequently, the identifiability of \(\tau\) needs further assumptions.
### ODS with measurement error
Let \(Y^{*}\) denote the observed outcome, which may differ from the true outcome \(Y\) due to measurement error. Let \(S\) represent an indicator of selection into the study, with \(S=1\) for "selected" and \(S=0\) for "not selected". The sample distribution is expressed by \(\mathbb{P}(Y^{*},X,T\mid S=1)\), where \(\mathbb{P}(\cdot)\) denotes the probability density function. To ensure the identifiability of \(\tau\), we introduce two additional assumptions characterizing the mechanism of sample selection and outcome measurement.
Assumption 4: (selection conditional independence): the sample selection procedure is independent of \((Y,X,T)\) given \(Y^{*}\), that is
\[S\perp(Y,X,T)\mid Y^{*}.\]
Assumption 5: (measurement conditional independence): the observed outcome \(Y^{*}\) is independent of \((X,T)\) given \(Y\), that is
\[Y^{*}\perp(X,T)\mid Y.\]
Assumption 4 naturally aligns with outcome-dependent sampling (ODS) design, as it posits that samples are selected solely based on the observed outcome \(Y^{*}\). Assumption 5 states that the observed outcome \(Y^{*}\) relies exclusively on the true outcome \(Y\), which indicates the influence of \(X\) and \(T\) on \(Y^{*}\) is completely mediated by \(Y\). This frequently arises in clinical diagnosis and is extensively employed in the literature (Shu and Yi, 2019, 2020). Figure 1 employs the directed acyclic graph (DAG) to illustrate the problem, where subfigure (a) corresponds to the ordinary design under Assumptions 1-3, while subfigure (b) depicts the ODS design with measurement error under Assumptions 1-5.
It follows from Assumptions 4 and 5 that \(\mathbb{P}(Y^{*}=j\mid Y=i,X)=\mathbb{P}(Y^{*}=j\mid Y=i)\) and \(\mathbb{P}(S=j\mid Y^{*}=i,X)=\mathbb{P}(S=j\mid Y^{*}=i)\) for \(i,j=1\) or \(0\). To simplify statement, we denote \(p_{ij}=\mathbb{P}(Y^{*}=j|Y=i)\) for \(i,j=1\) or \(0\), where \(p_{01}\) is the false positive rate of the disease and \(p_{10}\) is the false negative rate of the disease. Let \(v=\mathbb{P}(Y=1)\) denote the disease prevalence in the target population. Since \(v\), \(p_{01}\), and \(p_{10}\) are usually attainable through existing literature and medical expert consultations, we assume these values are known.
Let \(s=\mathbb{P}(S=1|Y^{*}=0)/\mathbb{P}(S=1|Y^{*}=1)\) denote the sampling ratio between cases and controls. This ratio measures the degree of sampling bias, with \(s=1\) indicating random sampling, and as \(s\) deviates further from 1, the level of selection bias increases. Let \(v^{*}=\mathbb{P}(Y^{*}=1)\) denote the observed disease prevalence, which may differ from \(v\) due to measurement error. Let \(g_{i}^{*}(x)=\mathbb{E}[Y^{*}|X=x,T=i,S=1]\) denote the expectation of \(Y^{*}\) conditional on \(X=x\), \(T=t\), and \(S=1\), which can be identified by the sample distribution. The following lemma explores the relationship between \(g_{i}^{*}(x)\) and \(g_{i}(x)\).
**Lemma 2.1**: _Under Assumptions 1-5, for \(i\) = \(0\) or \(1\), we have_
\[g_{i}^{*}(X)=\frac{((1-p_{10}-p_{01})g_{i}(X)+p_{01})s}{1+((1-p_{10}-p_{01})g_ {i}(X)+p_{01})(s-1)}, \tag{2}\]
_where_
\[s=\frac{\mathbb{P}(Y^{*}=1|S=1)/v^{*}}{\mathbb{P}(Y^{*}=0|S=1)/(1-v^{*})}, \tag{3}\]
\[v^{*}=(1-p_{10}-p_{01})v+p_{01}. \tag{4}\]
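For concreteness, the quantities in (2)-(4) are straightforward to evaluate numerically once \(v\), \(p_{01}\), and \(p_{10}\) are specified; a minimal sketch (with illustrative function names that are not part of our implementation) is given below.

```python
import numpy as np

def sample_quantities(v, p01, p10, case_fraction):
    """Observed prevalence v* from (4) and sampling ratio s from (3).

    case_fraction is P(Y* = 1 | S = 1), i.e. the fraction of observed cases.
    """
    v_star = (1.0 - p10 - p01) * v + p01
    s = (case_fraction / v_star) / ((1.0 - case_fraction) / (1.0 - v_star))
    return v_star, s

def g_star_from_g(g, p01, p10, s):
    """Map the population regression g_i(x) to the sample regression g_i*(x), eq. (2)."""
    q = (1.0 - p10 - p01) * np.asarray(g) + p01
    return q * s / (1.0 + q * (s - 1.0))
```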
Lemma 2.1 indicates that the sampling ratio \(s\) and observed disease prevalence \(v^{*}\) are determined by \(v\), \(p_{01}\) and \(p_{10}\). Also, there is a one-to-one function relationship between \(g_{i}^{*}(X)\) and \(g_{i}(X)\) given \(v\), \(p_{01}\) and \(p_{10}\), which demonstrates that \(g_{i}(X)\) is identifiable since \(g_{i}^{*}(X)\) is determined by the sample distribution. To ensure the identifiability of \(\tau\), one must also calculate the expectation of \(g_{i}(X)\).
**Lemma 2.2**: _Under Assumptions 1-5, for \(i\) = \(0\) or \(1\), we have_
\[\mathbb{E}[g_{i}(X)]=v^{*}u_{i1}+(1-v^{*})u_{i0}, \tag{5}\]
_where_
\[u_{ij}=\int g_{i}(x)f(x|Y^{*}=j,S=1)dx, \tag{6}\]
\(v^{*}\) _is given in (4) and \(f(\cdot\mid Y^{*},S)\) represents the conditional density of \(X\) given \(Y^{*}\) and \(S\)._
The proof of Lemma 2.2 is straightforward by applying the law of total probability and Assumption 4. Applying Lemmas 2.1-2.2, we can establish the identifiability of \(\tau\), as described in Theorem 2.1.
**Theorem 2.1**: _Under Assumptions 1-5, the average treatment effect \(\tau\) is identifiable:_
\[\tau=\mathbb{E}[g_{1}(X)]-\mathbb{E}[g_{0}(X)],\]
_where_
\[\mathbb{E}[g_{i}(X)] =\frac{v^{*}}{1-p_{10}-p_{01}}\int\left(\frac{g_{i}^{*}(x)}{s-g_{ i}^{*}(x)(s-1)}-p_{01}\right)f(x|Y^{*}=1,S=1)dx\] \[+\frac{1-v^{*}}{1-p_{10}-p_{01}}\int\left(\frac{g_{i}^{*}(x)}{s-g _{i}^{*}(x)(s-1)}-p_{01}\right)f(x|Y^{*}=0,S=1)dx,\ i=1,\ 0, \tag{7}\]
_where \(s\) and \(v^{*}\) are given in (3) and (4), respectively._
Since \(\mathbb{P}(Y^{*},X,T|S=1)\) can be approximated by its sample version, we can consistently estimate ATE \(\tau\) if \(p_{01},p_{10},v\) are given. We provide methods for estimating \(\tau\) in the next section.
## 3 Estimation of ATE \(\tau\)
In this section, we derive the estimation bias of the naive method that ignores the outcome-dependent sampling and the outcome mismeasurement, and then propose two debiasing methods. Both methods depend on an adjusted link function associated with the sampling ratio \(s\) and the mismeasurement probabilities \(p_{01}\) and \(p_{10}\).
According to Theorem 2.1, to estimate \(\tau\), we can first estimate \(g_{i}\) then estimate \(\mathbb{E}[g_{i}(X)]\)
for \(i=0,\ 1\). To begin with, we model \(g_{i}\) with a logistic link:
\[g_{i}(x)=\frac{\exp(\eta(T=i,X=x))}{1+\exp(\eta(T=i,X=x))},\]
where the index \(\eta(t,x)\) is a function of \(t\) and \(x.\) Applying Lemma 2.1, we obtain that
\[g_{i}^{*}(x)=h(\eta(T=i,X=x)),\]
where
\[h(\eta)=\frac{\left(p_{01}+\exp(\eta)(1-p_{10})\right)s}{1+p_{01}(s-1)+\exp( \eta)\left(1+(1-p_{10})(s-1)\right)}. \tag{8}\]
serves as an adjusted link function. The log-likelihood function of the observed samples is
\[\ell_{n}\left(y^{*},\eta(t,\mathbf{x})\right)=\sum_{i=1}^{n}\left\{y_{i}^{*}\log( \frac{\mu_{i}}{1-\mu_{i}})+\log(1-\mu_{i})\right\}, \tag{9}\]
where \(\mu_{i}=h(\eta(t_{i},\mathbf{x}_{i}))\) for the \(i\)-th sample and \(n\) is the sample size.
**Remark 1:** The adjusted link function \(h(\eta)\) is a monotone increasing function of the index \(\eta.\) The shape of its curve is highly influenced by the values of \(p_{01},\)\(p_{10},\) and \(s.\) For example, the function has a supremum of \(\frac{(1-p_{10})s}{1+(1-p_{10})(s-1)}\) and an infimum of \(\frac{p_{01}s}{1+p_{01}(s-1)}.\) Figure 4 in Appendix B of Supplementary Materials illustrates the curves of \(h(\eta)\) for various combinations of \(p_{01}\), \(p_{10}\), and \(s\). It is worth noting that when \(p_{01}=0,\)\(p_{10}=0,\) and \(s=1,\)\(h(\eta)\) degenerates to the logistic link function, corresponding to the situation of random sampling and no measurement error.
**Remark 2:** As shown in Figure 4, when \(p_{01}>0,\)\(p_{10}>0,\) and \(s>1,\) the shape of the curves becomes deflated and shifted. In particular, when \(s\) is large and \(p_{01}\) is greater than \(0\), the infimum of \(h(\eta)\) approaches \(1.\) This leads to a significantly reduced range of the adjusted link function, which in turn makes model estimation challenging. To avoid an excessively small range of \(h(\eta)\) when \(s\) is large, it is crucial to ensure that \(p_{01}\) does not deviate too far from \(0.\) Fortunately, in the real world, \(p_{01}\) is often very close to \(0\), which maintains a reasonable range for the adjusted link function.
**Remark 3:** Consider the ODS design with no measurement error, that is \(p_{01}=0,\)\(p_{10}=0\) and \(s>1.\) If we further assume that \(\eta\) has a linear form \(\eta(t,\mathbf{x})=a_{0}+a_{1}T+\mathbf{b^{\mathsf{T}}}\mathbf{x},\) then the
logistic regression produces consistent estimates of the slope parameters \(a_{1}\) and \(\boldsymbol{b}\). However, when it comes to the ODS design with measurement error, that is \(p_{01}>0\), \(p_{10}>0\) and \(s>1\), all the logistic regression estimators will be biased. Details can be seen in Czado and Santner (1992).
### A generalized linear model based estimator
In this subsection, we assume that the true model for \(\eta(t,\boldsymbol{x})\) has a linear form: \(\eta(t,\boldsymbol{x})=a_{0}+a_{1}T+\boldsymbol{b}^{\mathsf{T}}\boldsymbol{x}\). Denote \(\boldsymbol{\beta}^{\mathsf{T}}=(a_{0},a_{1},\boldsymbol{b}^{\mathsf{T}})\). The estimation of \(\boldsymbol{\beta}\) can be performed by maximizing the log-likelihood function of GLM with the adjusted link function \(h(\cdot)\). The estimation of \(\tau\) can then be obtained by following the subsequent steps.
**Step 0**: _Determine the values of \(p_{10},p_{01},v\)._
**Step 1**: _Estimate sampling ratio \(s\) and the adjusted link function \(h\)._ Motivated by (3), we obtain an estimator of \(s\) by solving the estimating equation.
\[S_{n}(s):=\frac{1}{n}\sum_{i=1}^{n}\{sv^{*}(1-y_{i}^{*})-(1-v^{*})y_{i}^{*}\}=0. \tag{10}\]
Denote the resulting estimator by \(\hat{s}=\frac{\sum_{i=1}^{n}y_{i}^{*}(1-v^{*})}{\sum_{i=1}^{n}(1-y_{i}^{*})v^{ *}}\). Plugging \(\hat{s}\) in (8), we obtain an estimator of the link function \(h(\cdot)\), denoted as \(\hat{h}(\cdot)\).
**Step 2**: _Estimate \(\boldsymbol{\beta}\)._ We can estimate \(\boldsymbol{\beta}\) by solving the following score equations:
\[G_{n}(\boldsymbol{\beta}):=\frac{1}{n}\frac{\partial\ell_{n}\left(\boldsymbol {\beta}\right)}{\partial\boldsymbol{\beta}}=0, \tag{11}\]
where \(\ell_{n}(\boldsymbol{\beta})\) is defined in (9), with \(h(\cdot)\) being replaced by \(\hat{h}(\cdot)\). Denote the resulting estimator by \(\hat{\boldsymbol{\beta}}^{\mathsf{T}}=(\hat{a}_{0},\hat{a}_{1},\hat{ \boldsymbol{b}}^{\mathsf{T}})\). Denote \(\hat{g}_{1}(x)=\text{expit}(\hat{a}_{0}+\hat{a}_{1}+\hat{\boldsymbol{b}}^{ \mathsf{T}}x)\) and \(\hat{g}_{0}(x)=\text{expit}(\hat{a}_{0}+\hat{\boldsymbol{b}}^{\mathsf{T}}x)\).
**Step 3**: _Estimate \(\boldsymbol{u}=(u_{11},u_{10},u_{01},u_{00})^{\mathsf{T}}\)._ Inspired by (6), we can get the estimators of \(u_{ij}\) by
\[\hat{u}_{11}=\frac{1}{\sum_{i=1}^{n}y_{i}^{*}}\sum_{i=1}^{n}y_{i}^{*}\hat{g}_{ 1}(x_{i}),\ \ \hat{u}_{10}=\frac{1}{\sum_{i=1}^{n}(1-y_{i}^{*})}\sum_{i=1}^{n}(1-y_{i}^{*}) \hat{g}_{1}(x_{i}),\]
\[\hat{u}_{01}=\frac{1}{\sum_{i=1}^{n}y_{i}^{*}}\sum_{i=1}^{n}y_{i}^{*}\hat{g}_{ 0}(x_{i}),\ \ \hat{u}_{00}=\frac{1}{\sum_{i=1}^{n}(1-y_{i}^{*})}\sum_{i=1}^{n}(1-y_{i}^{*}) \hat{g}_{0}(x_{i}).\]
**Step 4**: _Estimate the average treatment effect \(\tau\)._ According to (5), we can estimate \(\tau\) by
\[\hat{\tau}=\left[v^{*}\hat{u}_{11}+(1-v^{*})\hat{u}_{10}\right]-\left[v^{*}\hat{ u}_{01}+(1-v^{*})\hat{u}_{00}\right],\]
where \(v^{*}\) is calculated according to (4).
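For concreteness, a minimal numerical sketch of Steps 1-4 is given below. It is only illustrative: it maximizes the likelihood (9) with a general-purpose optimizer instead of solving the score equations (11) directly, and the function and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def h_link(eta, p01, p10, s):
    """Adjusted link (8): maps the index eta to E[Y* | X, T, S = 1]."""
    e = np.exp(np.clip(eta, -30.0, 30.0))
    return (p01 + e * (1.0 - p10)) * s / (1.0 + p01 * (s - 1.0) + e * (1.0 + (1.0 - p10) * (s - 1.0)))

def glm_ee_ate(y_star, t, X, v, p01, p10):
    """Sketch of Steps 1-4 with linear index a0 + a1*T + b'X and known v, p01, p10.

    y_star: 0/1 outcomes; t: 0/1 treatments; X: (n, D) covariate matrix.
    """
    y_star, t, X = np.asarray(y_star, float), np.asarray(t, float), np.asarray(X, float)
    n = len(y_star)
    v_star = (1.0 - p10 - p01) * v + p01                                     # eq. (4)
    s_hat = (y_star.sum() * (1.0 - v_star)) / ((n - y_star.sum()) * v_star)  # Step 1
    Z = np.column_stack([np.ones(n), t, X])

    def neg_loglik(beta):                                                    # Step 2: maximize (9)
        mu = np.clip(h_link(Z @ beta, p01, p10, s_hat), 1e-10, 1.0 - 1e-10)
        return -np.sum(y_star * np.log(mu) + (1.0 - y_star) * np.log(1.0 - mu))

    beta = minimize(neg_loglik, np.zeros(Z.shape[1]), method="BFGS").x
    a0, a1, b = beta[0], beta[1], beta[2:]
    g1, g0 = expit(a0 + a1 + X @ b), expit(a0 + X @ b)
    w1, w0 = y_star / y_star.sum(), (1.0 - y_star) / (n - y_star.sum())      # Step 3
    u11, u10, u01, u00 = w1 @ g1, w0 @ g1, w1 @ g0, w0 @ g0
    return (v_star * u11 + (1.0 - v_star) * u10) - (v_star * u01 + (1.0 - v_star) * u00)  # Step 4
```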
To establish the asymptotic distribution of the estimated parameters, we write Steps 1-3 in the form of estimation equations:
\[\left(\begin{array}{c}S_{n}(s)\\ G_{n}(s,\mathbf{\beta})\\ M_{n}(\mathbf{\beta},\mathbf{u})\end{array}\right)=:\frac{1}{n}\sum_{i}^{n}\mathbf{\psi} \left(y_{i}^{*},x_{i},t_{i},\mathbf{\theta}\right)=0, \tag{12}\]
where \(\mathbf{\theta}^{\mathsf{T}}=(s,\mathbf{\beta}^{\mathsf{T}},\mathbf{u}^{\mathsf{T}})\), \(\mathbf{u}^{\mathsf{T}}=(u_{11},u_{10},u_{01},u_{00})\) and
\[M_{n}(\mathbf{u},\mathbf{\beta}):=\frac{1}{n}\sum_{i=1}^{n}\left(\begin{array}{c}y_ {i}^{*}(u_{11}-g_{1}(x_{i},\mathbf{\beta}))\\ (1-y_{i}^{*})(u_{10}-g_{1}(x_{i},\mathbf{\beta}))\\ y_{i}^{*}(u_{01}-g_{0}(x_{i},\mathbf{\beta}))\\ (1-y_{i}^{*})(u_{00}-g_{0}(x_{i},\mathbf{\beta}))\end{array}\right).\]
Denoting the resulting estimator by \(\hat{\mathbf{\theta}}=\left(\hat{s},\,\hat{\mathbf{\beta}}^{\mathsf{T}},\hat{\mathbf{u}}^ {\mathsf{T}}\right)^{\mathsf{T}}\), we have the following theorem.
**Theorem 3.1**: _Let \(\mathbf{\theta}_{0}\) denote the true parameter values for \(\mathbf{\theta}\). If the true index function \(\eta\) has the linear form \(\eta(t,\mathbf{x})=a_{0}+a_{1}t+\mathbf{b}^{\mathsf{T}}\mathbf{x}\), then under some regularity conditions given in Appendix A of Supplementary Materials, we have that \(\hat{\mathbf{\theta}}\) is consistent for \(\mathbf{\theta}_{0}\), and_
\[\sqrt{n}(\hat{\mathbf{\theta}}-\mathbf{\theta}_{0})\stackrel{{ d}}{{ \rightarrow}}N(0,\mathbf{V}(\mathbf{\theta}_{0})),\]
_where the limiting variance of \(\sqrt{n}\hat{\mathbf{\theta}}\) can be written as the sandwich matrix form: \(\mathbf{V}(\mathbf{\theta}_{0})=\mathbf{H}(\mathbf{\theta}_{0})^{-1}\mathbf{B}\big{\{} \mathbf{H}(\mathbf{\theta}_{0})^{-1}\big{\}}^{\mathsf{T}}\) with \(\mathbf{B}(\mathbf{\theta}_{0})=\mathbb{E}\left\{\mathbf{\psi}(Y^{*},X,T,\mathbf{\theta}_ {0})\mathbf{\psi}(Y^{*},X,T,\mathbf{\theta}_{0})^{\mathsf{T}}\big{|}\,S=1\right\}\) and \(\mathbf{H}(\mathbf{\theta}_{0})=\mathbb{E}\{\partial\mathbf{\psi}(Y^{*},X,T,\mathbf{ \theta}_{0})/\partial\mathbf{\theta}_{0}^{\mathsf{T}}|S=1\}\)._
Utilizing Theorem 3.1, we can estimate \(\tau\) by \(\hat{\tau}_{\mathrm{GLM}}=\mathbf{q}^{\mathsf{T}}\hat{\mathbf{\theta}}\), where \(\mathbf{q}^{\mathsf{T}}=(\mathbf{0}^{\mathsf{T}},\mathbf{c}^{\mathsf{T}})\) and \(\mathbf{c}^{\mathsf{T}}=(v^{*},1-v^{*},-v^{*},-(1-v^{*}))\). Subsequently, applying the Slutsky Theorem, we obtain the asymptotic
normality of \(\hat{\tau}_{\rm GLM}\):
\[\sqrt{n}(\hat{\tau}_{\rm GLM}-\tau)\stackrel{{ d}}{{\to}}N(0, \boldsymbol{q}^{\sf T}\mathbf{V}(\boldsymbol{\theta}_{0})\boldsymbol{q}).\]
The covariance matrix \(\mathbf{V}(\boldsymbol{\theta}_{0})\) can be estimated by \(\hat{\mathbf{V}}(\boldsymbol{\theta}_{0})=\hat{\mathbf{H}}^{-1}\hat{\mathbf{ B}}\big{\{}\hat{\mathbf{H}}^{-1}\big{\}}^{\sf T}\), where \(\hat{\mathbf{H}}=-\frac{1}{n}\sum_{i=1}^{n}\boldsymbol{\psi}^{\prime}(y_{i}^{* },x_{i},t_{i},\hat{\boldsymbol{\theta}})\) and \(\hat{\mathbf{B}}=\frac{1}{n}\sum_{i=1}^{n}\boldsymbol{\psi}(y_{i}^{*},x_{i},t _{i},\hat{\boldsymbol{\theta}})\boldsymbol{\psi}(y_{i}^{*},x_{i},t_{i},\hat{ \boldsymbol{\theta}})^{\sf T}\). This allows us to make statistical inference regarding \(\tau\).
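As a sketch of how this inference can be carried out numerically, the snippet below assembles the sandwich estimate and a Wald interval for \(\hat{\tau}\); it assumes the user supplies the per-observation estimating-function values and the averaged Jacobian (for example via numerical differentiation), and all names are illustrative.

```python
import numpy as np

def sandwich_inference(psi_vals, psi_jac, theta_hat, q, z=1.96):
    """psi_vals: (n, d) values of psi at theta_hat; psi_jac: (d, d) average Jacobian."""
    n = psi_vals.shape[0]
    B_hat = psi_vals.T @ psi_vals / n            # "meat" matrix B-hat
    H_inv = np.linalg.inv(psi_jac)               # inverse "bread" matrix H-hat^{-1}
    V_hat = H_inv @ B_hat @ H_inv.T              # covariance of sqrt(n) * theta-hat

    tau_hat = q @ theta_hat
    se_tau = np.sqrt(q @ V_hat @ q / n)          # delta-method standard error of tau-hat
    return tau_hat, (tau_hat - z * se_tau, tau_hat + z * se_tau)
```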
### A generalized additive model based estimator
In real-world studies, the linearity assumption of \(\eta(t,\boldsymbol{x})\) is often violated. To enhance the robustness of our method, we employ the generalized additive model (Hastie and Tibshirani, 1987; Marx and Eilers, 1998) to capture the nonlinear characteristics of \(\eta(t,\boldsymbol{x})\). This approach enables us to develop an improved estimator that is resilient to model misspecification.
To begin with, we denote by \(\tilde{\eta}(t_{i},\boldsymbol{x}_{i})\) the true index of the \(i\)-th individual and \(\eta(t_{i},\boldsymbol{x}_{i})\) a working index. We assume \(\tilde{\eta}(t_{i},\boldsymbol{x}_{i})\) has the following additive form:
\[\tilde{\eta}(t_{i},\boldsymbol{x}_{i})=at_{i}+\sum_{j=1}^{D}\tilde{\eta}_{j}(x _{ij}),\ j=1,...,D, \tag{13}\]
where \(x_{ij}\) is the \(j\)-th covariate of \(i\)-th sample. We also assume that \(\mathbb{E}[\tilde{\eta}_{j}(X_{ij})]=0\) for \(j=1,\ldots,D\) to ensure the identifiability of \(\tilde{\eta}_{j}\). We approximate \(\tilde{\eta}_{j}(x_{ij})\) by the following B-spline model:
\[\eta_{j}(x_{ij})=\sum_{k=-p+1}^{K_{n}}B_{k}^{p}(x_{ij})b_{k,j},\ j=1,\ldots,D,\]
where \(B_{k}^{p}(x)\) denotes the \(p\)-th degree B-spline basis functions defined recursively (\(k=-p+1,\ldots,K_{n}\)) (De Boor, 1978). Here \(K_{n}\) is the number of knots, \(p\) is the degree of the B-spline, and the \(b_{k,j}\)'s are unknown parameters. To simplify notation, we write \(B_{k}^{p}(x)\) as \(B_{k}(x)\) unless we explicitly state the degree of the B-spline; our primary focus henceforth is on the \(p\)-th degree B-spline. We model the index of the \(i\)-th sample as
\[\eta(t_{i},\boldsymbol{x}_{i})=at_{i}+\sum_{j=1}^{D}\sum_{k=-p+1}^{K_{n}}B_{k }(x_{ij})b_{k,j}=at_{i}+\sum_{j=1}^{D}\boldsymbol{B}(x_{ij})^{\sf T} \boldsymbol{b}_{j}=\boldsymbol{Z}_{i}\boldsymbol{\beta},\]
where \(\boldsymbol{Z}_{i}=\left(\boldsymbol{B}(x_{i1})^{\sf T},\ldots,\boldsymbol{B}(x_{iD})^{\sf T},t_{i}\right)\), \(\boldsymbol{B}(x)=\left(B_{-p+1}(x),\ldots,B_{K_{n}}(x)\right)^{\sf T}\), \(\boldsymbol{\beta}^{\sf T}=(\boldsymbol{b}^{\sf T},a)\),
\(\mathbf{b}^{\mathsf{T}}=(\mathbf{b}_{1}^{\mathsf{T}},\ldots,\mathbf{b}_{D}^{\mathsf{T}})\), and \(\mathbf{b}_{j}=\{b_{k,j}\}_{k=-p+1}^{K_{n}},j=1,\ldots,D\). Note that the dimension of \(\mathbf{\beta}\) is \(D(K_{n}+p)+1\).
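For readers unfamiliar with the recursive definition, the following sketch evaluates the B-spline basis by the Cox-de Boor recursion; the equally spaced, extended knot vector used in the example is an illustrative choice, not necessarily the one used in our implementation.

```python
import numpy as np

def bspline_basis(x, knots, p):
    """All p-th degree B-spline basis functions at scalar x (Cox-de Boor recursion).

    Returns len(knots) - 1 - p values; with the construction below this equals K_n + p.
    """
    m = len(knots) - 1
    B = np.array([1.0 if knots[k] <= x < knots[k + 1] else 0.0 for k in range(m)])
    for d in range(1, p + 1):
        B_new = np.zeros(m - d)
        for k in range(m - d):
            left = 0.0 if knots[k + d] == knots[k] else \
                (x - knots[k]) / (knots[k + d] - knots[k]) * B[k]
            right = 0.0 if knots[k + d + 1] == knots[k + 1] else \
                (knots[k + d + 1] - x) / (knots[k + d + 1] - knots[k + 1]) * B[k + 1]
            B_new[k] = left + right
        B = B_new
    return B

# Example: knots equally spaced over [0, 1] with spacing 1/K_n, extended by p
# extra knots on each side of the data range; cubic splines (p = 3).
p, Kn = 3, 10
knots = np.linspace(-p / Kn, 1 + p / Kn, Kn + 2 * p + 1)
basis_row = bspline_basis(0.5, knots, p)   # length Kn + p = 13
```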
The observed log-likelihood function based on the above B-spline approximation shares the same form as (9), except that \(\eta\) is no longer the true index function \(\tilde{\eta}\). We then utilize the ridge-corrected penalized log-likelihood function proposed by Marx and Eilers (1998) and Yoshida and Naito (2014):
\[\ell_{\mathrm{rp},n}\left(\mathbf{\beta},\lambda_{n},\gamma_{n}\right)=\ell_{n} \left(\mathbf{\beta}\right)-\sum_{j=1}^{D}\frac{\lambda_{jn}}{2n}\mathbf{b}_{j}^{ \mathsf{T}}\Delta_{m}^{\mathsf{T}}\Delta_{m}\mathbf{b}_{j}-\frac{\gamma_{n}}{2n} \sum_{j=1}^{D}\mathbf{b}_{j}^{\mathsf{T}}\mathbf{b}_{j}, \tag{14}\]
where \(\lambda_{n}=\{\lambda_{jn}\}_{j=1}^{D}\) and \(\gamma_{n}\) are tuning parameters and \(\Delta_{m}\) is the \(m\)-th order difference matrix (Dierck, 1995). The spline coefficients are penalized: the first penalty term is the usual device in penalized spline estimation that prevents the fitted curve from becoming overly wiggly when the spline dimension \(D(K_{n}+p)\) is large, and the second penalty term ensures the nonsingularity of the Hessian matrix of \(\ell_{\mathrm{rp},n}\left(\mathbf{\beta},\lambda_{n},\gamma_{n}\right)\). The score functions are obtained as
\[G_{\mathrm{rp},n}\left(\mathbf{\beta},\lambda_{n},\gamma_{n}\right)=\frac{\partial \ell_{\mathrm{rp},n}\left(\mathbf{\beta},\lambda_{n},\gamma_{n}\right)}{\partial \mathbf{\beta}}. \tag{15}\]
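A minimal sketch of the penalty terms in (14) is given below; the difference matrix is built with `np.diff`, and the function returns the amount subtracted from the log-likelihood for one covariate's spline coefficients (names are illustrative).

```python
import numpy as np

def difference_matrix(K, m):
    """m-th order difference matrix Delta_m of shape (K - m, K)."""
    return np.diff(np.eye(K), n=m, axis=0)

def spline_penalty(b_j, lam_j, gamma, n, m=2):
    """Penalty subtracted from the log-likelihood in (14) for one covariate."""
    D = difference_matrix(len(b_j), m)
    rough = b_j @ D.T @ D @ b_j        # (Delta_m b_j)' (Delta_m b_j): roughness penalty
    ridge = b_j @ b_j                  # ridge correction keeping the Hessian nonsingular
    return lam_j / (2 * n) * rough + gamma / (2 * n) * ridge
```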
Similar to the operations in Section 3.1, we can obtain an estimator of \(\tau\) by applying Steps 0-4 with \(G_{n}\) in Step 2 being replaced by \(G_{\mathrm{rp},n}\). Also, we write Steps 1-3 in a form of estimating equations:
\[\left(\begin{array}{c}S_{n}(s)\\ G_{\mathrm{rp},n}(\mathbf{\beta},s,\lambda_{n},\gamma_{n})\\ M_{n}(\mathbf{u},\mathbf{\beta})\end{array}\right)=:\frac{1}{n}\sum_{i=1}^{n}\mathbf{\psi }\left(y_{i}^{*},x_{i},t_{i},\mathbf{\theta},\lambda_{n},\gamma_{n}\right)=0. \tag{16}\]
Denote the resulting estimator by \(\hat{\mathbf{\theta}}=\left(\hat{s},\,\hat{\mathbf{\beta}}^{\mathsf{T}},\hat{\mathbf{u}}^{\mathsf{T}}\right)^{\mathsf{T}}\). Note that, unlike in (12), the dimension of \(\mathbf{\psi}\) in (16) is not fixed but increases with the sample size \(n\). Similarly, the ATE \(\tau\) can be estimated by \(\hat{\tau}_{\mathrm{GAM}}=\mathbf{q}^{\mathsf{T}}\hat{\mathbf{\theta}}\).
Before stating asymptotic properties of \(\hat{\tau}_{\mathrm{GAM}}\), we introduce some notations. Let
\(\mathbf{\theta}_{0}=(s_{0},\mathbf{\beta}_{0}^{\mathsf{T}},\mathbf{u}_{0}^{\mathsf{T}})^{\mathsf{T}}\), where \(s_{0}\) is the solution of \(\mathbb{E}\left[S_{n}(s)|S=1\right]=0\), \(\mathbf{\beta}_{0}=\underset{\mathbf{\beta}}{\mathrm{argmin}}\,\mathbb{E}\left[\log\frac{f(Y,\mathbf{X},\tilde{\eta})}{f(Y,\mathbf{X},\mathbf{\beta})}\Big{|}S=1\right]\) is the best spline approximation of the true index function \(\tilde{\eta}(t,\mathbf{x})\) under the Kullback-Leibler measure, and \(\mathbf{u}_{0}\) is the solution of \(\mathbb{E}[M_{n}(\mathbf{u},\mathbf{\beta}_{0})|S=1]=0\). Let \(\tau\) denote the true value of the ATE. We have the following asymptotic results for \(\hat{\tau}_{\mathrm{GAM}}\).
**Theorem 3.2**: _If the true index function \(\tilde{\eta}\) obeys the additive form as (13), then under some regularity conditions in Appendix A of Supplementary Materials, we have that \(\hat{\tau}_{\mathrm{GAM}}\) is consistent for \(\tau\), and \(\hat{\tau}_{\mathrm{GAM}}-\tau\) is asymptotically normal with asymptotic mean \(\mathbf{Bias}(\hat{\tau}_{\mathrm{GAM}})\) (refer to (A.14) in Appendix A for an explicit expression), and asymptotic covariance \(\mathbf{V}(\hat{\tau}_{\mathrm{GAM}})=\frac{1}{n}\mathbf{q}^{\mathsf{T}}\tilde{ \mathbf{H}}(\lambda_{n})^{-1}\tilde{\mathbf{B}}\big{\{}\tilde{\mathbf{H}}( \lambda_{n})^{-1}\big{\}}^{\mathsf{T}}\mathbf{q}\), where_
\[\tilde{\mathbf{H}}(\lambda_{n})=\mathbb{E}\left\{\tilde{\mathbf{\psi}}^{\prime}(Y ^{*},X,T,\mathbf{\theta}_{0},\lambda_{n},\gamma_{n}=0)|S=1\right\},\]
\[\tilde{\mathbf{B}}=\mathbb{E}\left\{\tilde{\mathbf{\psi}}(Y^{*},X,T,\mathbf{\theta}_{ 0},\lambda_{n}=0,\gamma_{n}=0)\tilde{\mathbf{\psi}}(Y^{*},X,T,\mathbf{\theta}_{0}, \lambda_{n}=0,\gamma_{n}=0)^{\mathsf{T}}|S=1\right\}.\]
_Refer to (A.5) and (A.6) in Appendix A for explicit expressions of \(\tilde{\psi}\) and \(\tilde{\psi}^{\prime}\), respectively. Furthermore, \(\mathbf{Bias}(\hat{\tau}_{\mathrm{GAM}})=O(n^{-(p+1)/(2p+3)})\) and \(\mathbf{V}(\hat{\tau}_{\mathrm{GAM}})=O(n^{-2(p+1)/(2p+3)})\)._
**Remark 1:** Theorem 3.2 demonstrates that \(\hat{\tau}_{\mathrm{GAM}}\) is \(n^{-(p+1)/(2p+3)}\)-consistent and asymptotically normal, and that the asymptotic order of \(\hat{\tau}_{\mathrm{GAM}}\)'s mean squared error is \(O(n^{-2(p+1)/(2p+3)})\). These results coincide with those of Yoshida and Naito (2014).
**Remark 2:** If the true index follows a linear form, then \(\hat{\tau}_{\mathrm{GLM}}\) proposed in Section 3.1 is \(n^{-1/2}\)-consistent. However, \(\hat{\tau}_{\mathrm{GLM}}\) is generally sensitive to the linear assumption. On the other hand, \(\hat{\tau}_{\mathrm{GAM}}\) has a much wider applicability, subject to a lower efficiency in terms of convergence rate.
**Remark 3:** In large sample cases, \(\mathbf{V}(\hat{\tau}_{\mathrm{GAM}})\) can be consistently approximated by
\(\hat{\mathbf{V}}(\hat{\tau}_{\mathrm{GAM}})=\frac{1}{n}\mathbf{q}^{\mathsf{T}} \hat{\mathbf{\hat{H}}}^{-1}\hat{\mathbf{\hat{B}}}\big{\{}\hat{\mathbf{\hat{H}}} ^{-1}\big{\}}^{\mathsf{T}}\mathbf{q}\), with \(\hat{\mathbf{\hat{H}}}=-\frac{1}{n}\sum_{i=1}^{n}\mathbf{\psi}^{\prime}(y_{i}^{*},x_ {i},t_{i},\hat{\mathbf{\theta}},\lambda_{n},\gamma_{n}=0)\) and
\(\hat{\mathbf{\hat{B}}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{\psi}(y_{i}^{*},x_{i},t_{i},\hat{\mathbf{\theta}},\lambda_{n}=0,\gamma_{n}=0)\mathbf{\psi}(y_{i}^{*},x_{i},t_{i}, \hat{\mathbf{\theta}},\lambda_{n}=0,\gamma_{n}=0)^{\mathsf{T}}\). Statistical inference of \(\tau\) can be made according to the asymptotic normality of \(\hat{\tau}_{\mathrm{GAM}}\).
## 4 Simulation studies
In this section, we evaluate the finite-sample performance of our proposed GLM-EE and GAM-EE methods through simulations.
### Data generation
The data generation process consists of two steps: data pool creation and case-control sample selection. We start by creating a data pool representing the target population, where each patient's true disease status and diagnosis status are generated. We independently sample two continuous covariates, \(X_{1}\) and \(X_{2}\), from a standard normal and uniform distribution, respectively. Another discrete covariate, \(U\), is sampled from a Bernoulli distribution with \(\mathbb{P}(U=1)=0.5\). The treatment indicator \(T\) is sampled from a Bernoulli distribution with \(\mathbb{P}(T=1|X_{1},X_{2},U)=\text{expit}(1+0.1X_{1}-0.1X_{2}-0.5U)\). To demonstrate the utility of our methods in both linear and nonlinear settings, we consider the following outcome models:
\[\textbf{M1:}\quad\tilde{\eta}=a_{0}-2T-U-0.5X_{1}+X_{2};\]
\[\textbf{M2:}\quad\tilde{\eta}=a_{0}-2T-U-\sin(3\pi X_{1})+(3(X_{2}-0.5))^{3};\]
\[\textbf{M3:}\quad\tilde{\eta}=a_{0}-2T-U-\exp(2X_{1})-\sin(3\pi X_{2})X_{2};\]
\[\textbf{M4:}\quad\tilde{\eta}=a_{0}-2T-U-\exp(2X_{1})+(3(X_{2}-0.5))^{3}+X_{1} X_{2}.\]
**M1** is a typical linear model. **M2**-**M3** are non-linear but still follow the additive form in (13). **M4** violates both the linear and additive assumptions. In all the models, the intercept term \(a_{0}\) is set based on a predetermined disease prevalence \(v\). The true outcome \(Y\) is sampled from the Bernoulli distribution with success probability \(\text{expit}(\tilde{\eta})\). We then generate the observed outcome \(Y^{*}\) based on \(Y\) with misclassification probabilities \(p_{10}\) and \(p_{01}\). In this way we construct a data pool that simulates a population of size 1,000,000. We then randomly sample \(n/2\) cases and \(n/2\) controls from the data pool based on the observed outcomes \(Y^{*}\), keeping only \(Y^{*},T,U,X_{1},X_{2}\) in the subsequent analysis.
We fix \(p_{01}=0\) as it is usually very small in practice. We consider various combinations of
\(v\), \(p_{10}\), and \(n\). That is, \(v=0.001,\ 0.01,\ \text{and}\ 0.1\), \(p_{10}=0,\ 0.2,\ \text{and}\ 0.4\), and \(n=500\) and \(2000\). For each combination of \(v\) and outcome model, the true \(\tau\) is calculated through Monte Carlo integration. When applying the GAM-EE method, the number of knots for \(X_{1}\) and \(X_{2}\) is set to \(K_{n}=10\), the value of \(\lambda_{n}\) is selected based on the Bayesian information criterion as in Marx and Eilers (1998), and \(\gamma_{n}\) is set to 0.1. For each combination of \(v\), \(p_{10}\), \(p_{01}\), \(n\), and outcome model, we repeat the simulation 500 times.
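The data-generating mechanism for model **M1** can be sketched as follows; the Uniform(0, 1) support for \(X_{2}\) and the value of \(a_{0}\) are illustrative assumptions (in the simulations \(a_{0}\) is tuned to a target prevalence), and the misclassification convention matches (4): \(p_{10}\) is the false negative rate and \(p_{01}\) the false positive rate.

```python
import numpy as np

rng = np.random.default_rng(2023)
expit = lambda z: 1.0 / (1.0 + np.exp(-z))

def simulate_case_control(N_pool, n, a0, p10, p01):
    """Create a data pool under model M1 and draw n/2 cases and n/2 controls by Y*."""
    X1 = rng.standard_normal(N_pool)
    X2 = rng.uniform(0.0, 1.0, N_pool)
    U = rng.binomial(1, 0.5, N_pool)
    T = rng.binomial(1, expit(1 + 0.1 * X1 - 0.1 * X2 - 0.5 * U))
    eta = a0 - 2 * T - U - 0.5 * X1 + X2                  # model M1
    Y = rng.binomial(1, expit(eta))
    # Misclassification: P(Y*=0 | Y=1) = p10, P(Y*=1 | Y=0) = p01.
    Y_star = np.where(Y == 1,
                      1 - rng.binomial(1, p10, N_pool),
                      rng.binomial(1, p01, N_pool))
    # Outcome-dependent sampling: n/2 observed cases and n/2 observed controls.
    cases = rng.choice(np.flatnonzero(Y_star == 1), n // 2, replace=False)
    controls = rng.choice(np.flatnonzero(Y_star == 0), n // 2, replace=False)
    idx = np.concatenate([cases, controls])
    return {"Y_star": Y_star[idx], "T": T[idx], "U": U[idx],
            "X1": X1[idx], "X2": X2[idx]}

sample = simulate_case_control(N_pool=1_000_000, n=500, a0=-4.0, p10=0.2, p01=0.0)
```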
### Debiasing capacity of GLM-EE and GAM-EE
We first evaluate the debiasing capability of our proposed methods in model **M1**, where the true index has a linear form. Along with the standard GLM-EE and GAM-EE methods, we also consider three naive estimators based on the GLM-EE method and three naive estimators based on the GAM-EE method. These naive estimators are obtained by applying the GLM-EE and GAM-EE methods while intentionally ignoring the measurement-error and selection information (i.e., manually fixing \(p_{10}=0\) and \(s=1\) when applying the GLM-EE and GAM-EE methods). The EE methods ignoring measurement information, selection information, and both are denoted as "naive 3", "naive 2" and "naive 1", respectively. We also consider the IPTW method as a comparison.
Figure 2 depicts the box plots of the ATE estimators. The boxes of the standard GLM-EE and GAM-EE estimators cover the true \(\tau\) (represented by the red line), and the empirical means closely align with the true \(\tau\). Conversely, the naive and IPTW estimators exhibit obvious bias, as their boxes fail to cover the true \(\tau\). The biases are particularly large for IPTW, naive 1, and naive 2, and they are much bigger than that of naive 3. This observation suggests that the biases are primarily due to sampling bias rather than measurement error.
To further compare the performance of the standard GAM-EE and GLM-EE methods, we present the relative biases, root mean squared errors, and coverage probabilities of \(\hat{\tau}_{\text{GLM}}\) and \(\hat{\tau}_{\text{GAM}}\) in Table 1. Both \(\hat{\tau}_{\text{GLM}}\) and \(\hat{\tau}_{\text{GAM}}\) produce fairly small empirical biases and reasonable coverage probabilities close to the nominal level of 95%. Furthermore, \(\hat{\tau}_{\text{GAM}}\) has slightly bigger RMSEs than \(\hat{\tau}_{\text{GLM}}\), since GAM-EE requires estimating more parameters than GLM-EE.
### Robustness of GLM-EE and GAM-EE
To evaluate the robustness of our methods, we conduct a detailed comparison between the standard GLM-EE method (\(\hat{\tau}_{\text{GLM}}\)) and the standard GAM-EE method (\(\hat{\tau}_{\text{GAM}}\)) in different nonlinear model settings. Table 2 summarizes the results for models **M2** and **M3**, where the GLM-EE method suffers from model misspecification. In these settings, \(\hat{\tau}_{\text{GLM}}\) produces systematic biases that depend on the prevalence \(v\) (the lower the prevalence, the larger the bias). On the other hand, \(\hat{\tau}_{\text{GAM}}\) produces consistently smaller empirical biases and RMSEs than \(\hat{\tau}_{\text{GLM}}\), especially in model **M3**. The coverage probabilities of \(\hat{\tau}_{\text{GAM}}\) are also close to the nominal level of 95%. This demonstrates the high performance of the GAM-EE method in nonlinear but additive settings. Table 3 shows the results for model **M4**, where both the GLM-EE and GAM-EE methods suffer from model misspecification, but \(\hat{\tau}_{\text{GAM}}\) has smaller biases and RMSEs in general.
Overall, the GAM-EE method outperforms the GLM-EE method in nonlinear settings and loses little statistical efficiency compared with the GLM-EE method in linear settings. The above results support the theoretical results established in Section 3. For scenarios where \(p_{01}>0\), as discussed in Section 3, the estimate of \(\tau\) is not stable if \(v\) is rather small. We therefore increase the sample size to \(n=3000\) and only consider four combinations of \(p_{01}\) and \(v\) (i.e., \(p_{01}=0.03,\,0.06\), and \(v=0.05,\,0.1\)). Table 4 in Appendix B of Supplementary Materials summarizes the corresponding results, showing the same behaviors as the scenarios with \(p_{01}=0\).
## 5 Real data analysis
In this section, we apply the GAM-EE and GLM-EE methods to a real-world example. We aim to analyze the effect of alcohol intake on the risk of developing gout. We use data from the UK Biobank database, a large-scale prospective cohort study including 502,543 volunteer participants aged 37 to 73 years from the UK between 2007 and 2010. We collected information on the treatment (alcohol intake), the observed outcome (gout diagnosis status), and covariates including education level, ethnicity, diet score (summarized score of diet habits), BMI, physical exercise, TDI (Townsend deprivation index), age, and household income. After eliminating the missing data and limiting our sample to only males, we obtained a target population of 136,741 subjects (refer to Table 6 in Appendix B of Supplementary Materials for detailed information). Within this population, 3.85% of subjects are diagnosed with gout (\(v^{*}=3.85\%\)), but the true disease prevalence \(v\) is unknown. However, if we know the values of the false positive rate \(p_{01}\) and the false negative rate \(p_{10}\), we can calculate the true disease prevalence by \(v=(v^{*}-p_{01})/(1-p_{10}-p_{01})\) according to (4). We apply our proposed GLM-EE and GAM-EE methods to the dataset. When applying the GAM-EE method, the number of knots \(K_{n}\) is fixed at 5. The value of \(\lambda_{n}\) is selected based on the Bayesian information criterion from a candidate sequence ranging from 1 to 20, and \(\gamma_{n}\) is set to 0.1.
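A small illustration of the prevalence back-calculation in (4), over breakpoints chosen within the ranges used below, is:

```python
# v = (v* - p01) / (1 - p10 - p01), with v* = 3.85% observed in the target population.
v_star = 0.0385
for p10 in (0.10, 0.20, 0.30):        # assumed false negative rates
    for p01 in (0.00, 0.02):          # assumed false positive rates (kept small)
        v = (v_star - p01) / (1 - p10 - p01)
        print(f"p10={p10:.2f}, p01={p01:.2f} -> v={v:.4f}")
```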
First, we aim to extend the discussion in Section 4 with the purpose of evaluating the validity of our proposed EE methods, leveraging the full dataset. The corresponding results can be regarded as benchmarks. It is important to mention that while the full dataset does not have the biased sampling problem, it still suffers from measurement error problems. We draw a case-control subsample from the full dataset based on the diagnosed status, with equal case and control sizes of 2,500. The subsample suffers from both selection and mismeasurement biases. We then apply the GAM-EE and GLM-EE methods to the subsample. This process is repeated 500 times. Figure 5 in Appendix B of Supplementary Materials depicts the box plots of our estimators. The results given by the GAM-EE method are fairly close to the corresponding benchmarks, with differences ranging from \(1\times 10^{-7}\) to \(1\times 10^{-2}\). On the other hand, the results given by the GLM-EE method deviate from the corresponding benchmarks, with differences ranging from \(1\times 10^{-6}\) to \(2\times 10^{-2}\). The standard errors of the two methods are close to each other. These results indicate that the GAM-EE method is more robust than the GLM-EE method in this example.
Second, we demonstrate the practical utility of our methods in real-world research by conducting a sensitivity analysis. This time we only have a case-control subsample and the real disease status is unobserved. Based on a literature review (Vazquez-Mellado et al., 2012; Kiefer et al., 2016) and experts' experience, we determine possible ranges of the disease prevalence \(v\in(0.030,0.045)\), the false negative rate \(p_{10}\in(10\%,30\%)\), and the false positive rate \(p_{01}\in(0\%,6\%)\). We select several breakpoints within these ranges, apply our methods, and summarize the results in Figure 3. Evidently, within the possible ranges of \(v\), \(p_{10}\), and \(p_{01}\), the estimated \(\tau\) is significantly greater than 0 in terms of 95% confidence intervals. The median ATE ranges from 0.01 to 0.04, depending on the specification of \(v\), \(p_{01}\) and \(p_{10}\). Therefore, we conclude that alcohol intake has a significant positive ATE on the risk of developing gout.
## 6 Discussion
This paper presents novel methods for addressing an analytical challenge that arises when conducting causal inference in the context of outcome-dependent sampling (ODS) design with measurement error. In such scenarios, ordinary ATE estimators are susceptible to selection and mismeasurement biases. Our proposed GLM-EE and GAM-EE methods leverage
additional information from the target population on disease prevalence and mismeasurement rates to address the biases. The effectiveness of our methods for eliminating the influence of ODS and mismeasurement appears to be robust to the specification of outcome models in simulation studies. We also provide practical guidance for conducting sensitivity analysis in real-world ODS studies. We apply our methods to the UK Biobank dataset to estimate ATE of alcohol intake on gout. Our methods demonstrated promising performance in this application.
Our methods focus on estimating ATE, although they can readily be extended to estimate other causal effect measures of interest, such as the causal risk ratio and the causal odds ratio. Furthermore, although we consider a scenario with two treatment options, our methods can be generalized to multiple treatment arms.
As discussed in Section 3, the adjusted link function may be quite flat in cases where \(v\) is small and \(p_{01}>0\). This can lead to instability in solving the estimating equations, necessitating a large sample size to ensure convergence. However, the increase in sample size substantially increases the computation time, especially when the dimension of covariates or the number of knots is large. As a result, when \(p_{01}>0\), we only consider scenarios where the disease prevalence \(v\) is not small. This limitation may restrict the generalizability of our methods, and we leave this interesting topic as future work.
Overall, our proposed methods provide a valuable tool for addressing the analytical challenges associated with causal inference in the presence of ODS and measurement error. Our methods offer a practical and effective means of obtaining unbiased estimates of the ATE, even when the outcome model is not linear.
The work of HZ is partly supported by the National Natural Science Foundation of China (7209121, 12171451). This research has been conducted using the UK Biobank resource (application number 96744), subject to a data transfer agreement.
The data that support the findings in this paper can be obtained from the UK Biobank ([http://www.ukbiobank.ac.uk](http://www.ukbiobank.ac.uk)).
## Supplementary Materials
Appendix A (referenced in Section 2-3) and Appendix B (referenced in Sections 3-5) are available.
|
2309.11002 | PPD: A New Valet Parking Pedestrian Fisheye Dataset for Autonomous
Driving | Pedestrian detection under valet parking scenarios is fundamental for
autonomous driving. However, the presence of pedestrians can be manifested in a
variety of ways and postures under imperfect ambient conditions, which can
adversely affect detection performance. Furthermore, models trained on
publicdatasets that include pedestrians generally provide suboptimal outcomes
for these valet parking scenarios. In this paper, wepresent the Parking
Pedestrian Dataset (PPD), a large-scale fisheye dataset to support research
dealing with real-world pedestrians, especially with occlusions and diverse
postures. PPD consists of several distinctive types of pedestrians captured
with fisheye cameras. Additionally, we present a pedestrian detection baseline
on PPD dataset, and introduce two data augmentation techniques to improve the
baseline by enhancing the diversity of the original dataset. Extensive
experiments validate the effectiveness of our novel data augmentation
approaches over baselines and the dataset's exceptional generalizability. | Zizhang Wu, Xinyuan Chen, Fan Song, Yuanzhu Gan, Tianhao Xu, Jian Pu, Rui Tang | 2023-09-20T01:55:19Z | http://arxiv.org/abs/2309.11002v2 | # PPD: A New Valet Parking Pedestrian Fisheye Dataset for Autonomous Driving
###### Abstract
Pedestrian detection under valet parking scenarios is fundamental for autonomous driving. However, the presence of pedestrians can be manifested in a variety of ways and postures under imperfect ambient conditions, which can adversely affect detection performance. Furthermore, models trained on public datasets that include pedestrians generally provide suboptimal outcomes for these valet parking scenarios. In this paper, we present the Parking Pedestrian Dataset (PPD), a large-scale fisheye dataset to support research dealing with real-world pedestrians, especially with occlusions and diverse postures. PPD consists of several distinctive types of pedestrians captured with fisheye cameras. Additionally, we present a pedestrian detection baseline on PPD dataset, and introduce two data augmentation techniques to improve the baseline by enhancing the diversity of the original dataset. Extensive experiments validate the effectiveness of our novel data augmentation approaches over baselines and the dataset's exceptional generalizability.
Datasets, Pedestrian detection, Data augmentation, Valet parking
## I Introduction
To develop an advanced driver assistance system (ADAS) that is both effective and safe for parking lot scenarios [1, 2, 3, 4, 5], it is critical to ensure the safety of road users such as pedestrians. The detection range of conventional pinhole cameras is often insufficient to capture the variety of behaviors and postures displayed by pedestrians. As an alternative to pinhole cameras, fisheye cameras offer a wider field of view (FoV) [6], which is necessary for perceiving close-range and low-lying objects, particularly in traffic bottlenecks. Thus, fisheye cameras are becoming increasingly prominent in driverless vehicles as intelligent alternatives to traditional cameras.
Nevertheless, such pedestrian detection [7, 8, 9, 10] remains difficult due to irregular postures and imperfect ambient conditions. First, a wide range of pedestrian behaviors, such as occlusion, lying down, and walking, are rarely represented in publicly available datasets. Second, the radial distortion of the fisheye lens leads to substantial appearance distortion [11, 12], complicating pedestrian recognition. Additionally, the quality of the images is significantly affected by environmental factors such as light and opacity.
Current datasets and benchmarks for pedestrian detection, including Caltech USA [13], KITTI [14], CityPersons [15], and WiderPerson [16], have aided rapid progress on the pedestrian detection task. These datasets usually cover urban, rural, or highway driving scenes, and their pinhole cameras capture high-quality images with clear and distinguishable pedestrians. Moreover, Valeo released the extensive fisheye automotive dataset WoodScape [17]. However, public datasets place insufficient emphasis on pedestrians with irregular postures and on fisheye image formation. Models trained on public datasets show suboptimal performance in difficult parking scenes without a large number of training instances, as shown in Fig. 1 and Fig. 2.
To expand the coverage of real-world fisheye pedestrian images with various occlusions and postures in valet parking scenes, this paper offers a new large-scale fisheye dataset called the **P**arking **P**edestrian **D**ataset (**PPD**) to promote research on pedestrian problems, as shown in Figure 2 (b). Different from other pedestrian datasets [13, 14, 15, 16], our **PPD** dataset focuses on pedestrian detection and provides more than 330K fisheye images in valet parking scenes. To guarantee the pedestrians' diversity, we collect data from different parking lots, various periods, and diverse pedestrian situations. Additionally, we subdivide **PPD** into three sub-datasets: Occlusion Pedestrians (**OP**), Posture Pedestrians (**PP**), and Lying-down Pedestrians (**LP**). **OP** involves pedestrians occluded
Fig. 1: The cross-dataset testing on the **PPD** dataset. “Evaluation” refers to explicitly evaluating public datasets’ pre-trained models on the **PPD** dataset, with suboptimal results compared with the model trained from scratch. “Finetune” means finetuning these pre-trained models on the **PPD** dataset with little advancement.
by cars or parking lots' pillars. **PP** is concerned with pedestrians' diverse postures, including standing, stooping, sitting, etc. **LP** concentrates on lying-down pedestrians, the most perilous cases, which require immediate early warning.
To reduce annotation costs and further broaden the diversity of pedestrians, we propose two data augmentation techniques: **Occ-Data-Augmentation (ODA)** and **Pos-Data-Augmentation (PDA)** for pedestrians' occlusions and postures, respectively. Using **ODA** and **PDA**, high-quality synthetic images are generated through collection, localization, and local fusion procedures, complementing the commonly used hybrid augmentation methods [18, 19, 20]. Besides, we build pedestrian detection baselines on our **PPD** dataset, and extensive experiments validate the effectiveness of our novel data augmentation approaches. In addition, the cross-dataset evaluation reveals **PPD**'s exceptional capacity to generalize.
Our contributions are summarized as follows:
* We provide the first fisheye dataset comprising over 330K fisheye images, particularly for diverse occlusions and postures of pedestrians in parking lot scenes.
* We report the baseline results of pedestrian detection on the proposed **PPD** dataset and propose two novel data augmentation techniques to improve the baseline.
* Extensive experiments demonstrate the effectiveness of **ODA**, **PDA**, and **PPD**'s exceptional generalizability.
## II Related Work
In this section, we briefly introduce the related works to our topic, i.e., pedestrian detection dataset, pedestrian detection frameworks, and data augmentation methods.
### _Pedestrian Detection Datasets_
Pioneering pedestrian detection datasets include [13, 14, 15, 16, 21, 22, 23, 24, 25], which have contributed to great progress in pedestrian detection. Large-scale datasets such as Caltech USA [13] and KITTI [14] contain urban, rural, and highway scenes and provide annotated frame sequences from videos. However, both datasets have low pedestrian densities. More recently, researchers proposed vast and diversified datasets, WiderPerson [16] and CityPersons [15]. CityPersons [15] is a subset of the Cityscapes dataset, with an average pedestrian density about three times that of KITTI [14]. WiderPerson [16] contains a range of scenarios, including marathon, traffic, activity, dance, and miscellany, totaling approximately 400 thousand annotated instances. Moreover, Valeo proposed the extensive automotive dataset WoodScape [17] with fisheye cameras instead of pinhole cameras. However, there is no publicly available benchmark dataset for valet parking pedestrian scenarios, particularly those including varied occlusions and postures, where suboptimal detection of pedestrians poses a threat to driving safety.
### _Pedestrian Detection Frameworks_
CNN-based pedestrian detection methods can generally be categorized into one-stage [26, 27, 28] and two-stage [29, 30] methods. As end-to-end pipelines, one-stage methods achieve a favorable trade-off between performance and speed; examples include the SSD series [31, 32, 33, 34], the YOLO series [27, 28, 35, 36] and RetinaNet [26]. In contrast, two-stage methods, such as the RCNN series [29, 30, 37, 38], take advantage of predefined anchors to improve performance at the cost of speed. Furthermore, recent works [39, 40, 41, 42] fuse multi-scale feature maps to improve the detection of pedestrians at different scales. Moreover, other works [8, 43, 44, 45] focus on crowded pedestrian detection problems for practical applications. [43] designed a new bounding box regression loss specifically for better pedestrian localization. Liu et al. propose a non-maximum suppression method to refine the bounding boxes produced by detectors [44].
Fig. 2: Green boxes indicate ground truth and red boxes indicate predictions. (a) The suboptimal performance of the detector pre-trained on public datasets in parking lot scenes. (b) Our Parking Pedestrian Dataset (PPD) uses data augmentation methods and provides diverse large real-world pedestrian data with occlusion and different postures.
### _Data Augmentation Methods_
Data augmentation methods such as random cropping, color jittering and flipping play an important role in achieving state-of-the-art results [46, 47, 48]. These augmentations are generic in nature and mainly encode invariance to simple data transformations. In addition, hybrid image augmentation [18, 19, 20] can mix information across images, applying the corresponding label changes to increase diversity. Adaptations of mixup [20, 49, 50] are popular among hybrid image augmentation methods. CutMix [49] pastes a rectangular crop of one image instead of mixing all pixels, creating new composite images by combining rectangular regions of individual images together with their ground-truth boxes. Cut-Paste-and-Learn [50] extracts objects in various poses and then pastes them onto different backgrounds. Copy-paste [20] fuses information from different images in an object-aware manner by copying and pasting instances across images.
## III Parking Pedestrians Dataset
In this section, we introduce our **P**arking **P**edestrian **D**ataset (**PPD**) dataset in detail, including the data collection, annotation protocols, informative statistics, and dataset characteristics.
### _Data Collection_
To ensure the diversity of pedestrians, we collect data from 3 cities, 12 parking lots, two periods (morning and evening), and different pedestrians with various ages, heights, clothes, and postures. In total, we captured 100 videos lasting from 1 hour to 6 hours, with an average of 2 hours. We then convert the videos into images and select those containing pedestrian instances. To ensure high image quality, we restrict the visible range of the fisheye camera and further remove distorted and blurred pedestrian images. Also, we do not annotate every frame of a pedestrian's continuous movement, which would produce redundant annotations. Instead, we select the best-quality images and then apply our data augmentation methods (discussed later) to increase the data variance. Based on the images' content, we also divide them into three categories: occlusion pedestrians, posture pedestrians, and lying-down pedestrians.
Table I illustrates the statistics of the **PPD** dataset. In total, more than 330K images are organized into three sub-datasets: the Occlusion Pedestrians Dataset, the Posture Pedestrians Dataset and the Lying-down Pedestrians Dataset, with 111,521, 118,224 and 115,936 images, respectively. Each sub-dataset is further partitioned into training, validation and testing sets at a ratio of 5:3:2.
### _Image Annotation_
We annotate the dataset in the same way as the Caltech dataset [22], by drawing a tight bounding box around each pedestrian's complete body. However, occluded pedestrians are special, since foreground objects such as car bodies or parking lot pillars often lead to incomplete pedestrian instances. Therefore, we estimate the distance from the pedestrian instance to the car's fisheye camera, and then roughly calculate the size of the box according to the depth proportion, as shown below:
\[W_{o}=W_{p}\times(1-D_{o}/D_{max}), \tag{1}\]
\[H_{o}=H_{p}\times(1-D_{o}/D_{max}), \tag{2}\]
where \(H_{p}\) and \(W_{p}\) are the average human height and width, predefined as 1.7 meters and 0.3 meters, respectively. \(D_{o}\) is the depth from the occluded pedestrian instance to the camera. Since a parking space has a fixed size, we can estimate the depth approximately from the relative location between the pedestrian instance and the nearby parking space within the same image. \(D_{max}\) is the maximum depth of the fisheye camera. Finally, based on the depth ratio between \(D_{o}\) and \(D_{max}\), we can roughly infer the annotated width \(W_{o}\) and height \(H_{o}\), as shown in Equations (1) and (2).
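A direct transcription of Equations (1) and (2) is given below; the mapping from these metric sizes to pixel coordinates in the fisheye image is not shown and depends on the camera model.

```python
def occluded_box_size(D_o, D_max, H_p=1.7, W_p=0.3):
    """Rough annotation size for an occluded pedestrian, following Eqs. (1)-(2)."""
    scale = 1.0 - D_o / D_max          # depth-proportional shrinking factor
    return W_p * scale, H_p * scale    # (W_o, H_o)
```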
### _Sub-datasets Description_
The Occlusion Pedestrians Dataset provides three occlusion scenarios with different occlusion rates. As shown in Fig. 3, to better reflect reality, we collect pedestrians occluded by car parts and cube obstacles at 10 occlusion rates, ranging from not occluded to 99% in 10% increments per class.
The Posture Pedestrians Dataset contains four postures: standing, sitting, squatting and bending over. We strive to cover eight pedestrian orientations: front, rear, left, right, left front, right front, left rear, and right rear. In addition, we place lying-down pedestrians in a separate subset since they are the most dangerous cases. We make an effort to cover the same eight orientations as in the posture subset. The detailed distribution of posture and lying-down pedestrians is shown in Fig. 4, and a detailed explanation of **PPD**'s categories is reported in Table II.
Furthermore, to further broaden pedestrians' diversity, we apply two novel data augmentation techniques to occlusion and posture pedestrians. The data volume increases, and later experiments will indicate improvements in pedestrian detection performance.
### _Dataset Characteristics_
Our **PPD** dataset exhibits differences from public datasets in image representation style, scenarios, quantity and diversity. Below, we elaborate on the four main characteristics of our **PPD** dataset.
**Fisheye image representation.** The **PPD** dataset consists of fisheye images, different from the common pinhole images of public datasets. Fisheye images provide a larger field-of-view (FoV), which is more suitable for close-range and low-lying pedestrian detection.
**Specific parking scenarios.** The **PPD** dataset focuses on pedestrian detection in parking scenarios, which are also distinct from the natural scenes of public datasets. The environmental conditions in parking scenarios, such as light and opacity, significantly increase the detection difficulty. By covering a variety of difficult pedestrian scenarios, **PPD** can promote research on real-world pedestrian problems.
**Large quantity.** Our **PPD** dataset contains more than 330 thousand samples extracted from more than 200 hours of parking-scene video clips. We continue to collect diverse parking pedestrian scenarios, with the eventual goal of over one million samples.
**High quality and diversity.** Our **PPD** dataset covers 3 cities and 12 parking lots, captured at different periods and covering different pedestrian cases. Additionally, we carefully select high-resolution, high-quality images and apply data augmentation techniques to enlarge diversity.
## IV The Proposed Data Augmentations
Training detectors for special pedestrians usually requires a large quantity of data, which demands tremendous resources and effort to acquire and annotate. Considering these challenges, we provide two novel data augmentation techniques: **O**cc-**D**ata-**A**ugmentation (**ODA**) and **P**os-**D**ata-**A**ugmentation (**PDA**). Specifically, **ODA** focuses on occluded pedestrians, and **PDA** targets pedestrians with different postures. In this section, we describe those two data augmentation methods in detail.
### _Overall Pipeline_
We define our data augmentation process as \(f(*)\), so the overall process can be stated as follows:
\[I_{i}^{syn}=f(I_{i}^{bg},M_{j}),i=1,2,\cdots,N,j=1,2,\cdots,K \tag{3}\]
where \(I_{i}^{bg}\) is the background image, \(M_{j}\) indicates pedestrian masks and \(I_{i}^{syn}\) indicates the produced synthetic images.
As shown in Fig. 5, our augmentation pipeline contains three stages: (1) collecting pedestrian masks and background images; (2) determining where to paste pedestrian masks; and (3) fusing pedestrian masks with background images.
Fig. 4: The detailed distribution of posture pedestrian dataset and lying-down pedestrians dataset.
Fig. 3: Examples of different occlusion scales and occlusion types in the Occlusion Pedestrian Dataset.
### _Occ-Data Augmentation_
We present the **O**cc-**D**ata-**A**ugmentation (**ODA**) method for occluded pedestrians with three procedures. The detailed procedure can be found in Algorithm 1.
#### Iv-B1 Collecting pedestrian masks and background images
We observe that in the valet parking scenes, cars' front or rear parts mostly occlude pedestrians. To address such occlusion, our **ODA** requires the masks of pedestrians \(M_{j}^{p},j=1,2,\cdots,K\) as foreground and the background images \(I_{i}^{bg},i=1,2,\cdots,N\), containing the masks of cars' front parts \(M_{i}^{car},i=1,2,\cdots,N\).
For a background image, we label all available cars' front parts with the Labelme tool [51], as shown in Fig. 5 (a). Note that the pedestrian masks should be diverse and of high quality, which determines the realism of the synthesized images. Thus, we process them with morphological operations such as OPEN and ERODE. To seamlessly paste pedestrian masks into background images, we apply occlusion-aware scaling, resizing each mask to a more realistic scale.
#### Iv-B2 Localization of synthetic occlusion
Following Algorithm 1, we take one background image \(I_{i}^{bg}\) as input and randomly pick one car's front part \(\widetilde{M}_{t}^{car}\) as the pasting location. Then, we prepare the pedestrian mask \(M_{j}^{p}\), randomly selected from the mask list \(M^{p}\).
#### Iv-B3 Local fusion for occlusion
For more precise localization, we place the top-left point \(P_{off}\) of the pedestrian mask above the car's front part \(\widetilde{M}_{t}^{car}\) at a random but bounded offset, as shown in Fig. 5(d). Specifically, we calculate \(P_{off}\) according to:
\[P_{off}=\begin{bmatrix}x\\ y\end{bmatrix}=\begin{bmatrix}randint(x_{1},x_{2}-w_{p}^{\prime})\\ y_{1}-randint(0.2*h_{car},0.3*h_{car})\end{bmatrix} \tag{4}\]
where \(w_{p}^{\prime}\) is the width of the resized pedestrian mask and \((x_{1},y_{1})\), \((x_{2},y_{2})\) are the top-left and bottom-right points of the front car part's mask. \(h_{car}\) and \(w_{car}\) are the corresponding height and width of the front car part's mask, respectively. It is noteworthy that all coordinates are relative to the top-left vertex. Furthermore, we remove the intersection between the pedestrian mask and the front car part's mask, that is, the occlusion region \(I_{occ}=\widetilde{M}_{j}^{p}\cap\widetilde{M}_{t}^{car}\) (lines 12-13 of Algorithm 1). In this way, we obtain the pseudo occluded pedestrian's mask \(I_{avail}^{fg}\). Then, we paste the pedestrian's mask into the background image and update the label according to
\[I_{i}^{syn}=\alpha*I_{avail}^{fg}+(1-\alpha)*I_{i}^{bg} \tag{5}\]
where \(\alpha=1.0\) on foreground pixels and \(\alpha=0.0\) on background pixels. Finally, we generate the synthetic image \(I_{i}^{syn}\) with a pseudo label, as shown in the top row of Fig. 6.
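The core localization and fusion steps (Eqs. (4)-(5)) can be sketched as follows; this is an illustrative NumPy/OpenCV implementation, the car-front polygon is approximated by its bounding box, and the returned pseudo box simply covers the pasted region.

```python
import numpy as np
import cv2

def oda_paste(bg_img, ped_rgba, car_box, rng):
    """Paste a pedestrian cut-out above a labelled car-front region and keep only its visible part."""
    x1, y1, x2, y2 = car_box
    h_car = y2 - y1
    h_p, w_p = ped_rgba.shape[:2]
    # Occlusion-aware scaling: match the pedestrian height to the car-front height.
    w_new = max(1, int(round(w_p * h_car / h_p)))
    ped = cv2.resize(ped_rgba, (w_new, h_car))

    # Eq. (4): random top-left offset slightly above the car-front box.
    x_off = int(rng.integers(x1, max(x1 + 1, x2 - w_new)))
    y_off = y1 - int(rng.integers(int(0.2 * h_car), int(0.3 * h_car) + 1))

    out = bg_img.copy()
    H, W = out.shape[:2]
    for dy in range(h_car):
        for dx in range(w_new):
            y, x = y_off + dy, x_off + dx
            if not (0 <= y < H and 0 <= x < W) or ped[dy, dx, 3] == 0:
                continue
            if y1 <= y < y2 and x1 <= x < x2:
                continue                      # occluded pixel: keep the background (alpha = 0)
            out[y, x] = ped[dy, dx, :3]       # visible pedestrian pixel (alpha = 1)
    return out, (x_off, y_off, x_off + w_new, y_off + h_car)   # pseudo label box
```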
Fig. 5: Overview of our data augmentation methods. Our methods include **O**cc-**D**ata-**A**ugmentation (**ODA**) and **P**os-**D**ata-**A**ugmentation (**PDA**). **ODA** and **PDA** have the same pipeline: (1) collecting pedestrian masks and background images; (2) determining where to paste pedestrian masks; (3) fusing pedestrian masks with background images.
### _Pos-Data-Augmentation_
```
Require: pedestrian masks \(M_{j}^{p}, j=1,2,\cdots,K\); background images \(I_{i}^{bg}, i=1,2,\cdots,N\); front car parts' masks \(M_{i}^{car}, i=1,2,\cdots,N\)
Ensure: \(I_{i}^{syn}, i=1,2,\cdots,N\)
 1: Fix random seed;
 2: Shuffle \(I^{bg}\) and \(M^{car}\);
 3: \(i \gets 1\);
 4: while \(i \leq N\) do
 5:   \(\widehat{M}_{t}^{car}, t=1,2,\cdots,T \leftarrow\) randomly select some of the labels from \(M_{i}^{car}\);
 6:   for \(\widehat{M}_{t}^{car}\) in \(\widehat{M}^{car}\) do
 7:     \(x_{1},y_{1},x_{2},y_{2} \leftarrow \widehat{M}_{t}^{car}\)'s shape;
 8:     \(w_{car},h_{car} = x_{2}-x_{1},\ y_{2}-y_{1}\);
 9:     \(M_{j}^{p}\in\mathbb{R}^{w_{p}\times h_{p}} \leftarrow\) randomly select from \(M^{p}\);
10:     Resize \(M_{j}^{p}\) to \(\widehat{M}_{j}^{p}\) with \(w_{p}^{\prime}=w_{p}\frac{h_{car}}{h_{p}}\), \(h_{p}^{\prime}=h_{car}\);
11:     \(P_{off}=(randint(x_{1},x_{2}-w_{p}^{\prime}),\ y_{1}-randint(0.2*h_{car},0.3*h_{car}))\);
12:     \(I_{occ}=\widehat{M}_{j}^{p}\cap\widehat{M}_{t}^{car}\) at \(P_{off}\);
13:     \(I_{avail}^{fg}=\widehat{M}_{j}^{p}-I_{occ}\);
14:     \(I_{i}^{syn}=\alpha*I_{avail}^{fg}+(1-\alpha)*I_{i}^{bg}\);
15:     Create pseudo label of \(I_{i}^{syn}\);
16:   end for
17:   \(i \gets i+1\);
18: end while
19: return \(I^{syn}\).
```
**Algorithm 1** Occ-Data-Augmentation
We illustrate our **Pos-Data-Augmentation (PDA)** method in Algorithm 2, which also contains three steps.
#### Iv-C1 Collecting pedestrian masks and background images
Different from **ODA**, **PDA** requires only one source pedestrian image \(I_{s}\) and background images \(I_{i}^{bg},i=1,2,\cdots,N\). The source image should contain a pedestrian with a complete body structure on a black background, prepared with the assistance of the Labelme tool [51]. Moreover, the Liquid Warping GAN (AttLWB) [52] can create different human postures from a reference video by warping the visible textures of source images to the desired poses. With the help of AttLWB [52], we obtain a series of synthetic pedestrian masks \(M_{j}^{p},j=1,2,\cdots,K\) with different pedestrian postures, as shown in Fig. 5(c). Furthermore, we also apply the morphological operations OPEN and ERODE to the synthetic pedestrian masks to remove the noise near the mask contours.
#### Iv-C2 Localization of synthetic posture pedestrians.
Since posture pedestrians must lie within the freespace region of parking scenes, we first detect the freespace region. We train a simple semantic segmentation model for freespace detection and randomly pick one location within the model's freespace prediction. This approach requires additional time and computational resources, but we adopt it to ensure the quality of the pseudo labels.
#### Iv-C3 Local fusion of pedestrian masks
Finally, we resize the masks at a limited scale and paste the pedestrian masks into the background at the selected freespace location:
\[\hat{I}_{i}^{syn}=\alpha*M_{j}^{syn}+(1-\alpha)*\hat{I}_{i}^{bg}. \tag{6}\]
Then, we obtain synthetic posture pedestrians with pseudo labels, as shown in Fig. 5 (e). The bottom row of Fig. 6 illustrates the **PDA**'s examples.
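A corresponding sketch of the freespace-constrained placement and the fusion in Eq. (6) follows; the freespace mask is assumed to come from any semantic segmentation model, and anchoring the paste at a sampled freespace pixel is a simplification of our actual placement rule.

```python
import numpy as np

def pda_place(bg_img, ped_rgba, freespace_mask, rng, max_tries=50):
    """Paste a posture-pedestrian mask at a random location inside the freespace region."""
    H, W = freespace_mask.shape
    h_p, w_p = ped_rgba.shape[:2]
    ys, xs = np.nonzero(freespace_mask)
    if len(ys) == 0:
        return None
    for _ in range(max_tries):
        k = int(rng.integers(len(ys)))
        y0, x0 = int(ys[k]), int(xs[k])
        if y0 + h_p > H or x0 + w_p > W:
            continue                          # pedestrian would not fit here; resample
        out = bg_img.copy()
        region = out[y0:y0 + h_p, x0:x0 + w_p]
        alpha = ped_rgba[..., 3:4] > 0
        out[y0:y0 + h_p, x0:x0 + w_p] = np.where(alpha, ped_rgba[..., :3], region)
        return out, (x0, y0, x0 + w_p, y0 + h_p)   # pseudo label box
    return None
```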
## V Experiments
In this section, we report the baseline results and the results of two proposed data-augmentation techniques on our **PPD** dataset. Then, we discuss **PPD**'s generalization across datasets. Furthermore, we analyze the effects of **ODA** and **PDA** by ablation studies and comparisons.
### _Experimental Settings_
#### V-A1 Implementation Details
We conduct experiments with the PyTorch framework on an Ubuntu system and employ eight NVIDIA RTX A6000 GPUs. The learning rate is set to 0.12, while the momentum and learning rate decay are set to 0.9 and 0.01, respectively. For training, we adopt the stochastic gradient descent (SGD) solver, 48 epochs, and a batch size of 16. For our data augmentation experiments, we mix 250,000 augmentation images with the original **PPD** dataset.
Fig. 6: Examples of augmented images. Top row: ODA results; Bottom row: PDA results.
#### Iv-A2 Evaluation Metrics
The detection task requires highly accurate localization to ensure pedestrians' safety. Therefore, we adopt a high IoU threshold of 0.75 for the object detection metrics: Average Precision (AP) and Average Recall (AR). The high threshold imposes a stricter test that favors more robust models for the pedestrian detection task.
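For reference, the IoU criterion used for matching predictions to ground truth at the 0.75 threshold can be computed as follows (a generic helper, not tied to any particular detector).

```python
def iou_xyxy(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def matches_at_075(pred_box, gt_box):
    return iou_xyxy(pred_box, gt_box) >= 0.75
```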
#### Iv-A3 Baseline methods
Our baseline detectors include CenterNet [53] with a DLA34 [54] backbone, and YOLOF [35], Faster R-CNN [37], Cascade R-CNN [55] and RetinaNet [26] with a ResNet-50 [56] backbone. All baselines are widely used object detectors from recent years. To ensure comparability, all baselines use the same experimental settings as in their original releases.
#### Iv-A4 Datasets
We also choose several public datasets for cross-dataset evaluation: COCO [21], KITTI [14], CityPersons [15], and WiderPerson [16], where the last two datasets target the "Person" category.
### _Results and Analysis_
#### Iv-B1 **Results on PPD and PPD w/ DA**
To demonstrate the effectiveness of our data augmentation methods, we conduct baseline evaluations on the **PPD** dataset and on the mixed dataset with augmentation images, as shown in Table III. On the original **PPD** dataset, the two-stage Faster RCNN wins on AP75, and the one-stage CenterNet wins on AR75. As an anchor-free pipeline, CenterNet focuses on object center detection, which brings higher recall and possibly lower precision. All performance enhancements are approximately 2% to 4% when mixed with data-augmentation images. We attribute the improvement to our realistic synthetic images, which supply pedestrians with various occlusions and postures at a low cost.
Besides, we explore the sub-datasets' performance with data-augmentation images (w/ DA), as shown in Table IV. The additional data-augmentation images are effective on each sub-dataset individually.
#### Iv-B2 **Cross-dataset Evaluation**
First, we test how well models that perform well on commonly used datasets perform on our **PPD** dataset. We train CenterNet models on the public datasets COCO, KITTI, CityPersons, and WiderPerson. Then, we run inference and evaluate them on the **PPD** dataset, as shown in Table V. We observe that models pre-trained on public datasets perform suboptimally, indicating their inadequacy for irregular pedestrian fisheye instances. Furthermore, we finetune these models on the **PPD** dataset, also as shown in Table V. In comparison to the **PPD** model trained from scratch, the cross-dataset pre-trained models do not bring much improvement, and in some cases (e.g., KITTI) even show a small performance drop. We conjecture that public datasets are insufficient to compensate for the absence of pedestrians with diverse occlusions and postures.
Moreover, we conduct generalization experiments from **PPD** to public datasets based on CenterNet [53], as shown in Table VI. After finetuning, the models pre-trained on our **PPD** dataset gain approximately 2% to 5% over the baselines trained on the public datasets, especially for AR75. **PPD**'s pedestrian cases cover the usual pedestrian scenes, which considerably increases recall and lifts the generalization ability.
#### Iv-B3 **Ablation Study**
We conduct ablation studies for the **O**cc-**D**ata-Augmentation (**ODA**) and **P**os-**D**ata-Augmentation (**PDA**) methods based on CenterNet [53], as shown in Table VII. Rows (b) and (c) show that training only with synthetic images performs poorly because of the data domain shift. From rows (d) and (e), our **ODA** and **PDA** clearly bring improvements, especially **ODA**, which contributes approximately 2% AP75 improvement. Both techniques are effective, and the combination performs best, as shown in row (f).
#### V-C4 **Comparison with Copy-paste**
Copy-paste [20] plays an important role in hybrid image augmentation. We compare our data-augmentation techniques with copy-paste based on CenterNet, as shown in Table VIII. Surprisingly, in row (b), the model's performance with copy-paste degrades by approximately 5%. We attribute this to the fact that copy-paste copies and pastes instances across images without any fusion processing, which easily leads to unrealistic images and false detections. In contrast, our well-organized adaptive pasting localization and fusion strategies bring an improvement of 3.2% in AP75 and 3.0% in AR75, as shown in row (c).
#### V-C5 **Discussion with amount of data-augmentation images**
Theoretically, we could produce an unlimited number of pseudo-labeled images. However, a large quantity of training data consumes large amounts of resources and time. To trade off efficiency and performance, we explore the optimal quantity of pseudo-labeled images based on the CenterNet [53] method and the **PPD** dataset, as shown in Table IX. Interestingly, rows (d), (e) and (f) show that the largest amount of training data does not yield the best result, possibly due to overfitting. Before overfitting, a larger image volume brings greater enhancement, as illustrated in rows (b), (c) and (d).
## VI Conclusion
In this paper, we have presented a new dataset, the Parking Pedestrian Dataset (**PPD**), as well as two novel data-augmentation techniques, Occ-Data-Augmentation (**ODA**) and Pos-Data-Augmentation (**PDA**). By providing a diversity of pedestrian postures, the proposed dataset aims to assist the industry in constructing more secure advanced driving assistance systems. Moreover, we provide two techniques for enhancing pedestrian detection performance using data augmentation. Extensive experiments on the proposed **PPD** validate the effectiveness of these techniques. However, **PPD** still has room for development, including strengthening the realism of the data augmentation, simplifying our methodologies, handling continuously growing data, and extending to diverse vision tasks. Nevertheless, we expect **PPD** to inspire more relevant research and promote the performance of pedestrian detection under parking scenes. In the future, the proposed **PPD** dataset's potential lies not only in pedestrian detection but can also be extended to other vision tasks, such as pixel-wise semantic segmentation, video object detection and 3D object detection.
|
2305.19953 | Multi-Dataset Co-Training with Sharpness-Aware Optimization for Audio
Anti-spoofing | Audio anti-spoofing for automatic speaker verification aims to safeguard
users' identities from spoofing attacks. Although state-of-the-art spoofing
countermeasure(CM) models perform well on specific datasets, they lack
generalization when evaluated with different datasets. To address this
limitation, previous studies have explored large pre-trained models, which
require significant resources and time. We aim to develop a compact but
well-generalizing CM model that can compete with large pre-trained models. Our
approach involves multi-dataset co-training and sharpness-aware minimization,
which has not been investigated in this domain. Extensive experiments reveal
that the proposed method yields competitive results across various datasets while
utilizing 4,000 times fewer parameters than the large pre-trained models. | Hye-jin Shim, Jee-weon Jung, Tomi Kinnunen | 2023-05-31T15:37:48Z | http://arxiv.org/abs/2305.19953v2 | # Multi-Dataset Co-Training with Sharpness-Aware Optimization
###### Abstract
Audio anti-spoofing for automatic speaker verification aims to safeguard users' identities from spoofing attacks. Although state-of-the-art spoofing countermeasure (CM) models perform well on specific datasets, they lack generalization when evaluated with different datasets. To address this limitation, previous studies have explored large pre-trained models, which require significant resources and time. We aim to develop a compact but well-generalizing CM model that can compete with large pre-trained models. Our approach involves multi-dataset co-training and sharpness-aware minimization, which has not been investigated in this domain. Extensive experiments reveal that the proposed method yields competitive results across various datasets while utilizing 4,000 times fewer parameters than the large pre-trained models.
Hye-jin Shim\({}^{1}\), Jee-weon Jung\({}^{2,\dagger}\), Tomi Kinnunen\({}^{1}\)
\({}^{1}\)University of Eastern Finland, Finland
\({}^{2}\)Carnegie Mellon University, USA
[email protected], [email protected], [email protected]
**Index Terms**: audio spoofing, spoofing detection, sharpness aware minimization, generalization, multi-dataset training
## 1 Introduction
Automatic speaker verification (ASV) systems [1], even state-of-the-art ones, have been reported to be easily deceived by _spoofing attacks_, including speech synthesis (text-to-speech, TTS) and voice conversion (VC). To keep ASV systems reliable, _audio anti-spoofing_ has emerged, which aims to distinguish whether an utterance comes from a real human (_bona fide_) or a spoofing attack (_spoofed_). To develop spoofing countermeasure (CM) models, various studies have been conducted focusing on features [2, 3, 4], model architectures [5, 6, 7, 8], and other techniques (e.g., loss functions and data augmentation) [9, 10, 11].
While most state-of-the-art CMs perform well on specific evaluation datasets, they do not generalize well when cross-evaluated on different datasets [12, 13, 14, 15, 16, 17, 18]. Several studies have explored improving generalization capability, and the literature can be divided into two strands. The first strand exploits techniques to develop a model with feature adjustment, gradient-based methods, and adversarial learning [12, 13, 15]. The other strand develops models using domain adaptation, continual learning, and self-supervised learning to leverage large-scale datasets [16, 17, 18, 19, 20, 21]. Even though the latter has demonstrated promising results by a large margin, it typically requires additional learning steps with a large, heavyweight model such as wav2vec 2.0 [22] or HuBERT [23] that typically contains billions of parameters.
Large-scale pre-trained models rest on the core premise that "_more data leads to better (generalization) performance_", which underlies deep learning as well as statistics: exposing the model to large amounts of data is expected to lead to better generalization. Following this idea, abundant studies have demonstrated the effectiveness of training a model on diverse datasets to enhance robustness and adaptability, namely unsupervised/semi-supervised pre-training and self-supervised learning. In contrast, training a model on multiple datasets at once is known to be a challenging and unsolved problem, because the differing characteristics of the datasets may interfere with the target task. Nevertheless, a few studies have been conducted in this direction [24, 25, 26, 27]. There is still potential for improvement, so further inspection and exploration remain open.
In this study, we aim to develop a compact and well-generalized CM model leveraging multi-dataset co-training. At an early stage of the present study, we conducted pilot experiments in which the model was trained at once on gradually enlarged datasets. The results, shown in Table 1, indicate that merely combining datasets that span different domains does not guarantee generalization. These observations motivated us to explore more elaborate ways to optimize the model when using multiple training datasets. To mitigate this problem, we explore ways to reduce the perturbation that can be caused by domain mismatch across different datasets. Recent gradient-based methods, especially sharpness-aware approaches [28, 29, 30], have been shown to avoid severe perturbation during training and to enhance generalization capability. Here, the term _sharpness_ refers to the curvature of neighborhoods in the loss surface.
To this end, we exploit two recently proposed optimization techniques: _sharpness-aware minimization_ (SAM) [28] and _adaptive sharpness-aware minimization_ (ASAM) [29]. SAM and ASAM are both designed to find flat minima by taking into account the sharpness of the loss surface in addition to the gradients. Hence, we hypothesize that combining sharpness-aware training -- an approach designed to avoid sharp loss minima -- with multi-dataset co-training (to handle diverse data) has the potential to improve the generalization of CMs.
Our study is the first attempt to optimize multi-dataset co-training in this domain; moreover, the practical effectiveness of sharpness-aware training in the CM task is presently unknown.
| Train Dataset(s) | EER (%) |
| --- | --- |
| ASVspoof 2015 | 38.83 |
| ASVspoof 2019 | 1.38 |
| ASVspoof 2019 + ASVspoof 2015 | 1.56 |
| ASVspoof 2019 + ASVspoof 2015 + WaveFake | 1.76 |

Table 1: Performance comparison of using a single dataset and multiple datasets in audio anti-spoofing. Evaluation is conducted with the ASVspoof 2019 LA dataset.
Providing initial answers to this question forms the main novelty of our work; we implement our proposed method using the state-of-the-art graph neural network-based AASIST [8] model and evaluate it on various datasets. Our comprehensive experimental validation reveals that both approaches are effective: the proposed model shows competitive results across various datasets using more than 4000 times fewer parameters than the large pre-trained models and achieves better generalization.
## 2 Multi-dataset training
### Related works
Several previous works have focused on enlarging the amount of training data, in line with the premise that "_more data leads to better (generalization) performance_" and with fitting the general distribution using large amounts of data. Both unsupervised learning and self-supervised learning align with this principle and aim to enhance model generalization with more data. However, the above-mentioned studies target conditions in which labeled data for the task at hand is hard to obtain. If labeled data is available, it is naturally helpful for training; nevertheless, multi-dataset co-training remains largely unexplored compared to methodologies for exploiting unlabeled data. A few studies [27, 31] have conducted multi-dataset co-training in other domains.
There exist a few preliminary works in audio spoofing utilizing multiple datasets [13, 32]. They concentrated on developing a single model that can detect diverse types of attacks, as audio spoofing attacks can be divided into two categories: logical access (LA) and physical access (PA). The former includes TTS and VC attacks, whereas the latter refers to replay attacks only. However, no research has explored multi-dataset training to deal with a single category of attack (either LA or PA). The potential effectiveness of training the model with multiple datasets simultaneously for the same task has yet to be explored in depth, and it is worthwhile to investigate further.
### Summary of datasets used in this study
We use three datasets concurrently to train a _single model_ in a _single phase_: ASVspoof 2015 [33], ASVspoof 2019 LA [34], and WaveFake [35]. The latest ASVspoof edition in 2021 additionally introduced the DeepFake (DF) scenario, which includes lossy codecs used for media storage. In this study, we _only_ deal with LA spoofing attacks for training a model. To address generalization to an unseen domain, we use the ASVspoof 2021 LA and DF tasks for evaluation. Note that ASVspoof 2015, ASVspoof 2019, and a part of ASVspoof 2021 are based upon the Voice Cloning Toolkit (VCTK) corpus [36]; however, they cover different attacks. On this basis, one might assume that the ASVspoof 2015 evaluation should in theory be easy when a CM is trained on the more diverse LA 2019 train set. However, it has been empirically confirmed that this is not the case [13, 18]. An overview of the selected datasets is shown in Table 2.
**ASVspoof 2015**[33] is the earliest and smallest database among the four existing ASVspoof editions. The evaluation set consists of five known and five unknown attacks composed of different TTS and VC systems. In this context, a _known_ attack is an attack that appears in both the train and test sets, while an _unknown_ attack refers to scenarios where the test set includes attacks that were not encountered during the training phase.
**ASVspoof 2019**[34] is a large-scale dataset that covers advanced technologies developed during the four years following ASVspoof 2015. It includes 6 and 13 types of spoofing attacks in the train and test sets, respectively. Between the train and test sets there are two known attacks, four partially known attacks, and seven unknown attacks. Here, _partially known_ denotes a scenario where some of the attacks are present in both the train and test sets, but others are not present in the train set.
**WaveFake**[35] is collected using six different state-of-the-art TTS methods, with the created samples designed to resemble the training distributions. All spoofed data has been generated using the latest, competitive VC and TTS models. Note that we utilize all of the spoofed speech in WaveFake, since no standardized test protocol exists. WaveFake contains spoofed utterances from two speakers, originating from the LJSPEECH [37] and JSUT [38] datasets, respectively.
**ASVspoof 2021**[39] is the latest and hardest edition of the ASVspoof challenge series. It only contains a test set; its LA scenario introduces encoding and transmission artifacts from real telephony systems, applied to the ASVspoof 2019 data. The DF scenario additionally consists of bona fide and spoofed data processed through various audio compressors, containing data from two additional datasets, VCC 2018 [40] and VCC 2020 [41].
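As a concrete illustration of training a single model in a single phase on these corpora, the sketch below simply pools the three training sets into one PyTorch loader. It is an illustrative stand-in rather than the actual data pipeline of this study: the corpora are replaced by random tensors, and the feature length and batch size are arbitrary placeholders.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-ins for the three training corpora (random tensors; real code would load audio).
def dummy_corpus(num_utts, feat_len=64600):
    x = torch.randn(num_utts, feat_len)              # placeholder waveform length
    y = (torch.rand(num_utts) < 0.5).long()          # 1 = bona fide, 0 = spoofed
    return TensorDataset(x, y)

asvspoof15_train = dummy_corpus(64)
asvspoof19_train = dummy_corpus(64)
wavefake_train = dummy_corpus(64)

# One pooled training set: a single model is trained in a single phase over all corpora.
pooled_train = ConcatDataset([asvspoof15_train, asvspoof19_train, wavefake_train])
train_loader = DataLoader(pooled_train, batch_size=24, shuffle=True, drop_last=True)
```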
## 3 Sharpness-aware optimizations
When working with multiple datasets simultaneously, a model may easily be distracted by domain information unrelated to the main task, despite having access to explicit class labels. Domain information can hinder convergence, even though the model may generalize better once the training loss has converged well. We thus seek methods that can prevent the model from being distracted by domain discrepancies between different datasets. In addition, this direction removes the need for separate pre-training and fine-tuning steps.
In particular, _sharpness-aware minimization_ (SAM) [28] has recently demonstrated state-of-the-art performance in various tasks [42, 43, 44, 45]. SAM aims to find a _flat_ region of the parameter space in which both the loss itself and the loss of its neighborhood are low, thereby seeking flat minima. It applies a worst-case perturbation to the model parameters on every training iteration and can be easily implemented on top of existing optimizers such as Adam [46]. Moreover, there are several follow-up studies related to sharpness. For instance, the authors of [29] proposed a scale-invariant, _adaptive_ variant of SAM (ASAM). It solves the scale-dependency problem by removing the effect of scaling, which helps build a solid correlation with the generalization gap. In the following subsections, we detail both SAM and ASAM.
| Dataset | # Spks | # Uts | # Conds |
| --- | --- | --- | --- |
| ASVspoof 2015 | 25 / 46 | 16375 / 193404 | 5 / 10 |
| ASVspoof 2019 LA | 20 / 48 | 25380 / 108978 | 6 / 13 |
| WaveFake | 2 | 117985 | 6 |
| ASVspoof 2021 LA | - / 48 | 181566 | - / 13 |
| ASVspoof 2021 DF | - / 48 | 611829 | - / 13 |

Table 2: Summary of data statistics used in this study. # Spks, # Uts, and # Conds refer to the number of speakers, utterances, and spoofing conditions, respectively. Division of the train and test sets is indicated by /.
### Sharpness-Aware Minimization
Given a labeled training set \(S=\{(x_{1},y_{1}),\ldots,(x_{n},y_{n})\}\) drawn i.i.d. from an unknown data distribution \(D\), the _training loss_ with model parameters \(\mathbf{w}\) and the _population loss_ are denoted by \(L_{S}(\mathbf{w})\) and \(L_{D}(\mathbf{w})=\mathbb{E}_{(x,y)\sim D}\left[l(\mathbf{w},x,y)\right]\), respectively. While the training loss is the empirical, sample-based estimator, the population loss is the corresponding theoretical quantity obtained when the actual joint distribution of \((x,y)\) is fully known. The population loss can thus be thought of as the empirical training loss applied to an infinitely large training set (\(n\rightarrow\infty\)).
Our goal is to select \(\mathbf{w}\) that has not only a low training loss \(L_{S}(\mathbf{w})\) but also a low population loss \(L_{D}(\mathbf{w})\), for improved generalization. To achieve this, SAM is designed to minimize the following PAC-Bayesian generalization upper bound, in which the sharpness term appears explicitly in the second line (the term in brackets indicates the sharpness of \(L_{S}\) at \(\mathbf{w}\)):
\[L_{D}(\mathbf{w})\leq\max_{\|\mathbf{\varepsilon}\|_{2}\leq \rho}L_{S}(\mathbf{w}+\mathbf{\varepsilon})+h(\|\mathbf{w}\|_{2}^{2}/\rho^{2})\] \[=[\max_{\|\mathbf{\varepsilon}\|_{2}\leq\rho}L_{S}(\mathbf{w}+ \mathbf{\varepsilon})-L_{S}(\mathbf{w})]+L_{S}(\mathbf{w})+h(\|\mathbf{w}\|_{ 2}^{2}/\rho^{2})\]
Here, \(h\) is a strictly increasing function (under conditions on \(L_{D}(\mathbf{w})\)) and \(\rho\) is a predefined constant controlling the radius of a neighborhood in the \(l^{p}\) ball, with \(p\in[1,\infty]\); [28] revealed that \(p=2\) is optimal. A detailed explanation of the PAC-Bayesian generalization bound is omitted due to limited space; refer to Appendix A.1 of [28] for full details.
Finally, for any \(\rho>0\) and \(\mathbf{\varepsilon}\approx 0\) (to avoid division by 0), the model loss is defined as:
\[\min_{\mathbf{w}}L_{S}^{\text{SAM}}(\mathbf{w})+\lambda\|\mathbf{ w}\|_{2}^{2},\] \[\text{where }L_{S}^{\text{SAM}}(\mathbf{w})\triangleq\max_{\| \mathbf{\varepsilon}\|_{p}\leq\rho}L_{S}(\mathbf{w}+\mathbf{\varepsilon})\]
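To make the two-step update concrete, the following is a minimal PyTorch-style sketch of one SAM iteration on top of a base optimizer such as Adam. It is an illustrative re-implementation of the published procedure rather than the code used in this study, and the default value of \(\rho\) is an arbitrary placeholder.

```python
import torch

def sam_step(model, loss_fn, inputs, targets, base_optimizer, rho=0.05):
    """One SAM iteration: ascend to a worst-case neighbour of w, then descend from there."""
    # 1) gradient of the training loss at the current weights w
    base_optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # 2) first-order worst-case perturbation: eps = rho * grad / ||grad||_2
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm(p=2) for p in params]), p=2)
    eps_list = []
    with torch.no_grad():
        for p in params:
            eps = rho * p.grad / (grad_norm + 1e-12)
            p.add_(eps)                      # move to w + eps
            eps_list.append(eps)

    # 3) gradient of the perturbed loss L_S(w + eps)
    base_optimizer.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # 4) undo the perturbation and update w with the sharpness-aware gradient
    with torch.no_grad():
        for p, eps in zip(params, eps_list):
            p.sub_(eps)
    base_optimizer.step()
    return loss.item()
```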
### Adaptive Sharpness-Aware Minimization
Even though vanilla SAM usually performs well, it is easily affected by parameter re-scaling, because the sharpness term in SAM is defined on a rigid region with a fixed radius. This may limit the generalization benefit of SAM.
To address this shortcoming, ASAM [29] uses adaptive sharpness, which removes the effect of scaling and adjusts the maximization region, leading to an improved training path. It utilizes a normalization operator \(T_{\mathbf{w}}^{-1}\) and achieves better generalization than SAM. Here, \(T_{\mathbf{w}}\) is a family of invertible linear operators satisfying \(T_{A\mathbf{w}}^{-1}A=T_{\mathbf{w}}^{-1}\) for any invertible scaling operator \(A\) that does not alter the loss function. With this normalization operator, the adaptive sharpness objective function is defined as:
\[L_{S}^{\text{ASAM}}(\mathbf{w})\triangleq\max_{\|T_{\mathbf{w}}^{-1}\mathbf{ \varepsilon}\|_{p}\leq\rho}L_{S}(\mathbf{w}+\mathbf{\varepsilon})\]
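For ASAM only the perturbation step changes. The sketch below assumes the element-wise normalization operator \(T_{\mathbf{w}}=\operatorname{diag}(|\mathbf{w}|+\eta)\) with a small constant \(\eta\); this particular choice of \(T_{\mathbf{w}}\), and the default values of \(\rho\) and \(\eta\), are assumptions of the sketch rather than details taken from this paper. It replaces step 2 of the SAM sketch above.

```python
import torch

def asam_perturbation(params, rho=0.5, eta=0.01):
    """Scale-adaptive worst-case perturbation with element-wise |w| normalization."""
    t_w = [p.abs() + eta for p in params]                        # element-wise T_w
    norm = torch.norm(torch.stack([(t * p.grad).norm(p=2)
                                   for t, p in zip(t_w, params)]), p=2)
    eps_list = []
    with torch.no_grad():
        for t, p in zip(t_w, params):
            eps = rho * (t ** 2) * p.grad / (norm + 1e-12)       # eps = rho T_w^2 g / ||T_w g||
            p.add_(eps)                                          # move to w + eps
            eps_list.append(eps)
    return eps_list                                              # used later to undo the move
```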
## 4 Experimental settings
For the experiments, we deploy the "light" version of the recent AASIST model [8], referred to as AASIST-L. It includes a graph attention layer to capture information in both the spectral and temporal domains, and max graph operations to select features in a competitive manner. The main practical difference between AASIST and AASIST-L is the number of parameters: 297K and 85K, respectively. We use Adam [46] as our base optimizer. When exploiting SAM and ASAM, optimization proceeds similarly, but with the additional sharpness term added to the training loss as explained above. As our aim is to focus on generalization across corpora rather than on architectural details, we did not adjust hyperparameters (e.g., learning rate, pooling ratio). Full details of the AASIST model can be found in [8]. All models were implemented using PyTorch and trained for 100 epochs. Performance evaluation is based on the equal error rate (EER), and we selected the best-performing model in terms of EER on the development set. Code for the experiments of this study is available at: https://github.com/shimz/MDL_sharpness
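Since model selection and all reported results rely on the EER, a small self-contained sketch of the metric may be useful. It assumes that higher scores indicate bona fide speech; this sign convention is an assumption of the sketch, not a detail stated above.

```python
import numpy as np

def equal_error_rate(bonafide_scores, spoof_scores):
    """EER (%): the operating point where the miss and false-alarm rates coincide."""
    scores = np.concatenate([bonafide_scores, spoof_scores])
    labels = np.concatenate([np.ones(len(bonafide_scores)), np.zeros(len(spoof_scores))])
    order = np.argsort(scores)
    labels = labels[order]
    # sweeping the threshold upward rejects trials one by one, lowest score first
    fnr = np.cumsum(labels) / labels.sum()                       # bona fide rejected so far
    fpr = 1.0 - np.cumsum(1 - labels) / (1 - labels).sum()       # spoofed still accepted
    idx = np.argmin(np.abs(fnr - fpr))
    return 100.0 * (fnr[idx] + fpr[idx]) / 2.0

# toy usage: well-separated scores give an EER close to 0
print(equal_error_rate(np.array([0.9, 0.8, 0.95]), np.array([0.1, 0.2, 0.3])))  # 0.0
```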
| Attack | (a) | (b) | (a)+(b) | (a)+(b)+(c) |
| --- | --- | --- | --- | --- |
| Traditional | 21.54 | 12.18 | 14.18 | **10.77** |
| Wav.Concat. | 55.22 | **12.07** | 20.32 | 13.09 |
| Neural AR | 46.82 | **23.10** | 27.70 | 24.87 |
| Neural non-AR | 40.23 | **20.47** | 25.05 | 23.21 |
| Unknown | 24.45 | 20.38 | 17.94 | **16.05** |
| Pooled | 33.54 | 18.20 | 22.14 | 19.49 |

Table 4: Per-attack results of the ASVspoof 2021 DF evaluation. The best results are selected among w/o SAM, SAM, and ASAM; the best result in each row is shown in boldface. ((a) ASVspoof 2015, (b) ASVspoof 2019, (c) WaveFake.)
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**ASVspoof 2015**} & \multicolumn{3}{c}{**ASVspoof 2019 LA**} & \multicolumn{3}{c}{**ASVspoof 2021 LA**} & \multicolumn{3}{c}{**ASVspoof 2021 DF**} \\ \cline{2-13} & w/o & SAM & ASAM & \multicolumn{3}{c}{w/o} & SAM & ASAM & \multicolumn{3}{c}{w/o} & SAM & ASAM & \multicolumn{3}{c}{w/o} & SAM & ASAM & \multicolumn{3}{c}{_Average_} \\ \cline{2-13} & SAM & ASAM & \multicolumn{3}{c}{SAM} & SAM & ASAM & \multicolumn{3}{c}{SAM} & SAM & ASAM & \multicolumn{3}{c}{SAM} & \multicolumn{3}{c}{SAM} & ASAM & \multicolumn{3}{c}{_Average_} \\ \hline
**2015** & 8.25 & 6.50 & 5.83 & 38.83 & 29.50 & 30.70 & 39.87 & 32.27 & 31.09 & 33.54 & 28.40 & 21.80 & 25.55 \\ \hline
**2019 LA** & 5.98 & 4.32 & 3.53 & 1.38 & 1.06 & 1.48 & 12.18 & 7.08 & 10.18 & 18.20 & 21.16 & 19.58 & 8.84 \\ \hline
**2015** & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} \\ \cline{2-13}
**+2019 LA** & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} \\ \cline{2-13}
**+2019 LA** & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} \\ \cline{2-13}
## 5 Results and Analyses
**Main results** In Table 3, we validate our proposed model using both in-domain datasets (ASVspoof 2015 and ASVspoof 2019 LA) and out-of-domain datasets (ASVspoof 2021 LA and ASVspoof 2021 DF). First, regarding multi-dataset co-training, the pooled results are shown in the last column. We confirm that the best result is achieved when all three datasets are utilized. Even though using all three datasets did not consistently give the best result on every evaluation set, the effectiveness of using multiple datasets was demonstrated.
Second, the results of the sharpness-aware optimizations can be easily compared in the last row. Sharpness-aware optimization methods improve the performance in most cases, regardless of whether a single dataset or multiple datasets are used. Except for the ASVspoof 2019 LA evaluation, where SAM gives the lowest EER, ASAM shows the best results. These results show that SAM and ASAM substantially benefit model optimization.
**Per-attack results on ASVspoof 2021 DF** For further analysis, Table 4 shows the results for each attack type in the ASVspoof 2021 DF evaluation. We report the best result for each training dataset combination: the columns refer to ASVspoof 2015 (w/ ASAM), ASVspoof 2019 (w/o SAM), ASVspoof 2015 + ASVspoof 2019 (w/o SAM), and ASVspoof 2015 + ASVspoof 2019 + WaveFake (w/ SAM). Interestingly, the (a)+(b)+(c) configuration, which includes all datasets, is superior by a large margin on the Traditional and Unknown attacks compared to the best single-dataset result, which comes from (b) only. In terms of generalization, _unknown_ is the most critical subset; thus we interpret the lowest EER on unknown attacks as a sign of better generalization. Hence, these results support the effectiveness of the proposed method for generalization.
**Mini-batch composition** When utilizing multiple datasets, there is an imbalance between the sizes of the different datasets. We thus further explore whether balancing the number of samples drawn from each dataset within a mini-batch is advantageous in terms of performance. Table 5 describes the results, which confirm that simply balancing the samples from different datasets within a mini-batch is helpful. We confirm an improvement in both reported training dataset configurations, where the performance of the most extensive setting is further improved by 14% relative.
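A minimal sketch of the balanced strategy is given below: every mini-batch draws the same number of examples from each training corpus. The function, its arguments, and the batch size are illustrative placeholders, not the exact sampler used in these experiments.

```python
import random

def balanced_batch_indices(dataset_sizes, batch_size, num_batches, seed=0):
    """Yield lists of (corpus_id, sample_id) pairs with an equal share per corpus."""
    rng = random.Random(seed)
    per_corpus = batch_size // len(dataset_sizes)        # e.g. 24 // 3 = 8 samples each
    for _ in range(num_batches):
        batch = []
        for corpus_id, size in enumerate(dataset_sizes):
            batch.extend((corpus_id, i) for i in rng.sample(range(size), per_corpus))
        rng.shuffle(batch)
        yield batch

# toy usage with the training-set sizes of Table 2
for batch in balanced_batch_indices([16375, 25380, 117985], batch_size=24, num_batches=2):
    print(len(batch), batch[:3])
```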
**Comparison with other studies** In Table 6, we compare our results with other state-of-the-art systems, including studies utilizing large pre-trained models. Among the four evaluation protocols, our model demonstrates competitive performance with only 85K parameters on two of them: ASVspoof 2015 and ASVspoof 2019 LA. On the two remaining protocols our model underperforms; nonetheless, taking into account the number of parameters and the training time, we argue that our approach remains competitive. Given that the purpose of CM models is to aid ASV systems, lightweight yet well-generalizing models are worth further investigation.
## 6 Conclusions
Recent studies have widely exploited large pre-trained models to leverage as much data as possible in order to develop well-generalized models. Training a single model with multiple datasets is a straightforward way to utilize diverse data without additional training stages, but handling domain differences is known to be challenging. Moreover, since CM models are inherently built to support ASV systems, such enormous models may not be applicable in practice because of their size. In this paper, we explore a case study in the audio anti-spoofing field, which lacks large amounts of data compared to other research domains. To optimize the model to handle multiple datasets simultaneously, we utilize sharpness-aware methodologies, which add a curvature-based term to the objective function to reduce the impact of the variance across the data. Using more than 4000 times fewer parameters than the large pre-trained models, our proposed method demonstrates its effectiveness in both in-domain evaluations on unknown attacks and out-of-domain evaluations.
## 7 Acknowledgements
This work was supported by the Academy of Finland (Decision No. 349605, project "SPEECHFAKES").
| Method | # Params | EER (%) |
| --- | --- | --- |
| **ASVspoof 2015** | | |
| Primary result [47] | - | 1.21 |
| **Ours** | 85K | 0.66 |
| **ASVspoof 2019 LA** | | |
| SSAD+LCNN big [48] | - | 5.31 |
| Imag-pre + J&S [16] | - | 0.87 |
| wav2Vec2.0 [18] | 317M + (290±30K) | 1.28 |
| HuBERT-XL [18] | 317M + (290±30K) | 3.55 |
| **Ours** | 85K | 0.99 |
| **ASVspoof 2021 LA** | | |
| Img-pre+RawBoost [16] | - | 7.71 |
| wav2Vec2.0-XLSR [17] | 317M + 297K | 6.15 |
| wav2Vec2.0-XLSR [18] | 317M + 297K | 9.66 |
| HuBERT-XL [18] | 317M + (290±30K) | 9.55 |
| **Ours** | 85K | 7.08 |
| **ASVspoof 2021 DF** | | |
| Img-pre+RawBoost [16] | - | 19.11 |
| wav2Vec2.0-XLSR [17] | 317M + 297K | 7.69 |
| wav2Vec2.0-XLSR [18] | 317M + 297K | 4.75 |
| HuBERT-XL [18] | 317M + (290±30K) | 13.07 |
| **Ours** | 85K | 18.20 |

Table 6: Comparison with other state-of-the-art results, including research which utilized models pre-trained on large datasets.
| Training datasets | Mini-batch | Loss | EER |
| --- | --- | --- | --- |
| 2015 + 2019 LA | pooled | w/o SAM | 1.56 |
| 2015 + 2019 LA | **balanced** | w/o SAM | 1.49 |
| 2015 + 2019 LA + WaveFake | pooled | ASAM | 1.27 |
| 2015 + 2019 LA + WaveFake | **balanced** | ASAM | 1.09 |

Table 5: Comparison of mini-batch composition strategies using the ASVspoof 2019 LA evaluation. A pooled mini-batch ignores the balance between datasets, whereas a balanced mini-batch considers the balance between the multiple datasets.
2309.13615 | Descent representations and colored quasisymmetric functions | The quasisymmetric generating function of the set of permutations whose
inverses have a fixed descent set is known to be symmetric and Schur-positive.
The corresponding representation of the symmetric group is called the descent
representation. In this paper, we provide an extension of this result to
colored permutation groups, where Gessel's fundamental quasisymmetric functions
are replaced by Poirier's colored quasisymmetric functions. For this purpose,
we introduce a colored analogue of zigzag shapes and prove that the
representations associated with these shapes coincide with colored descent
representations studied by Adin, Brenti and Roichman in the case of two colors
and Bagno and Biagioli in the general case. Additionally, we provide a colored
analogue of MacMahon's alternating formula which expresses ribbon Schur
functions in the basis of complete homogeneous symmetric functions. | Vassilis Dionyssis Moustakas | 2023-09-24T12:10:20Z | http://arxiv.org/abs/2309.13615v1 | # Descent representations and colored quasisymmetric functions
###### Abstract.
The quasisymmetric generating function of the set of permutations whose inverses have a fixed descent set is known to be symmetric and Schur-positive. The corresponding representation of the symmetric group is called the descent representation. In this paper, we provide an extension of this result to colored permutation groups, where Gessel's fundamental quasisymmetric functions are replaced by Poirier's colored quasisymmetric functions. For this purpose, we introduce a colored analogue of zigzag shapes and prove that the representations associated with these shapes coincide with colored descent representations studied by Adin, Brenti and Roichman in the case of two colors and Bagno and Biagioli in the general case. Additionally, we provide a colored analogue of MacMahon's alternating formula which expresses ribbon Schur functions in the basis of complete homogeneous symmetric functions.
2020 Mathematics Subject Classification: Primary: 05E05, 05E10, 05A05, 05A15. Secondary: 20C30 The author was partially co-financed by Greece and the European Union (European Social Fund-ESF) through Operational Programme "Human Resources Development, Education and Lifelong Learning" in the context of the project "Strengthening Human Resources Research Potential via Doctorate Research 2nd Cycle" (MIS-5000432), Implemented by the State Scholarships Foundation (IKY)
## 1. Introduction
The basis of Schur functions forms one of the most interesting bases of the space of symmetric functions [18, Chapter 7]. Schur functions appear in the representation theory of the symmetric group as characters of irreducible representations. A symmetric function is called Schur-positive if it is a linear combination of Schur functions with nonnegative coefficients. The problem of determining whether a given symmetric function is Schur-positive constitutes a major problem in algebraic combinatorics [19].
Adin and Roichman [4] highlighted a connection between Schur-positivity of certain quasisymmetric generating functions and the existence of formulas which express the characters of interesting representations as weighted enumerations of nice combinatorial objects. Quasisymmetric functions are certain power series in infinitely many variables that generalize the notion of symmetric functions. They first appeared in the work of Stanley and were later defined and systematically studied by Gessel [12] (see also [9]).
An example of this connection of particular interest involves the quasisymmetric generating function of inverse descent classes of the symmetric group and the characters of Specht modules of zigzag shapes, often called descent representations (for all undefined terminology we refer to Section 2). Adin, Brenti and Roichman [2] studied descent representations by using the coinvariant algebra as a representation space and provided an extension to the hyperoctahedral group, which was later generalized to every complex reflection group by Bagno and Biagioli [5].
Recently, Adin et al. [1] investigated an extension of the aforementioned connection to the hyperoctahedral setting, where Gessel's fundamental quasisymmetric functions were replaced by Poirier's signed quasisymmetric functions [16]. In particular, they proved [1, Proposition 5.5]
that the signed quasisymmetric generating function of signed inverse descent classes is Schur-positive in the hyperoctahedral setting, but without explicitly specifying the corresponding characters.
Motivated by the aforementioned result, in this paper we aim to extend it to the case of colored permutation groups, a special class of complex reflection groups. In particular, we prove that the colored quasisymmetric generating function of inverse colored descent classes is Schur-positive in the colored setting and show that the corresponding characters are precisely the characters of colored descent representations studied by Bagno and Biagioli (see Theorem 5.2). For this purpose, we suggest a colored analogue of Gessel's zigzag shape approach to descent representations. Furthermore, we provide a colored analogue of a well-known formula due to MacMahon, popularized by Gessel [12], which expresses the Frobenius image of colored descent representations, usually called ribbon Schur functions, as an alternating sum of complete homogeneous symmetric functions in the colored context (see Theorem 5.3).
The paper is structured as follows. Section 2 discusses background on permutations, tableaux, compositions, zigzag diagrams, symmetric/quasisymmetric functions and descent representations. Section 3 reviews the combinatorics of colored compositions, colored permutations and colored quasisymmetric functions. Section 4 introduces and studies the notion of colored zigzag shapes and Section 5 proves the main results of this paper, namely Theorems 5.2 and 5.3.
## 2. Preliminaries
This section fixes notation and discusses background. Throughout this paper we assume familiarity with basic concepts in the theory of symmetric functions and representations of the symmetric group as presented, for example, in [18, Chapter 7]. For a positive integer \(n\), we write \([n]:=\{1,2,\ldots,n\}\) and denote by \(|S|\) the cardinality of a finite set \(S\).
### Permutations, tableaux, compositions and zigzag diagrams
A composition of a positive integer \(n\) is a sequence \(\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{k})\) of positive integers such that \(\alpha_{1}+\alpha_{2}+\cdots+\alpha_{k}=n\). Compositions of \(n\) are in one-to-one correspondence with subsets of \([n-1]\). In particular, let \(\mathrm{S}_{\alpha}:=\{r_{1},r_{2},\ldots,r_{k-1}\}\) be the set of partial sums \(r_{i}:=\alpha_{1}+\alpha_{2}+\cdots+\alpha_{i}\), for all \(1\leq i\leq k\). Conversely, given a subset \(S=\{s_{1}<s_{2}<\cdots<s_{k}\}\subseteq[n-1]\), let \(\mathrm{co}(S)=(s_{1},s_{2}-s_{1},\ldots,s_{k}-s_{k-1},n-s_{k})\). The maps \(\alpha\mapsto\mathrm{S}_{\alpha}\) and \(S\mapsto\mathrm{co}(S)\) are bijections and mutual inverses.
Sometimes, it will be convenient to work with subsets of \([n-1]\) which contain \(n\). For this purpose, we will write \(S^{+}:=S\cup\{n\}\). In this case, \(\mathrm{S}_{\alpha}^{+}=\{r_{1},r_{2},\ldots,r_{k}\}\) and the maps \(\alpha\mapsto\mathrm{S}_{\alpha}^{+}\) and \(S^{+}\mapsto\mathrm{co}(S^{+})\) remain bijections and mutual inverses. We make this (non-standard) convention because we will later need to keep track of the color of the last coordinate of a colored permutation (see Section 3.1).
The set of all compositions of \(n\), written \(\mathrm{Comp}(n)\), becomes a poset with the partial order of reverse refinement. The covering relations are given by
\[(\alpha_{1},\ldots,\alpha_{i}+\alpha_{i+1},\ldots,\alpha_{k})\prec(\alpha_{1},\ldots,\alpha_{i},\alpha_{i+1},\ldots,\alpha_{k}).\]
The corresponding partial order on the set of all subsets of \([n-1]\) is inclusion of subsets. A partition of \(n\), written \(\lambda\vdash n\), is a composition \(\lambda\) of \(n\) whose parts appear in weakly decreasing order.
A zigzag diagram (also called border-strip, ribbon or skew hook) is a connected skew shape that does not contain a \(2\times 2\) square. Ribbons with \(n\) cells are in one-to-one correspondence with
compositions of \(n\). Given \(\alpha\in\operatorname{Comp}(n)\), let \(\operatorname{Z}_{\alpha}\) be the ribbon with \(n\) cells whose row lengths, when read from bottom to top, are the parts of \(\alpha\). For example, for \(n=9\)
\[\alpha=(2,1,2,3,1)\quad\longmapsto\quad\operatorname{Z}_{\alpha}\]
(the corresponding ribbon diagram is omitted here).
The fundamental quasisymmetric function associated to \(\alpha\in\operatorname{Comp}(n)\) is defined by
\[F_{\alpha}(\boldsymbol{x}):=\sum_{\begin{subarray}{c}1\leq i_{1}\leq i_{2}\leq \cdots\leq i_{n}\\ j\in\operatorname{S}_{\alpha}\,\Rightarrow\,i_{j}<i_{j+1}\end{subarray}}x_{i_{1 }}x_{i_{2}}\cdots x_{i_{n}}.\]
We recall the following well-known expansion [18, Theorem 7.19.7]
\[s_{\lambda/\mu}(\boldsymbol{x})=\sum_{Q\in\operatorname{SYT}(\lambda/\mu)}F_{ \operatorname{co}(Q)}(\boldsymbol{x}), \tag{2.1}\]
for any skew shape \(\lambda/\mu\).
A subset \(\mathcal{A}\subseteq\mathfrak{S}_{n}\) is called Schur-positive if the quasisymmetric generating function
\[F(\mathcal{A};\boldsymbol{x}):=\sum_{\pi\in\mathcal{A}}F_{\operatorname{co}( \pi)}(\boldsymbol{x})\]
is Schur-positive. In this case, it follows that \(\operatorname{ch}(\varrho)(\boldsymbol{x})=F(\mathcal{A};\boldsymbol{x})\) for some non-virtual \(\mathfrak{S}_{n}\)-representation \(\varrho\) (see also [1, Corollary 3.3]) and we will say that \(\mathcal{A}\) is Schur-positive for \(\varrho\).
The skew Schur function \(r_{\alpha}(\boldsymbol{x}):=s_{Z_{\alpha}}(\boldsymbol{x})\) is called the ribbon Schur function corresponding to \(\alpha\in\operatorname{Comp}(n)\). The (virtual) \(\mathfrak{S}_{n}\)-representation \(\varrho_{\alpha}\) such that \(\operatorname{ch}(\varrho_{\alpha})(\boldsymbol{x})=r_{\alpha}(\boldsymbol{x})\) is called the descent representation of the symmetric group. We remark that this definition is not the standard way to define descent representations in the literature. For more information on descent representations from a combinatorial representation-theoretic point of view we refer to [2]. For example, descent representations are non-virtual \(\mathfrak{S}_{n}\)-representations, as the following proposition explains. Combining Proposition 2.1 and Equation (2.1) yields the following result of Gessel [12, Theorem 7] (see also [18, Corollary 7.23.4]).
**Proposition 2.2**.: _For every \(\alpha\in\operatorname{Comp}(n)\),_
\[r_{\alpha}(\boldsymbol{x})\ =\ F(\operatorname{D}_{\alpha}^{-1};\boldsymbol{x})\ =\ \sum_{\lambda\vdash n}\,c_{\lambda}(\alpha)\,s_{\lambda}( \boldsymbol{x}), \tag{2.2}\]
_where \(c_{\lambda}(\alpha)\) is the number of \(Q\in\operatorname{SYT}(\lambda)\) such that \(\operatorname{co}(Q)=\alpha\). In particular, inverse descent classes are Schur-positive for descent representations._
Descent representations in disguised form appear in Stanley's work [17] on group actions on posets. If \(\chi_{\alpha}\) denotes the character of \(\varrho_{\alpha}\), then [17, Theorem 4.3] is translated into the following alternating formula
\[\chi_{\alpha}=\sum_{\begin{subarray}{c}\beta\in\operatorname{Comp}(n)\\ \beta\preceq\alpha\end{subarray}}(-1)^{\ell(\alpha)-\ell(\beta)}\,1_{\beta} \uparrow_{\mathfrak{S}_{\beta}}^{\mathfrak{S}_{n}}, \tag{2.3}\]
where
* \(\ell(\alpha)\) denotes the number of parts of \(\alpha\), called length of \(\alpha\)
* \(\mathfrak{S}_{\alpha}:=\mathfrak{S}_{\alpha_{1}}\times\mathfrak{S}_{\alpha_{2 }}\times\cdots\) denotes the Young subgroup corresponding to \(\alpha\)
* \(1_{n}\) (resp. \(1_{\alpha}\)) denotes the trivial \(\mathfrak{S}_{n}\)-character (resp. \(\mathfrak{S}_{\alpha}\)-character)
* \(\uparrow\) denotes induction of characters.
Taking the Frobenius image, Equation (2.3) becomes
\[r_{\alpha}(\boldsymbol{x})=\sum_{\begin{subarray}{c}\beta\in\operatorname{ Comp}(n)\\ \beta\preceq\alpha\end{subarray}}(-1)^{\ell(\alpha)-\ell(\beta)}\,h_{\beta}( \boldsymbol{x}), \tag{2.4}\]
where \(h_{\beta}(\boldsymbol{x})\) denotes the complete homogeneous symmetric function corresponding to \(\beta\). As Gessel [12, page 293] points out, MacMahon was the first to study ribbon Schur functions by means of Equation (2.4).
In our running example, for \(n=4\) and \(\alpha=(2,2)\)
\[r_{\alpha}(\mathbf{x})=2F_{(2,2)}(\mathbf{x})+F_{(3,1)}(\mathbf{x})+F_{(1,3)}(\mathbf{x})+F_{(1,2,1)}(\mathbf{x})=s_{(2,2)}(\mathbf{x})+s_{(3,1)}(\mathbf{x}),\]
since the tableaux of shape \((2,2)\) and \((3,1)\) and descent set \(\{2\}\) are
\[\begin{array}{c}\framebox{$1$}\framebox{$2$}\\ \framebox{$3$}\framebox{$4$}\end{array}\quad\text{and}\quad\begin{array}{c}\framebox{$1$} \framebox{$2$}\framebox{$4$}\\ \framebox{$3$}\end{array}\]
respectively, which is also in agreement with
\[r_{\alpha}(\mathbf{x})=h_{(2,2)}(\mathbf{x})-h_{(4)}(\mathbf{x}).\]
## 3. Combinatorics of colored objects
This section reviews the combinatorics of colored objects including colored permutations, colored compositions, \(r\)-partite tableaux, colored quasisymmetric functions and a colored analogue of the characteristic map. For the corresponding notions in the case of two colors we refer the reader to [1]. We fix a positive integer \(r\) and view the elements of \(\mathbb{Z}_{r}\), the cyclic group of order \(r\), as colors \(0,1,\ldots,r-1\), totally ordered by the natural order inherited by the integers. Also, we will write \(i^{j}\) instead of \((i,j)\) to represent colored integers, where \(i\) is the underlying integer and \(j\) is the color.
### Colored compositions and colored sets
An \(r\)-colored composition of a positive integer \(n\) is a pair \((\alpha,\epsilon)\) such that \(\alpha\in\operatorname{Comp}(n)\) and \(\epsilon\in\mathbb{Z}_{r}^{\ell(\alpha)}\) is a sequence of colors assigned to the parts of \(\alpha\). An \(r\)-colored subset of \([n]\) is a pair \((S^{+},\zeta)\) such that \(S\subseteq[n-1]\) and \(\zeta:S^{+}\to\mathbb{Z}_{r}\) is a color map. For the examples, we will represent colored compositions (resp. sets) as ordered tuples (resp. sets) of colored integers.
Colored compositions of \(n\) are in one-to-one correspondence with colored subsets of \([n]\). The correspondence is given as follows: Given a colored composition \((\alpha,\epsilon)\), let \(\sigma_{(\alpha,\epsilon)}:=(\mathrm{S}_{\alpha}^{+},\zeta)\) where \(\zeta:\mathrm{S}_{\alpha}^{+}\to\mathbb{Z}_{r}\) is defined by \(\zeta(r_{i}):=\epsilon_{i}\). Conversely, given a colored subset \((S^{+},\zeta)\) with \(S^{+}=\{s_{1}<\cdots<s_{k}<s_{k+1}=n\}\), let \(\mathrm{co}(S^{+},\zeta)=(\mathrm{co}(S),\epsilon)\) where \(\epsilon\in\mathbb{Z}_{r}^{k+1}\) is defined by letting \(\epsilon_{i}=\zeta(s_{i})\), for all \(1\leq i\leq k+1\). For example, for \(n=10\) and \(r=4\)
\[\left(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2}\right)\longleftrightarrow\left\{2 ^{0},4^{1},5^{1},6^{3},9^{1},10^{2}\right\}.\]
Given a colored composition \((\alpha,\epsilon)\) of \(n\), we can extend \(\epsilon\) to a color vector \(\tilde{\epsilon}\in\mathbb{Z}_{r}^{n}\) by letting
\[\tilde{\epsilon}:=(\underbrace{\epsilon_{1},\epsilon_{1},\ldots,\epsilon_{1}} _{\alpha_{1}\text{ times}},\underbrace{\epsilon_{2},\epsilon_{2},\ldots,\epsilon_{2}} _{\alpha_{2}\text{ times}},\ldots,\underbrace{\epsilon_{k},\epsilon_{k},\ldots, \epsilon_{k}}_{\alpha_{k}\text{ times}}).\]
Similarly, given a colored subset \((S^{+},\zeta)\) of \([n]\) with \(S^{+}=\{s_{1}<\cdots<s_{k}<s_{k+1}:=n\}\), we can extend the color map to a color vector \(\tilde{\zeta}=(\tilde{\zeta}_{1},\tilde{\zeta}_{2},\ldots,\tilde{\zeta}_{n}) \in\mathbb{Z}_{r}^{n}\), by letting \(\tilde{\zeta}_{j}:=\zeta(s_{i})\) for all \(s_{i-1}<j\leq s_{i}\) where \(s_{0}:=0\). The corresponding color vector of our running example is
\[(0,0,1,1,1,3,1,1,1,2).\]
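Both encodings, as well as the extension to a full color vector, are straightforward to compute. The following short Python sketch is purely illustrative (a colored composition is encoded as a list of (part, color) pairs) and reproduces the running example.

```python
def to_colored_subset(parts):
    """Colored composition [(a_1, e_1), ..., (a_k, e_k)] -> colored subset [(r_i, e_i)]."""
    subset, total = [], 0
    for part, color in parts:
        total += part                       # partial sum r_i
        subset.append((total, color))
    return subset

def to_color_vector(parts):
    """Extend the colors of the parts to a color vector of length n."""
    return [color for part, color in parts for _ in range(part)]

alpha = [(2, 0), (2, 1), (1, 1), (1, 3), (3, 1), (1, 2)]
print(to_colored_subset(alpha))    # [(2, 0), (4, 1), (5, 1), (6, 3), (9, 1), (10, 2)]
print(to_color_vector(alpha))      # [0, 0, 1, 1, 1, 3, 1, 1, 1, 2]
```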
The set of all \(r\)-colored compositions of \(n\), written \(\operatorname{Comp}(n,r)\), becomes a poset with the partial order of reverse refinement on consecutive parts of constant color. The covering relations are given by
\[((\ldots,\alpha_{i}+\alpha_{i+1},\ldots),(\ldots,\epsilon_{i},\ldots))\prec(( \ldots,\alpha_{i},\alpha_{i+1},\ldots),(\ldots,\epsilon_{i},\epsilon_{i}, \ldots))\,.\]
The corresponding partial order on \(r\)-colored subsets of \([n]\) is inclusion of subsets with the same color vector. Notice that these posets are not connected, since each color vector gives rise to a unique connected component (see, for example [13, Figure 4]).
### Colored permutations and \(r\)-partite tableaux
The wreath product \(\mathbb{Z}_{r}\wr\mathfrak{S}_{n}\) is called the \(r\)-colored permutation group and we denote it by \(\mathfrak{S}_{n,r}\). It consists of all pairs \((\pi,\mathrm{z})\), called \(r\)-colored permutations, such that \(\pi\in\mathfrak{S}_{n}\) is the underlying permutation and \(\mathrm{z}=(\mathrm{z}_{1},\mathrm{z}_{2},\ldots,\mathrm{z}_{n})\in\mathbb{Z }_{r}^{n}\) is a color vector. When we consider specific examples, it will be convenient to write colored permutations in window notation, that is as words \(\pi_{1}^{\mathrm{z}_{1}}\pi_{2}^{\mathrm{z}_{2}}\cdots\pi_{n}^{\mathrm{z}_{n}}\) on colored integers.
The product in \(\mathfrak{S}_{n,r}\) is given by the rule
\[(\pi,\mathrm{z})(\tau,\mathrm{w})=(\pi\tau,\mathrm{w}+\tau(\mathrm{z}))\]
where \(\pi\tau\) is evaluated from right to left, \(\tau(\mathrm{z}):=(\mathrm{z}_{\tau_{1}},\mathrm{z}_{\tau_{2}},\ldots,\mathrm{ z}_{\tau_{n}})\) and the addition is coordinatewise modulo \(r\). The inverse (resp. conjugate) of \((\pi,\mathrm{z})\), written \((\pi,\mathrm{z})^{-1}\) (resp. \(\overline{(\pi,\mathrm{z})}\)) is the element \((\pi^{-1},-\pi^{-1}(\mathrm{z}))\) (resp. \((\pi,-\mathrm{z})\)).
Colored permutation groups can be viewed as complex reflection groups (see, for example, [5, Sections 1-2]). Therefore, \(\mathfrak{S}_{n,r}\) can be realized as the group of all \(n\times n\) matrices such that
* the nonzero entries are \(r\)-th roots of unity, and
* there is exactly one nonzero entry in every row and every column.
For our purposes it is more convenient to view them as groups of colored permutations rather than groups of complex matrices.
The case \(r=2\) is of particular interest. In this case, it is often customary to write \(\mathfrak{B}_{n}:=\mathfrak{S}_{n,2}\) and identify colors \(0\) and \(1\) with signs \(+\) and \(-\), respectively. \(\mathfrak{B}_{n}\) coincides with the hyperoctahedral group, the symmetry group of the \(n\)-dimensional cube. The hyperoctahedral group is a real reflection group and its elements are called signed permutations. Much of what is presented in this paper is motivated by Adin et al.'s work [1] on character formulas and descents for \(\mathfrak{B}_{n}\).
The colored descent set of \((\pi,\mathrm{z})\in\mathfrak{S}_{n,r}\), denoted by \(\mathrm{sDes}(\pi,\mathrm{z})\), is the pair \((S^{+},\zeta)\) where
* \(S\) consists of all \(i\in[n-1]\) such that \(\mathrm{z}_{i}\neq\mathrm{z}_{i+1}\) or \(\mathrm{z}_{i}=\mathrm{z}_{i+1}\) and \(i\in\mathrm{Des}(\pi)\)
* \(\zeta:S^{+}\to\mathbb{Z}_{r}\) is the map defined by \(\zeta(i)=\mathrm{z}_{i}\) for all \(i\in S^{+}\).
In words, the colored descent set records the ending positions of increasing runs of constant color together with their colors. Notice that the color vector of the colored descent set of \((\pi,\mathrm{z})\) is the same as \(\mathrm{z}\). For example, for \(n=10\) and \(r=4\)
\[\mathrm{sDes}\left(2^{3}4^{3}6^{1}1^{1}5^{1}10^{3}3^{1}7^{1}9^{1}8^{0}\right) =\{2^{3},3^{1},5^{1},6^{3},8^{1},9^{1},10^{0}\}.\]
The \(r\)-colored composition which corresponds to the colored descent set \(\mathrm{sDes}(\pi,\mathrm{z})\) is called colored descent composition of \((\pi,\mathrm{z})\) and is denoted by \(\mathrm{co}(\pi,\mathrm{z})\). It records the lengths of increasing runs of constant color together with their colors. In our running example, we have
\[\mathrm{co}\left(2^{3}4^{3}6^{1}1^{1}5^{1}10^{3}3^{1}7^{1}9^{1}8^{0}\right)= \left(2^{3},1^{1},2^{1},1^{3},1^{1},2^{1},1^{0}\right).\]
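Computationally, the colored descent composition simply records the maximal increasing runs of constant color. The illustrative Python sketch below makes this explicit, with the window written as parallel lists of values and colors; it is checked on a small example of our own rather than on the running one.

```python
def colored_descent_composition(pi, z):
    """Lengths and colors of the maximal increasing runs of constant color."""
    n, runs, start = len(pi), [], 0
    for i in range(n - 1):
        # position i+1 (1-based) ends a run if the color changes there,
        # or the color stays the same and pi has a descent
        if z[i] != z[i + 1] or pi[i] > pi[i + 1]:
            runs.append((i + 1 - start, z[i]))
            start = i + 1
    runs.append((n - start, z[n - 1]))
    return runs

# the colored permutation 3^0 1^0 2^1 4^1 has colored descent composition (1^0, 1^0, 2^1)
print(colored_descent_composition([3, 1, 2, 4], [0, 0, 1, 1]))   # [(1, 0), (1, 0), (2, 1)]
```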
For \((\alpha,\epsilon)\in\mathrm{Comp}(n,r)\), we define the colored descent class
\[\mathrm{D}_{(\alpha,\epsilon)}:=\{(\pi,\mathrm{z})\in\mathfrak{S}_{n,r}: \mathrm{co}(\pi,\mathrm{z})=(\alpha,\epsilon)\}\]
and the corresponding conjugate-inverse colored descent class
\[\overline{\mathrm{D}}_{(\alpha,\epsilon)}^{-1}:=\{(\pi,\mathrm{z})\in\mathfrak{ S}_{n,r}:\mathrm{co}\left(\overline{(\pi,\mathrm{z})}^{-1}\right)=(\alpha, \epsilon)\}.\]
For reasons that will become apparent in the sequel, instead of dealing with inverse descent classes it will be more convenient to deal with conjugate-inverse descent classes. Colored descent classes were introduced by Mantaci and Reutenauer [14] who called them shape classes and used them to introduce and study a colored analogue of Solomon's descent algebra. We remark that in the hyperoctahedral case, where we have only two colors, there is no need to consider conjugate-inverse elements because \(\mathfrak{B}_{n}\) is a real reflection group.
An \(r\)-partite partition of \(n\), written \(\boldsymbol{\lambda}\vdash n\), is an \(r\)-tuple \(\boldsymbol{\lambda}=(\lambda^{(0)},\lambda^{(1)},\ldots,\lambda^{(r-1)})\) of (possibly empty) integer partitions of total sum \(n\). For example,
\[\boldsymbol{\lambda}\ =\ ((2),(3,2,1),(1),(1))\;.\]
is a \(4\)-partite partition of \(10\).
A standard Young \(r\)-partite tableau of shape \(\boldsymbol{\lambda}\) is an \(r\)-tuple \(\boldsymbol{Q}=(Q^{(0)},Q^{(1)},\ldots,Q^{(r-1)})\) of (possibly empty) tableaux, called parts, which are strictly increasing along rows and columns such that \(Q^{(i)}\) has shape \(\lambda^{(i)}\) and every element of \([n]\) appears exactly once as an entry of some \(Q^{(i)}\). We denote by \(\mathrm{SYT}(\boldsymbol{\lambda})\) the set of all standard Young \(r\)-partite tableaux of shape \(\boldsymbol{\lambda}\). To each \(r\)-partite tableau \(\boldsymbol{Q}\), we associate a color vector \(\mathrm{z}\), defined by letting \(\mathrm{z}_{i}=j\), where \(0\leq j\leq r-1\) is such that \(i\in Q^{(j)}\). For example, for \(n=10\) and \(r=4\)
\[\boldsymbol{Q}\ =\ \left(\ \begin{array}{c}\framebox{$1$}\framebox{$9$}\end{array}\,,\ \begin{array}{l}\framebox{$3$}\framebox{$5$}\framebox{$6$}\\ \framebox{$4$}\framebox{$10$}\\ \framebox{$7$}\end{array}\,,\ \framebox{$2$}\,,\ \framebox{$8$}\ \right)\]
has color vector
\[\mathrm{z}=(0,2,1,1,1,1,1,3,0,1)\]
The colored descent set of an \(r\)-partite tableau \(\boldsymbol{Q}\), denoted by \(\mathrm{sDes}(\boldsymbol{Q})\), is defined similarly to that for colored permutations. In this case, the colored descent set records the first element of a pair \((i,i+1)\) together with its color, such that \(i\) and \(i+1\) either belong to parts with different colors or they belong to the same part and \(i\) is a descent of this part. In our running example,
\[\mathrm{sDes}(\boldsymbol{Q})=\left\{1^{0},2^{2},3^{1},6^{1},7^{1},8^{3},9^{ 0},10^{1}\right\}.\]
Also, we write \(\mathrm{co}(\boldsymbol{Q}):=\mathrm{co}(\mathrm{sDes}(\boldsymbol{Q}))\).
### Colored quasisymmetric functions and the characteristic map
Consider \(r\) copies \(\boldsymbol{x}^{(0)},\boldsymbol{x}^{(1)},\ldots,\boldsymbol{x}^{(r-1)}\) of \(\boldsymbol{x}\), one for each color of \(\mathbb{Z}_{r}\) and let \(\mathrm{Sym}_{n}^{(r)}\) be the space of (homogeneous) formal power series of degree \(n\) in \(\boldsymbol{x}^{(0)},\boldsymbol{x}^{(1)},\ldots,\boldsymbol{x}^{(r-1)}\) which are symmetric in each variable \(\boldsymbol{x}^{(j)}\) separately. In particular,
\[\mathrm{Sym}_{n}^{(r)}=\bigoplus_{\begin{subarray}{c}a_{0},\ldots,a_{r-1} \in\mathbb{N}\\ a_{0}+\cdots+a_{r-1}=n\end{subarray}}\left(\mathrm{Sym}_{a_{0}}(\boldsymbol{ x}^{(0)})\otimes\cdots\otimes\mathrm{Sym}_{a_{r-1}}(\boldsymbol{x}^{(r-1)}) \right).\]
Drawing parallel to the classical case, for an \(r\)-partite partition \(\boldsymbol{\lambda}=(\lambda^{(0)},\lambda^{(1)},\ldots,\lambda^{(r-1)})\), we define
\[s_{\boldsymbol{\lambda}}:=s_{\lambda^{(0)}}(\boldsymbol{x}^{(0)})s_{\lambda^ {(1)}}(\boldsymbol{x}^{(1)})\cdots s_{\lambda^{(r-1)}}(\boldsymbol{x}^{(r-1)}).\]
The set \(\{s_{\boldsymbol{\lambda}}:\boldsymbol{\lambda}\vdash n\}\) forms a basis for \(\mathrm{Sym}_{n}^{(r)}\) which we call the Schur basis. An element of \(\mathrm{Sym}_{n}^{(r)}\) is called Schur-positive if all the coefficients in its expansion in the Schur basis are nonnegative.
It is well-known that (complex) irreducible \(\mathfrak{S}_{n,r}\)-representations are indexed by \(r\)-partite partitions of \(n\) (see, for example, [5, Section 5]). Poirier [16] introduced a colored analogue of the
characteristic map which we denote by \(\operatorname{ch}^{(r)}\). This map is a \(\mathbb{C}\)-linear isomorphism from the space of virtual \(\mathfrak{S}_{n,r}\)-representations to \(\operatorname{Sym}_{n}^{(r)}\) which sends the irreducible \(\mathfrak{S}_{n,r}\)-representation corresponding to \(\boldsymbol{\lambda}\vdash n\) to \(s_{\boldsymbol{\lambda}}\). In particular, it maps non-virtual \(\mathfrak{S}_{n,r}\)-representations to Schur-positive elements of \(\operatorname{Sym}_{n}^{(r)}\).
The colored (fundamental) quasisymmetric function associated to \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\) is defined by
\[F_{(\alpha,\epsilon)}^{(r)}:=F_{(\alpha,\epsilon)}(\boldsymbol{x}^{(0)}, \ldots,\boldsymbol{x}^{(r-1)}):=\sum_{\begin{subarray}{c}1\leq i_{1}\leq i_{2} \leq\cdots\leq i_{n}\\ \epsilon_{j}\geq\epsilon_{j+1}\,\Rightarrow\,i_{r_{j}}<i_{r_{j}+1}\end{subarray}}x _{i_{1}}^{(\tilde{\epsilon}_{1})}x_{i_{2}}^{(\tilde{\epsilon}_{2})}\cdots x _{i_{n}}^{(\tilde{\epsilon}_{n})}, \tag{3.1}\]
where the second restriction in the sum runs through all indices \(1\leq j\leq\ell(\alpha)-1\). For example, if \((m^{n})\) denotes the vector (or sequence) of length \(n\) and entries equal to \(m\), then
\[F_{((n),(k))}^{(r)} =\sum_{1\leq i_{1}\leq i_{2}\leq\cdots\leq i_{n}}x_{i_{1}}^{(k)}x_{i _{2}}^{(k)}\cdots x_{i_{n}}^{(k)}=h_{n}(\boldsymbol{x}^{(k)})\] \[F_{((1^{n}),(k^{n}))}^{(r)} =\sum_{1\leq i_{1}<i_{2}<\cdots<i_{n}}x_{i_{1}}^{(k)}x_{i_{2}}^{ (k)}\cdots x_{i_{n}}^{(k)}=e_{n}(\boldsymbol{x}^{(k)}),\]
where \(h_{n}\) (resp. \(e_{n}\)) denotes the \(n\)-th complete homogeneous (resp. elementary) symmetric function.
This colored analogue of Gessel's fundamental quasisymmetric function was introduced by Poirier [16] and has been studied by several people [1, 7, 10, 13, 15]. It seems that this is particularly suitable when we consider colored permutation groups as wreath products. A different signed analogue of quasisymmetric functions was introduced by Chow [11] which has found applications when one considers the hyperoctahedral group as a Coxeter group (see, for example, [6]).
Steingrimsson [21, Definition 3.2] introduced a notion of descents for colored permutations which reduces to the classical one and using it we can provide an alternative (and more convenient) description for colored quasisymmetric functions. The descent set of \((\pi,\mathrm{z})\in\mathfrak{S}_{n,r}\) is defined by
\[\operatorname{Des}(\pi,\mathrm{z}):=\{i\in[n]:\mathrm{z}_{i}>\mathrm{z}_{i+1 }\text{ or }\mathrm{z}_{i}=\mathrm{z}_{i+1}\,\text{and}\,i\in \operatorname{Des}(\pi)\},\]
where \(\pi_{n+1}:=0\) and \(\mathrm{z}_{n+1}:=0\). In particular, \(n\in\operatorname{Des}(\pi,\mathrm{z})\) if and only if \(\mathrm{z}_{n}>0\). With this in mind, Equation (3.1) for the colored descent composition of \((\pi,\mathrm{z})\) becomes
\[F_{(\pi,\mathrm{z})}^{(r)}\,:=\,F_{\mathrm{co}(\pi,\mathrm{z})}^{(r)}\,=\, \sum_{\begin{subarray}{c}1\leq i_{1}\leq i_{2}\leq\ldots\leq i_{n}\\ j\in\operatorname{Des}(\pi,\mathrm{z})\setminus\{n\}\Rightarrow\,i_{j}<i_{j+ 1}\end{subarray}}x_{i_{1}}^{(\mathrm{z}_{1})}x_{i_{2}}^{(\mathrm{z}_{2})} \cdots x_{i_{n}}^{(\mathrm{z}_{n})}. \tag{3.2}\]
Adin et al. [1, Proposition 4.2] proved a signed analogue of Equation (2.1), which can be trivially extended to the general case.
**Proposition 3.1**.: _For \(\boldsymbol{\lambda}\vdash n\),_
\[s_{\boldsymbol{\lambda}}=\sum_{\boldsymbol{Q}\in\operatorname{SYT}(\boldsymbol{ \lambda})}F_{\mathrm{co}(\boldsymbol{Q})}^{(r)}. \tag{3.3}\]
Finally, a subset \(\mathcal{A}\subseteq\mathfrak{S}_{n,r}\) is called Schur-positive if the colored quasisymmetric generating function
\[F^{(r)}(\mathcal{A}):=\sum_{(\pi,\mathrm{z})\in\mathcal{A}}F_{(\pi,\mathrm{z})} ^{(r)}\]
is a Schur-positive element of \(\operatorname{Sym}_{n}^{(r)}\). In this case, it follows that \(\operatorname{ch}^{(r)}(\varrho)=F^{(r)}(\mathcal{A})\) for some non-virtual \(\mathfrak{S}_{n,r}\)-representation \(\varrho\) (see also [1, Corollary 3.7]) and we will say that \(\mathcal{A}\) is Schur-positive for \(\varrho\).
## 4. Introducing colored zigzag shapes
This section introduces the notion of colored zigzag shapes and proves several properties which will be needed in the sequel.
Following Bergeron and Hohlweg [10, Section 2.1] (see also [13, Section 3.6]), the rainbow decomposition of a colored composition \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\) is the unique concatenation \((\alpha_{(1)},\epsilon_{(1)})(\alpha_{(2)},\epsilon_{(2)})\cdots(\alpha_{(m)},\epsilon_{(m)})\) of non-empty, monochromatic colored compositions \(\alpha_{(i)}\) of color \(\epsilon_{(i)}\) such that \(\epsilon_{(i)}\neq\epsilon_{(i+1)}\) for all \(1\leq i\leq m-1\). For example, for \(n=10\) and \(r=4\)
\[\left(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2}\right)\ =\ (2)^{0}(2,1)^{1}(1)^{3}(3)^{1}( 1)^{2}.\]
Notice that each \(\epsilon_{(i)}\) is a single color rather than a sequence of colors.
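The rainbow decomposition is obtained by grouping maximal runs of consecutive parts of equal color. The illustrative sketch below uses the same encoding as before (a list of (part, color) pairs) and reproduces the example above.

```python
from itertools import groupby

def rainbow_decomposition(parts):
    """Group maximal runs of consecutive parts sharing the same color."""
    return [(tuple(p for p, _ in block), color)
            for color, block in ((c, list(g))
                                 for c, g in groupby(parts, key=lambda pc: pc[1]))]

alpha = [(2, 0), (2, 1), (1, 1), (1, 3), (3, 1), (1, 2)]
print(rainbow_decomposition(alpha))
# [((2,), 0), ((2, 1), 1), ((1,), 3), ((3,), 1), ((1,), 2)]
```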
**Definition 4.1**.: An \(r\)-colored zigzag shape with \(n\) cells is a pair \((Z,\epsilon)\), where \(Z=(Z_{1},\ldots,Z_{k})\) is a sequence of zigzag diagrams and \(\epsilon=(\epsilon_{1},\ldots,\epsilon_{k})\in\mathbb{Z}_{r}^{k}\) is a sequence of colors assigned to the parts of \(Z\) such that \(\epsilon_{i}\neq\epsilon_{i+1}\) for every \(1\leq i\leq k-1\).
For example, there exist six \(2\)-colored zigzag shapes with \(2\) cells
\[\left(\raisebox{-0.5pt}{\includegraphics[height=56.905512pt]{figs/2.eps}},0 \right),\ \left(\raisebox{-0.5pt}{\includegraphics[height=56.905512pt]{figs/2.eps}},1 \right),\ \left(\left(\raisebox{-0.5pt}{\includegraphics[height=56.905512pt]{figs/2.eps}}, \raisebox{-0.5pt}{\includegraphics[height=56.905512pt]{figs/2.eps}}\right),(0,1) \right),\ \left(\raisebox{-0.5pt}{\includegraphics[height=56.905512pt]{figs/2.eps}},0 \right),\ \left(\raisebox{-0.5pt}{\includegraphics[height=56.905512pt]{figs/2.eps}},1 \right).\]
In general, as the following proposition suggests, the number of \(r\)-colored zigzag shapes with \(n\) cells is equal to \(r(r+1)^{n-1}\), the cardinality of \(\operatorname{Comp}(n,r)\) (see [13, Table 1]).
**Proposition 4.2**.: _The set of \(r\)-colored zigzag shapes with \(n\) cells is in one-to-one correspondence with \(\operatorname{Comp}(n,r)\) and therefore with the set of all \(r\)-colored subsets of \([n]\)._
Proof.: Given a colored composition of \(n\) with rainbow decomposition
\[(\alpha,\epsilon)=(\alpha_{(1)},\epsilon_{(1)})(\alpha_{(2)},\epsilon_{(2)}) \cdots(\alpha_{(m)},\epsilon_{(m)})\]
we form the following colored zigzag shape with \(n\) cells
\[\operatorname{Z}_{(\alpha,\epsilon)}:=\left(\left(\operatorname{Z}_{\alpha_{ (1)}},\operatorname{Z}_{\alpha_{(2)}},\ldots,\operatorname{Z}_{\alpha_{(m)} }\right),\left(\epsilon_{(1)},\epsilon_{(2)},\ldots,\epsilon_{(m)}\right) \right).\]
The map \((\alpha,\epsilon)\mapsto\operatorname{Z}_{(\alpha,\epsilon)}\) is the desired bijection.
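The count can also be checked by brute force for small parameters; the following Python snippet (written by us purely for illustration) enumerates \(\operatorname{Comp}(n,r)\) and confirms the cardinality \(r(r+1)^{n-1}\) quoted before Proposition 4.2.

```python
from itertools import product

def colored_compositions(n, r):
    """Enumerate Comp(n, r): pairs (alpha, epsilon) with alpha a composition of n."""
    def compositions(m):
        if m == 0:
            yield ()
            return
        for first in range(1, m + 1):
            for rest in compositions(m - first):
                yield (first,) + rest
    for alpha in compositions(n):
        for eps in product(range(r), repeat=len(alpha)):
            yield alpha, eps

for n in range(1, 6):
    for r in range(1, 5):
        count = sum(1 for _ in colored_compositions(n, r))
        assert count == r * (r + 1) ** (n - 1), (n, r, count)
print("counts agree with r(r+1)^(n-1)")
```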
For example, the corresponding \(4\)-colored zigzag shape with \(10\) cells to the \(4\)-colored composition of our running example is
\[\left(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2}\right)\longleftrightarrow\left( \left(\raisebox{-0.5pt}{\includegraphics[height=56.905512pt]{figs/2.eps}}, \raisebox{-0.5pt}{\includegraphics[height=56.905512pt]{figs/2.eps}},\raisebox{-0.5 pt}{\includegraphics[height=56.905512pt]{figs/2.eps}},\raisebox{-0.5 pt}{\includegraphics[height=56.905512pt]{figs/2.eps}},\raisebox{-0.5 pt}{\includegraphics[height=56.905512pt]{figs/2.eps}},\raisebox{-0.5 pt}{\includegraphics[height=56.905512pt]{figs/2.eps}}\right),\ \raisebox{-0.5pt}{\includegraphics[height=56.905512pt]{figs/2.eps}}\right), \left(0,1,3,1,2\right)\right).\]
Now, to each colored zigzag shape we can associate an \(r\)-partite (skew) shape and consider standard Young \(r\)-partite tableaux of this shape. In particular, given an \(r\)-colored zigzag shape \((Z,\epsilon)\) we define the \(r\)-partite skew shape \(\boldsymbol{\lambda}_{(Z,\epsilon)}:=\left(Z^{(0)},Z^{(1)},\ldots,Z^{(r-1)}\right)\), where
\[Z^{(j)}:=\bigoplus_{\begin{subarray}{c}1\leq i\leq k\\ \epsilon_{i}=j\end{subarray}}Z_{i}\]
for all \(0\leq j\leq r-1\). Here, the direct sum \(\lambda\oplus\mu\) of two (skew) shapes \(\lambda,\mu\) is the skew shape whose diagram is obtained by placing the diagram of \(\lambda\) and \(\mu\) in such a way that the upper-right vertex of \(\lambda\) coincides with the lower-left vertex of \(\mu\). In our running example, we have
\[Z^{(0)}=\operatorname{Z}_{(2)},\qquad Z^{(1)}=\operatorname{Z}_{(2,1)}\oplus\operatorname{Z}_{(3)},\qquad Z^{(2)}=\operatorname{Z}_{(1)},\qquad Z^{(3)}=\operatorname{Z}_{(1)}.\]
way we defined \(\mathbf{Q}\) implies \(\tilde{\zeta}=\pi^{-1}(\mathrm{z})\). These observations imply that \(\mathrm{sDes}(\overline{(\pi,\mathrm{z})}^{-1})\) and \(\mathrm{sDes}(\mathbf{Q})\) have the same color vector and therefore they record the same changes of colors. It remains to examine what happens in the case of constant color. Suppose that the second component of \(\mathrm{sDes}(\overline{(\pi,\mathrm{z})}^{-1})\) is \((\mathrm{z}_{i_{1}},\mathrm{z}_{i_{2}},\ldots,\mathrm{z}_{i_{k}})\), for some \(1\leq i_{1}<i_{2}<\cdots<i_{k}\leq n\). If \(\mathrm{z}_{i_{j}}=\mathrm{z}_{i_{j+1}}\), then \(i_{j}\in\mathrm{Des}(\pi^{-1})\) which implies that \(i_{j}\) and \(i_{j+1}\) belong to the same part \(Q^{(\mathrm{z}_{i_{j}})}\) of \(\mathbf{Q}\) and that \(i_{j}\in\mathrm{Des}(Q^{(\mathrm{z}_{i_{j}})})\) which concludes the proof.
**Example 4.4**.: We illustrate the previous proof in a specific example for \(n=10\) and \(r=4\). Suppose
\[(\alpha,\epsilon)\ =\ \left(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2}\right)\ =\ (2^{0})(2^{1},1^{1})(1^{3})(3^{1})(1^{2}).\]
As we have already computed, its corresponding colored zigzag shape is the one displayed above, and thus it corresponds to the \(4\)-partite skew shape \(\boldsymbol{\lambda}_{(Z,\epsilon)}=\left(\operatorname{Z}_{(2)},\ \operatorname{Z}_{(2,1)}\oplus\operatorname{Z}_{(3)},\ \operatorname{Z}_{(1)},\ \operatorname{Z}_{(1)}\right)\) recorded above.
Now, we pick an element of \(\mathrm{D}_{(\alpha,\epsilon)}\)
\[(\pi,\mathrm{z})\ =\ 2^{0}3^{0}7^{1}10^{1}5^{1}6^{3}1^{1}8^{1}9^{1}4^{2}\ =\ (23)^{0}(7 \,10\,5)^{1}(6)^{3}(189)^{1}(4)^{2}\]
and form the tableaux
\[Q_{(1)}=\framebox{2}\,\framebox{3}\,,\qquad Q_{(2)}=\begin{array}{cc}&\framebox{5}\\ \framebox{7}&\framebox{10}\end{array}\,,\qquad Q_{(3)}=\framebox{6}\,,\qquad Q_{(4)}=\framebox{1}\,\framebox{8}\,\framebox{9}\,,\qquad Q_{(5)}=\framebox{4}\]
with corresponding colors
\[\epsilon_{(1)}\ =\ 0,\quad\epsilon_{(2)}\ =\ 1,\quad\epsilon_{(3)}\ =\ 3,\quad \epsilon_{(4)}\ =\ 1,\quad\epsilon_{(5)}\ =\ 2.\]
Taking the direct sum of tableaux of the same color yields the following 4-partite tableau
\[\mathbf{Q}\;=\;\left(\framebox{2}\,\framebox{3}\,,\ \ Q_{(2)}\oplus Q_{(4)},\ \ \framebox{4}\,,\ \ \framebox{6}\,\right)\]
with colored descent set
\[\mathrm{sDes}(\mathbf{Q})\ =\ \left\{1^{1},3^{0},4^{2},5^{1},6^{3},9^{1},10^{1}\right\}\]
which coincides with the colored descent set of the conjugate-inverse of \((\pi,\mathrm{z})\)
\[\overline{(\pi,\mathrm{z})}^{-1}\ =\ 7^{1}1^{0}2^{0}10^{2}5^{1}6^{3}3^{1}8^{1}9^{1}4^{ 1}.\]
## 5. Character formulas for colored descent representations
This section studies colored descent representations in the context of colored zigzag shapes and proves the main results of this paper. In particular, Theorem 5.2 proves that the colored quasisymmetric generating function of conjugate-inverse colored descent classes is Schur-positive and equals the Frobenius image of colored descent representations. Theorem 5.3 provides an alternating formula for the latter in terms of complete homogeneous symmetric functions in the colored context.
Bagno and Biagioli [5, Section 8] studied colored descent representations using the coinvariant algebra as a representation space, extending the techniques of Adin, Brenti and Roichman [2]. We are going to define colored descent representations by means of colored zigzag shapes and
prove that the two descriptions coincide by providing the decomposition into irreducible \(\mathfrak{S}_{n,r}\)-representations.
**Definition 5.1**.: Let \((\alpha,\epsilon)\) be an \(r\)-colored composition of \(n\) with rainbow decomposition \((\alpha,\epsilon)=(\alpha_{(1)},\epsilon_{(1)})(\alpha_{(2)},\epsilon_{(2)}) \cdots\ (\alpha_{(m)},\epsilon_{(m)})\). The element
\[r_{(\alpha,\epsilon)}:=r_{(\alpha,\epsilon)}(\mathbf{x}^{(0)},\mathbf{x}^{(1)},\dots, \mathbf{x}^{(r-1)}):=r_{\alpha_{(1)}}(\mathbf{x}^{(\epsilon_{(1)})})r_{\alpha_{(2)}}( \mathbf{x}^{(\epsilon_{(2)})})\cdots r_{\alpha_{(m)}}(\mathbf{x}^{(\epsilon_{(m)})})\]
of \(\operatorname{Sym}_{n}^{(r)}\) is called the colored ribbon Schur function corresponding to \((\alpha,\epsilon)\) and the (virtual) \(\mathfrak{S}_{n,r}\)-representation \(\varrho_{(\alpha,\epsilon)}\) such that
\[\operatorname{ch}^{(r)}(\varrho_{(\alpha,\epsilon)})=r_{(\alpha,\epsilon)}\]
is called the colored descent representation corresponding to \((\alpha,\epsilon)\).
For example, for \(n=10\) and \(r=4\)
\[r_{(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})}=r_{(2)}(\mathbf{x}^{(0)})r_{(2,1)}(\mathbf{ x}^{(1)})r_{(1)}(\mathbf{x}^{(3)})r_{(3)}(\mathbf{x}^{(1)})r_{(1)}(\mathbf{x}^{(2)}).\]
The first part of the following theorem shows that colored descent representations are actually non-virtual and coincide with the ones studied by Bagno and Biagioli [5, Theorem 10.5], while the second part extends and complements Adin et al.'s result [1, Proposition 5.5(i)] to general colored permutation groups.
**Theorem 5.2**.: _For every \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\),_
\[r_{(\alpha,\epsilon)}\ =\ F^{(r)}(\overline{\operatorname{D}}_{(\alpha, \epsilon)}^{-1})\ =\ \sum_{\mathbf{\lambda}\vdash n}c_{\mathbf{\lambda}}(\alpha,\epsilon)\,s_{\mathbf{\lambda}}, \tag{5.1}\]
_where \(c_{\mathbf{\lambda}}(\alpha,\epsilon)\) is the number of \(\mathbf{Q}\in\operatorname{SYT}(\mathbf{\lambda})\) such that \(\operatorname{co}(\mathbf{Q})=(\alpha,\epsilon)\). In particular, conjugate-inverse colored descent classes are Schur-positive for colored descent representations._
The proof of Theorem 5.2 is essentially a colored version of that of Proposition 2.2. It is based on a colored analogue of the well-known Robinson-Schensted correspondence, first considered by White [22] and further studied by Stanton and White [20] (see also [17, Section 6] and [1, Section 5] for the case of two colors). It is a bijection from \(\mathfrak{S}_{n,r}\) to the set of all pairs of standard Young \(r\)-partite tableaux of the same shape and size \(n\). If \(w\mapsto(\mathbf{P},\mathbf{Q})\) under this correspondence, then
\[\operatorname{sDes}(w) =\operatorname{sDes}(\mathbf{Q})\] \[\operatorname{sDes}(\overline{w}^{-1}) =\operatorname{sDes}(\mathbf{P}).\]
Proof of Theorem 5.2.: The first equality of Equation (5.1) follows directly from Proposition 4.3. For the second equality, applying the colored analogue of the Robinson-Schensted correspondence yields
\[F^{(r)}(\overline{\operatorname{D}}_{(\alpha,\epsilon)}^{-1})=\sum_{\mathbf{ \lambda}\vdash n}\sum_{\begin{subarray}{c}\mathbf{P},\,\mathbf{Q}\in\operatorname{SYT }(\mathbf{\lambda})\\ \operatorname{co}(\mathbf{P})=(\alpha,\epsilon)\end{subarray}}F^{(r)}_{ \operatorname{co}(\mathbf{Q})}.\]
and the proof follows from Equation (3.3).
In our running example, we see that
\[\begin{split} r_{(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})}&=s_{2}(\boldsymbol{x}^{(0)})s_{21}(\boldsymbol{x}^{(1)})s_{3}(\boldsymbol{x}^{(1)})s_{1}(\boldsymbol{x}^{(2)})s_{1}(\boldsymbol{x}^{(3)})\\ &=s_{2}(\boldsymbol{x}^{(0)})\left(s_{321}(\boldsymbol{x}^{(1)})+s_{411}(\boldsymbol{x}^{(1)})+s_{42}(\boldsymbol{x}^{(1)})+s_{51}(\boldsymbol{x}^{(1)})\right)s_{1}(\boldsymbol{x}^{(2)})s_{1}(\boldsymbol{x}^{(3)})\\ &=s_{(2,321,1,1)}+s_{(2,411,1,1)}+s_{(2,42,1,1)}+s_{(2,51,1,1)},\end{split}\]
where we omitted the parentheses and commas in (regular) partitions for ease of notation. There are many ways to carry out this computation, the most "powerful" of which is to apply the Littlewood-Richardson rule [18, Section 7.15]. Thus, the colored descent representation corresponding to \((2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})\) decomposes as the multiplicity-free direct sum of the irreducible \(\mathfrak{S}_{10,4}\)-representations corresponding to the \(4\)-partite partitions \((2,321,1,1)\), \((2,411,1,1)\), \((2,42,1,1)\) and \((2,51,1,1)\); the corresponding \(4\)-partite tableaux are not reproduced here.
We can express the colored ribbon Schur function as an alternating sum of elements of a basis of \(\operatorname{Sym}_{n}^{(r)}\) which can be viewed as the colored analogue of the basis of complete homogeneous symmetric functions. For an \(r\)-partite partition \(\boldsymbol{\lambda}=(\lambda^{(0)},\lambda^{(1)},\ldots,\lambda^{(r-1)})\), let
\[h_{\boldsymbol{\lambda}}:=h_{\boldsymbol{\lambda}}(\boldsymbol{x}^{(0)}, \boldsymbol{x}^{(1)},\ldots,\boldsymbol{x}^{(r-1)}):=h_{\lambda^{(0)}}( \boldsymbol{x}^{(0)})h_{\lambda^{(1)}}(\boldsymbol{x}^{(1)})\cdots h_{ \lambda^{(r-1)}}(\boldsymbol{x}^{(r-1)}).\]
The set \(\{h_{\boldsymbol{\lambda}}:\boldsymbol{\lambda}\vdash n\}\) forms a basis for \(\operatorname{Sym}_{n}^{(r)}\).
Similarly to the classical case, given \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\) we can form an \(r\)-partite partition \(\boldsymbol{\lambda}_{(\alpha,\epsilon)}\) of \(n\) by first splitting its entries into colored components and then rearranging the entries of each component in weakly decreasing order. We write \(h_{(\alpha,\epsilon)}:=h_{\boldsymbol{\lambda}_{(\alpha,\epsilon)}}\).
**Theorem 5.3**.: _For every \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\),_
\[r_{(\alpha,\epsilon)}\ =\ \sum_{\begin{subarray}{c}(\beta,\delta)\in \operatorname{Comp}(n,r)\\ (\beta,\delta)\preceq(\alpha,\epsilon)\end{subarray}}(-1)^{\ell(\alpha)- \ell(\beta)}\,h_{(\beta,\delta)}. \tag{5.2}\]
Proof.: Let \((\alpha,\epsilon)\) be a colored composition of \(n\) with rainbow decomposition
\[(\alpha,\epsilon)=(\alpha_{(1)},\epsilon_{(1)})(\alpha_{(2)},\epsilon_{(2)}) \cdots(\alpha_{(m)},\epsilon_{(m)}).\]
Expanding each term \(r_{\alpha_{(i)}}(\boldsymbol{x}^{(\epsilon_{(i)})})\) in the definition of the colored ribbon Schur function \(r_{(\alpha,\epsilon)}\) according to Equation (2.3) yields
\[\begin{split} r_{(\alpha,\epsilon)}&=r_{\alpha_{(1 )}}(\boldsymbol{x}^{(\epsilon_{(1)})})r_{\alpha_{(2)}}(\boldsymbol{x}^{( \epsilon_{(2)})})\cdots r_{\alpha_{(m)}}(\boldsymbol{x}^{(\epsilon_{(m)})})\\ &=\prod_{1\leq i\leq m}\sum_{\beta_{(i)}\preceq\alpha_{(i)}}(-1)^ {\ell(\alpha_{(i)})-\ell(\beta_{(i)})}h_{\beta_{(i)}}(\boldsymbol{x}^{( \epsilon_{(i)})})\\ &=\sum_{\begin{subarray}{c}1\leq i\leq m\\ \beta_{(i)}\preceq\alpha_{(i)}\end{subarray}}(-1)^{\ell(\alpha)-(\ell(\beta_{(1 )})+\cdots+\ell(\beta_{(m)}))}\,h_{\beta_{(1)}}(\boldsymbol{x}^{(\epsilon_{( 1)})})\cdots h_{\beta_{(m)}}(\boldsymbol{x}^{(\epsilon_{(m)})}),\end{split}\]
since \(\ell(\alpha)=\ell(\alpha_{(1)})+\cdots+\ell(\alpha_{(m)})\). The proof follows by considering the colored composition with rainbow decomposition \((\beta,\delta)=(\beta_{(1)},\epsilon_{(1)})\cdots(\beta_{(m)},\epsilon_{(m)})\) and noticing that the conditions \(\beta_{(i)}\preceq\alpha_{(i)}\) for all \(1\leq i\leq m\) are precisely equivalent to \((\beta,\delta)\preceq(\alpha,\epsilon)\) and that
\[\ell(\beta) =\ell(\beta_{(1)})+\cdots+\ell(\beta_{(m)})\] \[h_{(\beta,\delta)} =h_{\beta_{(1)}}(\mathbf{x}^{(\epsilon_{(1)})})\cdots h_{\beta_{(m)}} (\mathbf{x}^{(\epsilon_{(m)})}).\]
In our running example, we have
\[r_{(2^{0},2^{1},1^{1},1^{3},3^{1},1^{2})}=h_{(2^{0},2^{1},1^{1},1^{3},3^{1},1^ {2})}-h_{(2^{0},3^{1},1^{3},3^{1},1^{2})}\]
which is in agreement with the expansion in the Schur basis that we calculated above, since
\[h_{321}-h_{33}=s_{321}+s_{411}+s_{42}+s_{51}.\]
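This last identity is easy to confirm with a computer algebra system. The following snippet is a sketch assuming SageMath is available (the calls below are standard Sage; the paper itself does not discuss software).

```python
# Assumes a SageMath environment (e.g., run inside `sage`).
from sage.all import SymmetricFunctions, QQ

Sym = SymmetricFunctions(QQ)
s, h = Sym.schur(), Sym.homogeneous()

# Expand h_{321} - h_{33} in the Schur basis; up to the ordering of terms the
# result should be s[3,2,1] + s[4,1,1] + s[4,2] + s[5,1].
print(s(h[3, 2, 1] - h[3, 3]))
```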
Finally, let us describe the representation-theoretic version of Equation (5.2). For this we need to introduce some notation. We fix a primitive \(r\)-th root of unity \(\omega\). For all \(0\leq j\leq r-1\), let \(\mathds{1}_{n,j}\) be the irreducible \(\mathfrak{S}_{n,r}\)-representation corresponding to the \(r\)-partite partition having all parts empty, except for the part of color \(j\) which is equal to \((n)\). Then,
\[\mathds{1}_{n,j}(\pi,\epsilon)=\omega^{j(\epsilon_{1}+\epsilon_{2}+\cdots+ \epsilon_{n})}\]
for all \((\pi,\epsilon)\in\mathfrak{S}_{n,r}\) (see, for example, [8, Section 4]).
For \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\) of length \(k\), we define the following \(\mathfrak{S}_{\alpha,r}\)-representation
\[\mathds{1}_{(\alpha,\epsilon)}:=\mathds{1}_{\alpha_{1},\epsilon_{1}}\otimes \mathds{1}_{\alpha_{2},\epsilon_{2}}\otimes\cdots\otimes\mathds{1}_{\alpha_{k },\epsilon_{k}},\]
where \(\mathfrak{S}_{\alpha,r}:=\mathfrak{S}_{\alpha_{1},r}\times\mathfrak{S}_{ \alpha_{2},r}\times\cdots\times\mathfrak{S}_{\alpha_{k},r}\) is embedded in \(\mathfrak{S}_{n,r}\) in the obvious way. Since the colored characteristic map is a ring homomorphism, we have
\[\operatorname{ch}^{(r)}\left(\mathds{1}_{(\alpha,\epsilon)}\uparrow_{\mathfrak{S}_{\alpha,r}}^{\mathfrak{S}_{n,r}}\right)\ =\ \prod_{i=1}^{k}\operatorname{ch}^{(r)}(\mathds{1}_{\alpha_{i},\epsilon_{i}})\ =\ h_{\alpha_{1}}(\mathbf{x}^{(\epsilon_{1})})h_{\alpha_{2}}(\mathbf{x}^{(\epsilon_{2})})\cdots h_{\alpha_{k}}(\mathbf{x}^{(\epsilon_{k})}), \tag{5.3}\]
where the second equality follows from basic properties of the colored characteristic map [16, Corollary 3]
\[\operatorname{ch}^{(r)}(\mathds{1}_{\alpha_{i},\epsilon_{i}})=s_{(\alpha_{i} )}(\mathbf{x}^{(\epsilon_{i})})=h_{\alpha_{i}}(\mathbf{x}^{(\epsilon_{i})}).\]
Since the characteristic map is actually a ring isomorphism [16, Theorem 2], applying it to Equation (5.2) yields the following.
**Corollary 5.4**.: _For all \((\alpha,\epsilon)\in\operatorname{Comp}(n,r)\), the character \(\chi_{(\alpha,\epsilon)}\) of the colored descent representation corresponding to \((\alpha,\epsilon)\) satisfies_
\[\chi_{(\alpha,\epsilon)}=\sum_{\begin{subarray}{c}(\beta,\delta)\in \operatorname{Comp}(n,r)\\ (\beta,\delta)\preceq(\alpha,\epsilon)\end{subarray}}(-1)^{\ell(\alpha)-\ell( \beta)}\,\mathds{1}_{(\beta,\delta)}\uparrow_{\mathfrak{S}_{\beta,r}}^{ \mathfrak{S}_{n,r}}. \tag{5.4}\]
Schur functions can also be viewed as generating functions of certain \(P\)-partitions, where \(P\) is a Schur labeled poset arising from the (possibly skew) Young diagram (see, for example, [12, Section 2] and [18, Section 7.19]). In view of this, we conclude with the following question.
**Question 5.5**.: Are colored ribbon Schur functions generating functions of certain colored \(P\)-partitions, where \(P\) arises from the corresponding colored zigzag shape, in the sense of [13]?
## Acknowledgments
The author would like to thank Christos Athanasiadis for suggesting the problem and providing Equation (5.2) in the hyperoctahedral case.
|
2309.09665 | Uplink Power Control for Distributed Massive MIMO with 1-Bit ADCs | We consider the problem of uplink power control for distributed massive
multiple-input multiple-output systems where the base stations (BSs) are
equipped with 1-bit analog-to-digital converters (ADCs). The scenario with a
single-user equipment (UE) is first considered to provide insights into the
signal-to-noise-and-distortion ratio (SNDR). With a single BS, the SNDR is a
unimodal function of the UE transmit power. With multiple BSs, the SNDR at the
output of the joint combiner can be made unimodal by adding properly tuned
dithering at each BS. As a result, the UE can be effectively served by multiple
BSs with 1-bit ADCs. Considering the
signal-to-interference-plus-noise-and-distortion ratio (SINDR) in the multi-UE
scenario, we aim at optimizing the UE transmit powers and the dithering at each
BS based on the min-power and max-min-SINDR criteria. To this end, we propose
three algorithms with different convergence and complexity properties.
Numerical results show that, if the desired SINDR can only be achieved via
joint combining across multiple BSs with properly tuned dithering, the optimal
UE transmit power is imposed by the distance to the farthest serving BS (unlike
in the unquantized case). In this context, dithering plays a crucial role in
enhancing the SINDR, especially for UEs with significant path loss disparity
among the serving BSs. | Bikshapathi Gouda, Italo Atzeni, Antti Tölli | 2023-09-18T11:02:52Z | http://arxiv.org/abs/2309.09665v1 | # Uplink Power Control for Distributed Massive MIMO with 1-Bit ADCs
###### Abstract
We consider the problem of uplink power control for distributed massive multiple-input multiple-output systems where the base stations (BSs) are equipped with 1-bit analog-to-digital converters (ADCs). The scenario with a single user equipment (UE) is first considered to provide insights into the signal-to-noise-and-distortion ratio (SNDR). With a single BS, the SNDR is a unimodal function of the UE transmit power. With multiple BSs, the SNDR at the output of the joint combiner can be made unimodal by adding properly tuned dithering at each BS. As a result, the UE can be effectively served by multiple BSs with 1-bit ADCs. Considering the signal-to-interference-plus-noise-and-distortion ratio (SINDR) in the multi-UE scenario, we aim at optimizing the UE transmit powers and the dithering at each BS based on the min-power and max-min-SINDR criteria. To this end, we propose three algorithms with different convergence and complexity properties. Numerical results show that, if the desired SINDR can only be achieved via joint combining across multiple BSs with properly tuned dithering, the optimal UE transmit power is imposed by the distance to the farthest serving BS (unlike in the unquantized case). In this context, dithering plays a crucial role in enhancing the SINDR, especially for UEs with significant path loss disparity among the serving BSs.
## I Introduction
Fully digital massive multiple-input multiple-output (MIMO) is widely recognized for its ability to realize flexible beamforming and large-scale spatial multiplexing [1]. As we move towards higher carrier frequencies in the next-generation wireless systems, the number of antennas at the base station (BS) is bound to increase significantly [2]. Fully digital massive MIMO arrays with high-resolution analog-to-digital converters (ADCs), however, are exceedingly complex and power hungry. Low-resolution and 1-bit ADCs can be used to considerably decrease the power consumption without compromising the performance [3, 4, 5, 6]. Moreover, adopting low-resolution and 1-bit ADCs in a distributed/cell-free massive MIMO setting allows to reduce the backhaul signaling overhead (among the BSs or between the BSs and the central processing unit) associated with the network-wide beamforming design [7, 8].
The works [4, 5] investigated the uplink channel estimation and achievable signal-to-interference-plus-noise-and-distortion ratio (SINDR) using 1-bit ADCs in single-BS systems. In addition, the spectral and energy efficiency of massive MIMO systems with low-resolution ADCs are studied as a function of the UE transmit power in [9]. These studies employ the Gaussian approximation of the quantization distortion (QD) to evaluate the system performance, which accurately represents the behavior in scenarios with low UE transmit power or a large number of UEs. The performance of 1-bit ADC systems with a low number of UEs is evaluated in [10], where it is shown to degrade at high UE transmit power due to the QD. However, the introduction of properly tuned dithering at the BS enhances the performance at high transmit powers [10]. Regarding cell-free massive MIMO systems, the performance of 1-bit ADCs is evaluated in [11], while the energy efficiency of such systems with low-resolution ADCs is analyzed in conjunction with an uplink power control scheme for max-min fairness in [12]. It is important to note that the works in [11, 12] also adopt the Gaussian approximation of the QD, which becomes more accurate as the number of ADC bits increases. However, this approximation may not fully capture the true performance characteristics of 1-bit ADC systems.
In this paper, we consider the uplink power control problem in the distributed massive MIMO systems with 1-bit ADCs. We first provide insight into the signal-to-noise-and-distortion ratio (SNDR) of a UE in single and multi-BS scenarios with 1-bit ADCs. Our analysis reveals that the SNDR of the UE is non-monotonic and highly dependent on the UE transmit powers and location. Specifically, in the single-BS case, the SNDR is unimodal, whereas, in the multi-BS case, the SNDR is non-unimodal. However, we show that by tuning the Gaussian dithering (i.e., the noise level), the SNDR can be made unimodal even in the multi-BS scenario. We also optimize the transmit powers of UEs by considering the min-power and max-min-SINDR optimization problems in a multi-UE scenario. To solve these optimization problems, we use gradient, fixed-point, and block coordinate descent (BCD) methods. Our numerical results indicate that in a single BS system, the SINDR reaches a saturation point beyond a certain power of the UEs. Moreover, in a multi-BS scenario, if achieving the desired target SINDR involves reception from multiple BSs using a large number of antennas and dithering, then the UE transmit power heavily depends on the distance to the farthest serving BS. In this context, employing dithering at the BSs becomes crucial in enhancing the SINDR for UEs that exhibit a substantial variation in path loss among the serving BSs.
## II System Model
We consider an uplink distributed/cell-free massive MIMO system where a set of BSs \(\mathcal{B}\triangleq\{1,\ldots,B\}\) with \(M\) antennas serves a set of single-antenna UEs \(\mathcal{K}\triangleq\{1,\ldots,K\}\). Each |
2309.16745 | Efficient Training of One Class Classification-SVMs | This study examines the use of a highly effective training method to conduct
one-class classification. The existence of both positive and negative examples
in the training data is necessary to develop an effective classifier in common
binary classification scenarios. Unfortunately, this criteria is not met in
many domains. Here, there is just one class of examples. Classification
algorithms that learn from solely positive input have been created to deal with
this setting. In this paper, an effective algorithm for dual soft-margin
one-class SVM training is presented. Our approach makes use of the Augmented
Lagrangian (AL-FPGM), a variant of the Fast Projected Gradient Method. The FPGM
requires only first derivatives, which for the dual soft margin OCC-SVM means
computing mainly a matrix-vector product. Therefore, AL-FPGM, being
computationally inexpensive, may complement existing quadratic programming
solvers for training large SVMs. We extensively validate our approach over
real-world datasets and demonstrate that our strategy obtains statistically
significant results. | Isaac Amornortey Yowetu, Nana Kena Frempong | 2023-09-28T15:35:16Z | http://arxiv.org/abs/2309.16745v1 | # Efficient Training of One Class Classification - SVMs
###### Abstract
This study examines the use of a highly effective training method to conduct one-class classification. The existence of both positive and negative examples in the training data is necessary to develop an effective classifier in common binary classification scenarios. Unfortunately, this criteria is not met in many domains. Here, there is just one class of examples. Classification algorithms that learn from solely positive input have been created to deal with this setting. In this paper, an effective algorithm for dual soft-margin one-class SVM training is presented. Our approach makes use of the Augmented Lagrangian (AL-FPGM), a variant of the Fast Projected Gradient Method. The FPGM requires only first derivatives, which for the dual soft margin OCC-SVM means computing mainly a matrix-vector product. Therefore, AL-FPGM, being computationally inexpensive, may complement existing quadratic programming solvers for training large SVMs. We extensively validate our approach over real-world datasets and demonstrate that our strategy obtains statistically significant results.
**Keywords:** Support Vector Machine (SVM), One-Class Classification (OCC), Support Vector Data Description (SVDD)
## 1 Introduction
In common binary classification tasks, learning algorithms assume the availability of both positive and negative examples. This strong requirement, however, is not met in many real-world applications. In practice, labeling data is an expensive, time-consuming task that requires a high level of domain knowledge. In some cases, this operation is quick, but usually defining reliable labels for each data example is a hard task [1].
The goal of one-class classification (OCC) methods is to create classification models when the negative class is either nonexistent, poorly sampled, or poorly specified [2]. Thus, this technique creates class boundaries using knowledge of the positive class only. Examples of one-class classification include outlier detection and novelty detection, where the outlier elements are identified independently of all the other data elements. One-class classification problems arise in a variety of situations, including:
* **Outlier Detection:** The objective is to find samples from an unlabeled dataset that are outliers. Outliers in the training set are observations that deviate significantly from the others. Outlier estimators ignore the deviating observations and attempt to fit the majority of the training data under the region. An alternative name for it is unsupervised anomaly detection.
* **Novelty Detection:** Consider training data where there are no outliers, and we want to determine whether the incoming or fresh observation is an outlier or not. The outlier can be referred to as a novelty in this situation. Anomaly detection that is semi-supervised might be said to be involved.
* **Information Retrieval:** The purpose of this classification is to find samples in the unlabeled dataset that are related to the ones provided by the user.
* **One-vs-rest:** This is taken into account in this case when the negative class is too diversified and it is difficult to collect and label numerous negative samples [3].
This technique can find outliers that deviate from the training set in some way. It is highly helpful in resolving a classification problem where samples for one class are in great abundance and samples for the other classes are scarce [4]. The primary objective of OCC is to estimate the support of the data distribution, which is very helpful in unsupervised learning, particularly in high-dimensional feature spaces where doing density estimation is exceedingly challenging [3]. Several real problems, including novelty discovery, outlier detection, and others. Among the first authors to develop OCC algorithms were [5] and [6]. A classifier that finds the least radius hypersphere enclosing data is proposed by [6] whereas [5] specifically suggests a classifier that finds the hyperplane separating data from the origin with the largest margin. [5] establishes that despite these two approaches having differences between them, they both yield the same results for translation-invariant kernels like
the Gaussian kernel. First-order approaches have been developed and used with the aim of addressing large-scale non-linear optimization problems. Several methods that require the solution of linear systems of equations, such as the Exterior-Point Method (EPM) and Interior-Point Methods (IPM) [7; 8], have been used to address quadratic programming problems. One of these techniques, for instance, works well for medium-sized problems with a few thousand variables.
In recent years, machine learning algorithms and related approaches such as [9] have been used to solve nonlinear optimization problems involving very large numbers of variables or very large datasets. [10] suggested combining the Fast Projected Gradient Method (FPGM) with the Augmented Lagrangian (AL) method to train SVMs, which are modeled as large-scale convex quadratic optimization problems with linear constraints and simple bounds. That study omitted the convergence analysis of the AL-FPGM, despite the encouraging results of the model. On the other hand, the convergence of the AL-FPGM was theoretically investigated in [9].
The three main contributions of this paper are (i) applying a technique based on the fast projected gradient method [10] for training dual soft-margin OCC-SVMs, (ii) creating and implementing an AL-FPGM-based QP solver in Python for training OCC-SVMs, and (iii) testing the QP solver by training OCC-SVMs on a few datasets used in PU-learning problems [3].
The remainder of this paper is organized as follows: Section 2 describes the dual soft-margin OCC-SVM training problem, Section 3 describes the augmented Lagrangian method, Section 4 describes the fast projected gradient method, Section 5 presents numerical results for training OCC-SVMs with the AL-FPGM, and Section 6 presents concluding remarks.
## 2 The dual soft-margin SVM problem
[5] developed a method called "one-class classification" that extends the SVM methodology to handle training with just positive input. Only positive data can be used with the suggested Scholkopf mechanism. The algorithm checks for "outliers" within the positive instances and uses them as negative examples [11]. After changing the feature via a kernel, the one-class classification method of [5] treats the origin as the unique member of the second class. The image of one class is then separated from the origin using "relaxation parameters." Following that, the conventional OCC-SVMs algorithms are used [11].
The following quadratic programming problem must be solved in order to separate the data set from the origin:
\[\min\frac{1}{2}\|w\|^{2}+\frac{1}{vn}\sum_{i=1}^{n}\zeta_{i}-\beta \tag{1}\]
subject to:
\[(w\cdot\phi(x_{i}))\geq\beta-\zeta_{i}\ \ \ i=1,2,\ldots,n\ \ \zeta_{i}\geq 0\]
Here, \(v\in(0,1)\) is a parameter whose meaning will become clear later. Since nonzero slack variables \(\zeta_{i}\) are penalized in the objective function, we can expect that if w and \(\beta\) solve this problem, then the decision function \(f(x)=sign((w\cdot\phi(x))-\beta)\) will be positive for most examples \(x_{i}\) contained in the training set, while the SV type regularization term \(\|w\|\) will still be small. The actual trade-off between these two goals is controlled by \(v\). It is possible to demonstrate that the solution has an SV expansion by deriving the dual problem and applying the kernel transformation.
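In practice, this soft-margin one-class formulation is what scikit-learn's `OneClassSVM` solves; the short example below (dataset and parameter values are arbitrary choices of ours) illustrates how \(v\), exposed as the `nu` argument, is set when training on positives only.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(200, 2))      # positive examples only
X_test = np.vstack([rng.normal(size=(10, 2)),                 # more positives
                    rng.normal(loc=5.0, size=(10, 2))])       # far-away points (outliers)

clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1)            # nu plays the role of v in Eq. (1)
clf.fit(X_train)
print(clf.predict(X_test))   # +1 for points accepted as the positive class, -1 for outliers
```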
Using Lagrange multiplier, constraint optimization problem can further be expressed as Equation (2).
\[\mathcal{L}(w,\zeta,\alpha,\beta)=\frac{1}{2}\|w\|^{2}+\frac{1}{vn}\sum_{i=1}^{n}\zeta_{i}-\beta-\sum_{i=1}^{n}\alpha_{i}\left((w\cdot\phi(x_{i}))-\beta+\zeta_{i}\right) \tag{2}\]
The Lagrange multiplier \(\alpha\) must be greater or equal to zero. For the purpose of simplification, we expressed the \(\frac{1}{vn}\) as \(C\). We further find the partial derivatives of the loss function \(\mathcal{L}\) with respect to \(\zeta,w\) and \(\beta\)
\[\frac{\partial\mathcal{L}}{\partial w} =w-\sum_{i=1}^{n}\alpha_{i}x_{i}=0\implies w=\sum_{i=1}^{n} \alpha_{i}x_{i} \tag{3}\] \[\frac{\partial\mathcal{L}}{\partial\beta} =-1+\sum_{i=1}^{n}\alpha_{i}=0\implies\sum_{i=1}^{n}\alpha_{i}=1\] (4) \[\frac{\partial\mathcal{L}}{\partial\zeta} =C-\alpha_{i}=0\implies C=\alpha_{i} \tag{5}\]
The optimal value will be obtained using \(\zeta,w\) and \(\beta\) from the equations above, and this gives rise to Equation 6.
\[\mathcal{L}(w,\beta,\zeta,\alpha) =\frac{1}{2}w^{T}w+C\sum_{i=1}^{n}\zeta_{i}-\beta-w\sum_{i=1}^{n} \alpha_{i}x_{i}+\beta\sum_{i=1}\alpha_{i}-\sum_{i=1}^{n}\alpha_{i}\zeta_{i} \tag{6}\] \[=\frac{1}{2}w^{T}w-w^{T}w\] (7) \[\max\mathcal{L}(\alpha) =-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j}x_{i }^{T}x_{j} \tag{8}\]
Where \(x^{T}x=K(x,x)\) is a kernel to be computed. Minimizing Equation (8) gives:
\[f(\alpha)=\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_{i}\alpha_{j}x_{i}^{T} x_{j} \tag{9}\]
Equation 9 can further be expressed in a more compact form as:
\[f(\alpha)=\frac{1}{2}\alpha^{T}K(x,x)\alpha \tag{10}\]
The patterns \(x_{i}\) with nonzero \(\alpha_{i}\) are called SVs, where the coefficients are found as the solution of the dual problem:
\[f(\alpha)=\frac{1}{2}\alpha^{T}K(x,x)\alpha\ \ \mbox{subject to}\ \ \ \ \sum_{i}^{n}\alpha_{i}=1 \tag{11}\]
and the bounded set:
\[Box=\{\alpha\in\mathbf{R}^{m}:0\leq\alpha_{i}\leq\frac{1}{vn},\ \ \ \ i=1,\cdots,m\}\]
Then the optimization problem 11 can be rewritten as follows:
\[\min f(\alpha)\ \ \mbox{s.t}\ \ h(\alpha)=0 \tag{12}\]
where \(h(\alpha):=\sum_{i=1}^{n}\alpha_{i}-1\) encodes the equality constraint in (12). The augmented Lagrangian can be written as follows:
\[\mathcal{L}(\alpha)=f(\alpha)+\mu h(\alpha)+0.5ch(\alpha)^{2} \tag{13}\]
where \(\mu\in\mathbb{R}\) is the unknown Lagrange multiplier that corresponds to the equality constraint and \(c>0\) is the scaling parameter.
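For concreteness, the following NumPy sketch (our own illustration, not the authors' code) evaluates \(\mathcal{L}_{c}(\alpha,\mu)\) of Equation (13), with \(h(\alpha)=\sum_{i}\alpha_{i}-1\), together with its gradient under the \(+\mu h\) sign convention of Eq. (13); the Gaussian kernel, data and parameter values are placeholders.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """One common form of the Gaussian kernel: K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def augmented_lagrangian(alpha, K, mu, c):
    """L_c(alpha, mu) = f(alpha) + mu*h(alpha) + 0.5*c*h(alpha)^2, h(alpha) = sum(alpha) - 1."""
    f = 0.5 * alpha @ K @ alpha
    h = np.sum(alpha) - 1.0
    return f + mu * h + 0.5 * c * h**2

def augmented_lagrangian_grad(alpha, K, mu, c):
    """Gradient K @ alpha + (mu + c*h(alpha)) * 1; dominated by the matrix-vector product."""
    h = np.sum(alpha) - 1.0
    return K @ alpha + (mu + c * h) * np.ones_like(alpha)

# Tiny smoke test on random placeholder data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
K = rbf_kernel(X, gamma=0.5)
alpha = np.full(50, 1.0 / 50)          # feasible start: sum(alpha) = 1
print(augmented_lagrangian(alpha, K, mu=0.0, c=0.1))
```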
## 3 Augmented Lagrangian Method Algorithm
The augmented Lagrangian method consists of a sequence of approximate minimizations of \(\mathcal{L}_{c}(\alpha,\mu)\) in \(\alpha\) on the Box set
\[\hat{\alpha}\approx\alpha(\hat{\mu})=\arg\min_{\alpha\in Box}\mathcal{L}_{c}( \alpha,\mu) \tag{14}\]
followed by updating the Lagrange multiplier \(\mu.\) We use the following function, which measures the first-order optimality conditions for problem (14), as the stopping criteria:
Algorithm 1 provides a preliminary sketch of an augmented Lagrangian technique.
```
Step 1: Set \(\alpha^{*}:=\alpha=0,\ \mu:=0,\ rec:=acc(\alpha,\mu)\). Select \(c>0,\ 0<\theta<1,\ \delta>1\)
Step 2: Find \(\hat{\alpha}\approx\arg\min_{\alpha\in Box}\mathcal{L}(\alpha,\mu)\) such that \(acc(\hat{\alpha},\mu)\leq\theta*rec\) using FPGM
Step 3: Find \(\hat{\mu}\approx\mu+ch(\alpha)\)
Step 4: Set \(\alpha:=\hat{\alpha},\ \epsilon:=\theta\epsilon,\ c:=\delta c\)
Step 5: If \(rec>\varepsilon\) then Goto 2.
Step 6: End
```
**Algorithm 1** Augmented Lagrangian Methods
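A schematic Python rendering of this outer loop may help fix ideas; the inner FPGM solve is abstracted behind a placeholder callable `inner_solve`, and all names and default values below are our own choices rather than the authors' implementation.

```python
def augmented_lagrangian_method(inner_solve, h, alpha0, c=0.1, theta=0.99,
                                delta=1.01, eps=1e-6, max_outer=200):
    """Schematic rendering of Algorithm 1 (not the authors' exact implementation).

    inner_solve(alpha, mu, c, tol) is assumed to run the FPGM over the Box set and
    return (alpha_hat, measure), where `measure` is the first-order optimality value
    reached; h is the equality constraint, h(alpha) = sum(alpha) - 1 here.
    """
    alpha, mu = alpha0, 0.0
    rec = 1.0                     # placeholder for the initial optimality record acc(alpha, mu)
    for _ in range(max_outer):
        alpha, rec = inner_solve(alpha, mu, c, tol=theta * rec)   # Step 2
        mu += c * h(alpha)                                        # Step 3
        c *= delta                                                # Step 4 (c := delta * c)
        if rec <= eps:                                            # Step 5 stopping test
            break
    return alpha, mu
```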
Standard QP methods can be used to solve problem (11). The offset \(\beta\) may be obtained by exploiting the fact that for any \(\alpha_{i}\) which is not at the upper or lower bound, the corresponding pattern \(x_{i}\) satisfies:
\[\beta=(w\cdot x_{i})=\sum_{j}\alpha_{j}k(x_{j},x_{i}) \tag{15}\]
Note that the upper limits of the Lagrange multipliers go to infinity as \(v\to 0\).
## 4 Fast Projected Gradient Method (FPGM)
The Lipschitz constant \(L>0\) of the gradient of \(\mathcal{L}_{c}\) must be estimated for the fast projected gradient method (FPGM) for the inequality
\[\|\nabla_{\alpha}\mathcal{L}_{c}(\alpha_{{}_{1}},\lambda)-\nabla_{\alpha} \mathcal{L}_{c}(\alpha_{{}_{2}},\lambda)\|\leq L\|\alpha_{{}_{1}}-\alpha_{{}_{ 2}}\| \tag{16}\]
to hold \(\forall\alpha_{1},\alpha_{2}\in\mathbb{R}^{n}\)
The gradient and the Hessian of \(\mathcal{L}_{c}(\alpha,\lambda)\) for the OCC-SVM can be written as follows:
\[\nabla_{\alpha}\mathcal{L}_{c}(\alpha,\lambda)=K\alpha-(\lambda-c(\alpha^{T}e -1))e^{T} \tag{17}\]
\[\nabla_{\alpha\alpha}^{2}\mathcal{L}_{c}(\alpha,\lambda)=K+cee^{T} \tag{18}\]
where K is an \(n\times n\) matrix,
\[K_{i,j} =k(x_{i},x_{j})\] \[e =(1,...,1)^{T}\in\mathbb{R}^{n}\]
\(\mathcal{L}_{c}\) is a quadratic form with respect to \(\alpha\)
\[L=\|\nabla_{\alpha\alpha}^{2}\mathcal{L}_{c}(\alpha,\lambda)\|=\|K+cee^{T}\| \tag{19}\]
where the biggest singular value of a matrix is its spectral norm, i.e., a constant that depends only on the kernel matrix \(\mathbf{K}\) and the scaling parameter c [10].
In terms of the Gaussian kernel we have the following estimate:
\[\mathcal{L}\leq trace(\nabla_{\alpha\alpha}^{2}\mathcal{L}_{c}( \alpha,\lambda) =trace(K+cee^{T}) \tag{20}\] \[=trace(K)+trace(cee^{T}) \tag{21}\]
In the estimation above, we took into account the facts that the trace of a matrix is the sum of its eigenvalues and that, when all eigenvalues are non-negative, the biggest eigenvalue is either less than or equal to the total of all
eigenvalues.
Therefore, the estimation (\(L\approx trace(K)+trace(c(ee^{T}))\) is considered in the training. Similar kernel-specific bounds for additional kernels can be determined. For the dot product kernel, polynomial kernel and others, for example, the same estimation (\(L\approx trace(K)+trace(cee^{T})\)) can be applied. The most computational expensive term of the \(\mathcal{L}_{c}(\alpha,\lambda)\) is the matrix-vector product \(K\alpha\) calculation which carries \(\mathcal{O}(n^{2})\) basic operations in the case when the matrix under consideration is dense (Steps 3 and 7 in Algorithm 3). The projection operator \(P_{Box}:\mathbb{R}^{n}\mapsto Box\) (Step 3) is not computationally expensive (see algorithm 2) and expects only \(\mathcal{O}(n)\) basic arithmetic operations. Less than dozen of arithmetic operations are contained in the remaining stages together. Therefore, one iteration of FPGM expects \(\mathcal{O}(n^{2})\) operations. Algorithm 3 describes the FPGM used in the step 2 of the Algorithm 1. For the sequences of \(\{\alpha_{s}\}\) that are generated by the FPGM for a convex quadratic problem, the following bound hold [10]
\[\mathcal{L}_{c}(\alpha_{s},\lambda)-\mathcal{L}_{c}(\alpha(\lambda),\lambda) \leq\frac{2L\|\alpha_{0}-\alpha(\lambda)\|^{2}}{(s+1)^{2}} \tag{22}\]
where \(L>0\) is the Lipschitz constant and \(\alpha(\lambda)\) is the expected solution of the problem (14) under consideration. As a result, the FPGM converges to the minimum value augmented Lagrangian with an error order of \(\mathcal{O}(s^{-2})\), where s is the number of iterations. Furthermore, with reference to the estimation of L for the various kernels and the fact that both \(\alpha_{0}\) and \(\alpha(\lambda)\in Box\), the number of steps expected for the FPGM to converge to the minimum of the Augmented Lagrangian \(\mathcal{L}_{p}(\alpha(\lambda),\lambda)\) can be estimated [10].
Since \(\{\mathcal{L}(\alpha)\}\) is a decreasing sequence (with reference to the fact that \(\alpha^{k}\neq\alpha^{k+1}\forall k\geq 0\) and that the objective function is minimized at every iteration) and also bounded below (based on the existence of an unknown global optimum), it can be said that convergences. Cauchy sequence can also be applied to prove that it is convergent.
By using this fact and by considering that \(\|\mathcal{L}(\alpha^{k})-\mathcal{L}(\alpha^{k+l})\|>\|\alpha^{k}-\alpha^{k+l }\|_{2}\forall k,l\geq 0\), It can be inferred that \(\{\alpha^{k}\}\) Therefore since the sequence lies also in a closed feasible set, it is convergent. That's as \(k\mapsto\infty\), \(\alpha^{k}\mapsto\alpha^{k+1}\) then, the algorithm (1) yields a sequence of points.
```
1:Loop over all \(i=1,\ldots,n\)
2:If\(\alpha_{i}<0\) then Set \(\alpha_{i}=0\)
3:If\(\alpha_{i}>C\) then Set \(\alpha_{i}=C\)
4:Return \(\alpha_{1}\)
```
**Algorithm 2** Operator \(P_{Box}\): Projection of \(\alpha\in\mathbb{R}^{n}\) onto the set B
The AL method falls into the class of the proximal algorithms that have been extensively investigated in the papers of [12, 13]. One of possible AL-FPGM convergence analysis paths is to use of the theory developed in [12]. The importance of using the proximal-point theory is the possibility to demonstrate the convergence of the AL-FPGM for any \(c>0\); which would explain the possibility of obtaining convergence for small \(c\).
## 5 Numerical results and discussion
In this section, we report on the numerical results of training OCC-SVMs with the AL-FPGM on datasets that were used to solve PU-Learning problem [3]. To assess an improvement in efficiency of the AL-FPGM relatively to optimization methods that solve linear systems at every iteration, we compared the efficiency of the former to the OCC-SVM solver in python. It was observed that both approaches converges in less than 1 second. We selected the OCC-SVM solver in Python for a few reasons. First, it happens to be the widely used solver built for OCC-SVM. Secondly, it turns out to be an efficient and accurate method that solves linear systems at each step while keeping the number of iterations low. Lastly, the numerical results of testing the OCC-SVM solver is published and available for comparison. The OCC-SVM implemented in Python was developed for anomaly detection. For the training set, 25% of the positives were used while the remaining 75% of the positives and all the negatives were used for the testing. The Gaussian kernel \(K(x_{i},x_{j})=e^{-\gamma\|x_{i}-x_{j}\|}\) with \(\gamma\) values of 0.1, 0.5 and 1 were used and the results were presented in Table 1. For the AL-FPGM we used the scaling parameter value \(c=0.1\) and the stopping criteria of \(10^{-6}\). The other parameters used were \(\theta=0.99\) and \(\delta=1.01\) for all the runs.
## 6 Conclusion
This manuscript presents numerical results on training dual soft margin OCC-SVMs with AL-FPGM. We developed and implemented a QP solver based on the AL-FPGM and tested the QP solver by training medium-sized data sets from the USMO paper. The numerical results presented here indicate that the AL-FPGM is an efficient training algorithm for |
2309.10334 | Information geometric bound on general chemical reaction networks | We investigate the dynamics of chemical reaction networks (CRNs) with the
goal of deriving an upper bound on their reaction rates. This task is
challenging due to the nonlinear nature and discrete structure inherent in
CRNs. To address this, we employ an information geometric approach, using the
natural gradient, to develop a nonlinear system that yields an upper bound for
CRN dynamics. We validate our approach through numerical simulations,
demonstrating faster convergence in a specific class of CRNs. This class is
characterized by the number of chemicals, the maximum value of stoichiometric
coefficients of the chemical reactions, and the number of reactions. We also
compare our method to a conventional approach, showing that the latter cannot
provide an upper bound on reaction rates of CRNs. While our study focuses on
CRNs, the ubiquity of hypergraphs in fields from natural sciences to
engineering suggests that our method may find broader applications, including
in information science. | Tsuyoshi Mizohata, Tetsuya J. Kobayashi, Louis-S. Bouchard, Hideyuki Miyahara | 2023-09-19T05:37:13Z | http://arxiv.org/abs/2309.10334v1 | # Information geometric bound on general chemical reaction networks
###### Abstract
We investigate the dynamics of chemical reaction networks (CRNs) with the goal of deriving an upper bound on their reaction rates. This task is challenging due to the nonlinear nature and discrete structure inherent in CRNs. To address this, we employ an information geometric approach, using the natural gradient, to develop a nonlinear system that yields an upper bound for CRN dynamics. We validate our approach through numerical simulations, demonstrating faster convergence in a specific class of CRNs. This class is characterized by the number of chemicals, the maximum value of stoichiometric coefficients of the chemical reactions, and the number of reactions. We also compare our method to a conventional approach, showing that the latter cannot provide an upper bound on reaction rates of CRNs. While our study focuses on CRNs, the ubiquity of hypergraphs in fields from natural sciences to engineering suggests that our method may find broader applications, including in information science.
## I Introduction
Over the past three decades, extensive research has been dedicated to understanding stochastic and information thermodynamics, specifically focusing on bounds related to entropy production and various physical quantities [1; 2; 3; 4]. This trajectory persists, with newer studies shedding light on thermodynamic uncertainty relations [5; 6; 7; 8; 9] and establishing thermodynamic bounds on cross-correlations [10; 11; 12]. Parallel to the work on physical systems, researchers have also explored bounds in non-physical realms such as biological systems. For example, limits concerning population growth have been studied [13; 14; 15; 16; 17; 18].
Recent studies have unveiled the geometric structure of chemical reaction networks (CRNs) and have also extended these concepts to the domain of general hypergraphs [19; 20; 21]. Concurrently, topological analyses on CRNs and hypergraphs have been performed [22; 23; 24; 25]. Despite these advancements, the intrinsic nonlinearity in CRNs presents a significant challenge for elucidating specific properties, leaving gaps in our understanding.
Information geometry offers a framework that applies differential geometry to probability distributions, facilitating the exploration of their geometric structures [26; 27]. Among its significant contributions is the concept of the natural gradient (NG) [28], which has demonstrated effectiveness in optimization problems, particularly in the realm of machine learning. Additional studies have ventured into the acceleration of information gradient flows [29] and have investigated the biological significance of gradient flows [30]. Research has also extended to constraints involving rates of statistical divergences and mutual information [31]. These diverse applications underline the versatility of information geometry, which we leverage in this study.
In the present work, we explore the upper bound on reaction rates in general CRNs using NG. Initially, we present a geometrical description of CRN dynamics [20; 21]. Subsequently, we categorize CRNs according to the number of chemicals involved, the maximum coefficients in the reactions, and the total number of reactions. Utilizing this classification, we formulate a nonlinear system that provides an upper bound on reaction rates for a given class of CRNs. Through numerical simulations, we find that our system exhibits a steeper gradient, facilitating faster convergence. Importantly, this fast convergence minimizes the Kullback-Leibler (KL) divergence to zero. In contrast, conventional CRNs often maintain a nonzero KL divergence due to nontrivial equilibrium points. We also note that conventional methods are insufficient for achieving these results, underscoring the uniqueness of our approach.
The remainder of this paper is structured as follows. In Sec. II, we furnish an overview of CRNs. Section III elucidates the challenges of establishing an upper bound on CRNs using Newton's method. Section IV is dedicated to explaining NG. In Sec. V, we introduce a dynamical system that serves as an upper bound for CRNs in a specified class. Numerical simulations are presented in Sec. VI. In Sec. VII, we provide discussions on our findings. The paper concludes with Sec. VIII.
## II Chemical reaction networks
In this section, the primary aim is to formulate the geometric representation of the dynamical equations governing CRNs, as delineated in [20; 21]. We commence by presenting the standard notation for hypergraphs and CRNs. Subsequently, we elucidate the dynamics intrinsic to CRNs, as well as concepts of Legendre duality and detailed balance. These elements are then combined to construct the geometric expression of CRN dynamics.
### Definition of CRNs
We begin with a hypergraph \((\mathbb{V},\mathbb{E})\) where \(\mathbb{V}\coloneqq\{\mathsf{v}_{i}\}_{i=1}^{N_{\mathsf{v}}}\) and \(\mathbb{E}\coloneqq\{\mathsf{e}_{e}\}_{e=1}^{N_{\mathsf{e}}}\) as a hypergraph provides a mathematical framework to describe a chemical reaction. Suppose that a CRN of interest involves \(N_{\mathsf{X}}\) chemicals, denoted as \(\mathbb{X}_{1},\mathbb{X}_{2},\ldots,\mathbb{X}_{N_{\mathsf{K}}}\). In the case of a CRN, each hypervertex \(\mathsf{v}_{i}\) is composed of a combination of chemicals \(\mathbb{X}_{1},\mathbb{X}_{2},\ldots,\mathbb{X}_{N_{\mathsf{K}}}\) and given by
\[\mathsf{v}_{i}\coloneqq\gamma_{1,i}\mathbb{X}_{1}+\gamma_{2,i}\mathbb{X}_{2}+ \cdots+\gamma_{N_{\mathsf{K}},i}\mathbb{X}_{N_{\mathsf{K}}}. \tag{1}\]
Each hyperedge \(\mathsf{e}_{e}\) corresponds to a chemical reaction and is defined by a directed pair of two hypervertices \(\mathsf{e}_{e}\coloneqq(\mathsf{v}_{e}^{+},\mathsf{v}_{e}^{-})\), which can be expressed as
\[\alpha_{1,e}\mathbb{X}_{1}+\alpha_{2,e}\mathbb{X}_{2}+\cdots+ \alpha_{N_{\mathsf{K}},e}\mathbb{X}_{N_{\mathsf{K}}}\] \[\stackrel{{\mathsf{e}_{e}}}{{\longrightarrow}} \beta_{1,e}\mathbb{X}_{1}+\beta_{2,e}\mathbb{X}_{2}+\cdots+\beta_{N_{ \mathsf{K}},e}\mathbb{X}_{N_{\mathsf{K}}}. \tag{2}\]
Here, \(\mathsf{v}_{e}^{\pm}\) are chosen from \(\{\mathsf{v}_{i}\}_{i=1}^{N_{\mathsf{v}}}\), and in Eq. (II.1), \(\mathsf{v}_{e}^{+}=\alpha_{1,e}\mathbb{X}_{1}+\alpha_{2,e}\mathbb{X}_{2}+ \cdots+\alpha_{N_{\mathsf{K}},e}\mathbb{X}_{N_{\mathsf{K}}}\) and \(\mathsf{v}_{e}^{-}=\beta_{1,e}\mathbb{X}_{1}+\beta_{2,e}\mathbb{X}_{2}+\cdots+ \beta_{N_{\mathsf{K}},e}\mathbb{X}_{N_{\mathsf{K}}}\). We also define the order of reaction as follows:
\[m\coloneqq\max_{i,e}\{\alpha_{i,e},\beta_{i,e}\}. \tag{3}\]
To characterize CRNs, \(m\) in Eq. (3) will play an important role.
When a CRN involves multiple chemical reactions, the description provided above may be inadequate. To describe a complex CRN, the stoichiometric matrix plays a crucial role. The stoichiometric matrix \(S\) is defined as an \(N_{\mathsf{X}}\times N_{\mathsf{e}}\) matrix and is given by
\[S\coloneqq[\mathbf{s}_{1},\mathbf{s}_{2},\ldots,\mathbf{s}_{N_{\mathsf{e}}}], \tag{4}\]
where, for \(e=1,2,\ldots,N_{\mathsf{e}}\),
\[\mathbf{s}_{e}\coloneqq\begin{bmatrix}\beta_{1,e}-\alpha_{1,e}\\ \beta_{2,e}-\alpha_{2,e}\\ \vdots\\ \beta_{N_{\mathsf{K}},e}-\alpha_{N_{\mathsf{K}},e}\end{bmatrix}. \tag{5}\]
That is, the \((j,e)\)-th element of \(S\) is given by \(s_{j,e}=\beta_{j,e}-\alpha_{j,e}\) for \(j=1,2,\ldots,N_{\mathsf{K}}\) and \(e=1,2,\ldots,N_{\mathsf{e}}\). In general, when a CRN involves multiple chemical reactions, the stoichiometric matrix provides a concise representation of the relationships between the reactants and products.
The stoichiometric matrix \(S\) is also expressed as \(S=-\Gamma B\). Here, \(B\in\{1,0,-1\}^{N_{\mathsf{v}}\times N_{\mathsf{e}}}\) is the incidence matrix whose \((i,e)\)-th element is given for \(i=1,2,\ldots,N_{\mathsf{v}}\) and \(e=1,2,\ldots,N_{\mathsf{e}}\) by
\[b_{i,e}\coloneqq\begin{cases}1&(\mathsf{v}_{i}\text{ is the head of hyperedge }\mathsf{e}_{e}\colon\mathsf{v}_{i}=\mathsf{v}_{e}^{+}),\\ -1&(\mathsf{v}_{i}\text{ is the tail of hyperedge }\mathsf{e}_{e}\colon\mathsf{v}_{i}= \mathsf{v}_{e}^{-}),\\ 0&(\text{otherwise}),\end{cases} \tag{6}\]
and \(\Gamma\in\mathbb{Z}_{\geq 0}^{N_{\mathsf{v}}\times N_{\mathsf{v}}}\) is given by
\[\Gamma\coloneqq[\mathbf{\gamma}_{1},\mathbf{\gamma}_{2},\ldots,\mathbf{\gamma}_{N_{ \mathsf{v}}}], \tag{7}\]
where, using \(\gamma_{1,i},\gamma_{2,i},\ldots,\gamma_{N_{\mathsf{K}},i}\) in Eq. (1), \(\mathbf{\gamma}_{i}\) is defined as
\[\mathbf{\gamma}_{i}\coloneqq[\gamma_{1,i},\gamma_{2,i},\ldots,\gamma_{N_{\mathsf{ K}},i}]^{\intercal}, \tag{8}\]
for \(i=1,2,\ldots,N_{\mathsf{v}}\). Having defined the necessary variables to describe CRNs, we will now derive the equation that characterizes the dynamics of CRNs in the remainder of this section.
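As a concrete illustration of Eqs. (4)-(8), the following Python sketch builds \(\Gamma\), \(B\) and \(S\) for a small hypothetical network (the example reaction, variable names and use of NumPy are our own choices, not taken from the paper) and checks the relation \(S=-\Gamma B\) against the column-wise definition \(\boldsymbol{s}_{e}=\boldsymbol{\beta}_{e}-\boldsymbol{\alpha}_{e}\).

```python
import numpy as np

# Hypothetical CRN with 3 chemicals and 2 hyperedges:
#   e_1: X1 + X2 -> 2 X3      e_2: 2 X3 -> X1 + X2
# Hypervertices (complexes): v_1 = X1 + X2, v_2 = 2 X3, so N_v = 2.
Gamma = np.array([[1, 0],      # gamma_{j,i}: rows index chemicals, columns index hypervertices
                  [1, 0],
                  [0, 2]])

# Incidence matrix B: b_{i,e} = +1 if v_i is the head of e_e, -1 if it is the tail.
B = np.array([[ 1, -1],        # v_1 is the head of e_1 and the tail of e_2
              [-1,  1]])       # v_2 is the tail of e_1 and the head of e_2

S = -Gamma @ B                 # stoichiometric matrix via S = -Gamma B

# Column-wise check: s_e = beta_e - alpha_e for each reaction.
alpha = np.array([[1, 0], [1, 0], [0, 2]])   # reactant coefficients alpha_{j,e}
beta  = np.array([[0, 1], [0, 1], [2, 0]])   # product coefficients beta_{j,e}
assert np.array_equal(S, beta - alpha)
print(S)
# [[-1  1]
#  [-1  1]
#  [ 2 -2]]
```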
### Dynamics of CRNs
To analyze the dynamics of a CRN, we introduce fluxes associated with each hyperedge. Let \(j_{e}^{+}(\mathbf{x})\) and \(j_{e}^{-}(\mathbf{x})\) denote the currents from the head to the tail and from the tail to the head of hyperedge \(\mathsf{e}_{e}\), respectively, where \(\mathbf{x}\) is the chemical concentration vector. We define \(\mathbf{j}^{+}(\mathbf{x})\coloneqq[j_{1}^{+}(\mathbf{x}),j_{2}^{+}(\mathbf{x}),\ldots,j_{N_{ \mathsf{e}}}^{+}(\mathbf{x})]^{\intercal}\) and \(\mathbf{j}^{-}(\mathbf{x})\coloneqq[j_{1}^{-}(\mathbf{x}),j_{2}^{-}(\mathbf{x}),\ldots,j_{N_{ \mathsf{e}}}^{-}(\mathbf{x})]^{\intercal}\).
The law of mass action is widely observed to hold for CRNs and is considered one of the fundamental characteristics that differentiate CRNs from nonchemical hypergraphs. Based on this, we make the assumption of mass action kinetics for the forward and reverse reaction fluxes on hyperedge \(\mathsf{e}_{e}\) in Eq. (II.1):
\[j_{e}^{\pm}(\mathbf{x})=k_{e}^{\pm}\sum_{i=1}^{N_{\mathsf{v}}}b_{i,e}^{\pm}\prod_{ j=1}^{N_{\mathsf{K}}}x_{j}^{\gamma_{j,i}}, \tag{9}\]
where, for \(i=1,2,\ldots,N_{\mathsf{K}}\) and \(e=1,2,\ldots,N_{\mathsf{e}}\),
\[b_{i,e}^{+}\coloneqq\max(b_{i,e},0), \tag{10}\] \[b_{i,e}^{-}\coloneqq-\min(b_{i,e},0), \tag{11}\]
and \(k_{e}^{\pm}\) are the reaction rate coefficients for the forward and backward currents on \(\mathsf{e}_{e}\). Expressed in vector notation, Eq. (9) can be written as
\[\mathbf{j}^{\pm}(\mathbf{x}) =\mathbf{k}^{\pm}\circ(B^{\pm})^{\intercal}\mathbf{x}^{\Gamma\,\intercal} \tag{12}\] \[=\mathbf{k}^{\pm}\circ\mathbf{x}^{(\Gamma B^{\pm})^{\intercal}}, \tag{13}\]
where
\[B^{+} \coloneqq\max(B,\mathbb{0}), \tag{14}\] \[B^{-} \coloneqq-\min(B,\mathbb{0}),\] (15) \[\boldsymbol{x}^{\Gamma^{\intercal}} \coloneqq[\boldsymbol{x}^{\boldsymbol{\gamma}_{1}},\boldsymbol{x}^{\boldsymbol{\gamma}_{2}},\ldots,\boldsymbol{x}^{\boldsymbol{\gamma}_{N_{\mathsf{v}}}}]^{\intercal},\] (16) \[\boldsymbol{x}^{\boldsymbol{\gamma}_{i}} \coloneqq\prod_{j=1}^{N_{\mathsf{X}}}x_{j}^{\gamma_{j,i}},\] (17) \[\boldsymbol{k}^{\pm} \coloneqq[k_{1}^{\pm},k_{2}^{\pm},\ldots,k_{N_{\mathsf{e}}}^{\pm}]^{\intercal}. \tag{18}\]
Here, \(\mathbb{0}\) represents the zero matrix, which has the same size as matrix \(B\). The functions \(\max(\cdot,\cdot)\) and \(\min(\cdot,\cdot)\) are applied elementwise, meaning that for each element \([A]_{i,j}\) and \([B]_{i,j}\), we have \([\max(A,B)]_{i,j}=\max([A]_{i,j},[B]_{i,j})\) and \([\min(A,B)]_{i,j}=\min([A]_{i,j},[B]_{i,j})\), respectively. The notation \([\cdot]_{i,j}\) represents the element located at the \(i\)-th row and \(j\)-th column. Moreover, the symbol \(\circ\) denotes the element-wise product, which is defined as follows:
\[\boldsymbol{x}\circ\boldsymbol{y}\coloneqq\begin{bmatrix}x_{1}y_{1}\\ x_{2}y_{2}\\ \vdots\\ x_{N_{\mathsf{x}}}y_{N_{\mathsf{x}}}\end{bmatrix}, \tag{19}\]
where \(\boldsymbol{x}\coloneqq[x_{1},x_{2},\ldots,x_{N_{\mathsf{x}}}]^{\mathsf{T}}\), \(\boldsymbol{y}\coloneqq[y_{1},y_{2},\ldots,y_{N_{\mathsf{x}}}]^{\mathsf{T}}\).
The chemical concentration vector \(\boldsymbol{x}_{t}\) at time \(t\) satisfies the chemical rate equation (CRE) given by [32; 33; 34]
\[\dot{\boldsymbol{x}}_{t}=S\boldsymbol{j}(\boldsymbol{x}_{t}), \tag{20}\]
where \(\boldsymbol{j}(\boldsymbol{x})\coloneqq\boldsymbol{j}^{+}(\boldsymbol{x})- \boldsymbol{j}^{-}(\boldsymbol{x})\).
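As a numerical illustration of Eqs. (9)-(20), the sketch below integrates the CRE for the same toy reaction with a plain explicit Euler scheme; the rate constants, initial condition, and step size are illustrative choices.

```python
import numpy as np

# Illustrative explicit-Euler integration of the CRE, Eq. (20), for 2 X1 <-> 3 X2
# with assumed rate constants k+ = k- = 1 and the initial condition used in Sec. III.
Gamma = np.array([[2, 0], [0, 3]])
B = np.array([[+1], [-1]])
S = -Gamma @ B
B_plus, B_minus = np.maximum(B, 0), -np.minimum(B, 0)
k_plus, k_minus = np.array([1.0]), np.array([1.0])

def fluxes(x):
    # j^pm(x) = k^pm o x^{(Gamma B^pm)^T}, Eqs. (12)-(13)
    j_p = k_plus * np.prod(x[None, :] ** (Gamma @ B_plus).T, axis=1)
    j_m = k_minus * np.prod(x[None, :] ** (Gamma @ B_minus).T, axis=1)
    return j_p, j_m

x, dt = np.array([0.75, 1.375]), 1.0e-4
for _ in range(10000):
    j_p, j_m = fluxes(x)
    x = x + dt * S @ (j_p - j_m)   # dx/dt = S j(x), Eq. (20)
print(x)                            # approaches the equilibrium [1, 1]
```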
### Legendre duality of fluxes and forces
In the realm of physics, the relationship between fluxes and forces is commonly expressed through Legendre duality, a concept that describes how forces and fluxes are dual aspects of the same system. Their product results in entropy production, denoted as \(\langle\boldsymbol{j},\boldsymbol{f}\rangle\). In the context of chemical thermodynamics, we define the force on a hyperedge \(\mathsf{e}_{e}\) in a manner consistent with entropy production:
\[f_{e}(\boldsymbol{x})\coloneqq\frac{1}{2}\ln\frac{j_{e}^{+}(\boldsymbol{x})} {j_{e}^{-}(\boldsymbol{x})}, \tag{21}\]
for \(e=1,2,\ldots,N_{\mathsf{e}}\). The corresponding vector form, denoted as
\[\boldsymbol{f}(\boldsymbol{x})\coloneqq[f_{1}(\boldsymbol{x}),f_{2}(\boldsymbol{x}),\ldots,f_{N_{\mathsf{e}}}(\boldsymbol{x})]^{\intercal},\]
can be expressed as
\[\boldsymbol{f}(\boldsymbol{x})=\frac{1}{2}\ln\frac{\boldsymbol{j}^{+}( \boldsymbol{x})}{\boldsymbol{j}^{-}(\boldsymbol{x})}, \tag{22}\]
where the division and the logarithmic function are computed elementwise.
We introduce a quantity called the "frenetic activity" on hyperedge \(\mathsf{e}_{e}\), which describes the rate of change of the state of the system [20; 21]:
\[\omega_{e}(\boldsymbol{x})\coloneqq 2\sqrt{j_{e}^{+}(\boldsymbol{x})j_{e}^{-}( \boldsymbol{x})}, \tag{23}\]
for \(e=1,2,\ldots,N_{\mathsf{e}}\). The vector form of Eq. (23), denoted as \(\boldsymbol{\omega}(\boldsymbol{x})\coloneqq[\omega_{1}(\boldsymbol{x}), \omega_{2}(\boldsymbol{x}),\ldots,\omega_{N_{\mathsf{e}}}(\boldsymbol{x})]^{ \mathsf{T}}\), can be expressed as
\[\boldsymbol{\omega}(\boldsymbol{x})=2\sqrt{\boldsymbol{j}^{+}(\boldsymbol{x} )\circ\boldsymbol{j}^{-}(\boldsymbol{x})}. \tag{24}\]
Then, the following strictly convex smooth function \(\Psi^{*}_{\boldsymbol{\omega}(\boldsymbol{x})}(\boldsymbol{f}(\boldsymbol{x}))\), which is called the dissipation function, establishes the Legendre duality between the force \(\boldsymbol{f}(\boldsymbol{x})\) in Eq. (22) and the flux \(\boldsymbol{j}(\boldsymbol{x})\), given the activity \(\boldsymbol{\omega}(\boldsymbol{x})\) in Eq. (24):
\[\Psi^{*}_{\boldsymbol{\omega}(\boldsymbol{x})}(\boldsymbol{f}(\boldsymbol{x}) )\coloneqq\boldsymbol{\omega}(\boldsymbol{x})^{\mathsf{T}}[\cosh( \boldsymbol{f}(\boldsymbol{x}))-\boldsymbol{1}], \tag{25}\]
where
\[\cosh(\boldsymbol{f}(\boldsymbol{x})) \coloneqq\begin{bmatrix}\cosh(f_{1}(\boldsymbol{x}))\\ \cosh(f_{2}(\boldsymbol{x}))\\ \vdots\\ \cosh(f_{N_{\mathsf{e}}}(\boldsymbol{x}))\end{bmatrix}, \tag{26}\] \[\boldsymbol{f}(\boldsymbol{x}) \coloneqq[f_{1}(\boldsymbol{x}),f_{2}(\boldsymbol{x}),\ldots,f_{N_ {\mathsf{e}}}(\boldsymbol{x})]^{\mathsf{T}},\] (27) \[\boldsymbol{1} \coloneqq[\underbrace{1,1,\ldots,1}_{N_{\mathsf{e}}}]^{\mathsf{T}}. \tag{28}\]
As a result we have 1
Footnote 1: We have used the following notation: \(\partial_{\boldsymbol{f}}\Psi^{*}_{\boldsymbol{\omega}(\boldsymbol{x})}( \boldsymbol{f}(\boldsymbol{x}))=\partial_{\boldsymbol{f}}\Psi^{*}_{\boldsymbol{ \omega}(\boldsymbol{x})}(\boldsymbol{f})|_{\boldsymbol{f}=\boldsymbol{f}( \boldsymbol{x})}\).
\[\boldsymbol{j}(\boldsymbol{x})=\partial_{\boldsymbol{f}}\Psi^{*}_{\boldsymbol{ \omega}(\boldsymbol{x})}(\boldsymbol{f}(\boldsymbol{x})). \tag{29}\]
Note that
\[\partial_{\boldsymbol{f}}\Psi^{*}_{\boldsymbol{\omega}(\boldsymbol{x})}( \boldsymbol{f}(\boldsymbol{x})) =\boldsymbol{\omega}(\boldsymbol{x})\circ\sinh(\boldsymbol{f}( \boldsymbol{x})) \tag{30}\] \[=\begin{bmatrix}\omega_{1}(\boldsymbol{x})\sinh(f_{1}(\boldsymbol{ x}))\\ \omega_{2}(\boldsymbol{x})\sinh(f_{2}(\boldsymbol{x}))\\ \vdots\\ \omega_{N_{\mathsf{e}}}(\boldsymbol{x})\sinh(f_{N_{\mathsf{e}}}(\boldsymbol{x})) \end{bmatrix}. \tag{31}\]
Combining Eqs. (20) and (29), we get
\[\dot{\boldsymbol{x}}_{t}=S\partial_{\boldsymbol{f}}\Psi^{*}_{\boldsymbol{ \omega}(\boldsymbol{x}_{t})}(\boldsymbol{f}(\boldsymbol{x}_{t})). \tag{32}\]
While Eq. (32) is a well-defined differential equation, it lacks an explicit functional form for \(\boldsymbol{f}(\boldsymbol{x})\), thus limiting its predictive capability. The functional form of \(\boldsymbol{f}(\boldsymbol{x})\) based on thermodynamics and kinetics will be elaborated in the subsequent subsection.
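The duality relation can also be verified numerically. The following sketch (illustrative values, unit rate constants assumed) computes the force, the frenetic activity, and the gradient of the dissipation function, and checks that \(\boldsymbol{\omega}(\boldsymbol{x})\circ\sinh(\boldsymbol{f}(\boldsymbol{x}))\) equals the net flux \(\boldsymbol{j}(\boldsymbol{x})=\boldsymbol{j}^{+}(\boldsymbol{x})-\boldsymbol{j}^{-}(\boldsymbol{x})\).

```python
import numpy as np

# Numerical check of j = omega o sinh(f), Eqs. (29)-(31), for 2 X1 <-> 3 X2
# with assumed rate constants k+ = k- = 1 and an arbitrary positive state x.
Gamma = np.array([[2, 0], [0, 3]])
B = np.array([[+1], [-1]])
B_plus, B_minus = np.maximum(B, 0), -np.minimum(B, 0)
k_plus, k_minus = np.array([1.0]), np.array([1.0])

x = np.array([0.8, 1.3])
j_p = k_plus * np.prod(x[None, :] ** (Gamma @ B_plus).T, axis=1)
j_m = k_minus * np.prod(x[None, :] ** (Gamma @ B_minus).T, axis=1)

f = 0.5 * np.log(j_p / j_m)            # force, Eqs. (21)-(22)
omega = 2.0 * np.sqrt(j_p * j_m)       # frenetic activity, Eqs. (23)-(24)
print(np.allclose(omega * np.sinh(f), j_p - j_m))   # True: Eq. (29)
```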
### Chemical reaction dynamics
Until this point, the discussion has centered on the general description of dynamics on hypergraphs. Going forward, the focus will be exclusively on CRNs. In the realm of chemical thermodynamics, it is a common assumption to employ mass action kinetics to describe reaction rates. Within this framework, a specific definition of force is accepted and widely used [20; 21; 32; 33]:
\[\mathbf{f}(\mathbf{x})=-\frac{1}{2}\bigg{(}S^{\intercal}\ln\mathbf{x}-\ln\frac{\mathbf{k}^{+} }{\mathbf{k}^{-}}\bigg{)}. \tag{33}\]
To clarify the geometric meaning of Eq. (33), we introduce the Bregman divergence \(\mathcal{D}_{\phi}(\mathbf{x}\|\mathbf{y})\) associated with potential \(\phi(\cdot)\)2:
Footnote 2: We have used the notation: \(\partial_{\mathbf{x}}\phi(\mathbf{y})=\partial_{\mathbf{x}}\phi(\mathbf{x})|_{\mathbf{x}=\mathbf{y}}\).
\[\mathcal{D}_{\phi}(\mathbf{x}\|\mathbf{y})\coloneqq\phi(\mathbf{x})-\phi(\mathbf{y})-\langle \mathbf{x}-\mathbf{y},\partial_{\mathbf{x}}\phi(\mathbf{y})\rangle. \tag{34}\]
The derivative of Eq. (34) is given by
\[\partial_{\mathbf{x}}\mathcal{D}_{\phi}(\mathbf{x}\|\mathbf{y})=\partial_{\mathbf{x}}\phi( \mathbf{x})-\partial_{\mathbf{x}}\phi(\mathbf{y}). \tag{35}\]
The KL divergence is Eq. (34) with the following potential 3:
Footnote 3: See Appendix A for detail.
\[\phi_{\text{KL}}(\mathbf{x})\coloneqq\sum_{i=1}^{N_{\text{K}}}x_{i}\ln x_{i}. \tag{36}\]
Then, the KL divergence is defined by \(\mathcal{D}_{\phi_{\text{KL}}}(\cdot\|\cdot)\coloneqq\mathcal{D}_{\text{KL}} (\cdot\|\cdot)\) and it reads
\[\mathcal{D}_{\text{KL}}(\mathbf{x}\|\mathbf{y})=\sum_{i=1}^{N_{\text{K}}}x_{i}\ln \frac{x_{i}}{y_{i}}-\sum_{i=1}^{N_{\text{K}}}x_{i}+\sum_{i=1}^{N_{\text{K}}} y_{i}, \tag{37}\]
and its derivative takes the following form:
\[\partial_{\mathbf{x}}\mathcal{D}_{\text{KL}}(\mathbf{x}\|\mathbf{y})=\begin{bmatrix}\ln x _{1}-\ln y_{1}\\ \ln x_{2}-\ln y_{2}\\ \vdots\\ \ln x_{N_{\text{K}}}-\ln y_{N_{\text{K}}}\end{bmatrix}. \tag{38}\]
Then, Eq. (33) is rewritten as
\[\mathbf{f}(\mathbf{x})=-\frac{1}{2}S^{\intercal}\partial_{\mathbf{x}}\mathcal{D}_{\text{ KL}}(\mathbf{x}\|\hat{\mathbf{x}})+\mathbf{f}_{\text{ne}}. \tag{39}\]
The definition of \(\hat{\mathbf{x}}\) will be given in the following subsection, and \(\mathbf{f}_{\text{ne}}\not\in\text{Im}[S^{\intercal}]\) represents the nonequilibrium force applied to the system [19].
Mass action kinetics also offers the following definitions of the flux and activity [20; 21; 32; 33]:
\[\mathbf{j}(\mathbf{x})=(\mathbf{k}^{+}\circ(B^{+})^{\intercal}-\mathbf{k}^{-}\circ(B^{-})^{ \intercal})\mathbf{x}^{\Gamma\intercal}. \tag{40}\]
Substituting Eq. (13) into Eq. (24), we also get the activity for CRNs:
\[\mathbf{\omega}(\mathbf{x})=2\sqrt{\mathbf{k}^{+}\circ\mathbf{k}^{-}}\circ\mathbf{x}^{R^{ \intercal}/2}. \tag{41}\]
where
\[R\coloneqq\Gamma(B^{+}+B^{-}). \tag{42}\]
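As a quick sanity check (again on the toy reaction, with assumed rate constants), the sketch below confirms that the thermodynamic force of Eq. (33) coincides with the kinetic definition in Eq. (21), and that Eq. (41) reproduces the activity of Eq. (23), under mass action kinetics.

```python
import numpy as np

# Consistency check (illustrative): under mass action kinetics, Eq. (33) equals the
# kinetic force of Eq. (21), and Eq. (41) equals the activity of Eq. (23).
# Toy reaction 2 X1 <-> 3 X2 with assumed rate constants k+ = 2, k- = 0.5.
Gamma = np.array([[2, 0], [0, 3]])
B = np.array([[+1], [-1]])
B_plus, B_minus = np.maximum(B, 0), -np.minimum(B, 0)
S = -Gamma @ B
R = Gamma @ (B_plus + B_minus)                         # Eq. (42)
k_plus, k_minus = np.array([2.0]), np.array([0.5])

x = np.array([0.9, 1.2])
j_p = k_plus * np.prod(x[None, :] ** (Gamma @ B_plus).T, axis=1)
j_m = k_minus * np.prod(x[None, :] ** (Gamma @ B_minus).T, axis=1)

f_kinetic = 0.5 * np.log(j_p / j_m)                                  # Eq. (21)
f_thermo = -0.5 * (S.T @ np.log(x) - np.log(k_plus / k_minus))       # Eq. (33)
omega_kinetic = 2.0 * np.sqrt(j_p * j_m)                             # Eq. (23)
omega_formula = 2.0 * np.sqrt(k_plus * k_minus) * np.prod(x[None, :] ** (R.T / 2.0), axis=1)  # Eq. (41)
print(np.allclose(f_kinetic, f_thermo), np.allclose(omega_kinetic, omega_formula))  # True True
```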
In the remaining part of this section, we will present the geometric expression of the equation for CRNs.
### Geometric expression of an equilibrium CRE
Up to this point, the discussion has centered on the geometric relationships that exist among the chemical concentration, potential, force, and flux in a CRN. Subsequently, the CRE specified in Eq. (20) can be reformulated into a geometric expression [32; 33; 34]. To accomplish this, the detailed balance condition (DBC) must be taken into account. The DBC, a criterion for the dynamic stability of a system at equilibrium, reads [20; 21]:
\[\ln\frac{\mathbf{k}^{+}}{\mathbf{k}^{-}}=S^{\intercal}\ln\mathbf{x}_{\text{eq}}, \tag{43}\]
Here, \(\mathbf{x}_{\text{eq}}\) represents the equilibrium chemical concentration vector, which is dependent on both the initial concentration vector \(\mathbf{x}_{\text{ini}}\) and the specific CRE under consideration. Additionally, if Eq. (43) is met, then \(\mathbf{f}_{\text{ne}}=\mathbf{0}\). Generally, at equilibrium, net fluxes cease (\(\mathbf{j}=\mathbf{0}\)), allowing us to define a set of equilibrium chemical concentration vectors as follows:
\[V_{\text{eq}}\coloneqq\{\mathbf{x}>\mathbf{0}|\mathbf{j}(\mathbf{x})=\mathbf{0}\}. \tag{44}\]
From Eq. (43), Eq. (44) is transformed into
\[V_{\text{eq}}=\{\mathbf{x}>\mathbf{0}|\exists\mathbf{\eta}\in\mathbb{R}^{|\text{ker}(S^{ \intercal})|},\ln\mathbf{x}=\ln\mathbf{x}_{\text{eq}}+U\mathbf{\eta}\}. \tag{45}\]
where \(U\coloneqq[u_{1},u_{2},\ldots,u_{|\text{ker}(S^{\intercal})|}]\) and \(\{u_{i}\}_{i=1}^{|\text{ker}(S^{\intercal})|}\) form a basis of \(\text{ker}(S^{\intercal})\). We introduced \(\hat{\mathbf{x}}\) in Eq. (39); here, we impose the following relation on \(\hat{\mathbf{x}}\):
\[\hat{\mathbf{x}}\in V_{\text{eq}}. \tag{46}\]
Then, Eq. (39) describes the dynamics of a gradient flow toward \(V_{\text{eq}}\). Equation (46) is equivalently written as
\[\ln\frac{\mathbf{k}^{+}}{\mathbf{k}^{-}}=S^{\intercal}\ln\hat{\mathbf{x}}. \tag{47}\]
Note that using \(\hat{\mathbf{x}}\) instead of \(\mathbf{x}_{\text{eq}}\) provides us with a generalized expression of the dynamical system.
Finally, we have arrived at the geometric expression of a CRE. Namely, combining Eqs. (2.32), (2.39), (2.41),
and (2.43), we get 4
Footnote 4: We have used the following notation: \(\partial_{\mathbf{x}}\mathcal{D}_{\mathrm{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})=\partial_{ \mathbf{x}}\mathcal{D}_{\mathrm{KL}}(\mathbf{x}\|\hat{\mathbf{x}})|_{\mathbf{x}=\mathbf{x}_{t}}\).
\[\dot{\mathbf{x}}_{t}=S\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x}_{t})}\bigg{(}- \frac{1}{2}S^{\intercal}\partial_{\mathbf{x}}\mathcal{D}_{\mathrm{KL}}(\mathbf{x}_{t} \|\hat{\mathbf{x}})\bigg{)}, \tag{2.48}\]
where \(\hat{\mathbf{x}}\in V_{\mathrm{eq}}\). Note that in Eq. (2.48), replacing \(\hat{\mathbf{x}}\) with \(\mathbf{x}_{\mathrm{eq}}\) does not affect the dynamics of CRNs because \(SU\eta=\mathbf{0}\).
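The equivalence between Eq. (2.48) and the mass-action CRE (20) can be checked directly; the sketch below does so for the toy reaction with \(k_{e}^{\pm}=1\), for which \(\hat{\mathbf{x}}=[1,1]^{\intercal}\) satisfies Eq. (47) (an illustrative choice of parameters).

```python
import numpy as np

# Check that the geometric expression, Eq. (2.48), reproduces the mass-action CRE, Eq. (20).
# Toy reaction 2 X1 <-> 3 X2 with k+ = k- = 1, so x_hat = [1, 1] satisfies Eq. (47).
Gamma = np.array([[2, 0], [0, 3]])
B = np.array([[+1], [-1]])
B_plus, B_minus = np.maximum(B, 0), -np.minimum(B, 0)
S = -Gamma @ B
R = Gamma @ (B_plus + B_minus)
k_plus, k_minus = np.array([1.0]), np.array([1.0])
x_hat = np.array([1.0, 1.0])

x = np.array([0.7, 1.4])

# mass-action right-hand side, Eq. (20)
j_p = k_plus * np.prod(x[None, :] ** (Gamma @ B_plus).T, axis=1)
j_m = k_minus * np.prod(x[None, :] ** (Gamma @ B_minus).T, axis=1)
rhs_mass_action = S @ (j_p - j_m)

# geometric right-hand side, Eq. (2.48)
grad_kl = np.log(x / x_hat)                                   # d/dx D_KL(x || x_hat), Eq. (38)
f = -0.5 * (S.T @ grad_kl)                                    # force, Eq. (39) with f_ne = 0
omega = 2.0 * np.sqrt(k_plus * k_minus) * np.prod(x[None, :] ** (R.T / 2.0), axis=1)  # Eq. (41)
rhs_geometric = S @ (omega * np.sinh(f))                      # Eqs. (29)-(31)

print(np.allclose(rhs_mass_action, rhs_geometric))            # True
```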
## III Difficulty of constructing an upper bound on the reaction rates of CRNs
In this section, we briefly revisit Newton's method and present a counterexample illustrating its limitations in establishing an upper bound on the reaction rates of CRNs.
### Newton's method
As stated in Sec. I, the objective of this paper is to determine an upper bound on the reaction rates of CRNs. One might assume that straightforward optimization methods could achieve this. However, before discussing NG, we elucidate the challenges of using Newton's method [35] as an optimization technique for this purpose. While the gradient method is another elementary optimization technique, its indeterminate step size precludes its consideration in this study. We now turn to a specific optimization problem:
\[\min_{\mathbf{x}}f(\mathbf{x}). \tag{3.1}\]
Letting \(\mathbf{x}_{t}\) be the state at the \(t\)-th iteration for \(t\in\mathbb{Z}_{\geq 0}\), Newton's method for Eq. (3.1) is given by
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}-[\partial_{\mathbf{x}}^{2}f(\mathbf{x}_{t})]^{-1}\partial_{ \mathbf{x}}f(\mathbf{x}_{t}). \tag{3.2}\]
In the case of CRNs, we have \(f(\mathbf{x})=\mathcal{D}_{\phi}(\mathbf{x}\|\hat{\mathbf{x}})\); then Eq. (3.2) reads
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}-G_{\phi}^{-1}(\mathbf{x}_{t})\partial_{\mathbf{x}}\mathcal{D }_{\phi}(\mathbf{x}_{t}\|\hat{\mathbf{x}}), \tag{3.3}\]
where \(G_{\phi}\) is the Hessian of \(\phi(\cdot)\).
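For the KL potential, the Hessian in Eq. (3.3) is \(\mathrm{diag}(1/x_{i})\), so a Newton step can be written in a few lines; the sketch below is illustrative and uses the initial point of the counterexample in the next subsection.

```python
import numpy as np

# Illustrative Newton iterations, Eq. (3.3), for f(x) = D_KL(x || x_hat).
# For phi_KL the Hessian is G_phi(x) = diag(1/x_i), so its inverse is diag(x_i);
# the gradient of D_KL(x || x_hat) is ln(x / x_hat), taken elementwise.
x_hat = np.array([1.0, 1.0])

def newton_step(x):
    grad = np.log(x / x_hat)
    return x - np.diag(x) @ grad      # x - G_phi^{-1}(x) grad, Eq. (3.3)

x = np.array([0.75, 1.375])            # starting point of the counterexample below
for _ in range(5):
    x = newton_step(x)
    print(x)
```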
### Counterexample
We will demonstrate a counterexample to show that Eq. (3.3) does not yield an upper bound for a CRN. We consider the following CRN with \(N_{\mathcal{K}}=2\), \(m=3\), and \(N_{\mathbf{e}}=1\):
\[2\mathcal{X}_{1}\rightleftharpoons 3\mathcal{X}_{2}. \tag{3.4}\]
For the simulations of Eq. (2.48), we set \(k_{e}^{\pm}=1\), \(\Delta t=1.0\times 10^{-4}\), \(\mathbf{x}_{\mathrm{ini}}=[3/4,11/8]^{\intercal}\), and \(\hat{\mathbf{x}}=[1,1]^{\intercal}\). In Fig. 1, we plot the dynamics of Eq. (3.4) as well as the dynamics obtained using Newton's method. As illustrated in the figure, at \(t=1\) the KL divergence under Newton's method is greater than that of the CRN, indicating that Newton's method fails to bound the dynamics. The reason for this discrepancy lies in the nonlinearity of Eq. (3.4).
## IV Natural gradient
In this section, we explore the NG method and its applicability to the problem of constraining reaction rates in CRNs. As our proposed methodology hinges on NG, understanding its theoretical underpinnings and its distinction from Newton's method is crucial.
### Derivation of NG
In this section, we outline the derivation of the NG method, which is grounded in information geometry. Specifically, we will elucidate how the dynamics of a given vector \(\mathbf{x}_{t}\) at time \(t\) are updated within the framework of NG:
\[\mathbf{x}_{t+\Delta t}=\mathbf{x}_{t}+\Delta\mathbf{x}_{t}(\epsilon), \tag{4.1}\]
where \(\Delta\mathbf{x}_{t}(\epsilon)\) is defined as 5
Footnote 5: We have used the following notation: \(\partial_{\mathbf{x}}f(\mathbf{x}_{t})=\partial_{\mathbf{x}}f(\mathbf{x})|_{\mathbf{x}=\mathbf{x}_{t}}\).
\[\Delta\mathbf{x}_{t}(\epsilon) =\operatorname*{arg\,min}_{\Delta\mathbf{x}:\mathcal{D}_{\phi^{\prime }}(\mathbf{x}_{t}+\Delta\mathbf{x}\|\mathbf{x}_{t})\leq\epsilon}[f(\mathbf{x}_{t}+\Delta\mathbf{x })-f(\mathbf{x}_{t})] \tag{4.2}\] \[\approx\operatorname*{arg\,min}_{\Delta\mathbf{x}:\frac{1}{2}\Delta \mathbf{x}^{\intercal}G_{\phi^{\prime}}(\mathbf{x}_{t})\Delta\mathbf{x}\leq\epsilon} \partial_{\mathbf{x}}f(\mathbf{x}_{t})^{\intercal}\Delta\mathbf{x}. \tag{4.3}\]
Here, \(G_{\phi^{\prime}}(\mathbf{x}_{t})\) is the Hessian given by
\[[G_{\phi^{\prime}}(\mathbf{x}_{t})]_{i,j}\coloneqq\frac{\partial^{2}}{ \partial x_{i}\partial x_{j}}\phi^{\prime}(\mathbf{x}_{t}), \tag{100}\]
where \([\cdot]_{i,j}\) is the \((i,j)\)-th element. For the potential in Eq. (36), this Hessian reads
\[[G_{\phi^{\prime}}(\mathbf{x}_{t})]_{i,j}=\delta_{i,j}\frac{1}{[\mathbf{x}_{t}]_{i}}, \tag{102}\]
where \(\delta_{i,j}\) is the Kronecker delta and \([\cdot]_{i}\) is the \(i\)-th element. To derive Eq. (4.3), we have used the following expansion of the Bregman divergence:
\[\mathcal{D}_{\phi^{\prime}}(\mathbf{x}_{t}+\Delta\mathbf{x}\|\mathbf{x}_{t})\] \[\quad=\phi^{\prime}(\mathbf{x}_{t}+\Delta\mathbf{x})-\phi^{\prime}(\mathbf{x} _{t})-\langle(\mathbf{x}_{t}+\Delta\mathbf{x})-\mathbf{x}_{t},\partial_{\mathbf{x}}\phi^{ \prime}(\mathbf{x}_{t})\rangle \tag{103}\] \[\quad\approx\phi^{\prime}(\mathbf{x}_{t})+\partial_{\mathbf{x}}\phi(\mathbf{ x}_{t})^{\intercal}\Delta\mathbf{x}+\frac{1}{2}\Delta\mathbf{x}^{\intercal}G_{\phi^{ \prime}}(\mathbf{x}_{t})\Delta\mathbf{x}\] \[\quad\quad-\phi^{\prime}(\mathbf{x}_{t})-\langle(\mathbf{x}_{t}+\Delta \mathbf{x})-\mathbf{x}_{t},\partial_{\mathbf{x}}\phi^{\prime}(\mathbf{x}_{t})\rangle\] (104) \[\quad=\frac{1}{2}\Delta\mathbf{x}^{\intercal}G_{\phi^{\prime}}(\mathbf{x }_{t})\Delta\mathbf{x}. \tag{105}\]
Note that \(\Delta t\) in Eq. (4.1) is set to unity in the conventional formulation of NG; in the following section, we will impose a specific relationship between \(\Delta t\) and \(\epsilon\) to connect NG and CRNs.
To find the solution of Eq. (4.3), we employ the method of Lagrange multipliers, where the Lagrange function reads
\[L(\Delta\mathbf{x},\lambda)\coloneqq\partial_{\mathbf{x}}f(\mathbf{x}_{t})^{ \intercal}\Delta\mathbf{x}-\frac{\lambda}{2}(\Delta\mathbf{x}^{\intercal}G_{\phi^{ \prime}}(\mathbf{x}_{t})\Delta\mathbf{x}-\epsilon). \tag{106}\]
The derivative of Eq. (106) with respect to \(\Delta\mathbf{x}\) takes the following form:
\[\frac{\partial}{\partial\Delta\mathbf{x}}L(\Delta\mathbf{x},\lambda)= \partial_{\mathbf{x}}f(\mathbf{x}_{t})-\lambda G_{\phi^{\prime}}(\mathbf{x}_{t})\Delta \mathbf{x}. \tag{107}\]
Setting Eq. (107) to zero, the solution is given by
\[\Delta\mathbf{x}=\frac{1}{\lambda}G_{\phi^{\prime}}^{-1}(\mathbf{x}_{t}) \partial_{\mathbf{x}}f(\mathbf{x}_{t}). \tag{108}\]
The derivative of Eq. (106) with respect to \(\lambda\) has the following form:
\[\frac{\partial}{\partial\lambda}L(\Delta\mathbf{x},\lambda) =-\frac{1}{2}(\Delta\mathbf{x}^{\intercal}G_{\phi^{\prime}}(\mathbf{x}_{t })\Delta\mathbf{x}-\epsilon) \tag{109}\] \[=0. \tag{110}\]
Taking Eq. (108) into account, the solution of Eq. (110) is written as
\[\lambda^{2}=\frac{\partial_{\mathbf{x}}f(\mathbf{x}_{t})^{\intercal}G_{ \phi^{\prime}}^{-1}(\mathbf{x}_{t})\partial_{\mathbf{x}}f(\mathbf{x}_{t})}{\epsilon}. \tag{111}\]
Combining Eqs. (108) and (111) and taking account of the nature of the minimization problem, the solution of Eq. (4.3) takes the following form:
\[\Delta\mathbf{x}_{t}(\epsilon)=-\sqrt{\frac{\epsilon}{\partial_{\mathbf{x}}f(\mathbf{x}_{ t})^{\intercal}G_{\phi^{\prime}}^{-1}(\mathbf{x}_{t})\partial_{\mathbf{x}}f(\mathbf{x}_{t})}}G_{ \phi^{\prime}}^{-1}(\mathbf{x}_{t})\partial_{\mathbf{x}}f(\mathbf{x}_{t}). \tag{112}\]
Note that \(\phi^{\prime}(\cdot)\) in Eq. (112) may be different from \(\phi(\cdot)\) appearing in Sec. II. In the case of CRNs, \(f(\mathbf{x}_{t})\) in Eq. (112) represents \(\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\). As shown in Eq. (112), \(\epsilon\) is a key parameter in NG. From the perspective of applying NG to CRNs, the relationship between \(\epsilon\) in NG and \(\Delta t\) in CRNs, when discretized, is still missing. Therefore, NG cannot be directly applied to CRNs. In the following section, we will explain how to address this challenge and develop a general upper bound on the dynamics of CRNs.
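A minimal sketch of the NG step in Eq. (112) with \(\phi^{\prime}=\phi_{\mathrm{KL}}\), so that \(G_{\phi^{\prime}}(\mathbf{x})=\mathrm{diag}(1/x_{i})\) and \(f(\mathbf{x})=\mathcal{D}_{\mathrm{KL}}(\mathbf{x}\|\hat{\mathbf{x}})\), is given below; the value of \(\epsilon\) and the state are illustrative inputs.

```python
import numpy as np

# Illustrative NG step, Eq. (112), with phi' = phi_KL, i.e. G_{phi'}(x) = diag(1/x_i)
# and f(x) = D_KL(x || x_hat); eps and the state are assumed inputs.
x_hat = np.array([1.0, 1.0])

def ng_step(x, eps):
    grad = np.log(x / x_hat)                   # d/dx D_KL(x || x_hat)
    nat_grad = np.diag(x) @ grad               # G_{phi'}^{-1}(x) grad
    scale = np.sqrt(eps / (grad @ nat_grad))   # sqrt(eps / (grad^T G^{-1} grad))
    return x - scale * nat_grad                # Eq. (112)

x = np.array([0.75, 1.375])
print(ng_step(x, eps=1e-3))
```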
### Comparison with Newton's method
In this section, we compare NG with Newton's method. Newton's method is a special case of NG when Eq. (112) is adjusted according to certain conditions. Specifically, the conditions are \(\phi(\cdot)=\phi^{\prime}(\cdot)\) and \(\epsilon=\partial_{\mathbf{x}}f(\mathbf{x}_{t})^{\intercal}G_{\phi^{\prime}}^{-1}(\mathbf{x}_{t})\partial_{\mathbf{x}}f(\mathbf{x}_{t})\), under which Eq. (112) becomes equivalent to Eq. (3.3). This equivalence leads us to introduce a systematic NG-based method to determine the direction and step size for a gradient system that bounds CRNs of a specific class.
## V Upper bound on reaction rates
In this section, we construct a nonlinear system that gives an upper bound on reaction rates of CRNs in a given class. The class is characterized by several topological numbers of CRNs: \(N_{\text{v}}\), \(N_{\text{e}}\), and \(m\).
### Upper bound system
Comparing discretized CRE dynamics with NG dynamics, represented by Eq. (100), presents a challenge. The difficulty arises from the absence of an established relationship between \(\epsilon\), the constraint parameter in NG, and \(\Delta t\), the time step in the discretized CRE. To address this issue, we propose the following relationship between \(\epsilon\) and \(\Delta t\):
\[\epsilon=\mathcal{D}_{\phi^{\prime}}(\mathbf{x}_{t}+\|\dot{\mathbf{x}}_{t}\|_{\text{F}}\mathbf{e}_{t}\Delta t\|\mathbf{x}_{t}), \tag{113}\]
where \(\|\cdot\|_{\text{F}}\) is the Frobenius norm and \(\mathbf{e}_{t}\) is a vector that satisfies \(\|\mathbf{e}_{t}\|_{\text{F}}=1\). We then compute the maximum value of \(\epsilon\) in Eq. (113). Note that \(S:\mathbb{R}^{N_{\mathsf{e}}}\rightarrow\mathbb{R}^{N_{\mathbb{X}}}\), i.e., \(S\) is an \(N_{\mathbb{X}}\times N_{\mathsf{e}}\) matrix. From Eq. (2.48), we get
\[\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}} =\left\|S\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x}_{t})}\bigg{(}-\frac{1}{2}S^{\intercal}\partial_{\mathbf{x}}\mathcal{D}_{\phi}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\bigg{)}\right\|_{\mathrm{F}} \tag{100}\] \[\leq\|S\|_{\mathrm{F}}\bigg{\|}\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x}_{t})}\bigg{(}-\frac{1}{2}S^{\intercal}\partial_{\mathbf{x}}\mathcal{D}_{\phi}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\bigg{)}\bigg{\|}_{\mathrm{F}}\] (101) \[\leq\|S\|_{\mathrm{F}}\bigg{\|}\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x}_{t})}\bigg{(}\bigg{\|}-\frac{1}{2}S^{\intercal}\partial_{\mathbf{x}}\mathcal{D}_{\phi}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\bigg{\|}_{\mathrm{abs}}\bigg{)}\bigg{\|}_{\mathrm{F}}\] (102) \[\leq\|S\|_{\mathrm{F}}\bigg{\|}\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x}_{t})}\bigg{(}\frac{1}{2}\|S^{\intercal}\|_{\mathrm{F}}\|\partial_{\mathbf{x}}\mathcal{D}_{\phi}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\|_{\mathrm{F}}^{N_{\mathbf{x}}\to N_{\mathbf{x}}}\bigg{)}\bigg{\|}_{\mathrm{F}}. \tag{103}\]
Here, \(\|\cdot\|_{\mathrm{abs}}\) and \(\|\cdot\|_{\mathrm{F}}^{N_{\mathbf{x}}\to N_{\mathbf{x}}}\) are defined as, respectively,
\[\|\mathbf{v}\|_{\mathrm{abs}} =[|v_{1}|,|v_{2}|,\ldots,|v_{N_{\mathbf{x}}}|]^{\intercal}, \tag{104}\] \[\|\mathbf{v}\|_{\mathrm{F}}^{N_{\mathbf{x}}\to N_{\mathbf{x}}}\coloneqq[\underbrace{\|\mathbf{v}\|_{\mathrm{F}},\|\mathbf{v}\|_{\mathrm{F}},\ldots,\|\mathbf{v}\|_{\mathrm{F}}}_{N_{\mathbf{x}}}]^{\intercal}, \tag{105}\]
for \(\mathbf{v}\coloneqq[v_{1},v_{2},\ldots,v_{N_{\mathbf{x}}}]^{\intercal}\). From Eq. (31), we have
\[\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x})}(\|\mathbf{f}(\mathbf{x}) \|_{\mathrm{abs}}) =\mathbf{\omega}(\mathbf{x})\circ\sinh(\|\mathbf{f}(\mathbf{x})\|_{\mathrm{abs}}), \tag{106}\] \[\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}(\mathbf{x})}(\|\mathbf{f}(\mathbf{x}) \|_{\mathrm{F}}^{N_{\mathbf{x}}\to N_{\mathbf{x}}}) =\mathbf{\omega}(\mathbf{x})\circ\sinh(\|\mathbf{f}(\mathbf{x})\|_{\mathrm{F}}^{ N_{\mathbf{x}}\to N_{\mathbf{x}}}). \tag{107}\]
Given \(S:\mathbb{R}^{N_{\mathsf{e}}}\to\mathbb{R}^{N_{\mathbf{x}}}\) and \(\mathbf{v}\in\mathbb{R}^{N_{\mathbf{x}}}\), we have the following inequality for \(e=1,2,\ldots,N_{\mathbf{e}}\):
\[[\|S^{\intercal}\mathbf{v}\|_{\mathrm{abs}}]_{e} \leq\|S^{\intercal}\|_{\mathrm{F}}\|\mathbf{v}\|_{\mathrm{F}} \tag{108}\] \[=[\|S^{\intercal}\|_{\mathrm{F}}\|\mathbf{v}\|_{\mathrm{F}}^{N_{\mathbf{ x}}\to N_{\mathbf{x}}}]_{e}, \tag{109}\]
where \([\cdot]_{e}\) is the \(e\)-th element. With this, we have obtained the bound on \(\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}}\) within a given class of CRNs.
Next, we compute \(\mathbf{e}_{t}\) as follows:
\[\mathbf{e}_{t} =\operatorname*{arg\,max}_{\mathbf{e}:\|\mathbf{e}\|_{\mathrm{F}}=1}\mathcal{D}_{\phi^{\prime}}(\mathbf{x}_{t}+\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}}\mathbf{e}\Delta t\|\mathbf{x}_{t}) \tag{110}\] \[\approx\operatorname*{arg\,max}_{\mathbf{e}:\|\mathbf{e}\|_{\mathrm{F}}=1}\biggl{(}\frac{1}{2}\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}}^{2}(\Delta t)^{2}\mathbf{e}^{\intercal}G_{\phi^{\prime}}(\mathbf{x}_{t})\mathbf{e}\biggr{)}\] (111) \[=\operatorname*{arg\,max}_{\mathbf{e}:\|\mathbf{e}\|_{\mathrm{F}}=1}\mathbf{e}^{\intercal}G_{\phi^{\prime}}(\mathbf{x}_{t})\mathbf{e}. \tag{112}\]
Thus, \(\mathbf{e}_{t}\) is the eigenvector associated with the maximum eigenvalue of \(G_{\phi^{\prime}}(\mathbf{x}_{t})\). Substituting the bound in Eq. (103) and this \(\mathbf{e}_{t}\) into Eq. (113), we can calculate the maximum value of \(\epsilon\) within a given class of CRNs.
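The computation of the maximal \(\epsilon\) can be sketched as follows for the KL constraint; the bound `v_max` on \(\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}}\) is assumed to be supplied by Eq. (103), and the treatment of the sign of \(\mathbf{e}_{t}\) is an illustrative choice.

```python
import numpy as np

# Illustrative evaluation of the maximal epsilon in Eq. (113) for the KL constraint.
# v_max is an assumed upper bound on ||dx/dt||_F obtained from Eq. (103); e_t is the
# leading eigenvector of G_{phi'}(x_t) = diag(1/x_i); both signs of e_t are tried,
# keeping the larger divergence (an illustrative way of resolving the sign ambiguity).
def kl_divergence(x, y):
    return np.sum(x * np.log(x / y)) - np.sum(x) + np.sum(y)    # Eq. (37)

def max_epsilon(x, v_max, dt):
    G = np.diag(1.0 / x)                                        # Hessian of phi_KL
    _, eigvec = np.linalg.eigh(G)
    e_t = eigvec[:, -1]                                         # largest-eigenvalue direction
    cands = (x + s * v_max * dt * e_t for s in (+1.0, -1.0))
    return max((kl_divergence(c, x) for c in cands if np.all(c > 0)), default=0.0)

x_t = np.array([0.75, 1.375])
print(max_epsilon(x_t, v_max=5.0, dt=1e-4))
```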
### \(S\) and \(R\) of an upper bound system
To identify the upper bound described by Eq. (103) for CRNs under certain constraints, both \(S\) in Eq. (102) and \(R\) in Eq. (104) must be carefully designed. We introduce a method for determining \(S_{\mathrm{ub}}\) and \(R_{\mathrm{ub}}\) specific to a class of CRNs characterized by \(N_{\mathbf{\chi}}\) as the number of chemicals, \(m\) as the highest coefficient in chemical reactions, and \(N_{\mathbf{e}}\) as the number of reactions. The \(S_{\mathrm{ub}}\) and \(R_{\mathrm{ub}}\) matrices are of dimensions \(N_{\mathbf{\chi}}\times N_{\mathbf{e}}\), and their elements at the \((i,e)\)-th position are defined as follows:
\[[S_{\mathrm{ub}}]_{i,e} \coloneqq m, \tag{113}\] \[[R_{\mathrm{ub}}]_{i,e} \coloneqq\mathbb{1}[x_{i}\leq 1]\min_{i}([R]_{i,e})+\mathbb{1}[x_{i}>1] \max_{i}([R]_{i,e}). \tag{114}\]
Here, \(\mathbb{1}[\cdot]\) denotes the indicator function, and \([\cdot]_{i,e}\) represents the \((i,e)\)-th element. The indicator functions are needed because \(x^{n}\geq x^{m}\) holds for \(x\in[1,\infty)\) and \(n\geq m\), whereas \(x^{n}\leq x^{m}\) holds for \(x\in(0,1]\) and \(n\geq m\). By combining Eqs. (103), (113), and (114), we can compute the upper bound for a given class. In other words, we use the following inequality to construct an upper bound system:
\[\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}}\] \[\leq\|S_{\mathrm{ub}}\|_{\mathrm{F}}\] \[\quad\times\bigg{\|}\partial_{\mathbf{f}}\Psi^{*}_{\mathbf{\omega}_{ \mathrm{ub}}(\mathbf{x}_{t})}\bigg{(}\frac{1}{2}\|S^{\intercal}_{\mathrm{ub}}\|_{ \mathrm{F}}\|\partial_{\mathbf{x}}\mathcal{D}_{\phi}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\|_{ \mathrm{F}}^{N_{\mathbf{\chi}}\to N_{\mathbf{\varsigma}}}\bigg{)}\bigg{\|}_{\mathrm{F}}, \tag{115}\]
where
\[\mathbf{\omega}_{\mathrm{ub}}(\mathbf{x})\coloneqq 2\sqrt{\mathbf{k}^{+}\circ\mathbf{k}^{-}} \circ\mathbf{x}^{R^{\intercal}_{\mathrm{ub}}/2}. \tag{116}\]
### Upper bound system with the KL constraint
We utilize Eq. (36), i.e., \(\phi^{\prime}(\cdot)=\phi_{\mathrm{KL}}(\cdot)\), as the potential function for the Bregman divergence in the constraint of NG 6. Subsequently, by substituting \(\|\partial_{\mathbf{x}}\mathcal{D}_{\mathrm{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\|_{\mathrm{F}}^{N_{\mathbf{\chi}}\to N_{\mathbf{\varsigma}}}\) into Eq. (103), we can determine the maximum value of \(\|\dot{\mathbf{x}}_{t}\|_{\mathrm{F}}\).
Footnote 6: While there are many different candidates for \(\phi^{\prime}(\cdot)\), the \(L^{2}\) constraint is also often used; we discuss this case in Appendix B.
## VI Numerical simulations
In this section, numerical simulations are conducted to elucidate the upper-bound dynamics for a specified class of CRNs. The parameters are set as follows: \(N_{\mathbb{X}}=4\), \(m=4\), and \(N_{\mathsf{e}}=1\). The reference state is chosen as \(\hat{\mathbf{x}}=[1,1,1,1]^{\intercal}\) and the time step as \(\Delta t=1.0\times 10^{-5}\). The rate constants \(k_{e}^{\pm}\) are fixed at \(1\) for all \(e\).
### Case where the equilibrium states differ from \(\hat{\mathbf{x}}\)
We introduce CRNs that satisfy the same conditions and compare them from the viewpoint of reaction rate. Here we consider the following six different reactions, which have the same topological quantities (\(N_{\mathbb{X}}=4\), \(m=4\), and \(N_{\mathsf{e}}=1\)):
\[\mathbb{X}_{1}+4\mathbb{X}_{2} \rightleftharpoons 4\mathbb{X}_{3}+4\mathbb{X}_{4}, \tag{11a}\] \[4\mathbb{X}_{1}+4\mathbb{X}_{2} \rightleftharpoons 4\mathbb{X}_{3}+4\mathbb{X}_{4},\] (11b) \[\mathbb{X}_{1}+2\mathbb{X}_{2} \rightleftharpoons \mathbb{X}_{3}+3\mathbb{X}_{4},\] (11c) \[4\mathbb{X}_{1} \rightleftharpoons 4\mathbb{X}_{2}+4\mathbb{X}_{3}+4\mathbb{X}_{4},\] (11d) \[\mathbb{X}_{1} \rightleftharpoons 2\mathbb{X}_{2}+2\mathbb{X}_{3}+3\mathbb{X}_{4},\] (11e) \[2\mathbb{X}_{1} \rightleftharpoons 3\mathbb{X}_{2}+2\mathbb{X}_{3}+3\mathbb{X}_{4}. \tag{11f}\]
We set \(\mathbf{x}_{\text{ini}}=[9/8,87/80,27/20,27/20]^{\intercal}\), \(\hat{\mathbf{x}}=[1,1,1,1]^{\intercal}\), and \(\Delta t=1.0\times 10^{-5}\). In Fig. 2, we plot the dynamics of Eq. (11) and that of the system constructed in Sec. V. It clearly shows that the system constructed in Sec. V gives an upper bound on CRNs. The CRNs in Eq. (11) have equilibrium states different from \(\hat{\mathbf{x}}\) because of \(\ker(S^{\intercal})\); then the gap in \(\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}||\hat{\mathbf{x}})\) remains for \(t\gg 0\) and the upper bound is relatively loose.
### Case where the equilibrium state coincides with \(\hat{\mathbf{x}}\)
Next, we consider Eq. (11b) and set \(\mathbf{x}_{\text{ini}}=[1/2,1/2,3/2,3/2]^{\intercal}\), \(\hat{\mathbf{x}}=[1,1,1,1]^{\intercal}\), and \(\Delta t=1.0\times 10^{-5}\). In this case, we have \(\mathbf{x}_{\text{eq}}=\hat{\mathbf{x}}\). In Fig. 3, we plot the dynamics of Eq. (11b) and that of the system constructed in Sec. V, which now provides a tighter bound. In Fig. 4, we show the time-difference of the KL divergence, \(-\Delta\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}||\hat{\mathbf{x}})\), per \(\Delta t\), evaluated along the orbit of Eq. (11b) with \(\mathbf{x}_{\text{ini}}=[1/2,1/2,3/2,3/2]^{\intercal}\); that is, we compare \(-\Delta\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}||\hat{\mathbf{x}})\) of the CRN in Eq. (11b) and of the system constructed in Sec. V at the same states \(\mathbf{x}_{t}\). As shown in Fig. 4, the system constructed in Sec. V shows faster convergence at each \(\mathbf{x}_{t}\).
### Case of \(N_{\mathsf{e}}>1\)
We consider the fully-connected CRNs whose hypervertices are given by
\[\mathbb{V}_{1} =\{\mathbb{X}_{1}+\mathbb{X}_{2},\mathbb{X}_{2}+\mathbb{X}_{3}, \mathbb{X}_{3}+\mathbb{X}_{4},\mathbb{X}_{4}+\mathbb{X}_{1}\}, \tag{12a}\] \[\mathbb{V}_{2} =\{\mathbb{X}_{1}+3\mathbb{X}_{2}+4\mathbb{X}_{3},\mathbb{X}_{2}+ 2\mathbb{X}_{3},\] \[4\mathbb{X}_{1}+\mathbb{X}_{3}+\mathbb{X}_{4},\mathbb{X}_{1}+3 \mathbb{X}_{2}+\mathbb{X}_{4}\},\] (12b) \[\mathbb{V}_{3} =\{4\mathbb{X}_{1}+3\mathbb{X}_{2}+4\mathbb{X}_{3},4\mathbb{X}_{ 2}+2\mathbb{X}_{3}+4\mathbb{X}_{4},\] \[4\mathbb{X}_{1}+4\mathbb{X}_{3}+\mathbb{X}_{4},2\mathbb{X}_{1}+ 3\mathbb{X}_{2}+4\mathbb{X}_{4}\}. \tag{12c}\]
The CRNs in Eq. (12) belong to the class of CRNs labeled by \(N_{\mathbb{X}}=4\), \(N_{\mathsf{e}}=6\), and \(m=4\). We call the CRNs in Eq. (12) type 1, type 2, and type 3, from top to bottom.
We plot the dynamics of the CRNs in Eq. (12) and its upper bound in the case of \(\mathbf{x}_{\text{eq}}\neq\hat{\mathbf{x}}\). In Fig. 5, we set \(\mathbf{x}_{\text{ini}}=[9/8,87/80,27/20,27/20]^{\intercal}\), \(\hat{\mathbf{x}}=[1,1,1,1]^{\intercal}\), \(k_{e}^{\pm}=1\), and \(\Delta t=1.0\times 10^{-5}\). Figure 5 clearly demonstrates the upper bound holds for \(N_{\mathsf{e}}>1\).
We show the dependence of \(\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}||\hat{\mathbf{x}})\) on time \(t\) for the CRN in Eq. (12c) and its upper bound in the case of \(\mathbf{x}_{\text{eq}}=\hat{\mathbf{x}}\). In Fig. 6, we set \(\hat{\mathbf{x}}=[1.2547,1.1021,1.1951,1.3388]^{\intercal}\). In Fig. 7, we also show the dependence of \(\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}||\hat{\mathbf{x}})\) on time \(t\) for the CRN in Eq. (12c) and its upper bound in the case of \(\mathbf{x}_{\text{eq}}=\hat{\mathbf{x}}\).
### Comparison of the upper bounds
In this section, we examine the behavior of the upper bound under varying parameters. The parameters are \(N_{\mathsf{X}}=4\), \(N_{\mathsf{e}}=1\), \(\mathbf{x}_{\text{ini}}=[3/4,3/4,5/4,5/4]^{\intercal}\), and \(\mathbf{x}_{\text{eq}}=[1,1,1,1]^{\intercal}\). Figure 8 depicts the dependence of \(\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\) on \(t\) for \(m=1,2,3,4\). Figure 9 portrays the relationship between \(\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\) and \(-\Delta\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})\) for \(N_{\mathsf{X}}=4\) and \(N_{\mathsf{e}}=1\). The figures indicate that higher values of \(m\) are associated with increased rates of convergence. This behavior is consistent with the expectation that stronger nonlinearity in CRNs allows for faster reaction rates.
## VII Discussion
The relationship between the KL divergence and the entropy production was pointed out in Ref. [36]. Letting \(\Sigma_{\text{tot}}(\mathbf{x}_{t})\) be the total entropy, the following relationship holds:
\[\Sigma_{\text{tot}}(\mathbf{x}_{t})-\Sigma_{\text{tot}}(\mathbf{x}_{t^{ \prime}})=-\frac{V}{T}[\mathcal{D}_{\text{KL}}(\mathbf{x}_{t}\|\hat{\mathbf{x}})- \mathcal{D}_{\text{KL}}(\mathbf{x}_{t^{\prime}}\|\hat{\mathbf{x}})], \tag{12}\]
where \(V\) is the volume of the system and \(T\) is the temperature of the environment. In NG, the right-hand side of Eq. (12) is maximized under \(\mathcal{D}_{\phi^{\prime}}(\mathbf{x}_{t}+\Delta\mathbf{x}\|\mathbf{x}_{t})\leq\epsilon\), as written in Eq. (4.2). Furthermore, \(\epsilon\) in the optimization problem used to find the upper bound in Sec. V is equal to or larger than the time-difference of the KL divergence of CRNs in a given class. Thus, the entropy production of the system designed in Sec. V is larger than those of the CRNs in a given class, and it shows faster convergence toward \(\hat{\mathbf{x}}\).
## VIII Conclusions
In this study, we developed a framework based on NG to establish an upper bound on the dynamics of a specific subset of CRNs. The physical meaning of this bound relates to the concept of entropy production, which in turn is related to the speed of convergence of the chemical reaction. The nonlinearity commonly present in CRNs presents a challenge, which is addressed here. The optimization problem in the NG derivation was found to be related to entropy production, enriching the understanding of NG within a thermodynamic context. While the primary focus has been on CRNs, the methods and discussions are applicable to a wider range of hypergraph dynamics. The study holds implications for fields beyond chemistry and physics, including information science and machine learning.
###### Acknowledgements.
H.M. was supported by JSPS KAKENHI Grant Number 23H04489. T. J. K was supported by JST (Grants No. JPMJCR2011 and No. JPMJCR1927) and JSPS (Grant No. 19H05799). L.-S.B. was partially funded by NSF award CHE-2002313.
## Appendix A Derivation of the KL divergence from the Bregman divergence
In this section, we show that the Bregman divergence, Eq. (34), with Eq. (36) is equivalent to the KL divergence, Eq. (37). Let us define the following potential for \(\alpha\in\mathbb{R}\):
\[\phi_{\text{KL}}^{(\alpha)}(\mathbf{x})\coloneqq\sum_{i=1}^{N_{\text{K}}}x_{i}( \ln x_{i}-\alpha) \tag{100}\]
The Bregman divergence, Eq. (34), with Eq. (100) is computed as follows:
\[\mathcal{D}_{\phi_{\text{KL}}^{(\alpha)}}(\mathbf{x}\|\mathbf{y}) =\phi_{\text{KL}}^{(\alpha)}(\mathbf{x})-\phi_{\text{KL}}^{(\alpha)} (\mathbf{y})-\langle(\mathbf{x}-\mathbf{y}),\nabla\phi_{\text{KL}}^{(\alpha)}(\mathbf{y})\rangle \tag{101}\] \[=\sum_{i=1}^{N_{\text{K}}}x_{i}(\ln x_{i}-\alpha)-\sum_{i=1}^{N_ {\text{K}}}y_{i}(\ln y_{i}-\alpha)\] \[\quad-\sum_{i=1}^{N_{\text{K}}}(x_{i}-y_{i})(\ln y_{i}-\alpha+1)\] (102) \[=\sum_{i=1}^{N_{\text{K}}}x_{i}\ln x_{i}-\sum_{i=1}^{N_{\text{K} }}y_{i}\ln y_{i}\] \[\quad-\sum_{i=1}^{N_{\text{K}}}(x_{i}-y_{i})\ln y_{i}-\sum_{i=1}^ {N_{\text{K}}}(x_{i}-y_{i})\] (103) \[=\sum_{i=1}^{N_{\text{K}}}x_{i}\ln\frac{x_{i}}{y_{i}}-\sum_{i=1}^ {N_{\text{K}}}(x_{i}-y_{i})\] (104) \[=\mathcal{D}_{\text{KL}}(\mathbf{x}\|\mathbf{y}) \tag{105}\]
Thus, the Bregman divergence, Eq. (34), with the potential in Eq. (100) is equivalent to the KL divergence, Eq. (37), independently of \(\alpha\). Furthermore, Eq. (36) is the special case of Eq. (100) with \(\alpha=0\).
## Appendix B Upper bound system with the \(L^{2}\) constraint
In Sec. V, we have considered \(\phi_{\text{KL}}(\cdot)\), Eq. (36), as the potential of the Bregman divergence in the constraint term since the KL divergence is minimized in CRNs. However, we are not limited to this choice, and it is expected that a different potential in the constraint may give us a different bound. Another simple candidate for the potential of the Bregman divergence in the constraint is the \(L^{2}\) norm given by
\[\phi_{L^{2}}(\mathbf{x})\coloneqq\sum_{i=1}^{N_{\text{K}}}|x_{i}|^{2}. \tag{106}\]
In this case, \(\mathcal{D}_{\phi_{L^{2}}}(\mathbf{x}_{t}+\|\dot{\mathbf{x}}_{t}\|_{\text{F}}\mathbf{e}_{t}\Delta t\|\mathbf{x}_{t})\) does not depend on the direction \(\mathbf{e}_{t}\), and the Hessian \(G_{\phi_{L^{2}}}(\mathbf{x}_{t})\) becomes a scaled identity matrix: \(G_{\phi_{L^{2}}}(\mathbf{x}_{t})=2\,\mathbb{1}\).
|
2309.04341 | Design of a Single-User RIS-Aided MISO System Based on Statistical
Channel Knowledge | Reconfigurable intelligent surface (RIS) is considered a prospective
technology for beyond fifth-generation (5G) networks to improve the spectral
and energy efficiency at a low cost. Prior works on the RIS mainly rely on
perfect channel state information (CSI), which imposes a huge computational
complexity. This work considers a single-user RIS-assisted communication
system, where the second-order statistical knowledge of the channels is
exploited to reduce the training overhead. We present algorithms that do not
require estimation of the CSI and reconfiguration of the RIS in every channel
coherence interval, which constitutes one of the most critical practical issues
in an RIS-aided system. | Sadaf Syed, Dominik Semmler, Donia Ben Amor, Michael Joham, Wolfgang Utschick | 2023-09-08T14:12:02Z | http://arxiv.org/abs/2309.04341v1 | # Design of a Single-User RIS-Aided MISO System Based on Statistical Channel Knowledge
###### Abstract
Reconfigurable intelligent surface (RIS) is considered a prospective technology for beyond fifth-generation (5G) networks to improve the spectral and energy efficiency at a low cost. Prior works on the RIS mainly rely on perfect channel state information (CSI), which imposes a huge computational complexity. This work considers a single-user RIS-assisted communication system, where the second-order statistical knowledge of the channels is exploited to reduce the training overhead. We present algorithms that do not require estimation of the CSI and reconfiguration of the RIS in every channel coherence interval, which constitutes one of the most critical practical issues in an RIS-aided system.
MISO, Downlink, RIS, CSI, statistical knowledge, bilinear precoders
## I Introduction
Massive multiple-input multiple-output (MIMO) systems can meet the ever-increasing demand of high throughput and low energy consumption in current wireless communication systems. However, equipping the base station (BS) with a large number of antennas may lead to high circuit energy consumption, including very high hardware costs. Recently, reconfigurable intelligent surface (RIS) has emerged as a promising low-cost solution to enhance the spectral efficiency in a wireless communication system [1]. Specifically, an RIS is a passive array composed of a large number of reconfigurable reflecting elements. Each passive element of the RIS is able to introduce a phase shift to the incident signal in a controlled manner, thereby boosting the received power for the desired user or creating a destructive interference for the non-intended users. Additionally, the passive elements of the RIS do not require any transmit radio frequency (RF) chain, and hence, their energy and hardware costs are much lower as compared to that of the traditional active antennas at the BS. Thus, they can be scaled much more easily than the antennas at the BS.
Most of the existing algorithms for RIS rely on the assumption of perfect channel state information (CSI), e.g., [1, 2, 3]. However, owing to the passive structure of the RIS as well as its massive number of reflecting elements, the acquisition of perfect CSI for the RIS-associated links is formidable. Moreover, these algorithms demand the joint optimisation of the phase shifts and the transmit filters to be performed in every channel coherence interval, which is computationally very expensive. This issue is being recently studied in the literature [4, 5, 6, 7], where the key idea is to exploit the statistical knowledge of the channels to design the phase shifts of the RIS. Since the structure of the channels varies slowly, the covariance matrices remain constant for many channel coherence intervals, and hence, it is possible to obtain accurate information of the second-order statistics of the channels through long-term observation. The phase shifts and the filters which are designed based on the covariance matrices do not need to be updated regularly, i.e., there is no need to estimate the channels and perform the joint optimisation in every channel coherence interval. This significantly reduces the channel training overhead and the design complexity of the RIS-assisted systems. The algorithms proposed in [5] and [6] consider the statistical knowledge of the channels for the phase-shift optimisation, however, they consider a hybrid online/offline approach. The phase shifts of the RIS are designed considering the long-term statistics of the channels during the offline step, whereas the filters are designed considering the perfect knowledge of the instantaneous CSI in the online step, thereby, requiring the channel to be estimated perfectly in every channel coherence interval again.
In this work, we present two low-complexity algorithms for a single-user RIS-aided multiple-input single-output (MISO) system, which are only based on the statistical knowledge of the channels. These algorithms employ the lower bound of the user's rate as the figure of merit, which is based on the worst-case noise bound [8]. We consider a more realistic setup, where the covariance matrices of the channels are known perfectly, however, the accurate knowledge of the instantaneous CSI is not available. The bilinear precoders [9] are used as the transmit filters, for which a closed-form solution of the optimal filters can be obtained for the single-user case. As such, the filters and the phase shifts can be designed jointly. The algorithm in [4] is also based on the statistical knowledge of the channels for a single-user MISO system, however, it is based on the assumption that the RIS is deployed at a favourable location and a line-of-sight (LOS) channel exists to both the BS and the user. The phase shift optimisation in [4] is only dependent on the LOS components, which are assumed to be perfectly known. In this work, we consider a general zero-mean channel model with perfectly known covariance matrices. We compare our algorithms to the one presented in [7], which assumes a similar zero-mean
channel model for a multi-antenna single-user system. The algorithm in [7] maximises the upper bound of the user's rate, which is computed using the Jensen's inequality and it is based on the alternating optimisation (AO) approach, where the filters and the phase shifts are optimised alternatingly in each subproblem. Such an AO method offers a good performance but it has convergence and complexity issues (discussed in [10]).
## II System Model
This paper investigates the downlink (DL) of an RIS-aided single-user MISO communication system. The system consists of one BS equipped with \(M\) antennas, serving one single-antenna user, and one RIS having \(N\) passive reflecting elements. The phase-shift matrix of the RIS is defined by a diagonal matrix \(\boldsymbol{\Phi}=\mathrm{diag}(\phi_{1},\cdots,\phi_{N})\), where \(\phi_{1},\cdots,\phi_{N}\) are the phase shift coefficients of the \(N\) elements of the RIS with \(|\phi_{n}|=1\;\forall\;n\), and \(\boldsymbol{\phi}=[\phi_{1},\cdots,\phi_{N}]^{\mathrm{T}}\) denotes the corresponding phase-shift vector. The direct channel from the BS to the user is denoted by \(\boldsymbol{h}_{\mathrm{d}}\in\mathbb{C}^{M\times 1}\), and it is assumed to be circularly symmetric, complex Gaussian distributed with zero mean and covariance matrix \(\boldsymbol{C}_{\mathrm{d}}\), i.e., \(\boldsymbol{h}_{\mathrm{d}}\sim\mathcal{N}_{\mathbb{C}}(\boldsymbol{0}, \boldsymbol{C}_{\mathrm{d}})\). The channel from the RIS to the user is denoted by \(\boldsymbol{r}\in\mathbb{C}^{N\times 1}\), which has a zero mean and the covariance matrix \(\boldsymbol{C}_{\boldsymbol{r}}\). The channel from the BS to the RIS is denoted by \(\boldsymbol{T}\in\mathbb{C}^{N\times M}\), and it is assumed to follow the Kronecker channel model, given by
\[\boldsymbol{T}=\sqrt{\beta}\boldsymbol{R}_{\text{RIS}}^{1/2} \boldsymbol{W}\boldsymbol{R}_{\text{Tx}}^{1/2,\text{H}}. \tag{1}\]
The entries of \(\boldsymbol{W}\in\mathbb{C}^{N\times M}\) are independent and identically distributed with unit variance and zero mean. \(\boldsymbol{R}_{\text{RIS}}\) and \(\boldsymbol{R}_{\text{Tx}}\) denote the channel correlation matrices on the side of the RIS and the BS respectively, and \(\beta\geq 0\) represents the scaling factor such that \(\mathrm{tr}(\boldsymbol{R}_{\text{Tx}})=\mathrm{tr}\left(\boldsymbol{R}_{ \text{RIS}}\right)\) is satisfied. The effective channel of the RIS-assisted system is given by
\[\boldsymbol{h}^{\text{H}}=\boldsymbol{h}_{\mathrm{d}}^{\text{H}}+ \boldsymbol{r}^{\text{H}}\boldsymbol{\Phi}^{\text{H}}\boldsymbol{T} \tag{2}\]
which has zero mean and its covariance matrix is given by \(\boldsymbol{C}\). It is assumed that the BS has only access to a noisy channel observation \(\boldsymbol{\psi}\), but not the actual CSI. The observation \(\boldsymbol{\psi}\) is the Least-Squares (LS) estimate of the channel, which is obtained by correlating the received signal with the pilot sequences during the training phase, and is given by
\[\boldsymbol{\psi}=\boldsymbol{h}\;+\;\boldsymbol{n} \tag{3}\]
where \(\boldsymbol{n}\sim\mathcal{N}_{\mathbb{C}}(\boldsymbol{0},\boldsymbol{C}_{ \mathrm{n}})\) denotes the noise in the channel observation and \(\boldsymbol{C}_{\mathrm{n}}\) is the noise covariance matrix.
The transmit filter at the BS is designed such that it only depends on the channel statistics and the noisy observation. To this end, the bilinear precoder [9] is used as the transmit filter in this work. The bilinear precoder (\(\boldsymbol{p}\)) is designed such that it linearly depends on the observation \(\boldsymbol{\psi}\), i.e., \(\boldsymbol{p}\;=\;\boldsymbol{A}\boldsymbol{\psi}\), with \(\boldsymbol{p}\;\in\mathbb{C}^{M\times 1}\) and \(\boldsymbol{A}\in\mathbb{C}^{M\times M}\) being a deterministic transformation matrix, which needs to be designed such that the user's rate is maximised. The signal received by the user reads as: \(y=\boldsymbol{h}^{\text{H}}\boldsymbol{p}\,s+v\), where \(s\sim\mathcal{N}_{\mathbb{C}}(0,1)\) denotes the data symbol and \(v\sim\mathcal{N}_{\mathbb{C}}(0,\sigma^{2})\) is the noise at the user's side.
Because of the imperfect CSI, we cannot compute a closed-form expression of the actual rate of the user. Instead, a lower bound on the user's rate based on the worst-case error, which is extensively used in the massive MIMO literature, is employed here [8]. The lower bound of the user's rate is given by \(\log_{2}(1+\gamma^{\text{lb}})\), where \(\gamma^{\text{lb}}\) is the lower bound of the actual signal-to-noise ratio (SNR), expressed as
\[\gamma^{\text{lb}}=\frac{|\mathbb{E}[\boldsymbol{h}^{\text{H}} \boldsymbol{p}]|^{2}}{\mathbb{E}[|\boldsymbol{h}^{\text{H}}\boldsymbol{p}- \mathbb{E}[\boldsymbol{h}^{\text{H}}\boldsymbol{p}]|^{2}]+\sigma^{2}}. \tag{4}\]
Evaluating the terms in (4) yields (cf. [9, 11])
\[\gamma^{\text{lb}}=\frac{|\mathrm{tr}(\boldsymbol{A}\boldsymbol{C})|^{2}}{ \mathrm{tr}\Big{(}\boldsymbol{A}\boldsymbol{Q}\boldsymbol{A}^{\text{H}} \boldsymbol{C}\Big{)}+\sigma^{2}} \tag{5}\]
where \(\boldsymbol{Q}=\mathbb{E}[\boldsymbol{\psi}\boldsymbol{\psi}^{\text{H}}]= \boldsymbol{C}+\boldsymbol{C}_{\mathrm{n}}\) is the covariance matrix of the LS estimate of the channel. Note that the above closed-form expression of the lower bound is obtained with the Gaussian assumption of \(\boldsymbol{h}\), which is indeed true for a large \(N\)[12]. The matrices \(\boldsymbol{C}\) and \(\boldsymbol{Q}\) implicitly depend on the phase-shift vector \(\boldsymbol{\phi}\) (shown in the next section). The objective is to maximise the user's rate w.r.t. \(\boldsymbol{\phi}\) and the transformation matrix \(\boldsymbol{A}\) of the bilinear precoder. Since the logarithm is a monotonically non-decreasing function, maximising the rate is equivalent to maximising the SNR. Hence, the rate maximisation can be equivalently written as
\[\begin{split}\max_{\boldsymbol{A},\boldsymbol{\phi}}& \gamma^{\text{lb}}\\ \text{s.t.}&\mathbb{E}[||\boldsymbol{p}||^{2}]= \mathrm{tr}\Big{(}\boldsymbol{A}\boldsymbol{Q}\boldsymbol{A}^{H}\Big{)}\leq P \\ &|\phi_{n}|=1\;\forall\;n=1,\cdots,N.\end{split}\] (P1)
## III Joint Optimisation Problem Formulation
Problem (P1) is non-convex, and hence, it is difficult to obtain a closed-form solution. We next propose theorems to simplify (P1) such that the filter and the phase shifts can be optimised jointly.
### _Simplification of the Objective Function_
**Theorem 1**: For a fixed phase-shift vector \(\boldsymbol{\phi}\) of the RIS, the optimal transformation matrix \(\boldsymbol{A}\in\mathbb{C}^{M\times M}\) maximising the SNR expression in (5) and satisfying the DL power constraint \(\mathbb{E}[||\boldsymbol{p}||^{2}]\leq P\) for a positive definite matrix \(\boldsymbol{C}\) is given by
\[\boldsymbol{A}_{\mathrm{opt}}=\eta\;\boldsymbol{Q}^{-1},\quad\text{where}\; \eta=\sqrt{\frac{P}{\mathrm{tr}(\boldsymbol{Q}^{-1})}}. \tag{6}\]
Proof.: The SNR expression in (5) is a positive real quantity, hence, Wirtinger derivatives are used to find \(\boldsymbol{A}\) maximising \(\gamma^{\text{lb}}\), which yields \(\boldsymbol{A}_{\mathrm{opt}}=\eta\;\boldsymbol{Q}^{-1}\). Further, \(\eta\) can be found from the DL power constraint \(\mathrm{tr}(\boldsymbol{A}\boldsymbol{Q}\boldsymbol{A}^{\text{H}})=P\).
Now replacing \(\boldsymbol{A}\) in (5) with the optimal transformation matrix, the lower bound of the SNR expression becomes
\[\gamma^{\text{lb}}=\frac{\eta^{2}\;\mathrm{tr}^{2}\left(\boldsymbol{Q}^{-1} \boldsymbol{C}\right)}{\eta^{2}\;\mathrm{tr}\left(\boldsymbol{Q}^{-1} \boldsymbol{C}\right)+\sigma^{2}}. \tag{7}\]
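For concreteness, the sketch below evaluates the optimal transformation matrix of Theorem 1 and the resulting lower bound in (7); the covariance matrices are random illustrative placeholders rather than the 3GPP model used in the simulations.

```python
import numpy as np

# Illustrative evaluation of A_opt = eta * Q^{-1} (Theorem 1) and the SNR lower
# bound in (7); C and C_n are random placeholder covariances, not the 3GPP model.
rng = np.random.default_rng(0)
M = 4
H = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
C = H @ H.conj().T / M                  # Hermitian positive semi-definite channel covariance
C_n = 0.1 * np.eye(M)                   # spatially white observation-noise covariance
Q = C + C_n
P, sigma2 = 1.0, 0.1

Q_inv = np.linalg.inv(Q)
eta = np.sqrt(P / np.trace(Q_inv).real)
A_opt = eta * Q_inv                      # Eq. (6)

t = np.trace(Q_inv @ C).real
print(eta**2 * t**2 / (eta**2 * t + sigma2))   # gamma^lb of Eq. (7)
```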
**Theorem 2**: The lower bound of the SNR given in (7) increases monotonically with \(\mathrm{tr}\big{(}\mathbf{Q}^{-1}\mathbf{C}\big{)}\) for a spatially white noise covariance matrix \(\mathbf{C}_{\mathrm{n}}=\zeta^{2}\mathbf{I}_{\mathrm{M}}\) with \(\zeta^{2}>0\).
Proof.: Please refer to Appendix A.
Since \(\gamma^{\mathrm{lb}}\) is monotonically increasing with \(\mathrm{tr}\big{(}\mathbf{Q}^{-1}\mathbf{C}\big{)}\), it is sufficient to maximise \(\mathrm{tr}\big{(}\mathbf{Q}^{-1}\mathbf{C}\big{)}\). Rewriting \(\mathbf{Q}^{-1}\mathbf{C}\) as \(\mathbf{I}_{M}-\mathbf{Q}^{-1}\mathbf{C}_{\mathrm{n}}\) along with the assumption of \(\mathbf{C}_{\mathrm{n}}\) to be spatially white, i.e., \(\mathbf{C}_{\mathrm{n}}=\zeta^{2}\mathbf{I}_{\mathrm{M}}\), (P1) can be simplified to
\[\min_{\mathbf{\phi}}\qquad\mathrm{tr}\big{(}\mathbf{Q}^{-1}\big{)}\ \text{s.t.}\quad|\phi_{n}|=1\ \forall\,n=1,\cdots,N.\] (P2)
To solve (P2), we first need to express \(\mathbf{Q}\) as a function of \(\mathbf{\phi}\) explicitly.
### _Computation of the Channel Covariance Matrix_
The channel covariance matrix of the effective channel can be computed as
\[\mathbf{C}=\mathbb{E}\left[\mathbf{h}\mathbf{h}^{\mathrm{H}}\right] =\mathbb{E}\left[(\mathbf{h}_{\mathrm{d}}+\mathbf{T}^{\mathrm{H}}\mathbf{\Phi }\mathbf{r})(\mathbf{h}_{\mathrm{d}}+\mathbf{T}^{\mathrm{H}}\mathbf{\Phi}\mathbf{r})^{\mathrm{H}}\right] \tag{9}\] \[\stackrel{{(a)}}{{=}}\mathbf{C}_{\mathrm{d}}+\mathbb{E} \left[\mathbf{T}^{\mathrm{H}}\mathbf{\Phi}\mathbf{r}\mathbf{r}^{\mathrm{H}}\mathbf{\Phi}^{ \mathrm{H}}\mathbf{T}\right] \tag{10}\]
where \((a)\) follows from the fact that the random variables \(\mathbf{h}_{\mathrm{d}}\), \(\mathbf{T}\) and \(\mathbf{r}\) are mutually independent with zero mean, and \(\mathbf{h}_{\mathrm{d}}\sim\ \mathcal{N}_{\mathrm{C}}(\mathbf{0},\mathbf{C}_{\mathrm{d}})\).
Inserting the expression of \(\mathbf{T}\) from (1), the covariance matrix of the effective channel can be written as
\[\mathbf{C}\stackrel{{(b)}}{{=}}\mathbf{C}_{\mathrm{d}}+\beta\,\mathbb{E }\left[\mathbf{R}_{\mathrm{Tx}}^{1/2}\mathbf{W}^{\mathrm{H}}\mathbf{R}_{\mathrm{RIS}}^{1/ 2,\mathrm{H}}\mathbf{\Phi}\mathbf{C}_{\mathbf{r}}\mathbf{\Phi}^{\mathrm{H}}\mathbf{R}_{\mathrm{ RIS}}^{1/2}\mathbf{W}\mathbf{R}_{\mathrm{Tx}}^{1/2,\mathrm{H}}\right]\]
where \((b)\) follows from the fact that \(\mathbf{r}\) and \(\mathbf{W}\) are independent random variables, and \(\mathbf{r}\sim\mathcal{N}_{\mathrm{C}}(\mathbf{0},\mathbf{C}_{\mathbf{r}})\). Since the entries of \(\mathbf{W}\) are i.i.d. with zero mean and unit variance, and \(\mathbf{\Phi}=\mathrm{diag}(\mathbf{\phi})\), the above expression can be simplified as
\[\mathbf{C} =\mathbf{C}_{\mathrm{d}}+\beta\mathrm{tr}\big{(}\mathbf{R}_{\mathrm{RIS}}\mathbf{\Phi}\mathbf{C}_{\mathbf{r}}\mathbf{\Phi}^{\mathrm{H}}\big{)}\mathbf{R}_{\mathrm{Tx}} \tag{11}\] \[=\mathbf{C}_{\mathrm{d}}+\beta\mathrm{tr}\Big{(}\mathbf{R}_{\mathrm{RIS}}(\mathbf{C}_{\mathbf{r}}\odot\mathbf{\phi}\mathbf{\phi}^{\mathrm{H}})\Big{)}\mathbf{R}_{\mathrm{Tx}} \tag{12}\]
where \(\odot\) denotes the Hadamard product. Using Lemma 1 of Appendix B, the above expression can be rewritten as
\[\mathbf{C}=\mathbf{C}_{\mathrm{d}}+\beta\mathbf{\phi}^{\mathrm{H}}\left(\mathbf{R}_{\mathrm{ RIS}}\odot\mathbf{C}_{\mathbf{r}}^{\mathrm{T}}\right)\mathbf{\phi}\mathbf{R}_{\mathrm{Tx}}. \tag{13}\]
Thus, the covariance matrix of the LS estimate is given by
\[\mathbf{Q}=\mathbf{C}_{\mathrm{d}}+\beta\mathbf{\phi}^{\mathrm{H}}\left(\mathbf{R}_{\mathrm{ RIS}}\odot\mathbf{C}_{\mathbf{r}}^{\mathrm{T}}\right)\mathbf{\phi}\mathbf{R}_{\mathrm{Tx}}+\mathbf{C}_{ \mathrm{n}}. \tag{14}\]
## IV Low-Complexity Algorithms Depending on the Channel Statistics
In this section, we propose two low-complexity algorithms to solve (P2).
### _Algorithm 1: Projected Gradient Descent Method_
The minimisation problem in (P2) can be solved by the iterative projected gradient descent method. The gradient of \(\mathrm{tr}(\mathbf{Q}^{-1})\ \mathrm{w.r.t.}\ \mathbf{\phi}^{*}\) is given by [see (14)]
\[\frac{\partial}{\partial\mathbf{\phi}^{*}}\big{(}\mathrm{tr}(\mathbf{Q}^{-1})\big{)} =-\beta\mathrm{tr}(\mathbf{Q}^{-1}\mathbf{R}_{\mathrm{Tx}}\mathbf{Q}^{-1})(\mathbf{R}_{ \mathrm{RIS}}\odot\mathbf{C}_{\mathbf{r}}^{\mathrm{T}}\big{)}\mathbf{\phi}. \tag{15}\]
The expression of the gradient in (15) depends on \(\mathbf{Q}^{-1}\), which depends on \(\mathbf{\phi}\). This means that the computation of each gradient step would require the update of the \(\mathbf{Q}\) matrix and henceforth, the computation of the inverse. This can become computationally very expensive if the size of the matrix \(\mathbf{Q}\) is large, e.g., as in the case of massive MIMO systems. However, this problem can be easily averted by exploiting the structure of the gradient. The matrix \(\mathbf{Q}^{-1}\) only appears in the term \(\mathrm{tr}(\mathbf{Q}^{-1}\mathbf{R}_{\mathrm{Tx}}\mathbf{Q}^{-1})\). It can be easily observed that the term \(\beta\mathrm{tr}(\mathbf{Q}^{-1}\mathbf{R}_{\mathrm{Tx}}\mathbf{Q}^{-1})\) is a real non-negative quantity which can be included in the step size optimisation, and thus, we do not have to update the \(\mathbf{Q}\) matrix after each gradient step. This significantly reduces the computational complexity. The phase shift update rule can hence be summarised as
\[\mathbf{\phi}\leftarrow\mathbf{\phi}+\kappa\left(\mathbf{R}_{\mathrm{RIS}}\odot\mathbf{C}_{ \mathbf{r}}^{T}\right)\mathbf{\phi} \tag{16}\]
where \(\kappa\) is the optimal step size, which can be computed by the Armijo rule [13]. The new phase-shift vector obtained after every gradient step in (16) should be normalised to satisfy the unit modulus constraints of (P2).
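A compact sketch of this procedure is given below (illustrative only, not from the paper): each iteration applies the update (16) and then projects every entry back onto the unit circle. The constant step size stands in for the Armijo line search of [13], and all names are assumptions.

```python
import numpy as np

def projected_gradient_phases(R_RIS, C_r, phi0, step=1e-2, n_iter=100):
    """Algorithm 1 sketch: ascent along (R_RIS ⊙ C_r^T) phi, then unit-modulus projection."""
    A = R_RIS * C_r.T                 # Hadamard product; fixed over all iterations
    phi = phi0 / np.abs(phi0)         # start on the unit-modulus set
    for _ in range(n_iter):
        phi = phi + step * (A @ phi)  # update rule (16); in practice step comes from the Armijo rule
        phi = phi / np.abs(phi)       # element-wise projection onto |phi_n| = 1
    return phi
```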
### _Algorithm 2: Element-Wise Optimisation_
The objective function in (P2) can be reformulated such that it only depends on the \(n\)-th element of \(\mathbf{\phi}\), i.e., \(\phi_{n}\), and the remaining \(N-1\) elements are kept fixed in a particular iteration step. To this end, the final expression of \(\mathbf{Q}\) from (14) can be rearranged such that it explicitly depends on \(\phi_{n}\).
\[\mathbf{Q}=\mathbf{C}_{\mathrm{d}}+\beta\bigg{(}\sum_{i=1}^{N}\sum_{j=1}^{N}\phi_{i}^{*} \phi_{j}\big{[}\mathbf{R}_{\mathrm{RIS}}\odot\mathbf{C}_{\mathbf{r}}^{\mathrm{T}}\big{]}_{i,j }\bigg{)}\mathbf{R}_{\mathrm{Tx}}+\mathbf{C}_{\mathrm{n}}.\]
Rearranging the above equation, we get
\[\mathbf{Q}=\mathbf{D}+\phi_{n}\mathbf{B}_{n}+\phi_{n}^{*}\mathbf{B}_{n}^{\mathrm{H}} \tag{17}\]
where the matrices \(\mathbf{D}\) and \(\mathbf{B}_{n}\) are independent of \(\phi_{n}\), and are given by
\[\mathbf{D} =\mathbf{C}_{\mathrm{d}}+\beta\sum_{\begin{subarray}{c}i=1\\ i\neq n\end{subarray}}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq n\end{subarray}}^{N}\phi_{i}^{*}\phi_{j}\big{[}\mathbf{R}_{\mathrm{RIS}} \big{]}_{i,j}\big{[}\mathbf{C}_{\mathbf{r}}\big{]}_{j,i}\mathbf{R}_{\mathrm{Tx}} \tag{18}\] \[+\beta\big{[}\mathbf{R}_{\mathrm{RIS}}\big{]}_{n,n}\big{[}\mathbf{C}_{\mathbf{r }}\big{]}_{n,n}\mathbf{R}_{\mathrm{Tx}}+\mathbf{C}_{\mathrm{n}}\] \[\mathbf{B}_{n} =\beta\sum_{\begin{subarray}{c}i=1\\ i\neq n\end{subarray}}^{N}\phi_{i}^{*}\big{[}\mathbf{R}_{\mathrm{RIS}}\big{]}_{i,n} \big{[}\mathbf{C}_{\mathbf{r}}\big{]}_{n,i}\mathbf{R}_{\mathrm{Tx}}.\]
The optimisation problem in (P2) can now be reduced to
\[\min_{\phi_{n}}\qquad\mathrm{tr}\big{(}\mathbf{Q}^{-1}\big{)}\ \ \text{s.t.}\quad|\phi_{n}|=1.\] (P3)
The Lagrangian function for the above problem reads as
\[\mathcal{L}=\operatorname{tr}(\mathbf{Q}^{-1})+\mu(\phi_{n}\phi_{n}^{*}-1) \tag{20}\]
where \(\mu\in\mathbb{R}\) is the dual variable corresponding to the unit modulus constraint in (P3). Solving \(\dfrac{\partial\mathcal{L}}{\partial\phi_{n}^{*}}\doteq 0\), we get a closed-form update rule of \(\phi_{n}\) as follows
\[\phi_{n}\leftarrow\dfrac{\operatorname{tr}(\bar{\mathbf{Q}}^{-1}\mathbf{B}_{n}^{\text {H}}\bar{\mathbf{Q}}^{-1})}{|\operatorname{tr}(\bar{\mathbf{Q}}^{-1}\mathbf{B}_{n}^{\text {H}}\bar{\mathbf{Q}}^{-1})|} \tag{21}\]
where \(\bar{\mathbf{Q}}\) denotes the value of \(\mathbf{Q}\) from the previous iteration. In this approach, we do not need to find the optimal step size as in Algorithm 1. However, after each update step, the matrices \(\mathbf{Q}\) and \(\mathbf{B}_{n}\) need to be updated, which would be computationally expensive for large \(M\), as in the case of massive MIMO systems.
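The element-wise sweep can be sketched as follows (again illustrative, reusing the `ls_estimate_covariance` helper from the earlier sketch; the number of sweeps and all names are assumptions):

```python
import numpy as np

def elementwise_phase_update(C_d, R_Tx, R_RIS, C_r, C_n, beta, phi, n_sweeps=5):
    """Algorithm 2 sketch: cyclic closed-form updates (21) of the individual phase shifts."""
    phi = np.array(phi, dtype=complex)
    N = len(phi)
    for _ in range(n_sweeps):
        for n in range(N):
            # B_n (defined after (17)): coefficient matrix of phi_n in Q
            b = beta * sum(np.conj(phi[i]) * R_RIS[i, n] * C_r[n, i]
                           for i in range(N) if i != n)
            B_n = b * R_Tx
            Q_bar = ls_estimate_covariance(C_d, R_Tx, R_RIS, C_r, C_n, phi, beta)
            Qi = np.linalg.inv(Q_bar)
            num = np.trace(Qi @ B_n.conj().T @ Qi)
            phi[n] = num / np.abs(num)   # closed-form update rule (21)
    return phi
```

Each inner update recomputes \(\bar{\mathbf{Q}}\) and \(\mathbf{B}_{n}\), which is where the repeated \(M\times M\) inversions mentioned above enter the cost.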
## V Results
In this section, numerical results are provided to validate the effectiveness of the proposed algorithms. The system consists of one BS equipped with \(M=4\) antennas, serving one single-antenna user. The RIS is equipped with \(N=40\) reflecting elements. The setup is illustrated in Fig. 1. The user is placed at a distance \(D\,\text{m}\) from the BS. Each of the channels is generated according to its distribution as defined in Section II. The covariance matrix of each channel is generated according to the urban micro channel model described in the 3GPP technical report [14]. For \(D=20\,\text{m}\), the convergence plot of the two proposed algorithms is shown in Fig. (2). The convergence analysis reveals that both algorithms converge in a few iterations. The element-wise optimisation algorithm converges in less than 4 iterations, and the gradient descent based algorithm requires slightly more iterations to converge. It is also observed that the low complexity gradient descent algorithm converges to a similar value as the element-wise optimisation method.
The user's rate is taken as the performance metric in Fig. (3), which is computed with the different algorithms and compared over the transmit power levels \(P\). The average rate of the user is given by \(\mathbb{E}\left[\log_{2}(1+|\mathbf{h}^{\text{H}}\mathbf{p}|^{2}/\sigma^{2})\right]\), where \(\sigma^{2}\) is set to 1. The estimation noise covariance matrix \(\mathbf{C}_{n}\) is assumed to be the identity matrix. The rate is averaged over 100 covariance matrices, which are generated by varying the distance \(D\) between \(15\,\text{m}\) and \(60\,\text{m}\) and by randomly varying the path loss factors of the scatterers. For each of the generated covariance matrices, the user's instantaneous rate is averaged over 1000 channel realisations. The performance of the proposed algorithms is compared with the following baselines: (i) a system without RIS with the bilinear precoders as the transmit filters [9], (ii) a system with RIS where the phase shifts are chosen randomly and the bilinear precoders are used as the transmit filters, (iii) the SDR approach of [3] for the genie-aided setup of perfectly known CSI, (iv) the SDR approach of [3] used for the imperfect CSI setup, (v) the algorithm in [7] based on the statistical channel knowledge, and (vi) the two-timescale (TTS) approach of [5]. Fig. (3) compares the user's rate for the different schemes with respect to the transmit power \(P\) in dB. The topmost curve represents the upper bound of the rate that can be achieved for the considered system setup when the CSI is perfectly known, and the optimisation of filters and phase shifts is performed in every channel coherence interval with the SDR method [3]. The SDR algorithm of [3] is then employed in an imperfect CSI setup and the user's rate degrades by approximately 9 dB. The simulation results reveal that the two proposed algorithms are very similar in performance. Moreover, their performance gap to the SDR approach for the imperfect CSI scenario is small, despite the fact that these algorithms are computationally much less expensive as the filters and the phase shifts do not need to be optimised in every channel coherence interval. Furthermore,
Figure 1: Simulation Setup
Figure 3: User’s Rate vs Transmit Power \(P\) in dB
Figure 2: Convergence Plot for \(D\) = \(20\,\text{m}\)
these algorithms based on the maximisation of the lower bound of the user's rate considering the worst-case noise bound [8] outperform the AO algorithm in [7], which maximises the upper bound of the rate obtained through Jensen's inequality. Additionally, we extend the algorithms to the TTS approach of [5]. The algorithm in [5] employs the stochastic successive convex approximation (SSCA) method [15] to compute the optimal phase shifts based on the channel statistics. In the TTS approach, the optimal phase shifts obtained by (16), (21) or the SSCA method [5] are kept fixed in the coherence interval of the covariance matrices and the filters are updated in every channel coherence interval with the matched filter (MF). It is observed that the TTS approach employing Algorithm 2 outperforms the algorithm in [5] for our system setup, i.e., the performance of the TTS optimisation is boosted by the method underlying Algorithm 2 and it offers the best performance among other approaches involving the statistical channel knowledge in Fig. (3).
## VI Conclusion
In this work, we have presented algorithms for single-user RIS-aided MISO systems based on the bilinear precoders. The simulation results illustrate that a performance gain can be achieved by optimising the phase shifts of the RIS, even when the actual CSI is not available, by exploiting the second-order statistics. This significantly reduces the training overhead as the channels do not need to be estimated in every channel coherence interval and the phase shifts of the RIS do not need to be updated frequently. The extension of the algorithms to the multi-user setup will be presented in our next work.
## VII Appendix
### _Proof of Theorem 2_
With \(\eta=\sqrt{\frac{P}{\operatorname{tr}(\mathbf{Q}^{-1})}}\), \(\gamma^{\mathrm{lb}}\) can be rewritten as
\[\gamma^{\mathrm{lb}}=\frac{\operatorname{tr}^{2}(\mathbf{Q}^{-1}\mathbf{C})}{ \operatorname{tr}(\mathbf{Q}^{-1}\mathbf{C})+\sigma^{2}\,\operatorname{tr}(\mathbf{Q}^{- 1})/P}\,. \tag{22}\]
Assuming \(\mathbf{C}_{\mathrm{n}}=\zeta^{2}\operatorname{\mathbf{I}}_{M}\), where \(\zeta^{2}>0\), the term \(\operatorname{tr}(\mathbf{Q}^{-1})\) can be written as \(\operatorname{tr}\bigl{(}\mathbf{Q}^{-1}(\mathbf{Q}-\mathbf{C})\bigr{)}/\zeta^{2}\), which, in fact, equals to \(\Bigl{(}M-\operatorname{tr}\bigl{(}\mathbf{Q}^{-1}\mathbf{C}\bigr{)}\Bigr{)}/\zeta^{2}\). Plugging this into (22), and replacing the term \(\operatorname{tr}\bigl{(}\mathbf{Q}^{-1}\mathbf{C}\bigr{)}\) by \(x\) for the ease of notation, the lower bound of the SNR can be expressed as a function of \(x\) by
\[\gamma^{\mathrm{lb}}=f(x)=\frac{x^{2}}{\left(1-\frac{\sigma^{2}}{P\zeta^{2}} \right)x+\frac{\sigma^{2}M}{P\zeta^{2}}}. \tag{23}\]
Replacing \(\left(1-\frac{\sigma^{2}}{P\zeta^{2}}\right)\) by \(k_{1}\) and \(\frac{\sigma^{2}M}{P\zeta^{2}}\) by \(k_{2}\), we get
\[f(x)=\frac{x^{2}}{k_{1}\,x+k_{2}}\,\text{and}\,f^{\prime}(x)=\frac{k_{1}x^{2}+2 \,k_{2}\,x}{(k_{1}\,x+k_{2})^{2}}. \tag{24}\]
It can be easily observed that \(x=\operatorname{tr}\bigl{(}\mathbf{Q}^{-1}\mathbf{C}\bigr{)}\) is always positive because \(\mathbf{Q}\) and \(\mathbf{C}\) are positive definite matrices. Hence, we are interested in the sign of the term \(k_{1}x+2\,k_{2}\) to determine the sign of \(f^{\prime}(x)\). Also, note that \(k_{2}>0\) since \(M,\,P,\,\zeta^{2},\,\sigma^{2}>0\).
Case 1: \(P\zeta^{2}-\sigma^{2}\geq 0\), i.e., \(k_{1}\geq 0\).
It is easy to verify that \(f^{\prime}(x)>0\) for this case.
Case 2: \(P\zeta^{2}-\sigma^{2}<0\), i.e., \(k_{1}<0\).
\[k_{1}x+2\,k_{2} =\left(1-\frac{\sigma^{2}}{P\zeta^{2}}\right)\operatorname{tr} \bigl{(}\mathbf{Q}^{-1}\mathbf{C}\bigr{)}+\frac{2\,\sigma^{2}M}{P\zeta^{2}}\] \[\overset{(a)}{=}M+\frac{\sigma^{2}M}{P\zeta^{2}}-k_{1}\operatorname {tr}\bigl{(}\mathbf{Q}^{-1}\mathbf{C}_{\mathrm{n}}\bigr{)}>0 \tag{25}\]
where \((a)\) follows from \(\mathbf{C}=\mathbf{Q}-\mathbf{C}_{\mathrm{n}}\). This shows that \(f^{\prime}(x)\ >\ 0\) holds for this case too. Hence, \(f(x)\) is always monotonically increasing in \(x\). This proves Theorem 2.
### _Lemma 1_
For any three matrices \(\mathbf{A}\), \(\mathbf{B}\) and \(\mathbf{C}\) of the same dimensions, we have
\[\operatorname{tr}\Bigl{(}\mathbf{A}(\mathbf{B}\odot\mathbf{C})\Bigr{)}= \operatorname{tr}\Bigl{(}(\mathbf{A}\odot\mathbf{B}^{\mathrm{T}})\mathbf{C}\Bigr{)}. \tag{26}\]
Proof.: \[\operatorname{tr}\Bigl{(}\mathbf{A}(\mathbf{B}\odot\mathbf{C})\Bigr{)} =\sum_{i}\big{[}\mathbf{A}(\mathbf{B}\odot\mathbf{C})\big{]}_{i,i}\] \[=\sum_{i}\bigg{(}\sum_{k}\big{[}\mathbf{A}\big{]}_{i,k}\,\big{[}\mathbf{B} \big{]}_{k,i}\,\big{[}\mathbf{C}\big{]}_{k,i}\bigg{)}\] \[\operatorname{tr}\Bigl{(}(\mathbf{A}\odot\mathbf{B}^{\mathrm{T}})\mathbf{C} \Bigr{)} =\sum_{i}\Big{[}(\mathbf{A}\odot\mathbf{B}^{\mathrm{T}})\mathbf{C}\big{]}_{i,i}\] \[=\sum_{i}\bigg{(}\sum_{k}\big{[}\mathbf{A}\big{]}_{i,k}\,\big{[}\mathbf{B} \big{]}_{k,i}\,\big{[}\mathbf{C}\big{]}_{k,i}\bigg{)}\]
Hence, L.H.S. = R.H.S., and this proves Lemma 1.
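The identity is also easy to confirm numerically; the short check below (ours, not part of the paper) draws random complex square matrices and compares both sides.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A, B, C = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) for _ in range(3))

lhs = np.trace(A @ (B * C))      # tr(A (B ⊙ C))
rhs = np.trace((A * B.T) @ C)    # tr((A ⊙ B^T) C)
assert np.isclose(lhs, rhs)
```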
|
2309.14152 | Assessment of Brightness Mitigation Practices for Starlink Satellites | Photometric characteristics for all models of Starlink satellites launched to
date are reviewed. The Original design that lacked brightness mitigation is the
most luminous. SpaceX installed a sunshade on the VisorSat model which reduced
its luminosity by a factor of 3. The visor was omitted on Post-VisorSat
spacecraft with laser communication which followed, but the company added a
reflective layer which resulted in an intermediate brightness between Original
and VisorSat. SpaceX is applying advanced brightness mitigation techniques to
their Generation 2 Starlink satellites which are larger. The first of these,
called Minis, are dimmer than Gen 1 Starlinks despite their greater size.
Photometric observations verify that brightness mitigation efforts employed by
SpaceX reduce spacecraft luminosity substantially. However, the satellites
still have some negative impact on astronomical observations and the very large
satellites planned for later in Gen 2 may interfere more seriously. | Anthony Mallama, Andreas Hornig, Richard E. Cole, Scott Harrington, Jay Respler, Ron Lee, Aaron Worley | 2023-09-25T14:05:47Z | http://arxiv.org/abs/2309.14152v3 | ###### Abstract
Photometric characteristics for all models of Starlink satellites launched to date are reviewed. The Original design that lacked brightness mitigation is the most luminous. SpaceX installed a sunshade on the VisorSat model which reduced its luminosity by a factor of 3. The visor was omitted on Post-VisorSat spacecraft with laser communication which followed, but the company added a reflective layer which resulted in an intermediate brightness between Original and VisorSat. SpaceX is applying advanced brightness mitigation techniques to their Generation 2 Starlink satellites which are larger. The first of these, called Minis, are dimmer than Gen 1 Starlinks despite their greater size. Photometric observations verify that brightness mitigation efforts employed by SpaceX reduce spacecraft luminosity substantially. However, the satellites still have some negative impact on astronomical observations and the very large satellites planned for later in Gen 2 may interfere more seriously.
**Assessment of Brightness Mitigation Practices for Starlink Satellites**
**Anthony Mallama\({}^{\ast}\)\({}^{\ast}\), Andreas Hornig\({}^{\ast}\), Richard E. Cole,**
**Scott Harrington, Jay Respler\({}^{\ast}\), Ron Lee and Aaron Worley**
**2023 October 1**
* Correspondence: [email protected]
\({}^{1}\)IAU - Centre for the Protection of Dark and Quiet Skies from Satellite Constellation Interference
\({}^{2}\) University of Stuttgart, Germany
**Keywords:** starlink, brightness mitigation, photometry
## 1 Introduction
Satellite constellations are beginning to impact the work of professional astronomers as reported by Barentine et al. (2023). They point out that space objects leave streaks on images which can reduce their scientific potential. Additionally, smaller objects elevate the diffuse brightness of the sky. The authors compute the potential increase in sky brightness and address the corresponding loss of astronomical information.
Amateur astronomers and others who appreciate the aesthetics and cultural significance of the night sky are also adversely affected by satellites as discussed by Mallama and Young (2021). Spacecraft brighter than magnitude 6 are distractions visible to the unaided eye, while those brighter than 7 impact professional research.
SpaceX operates the largest satellite constellation with more than 4,000 Starlink spacecraft already in orbit and regulatory approval for many more. The initial launch of 60 satellites on one rocket in 2019 raised concerns because of their brightness. SpaceX responded by making several changes to the spacecrafts' physical designs and to their satellite operations. This paper reviews the brightness mitigation strategies and the corresponding luminosity changes recorded by observers.
Section 2 defines the terminology used in this paper. Section 3 summarizes the brightness mitigation techniques implemented by SpaceX. Section 4 describes the methods of photometry used to record satellite magnitudes. Section 5 characterizes the luminosity of Starlink satellites as derived from observed magnitudes. Section 6 describes numerical modeling of spacecraft brightness and illustrates how the models fit photometric observations. Section 7 discusses the impact of Starlink satellites on astronomy and addresses international efforts to mitigate the negative effects of all satellite constellations. Our conclusions are given in Section 8.
## 2 Definitions and abbreviations
The terms elevation, height and range are differentiated as follows in this paper. Elevation is the angular distance of a satellite above the Earth's horizon measured in degrees. Height refers to the vertical distance of a satellite above the Earth's surface in km. Range is the distance between an observer and a spacecraft in km. The term altitude is not used here to avoid confusion.
The observed brightness of a satellite is its apparent magnitude. That luminosity may be adjusted to a
standard distance of 1000 km by applying the inverse square law of light. The distance-adjusted brightness, or 1000-km magnitude in this paper, is useful for comparing satellite luminosities measured at different ranges. Magnitudes may also be adjusted to 550 km which was the orbital height of early Starlink satellites. The 550-km values are referred to as characteristic magnitudes because they correspond to the brightness of many Starlink satellites when they are near the observer's zenith.
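As a concrete illustration (ours, not the authors'), the inverse-square adjustment can be written as a one-line function; it reproduces the roughly 1.3-magnitude offset between the 1000-km and 550-km scales used throughout the paper.

```python
import math

def adjust_magnitude(m_apparent, range_km, reference_km=1000.0):
    """Scale an apparent magnitude to a reference distance with the inverse square law."""
    return m_apparent - 5.0 * math.log10(range_km / reference_km)

# Offset between the 1000-km and 550-km scales: 5 * log10(1000 / 550) ≈ 1.30 mag
offset = 5.0 * math.log10(1000.0 / 550.0)
```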
Statistical means sometimes include variational parameters. The standard deviations, abbreviated as SD, represent the scatter about the mean. The standard deviation of the mean, SDM, is its formal uncertainty.
A bidirectional reflectance function defines how light is reflected from a surface. The BRDF is used in conjunction with the physical layout of a satellite's component parts. In the case of Starlink spacecraft, the main components are its antenna panel and solar array as shown in Figure 1. Parameters of the BRDF model may be adjusted to fit observed magnitudes.
Phase angle is the arc measured at the satellite between directions to the Sun and to the observer. This angle is used to characterize satellite brightness and it leads to the phase function which is brightness as the dependent variable of phase angle.
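A phase angle can be computed directly from the satellite, Sun and observer positions expressed in a common frame; the sketch below is illustrative and assumes the three position vectors are available.

```python
import numpy as np

def phase_angle_deg(sat_pos, sun_pos, obs_pos):
    """Angle at the satellite between the directions to the Sun and to the observer, in degrees."""
    to_sun = sun_pos - sat_pos
    to_obs = obs_pos - sat_pos
    cos_a = np.dot(to_sun, to_obs) / (np.linalg.norm(to_sun) * np.linalg.norm(to_obs))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```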
Orbit-raise is the phrase used by SpaceX in referring to satellites that are ascending from their injection heights to higher orbits. Parking orbits are where low height satellites wait for precession to change their orbital plane. On-station satellites are those which have attained their final heights. Spacecraft attitude refers to the orientation of the satellite in space especially with respect to the Sun and the observer. Lastly, SpaceX uses the term conops to mean 'concept of operations'.
## 3 Brightness mitigation practices
This Section reviews the strategies employed by SpaceX to dim Starlink satellites. The corresponding changes of observed brightness are also mentioned qualitatively. Quantitative photometry is addressed in Sections 4 and 5.
The Original model of Starlink spacecraft consisted of an antenna panel measuring 1.3 x 2.8 m and a solar array 2.8 x 8.1 m, with a total surface area of 26.32 m\({}^{2}\). These dimensions remained unchanged until the second generation of spacecraft described later in this Section.
No brightness mitigation measures were implemented for the Original satellites because their impact on astronomy was not foreseen. In 2020 SpaceX applied a low albedo coating to a test satellite named DarkSat. Tregloan-Reed et al. (2020) and Halferty et al. (2022) found it to be dimmer but Takahashi et al. (2020) reported that it was brighter. In any case, the spacecraft absorbed too much sunlight which caused thermal problems and this approach was abandoned.
The next design change was incorporated into the VisorSat model of Starlink. The 'visor' refers to a shade that prevents sunlight from reaching the underside of the antenna panel which faces observers on the ground. This modification reduced the brightness of satellites on-station substantially (Mallama 2021a and 2021b, Krantz et al. 2023 and Halferty et al. 2022). However, SpaceX stopped attaching visors on the next model of Starlink satellites which used laser communication because they interfered with the beam.
The spacecraft model that followed VisorSat is referred to herein as Post-VisorSat. While these satellites lacked the Sun shade, SpaceX applied a dielectric reflective layer to the bottom of the antenna panel, as shown in Figure 2, which directed sunlight into space rather than allowing it to scatter toward the ground. The Post-VisorSat spacecraft on-station were found to be intermediate in brightness between Original and VisorSat satellites by Mallama and Respler (2022) and by Krantz et al. (2023).
Additionally, SpaceX changed the roll angle for VisorSat and Post-VisorSat spacecraft in order to mitigate their brightness. This 'knife-edge' attitude, which was applied to satellites in orbit-raising, placed the Sun in the plane of their flat surfaces. Mallama and Respler (2023) found that knife-edge configuration reduced luminosity in the early mission phases.
Fig. 1: The horizontal component of Starlink is the antenna panel and the vertical one is the solar array. Illustration from SpaceX.
SpaceX began launching their second-generation Starlink spacecraft in 2023. The first model is called Mini because it is smaller than the full-sized Gen 2 satellites which will follow. The antenna panels of Mini satellites measure 2.7 x 4.1 m and their two solar panels are each 4.1 x 12.8 m. The total surface area of 116.0 m\({}^{2}\) is more than four times that of Gen 1 spacecraft.
Surface area usually correlates with brightness. So, astronomers were especially concerned about the luminosity of Gen 2 spacecraft. However, SpaceX made two changes to reduce the brightness of these satellites. First, they improved the mirror-like reflective layer on the antenna panel so that more sunlight is directed into space. Second, they developed a conops similar to knife-edge and implemented it for on-station satellites. This configuration points the plane of the solar arrays toward the Earth's limb when the satellites are near the terminator. Thus, observers only see their dark sides as shown in Figure 3. Mallama et al. (2023) found that the mitigation strategy is effective in reducing the brightness of Mini satellites.
This Section has addressed brightness mitigation strategies implemented by SpaceX. The next Section describes the methods used to measure Starlink satellite magnitudes. In Section 5 we examine the observational results for each spacecraft model more thoroughly.
## 4 Observation methods
Starlink brightness measurements have been acquired by several different techniques. These include visual perception by the human eye, recordings made with a digital camera used in video mode, output from a wide-field 9 channel system with sCMOS sensors, and telescopic observations recorded by solid state sensors.
Visual observers record Starlink magnitudes by comparing their brightness to nearby reference stars. Angular proximity between the spacecraft and those stellar objects accounts for variations in sky transparency and sky brightness. The perceptual method of observing is described more thoroughly by Mallama (2022).
Video observations were recorded with a Sony Alpha A7s-I camera and a Sony FE 1.8/50 lens. The Astrometry.net application was run on a Raspberry Pi 4 device for extracting information about the stars. Specially written Python software was executed on a Windows computer to perform the overall measurements and data processing. Magnitudes from video frames were averaged over five-second time intervals to form a mean value. This system is the prototype optical ground station (OGS) for the Distributed Ground Station Network (DGSN) being developed at the University of Stuttgart. The DGSN project was started within the SmallSat-Design-Studies at the Institute of Space Systems (IRS). It was part of several annual Google and ESA Summer of Code campaigns. The DGSN is a PhD research topic at the Institute for Photogrammetry (IFP) at the University of Stuttgart.
Observations were also gathered from the database of the MMT9 system described by Karpov et al. (2015) and Beskin et al. (2017). This robotic observatory consists of nine 71 mm diameter f/1.2 lenses and 2160 x 2560 sCMOS sensors. The detectors are sensitive to the
Figure 3: Observers only see the dark side of solar arrays. Illustration from SpaceX.
Figure 2: Reflective surfaces direct sunlight away from observers on the ground. Illustration from SpaceX.
visible spectrum from red through blue. We collected their apparent magnitudes along with date/time values and computed other quantities needed for analysis.
The methods described above were utilized by the authors of this paper to obtain observational data, and magnitudes collected from the MMT9 database have also been used in our studies. The magnitude scales for all these techniques closely agree. MMT9 values are within 0.1 magnitude of the V-band based on information in a private communication from S. Karpov as discussed by Mallama (2021). The video magnitudes match visual and V-band results closely because the camera is panchromatic in visible light. That agreement is shown empirically by Mallama et al. (2023).
Additional observations have been reported by other groups. Their instruments include the Pomenis LEO Satellite Photometric Survey Telescope at Mt. Lemmon in Arizona USA (Krantz et al., 2023), the Chakana 0.6-m telescope in Chile (Tregloan-Reed et al., 2020), the Stingray prototype consisting of a telephone lens and CMOS sensor also located in Arizona (Halferty et al., 2022), the Zwicky Transit Facility which uses the Schmidt telescope at Palomar Observatory (Mroz et al., 2022), the Plaskett 1.6 m telescope of the Dominion Astrophysical Observatory (Boley et al., 2022), the SCUDO telescope in Italy (Hossein et al., 2022) and an ensemble of eight different telescopes (Takahashi et al., 2023).
## 5 Empirical brightness characterization
This Section characterizes the brightness of all four models of Starlink satellites that have been launched to date. Mean magnitudes, phase functions and brightness surges are discussed.
### Original design is brightest
The first photometric survey of Starlink satellites was performed by McDowell (2020) using visual magnitudes from the SeeSat email archive. He found that magnitudes spanned 'from 3 to 7 with most between visual mag \(5.5\pm 0.5\)' for satellites on-station at 550 km.
A follow-up study combining visual magnitudes from SeeSat with V-band magnitudes from MMT9 was conducted by Mallama (2020). The 830 luminosities for on-station satellites were adjusted to the standard 1000-km distance. The mean of adjusted magnitudes was 5.93 +/-0.67 +/-0.02, where the first variational quantity is the SD and the second is the SDM. When the mean 1000-km magnitude is re-adjusted to 550 km, which is the height of those on-stations spacecraft, the characteristic magnitude is 4.63.
Mallama also reported on brightness surges or 'flares'. Very bright flares spanning from magnitude -3 to -8 were reported on 8 occasions for orbit-raising satellites between 380 and 425 km. The Original design of Starlink satellites is still the brightest of all models in terms of their mean magnitudes and their flares.
### VisorSat is Fainter than Original
SpaceX added a visor to this model in order to prevent sunlight from reaching the underside of the antenna panel which faces observers on the ground. Several studies quantified the effectiveness of this brightness mitigation.
Takahashi et al. (2023) recorded 19 observations of the first VisorSat spacecraft and 12 of Original design satellite Starlink-1113, each in 8 filters. They found that VisorSat was generally dimmer than the other spacecraft.
Halferty et al. (2022) recorded 363 GAIA G magnitudes of Original and VisorSat spacecraft. Their results indicate that the brightness mitigation applied to VisorSats dimmed them by an average of 0.9 magnitudes or a luminosity factor of 2.3.
Mallama (2021a) analyzed 430 visual and MMT9 magnitudes for on-station VisorSats. The mean of 1000-km mags was 7.22 +/- 0.85 +/- 0.04. Adjustment to the 550 km on-station height of these spacecraft indicated a characteristic mag of 5.92. The difference between these results and those for Original design (Mallama, 2020) is 1.29 magnitudes which corresponds to a factor of 3.2 in dimming.
In a large-scale study of MMT9 data, Mallama (2021b) analyzed more than 60,000 VisorSat magnitudes and over 40,000 Original mags for on-station spacecraft. The mean of 1000-km magnitudes was 7.21 +/- 0.89 +/- 0.01 for VisorSats and 5.89 +/- 0.46 +/- 0.01 for Originals. The characteristic magnitudes at a distance of 550-km are 5.91 and 4.59. The difference of 1.32 magnitudes implies that VisorSats were dimmer by a factor of 3.3.
This study also compared the size and frequency of flare events of these two models. The light curve of a large flare is shown in Figure 4.
The data in Table 1 indicate that VisorSats produce more flares than Originals. The mean intervals between flares exceeding 0.5 magnitude were 129 seconds for VisorSats and 622 seconds for Originals. The
percentage of the elapsed time spent above threshold amplitudes of 0.5, 1.0 and 2.0 magnitudes are also listed in the Table. They vary from 0.0% for flares of Original satellites exceeding 1.0 magnitude to 2.8% for VisorSat flares of 0.5 mag.
Finally, Hossein et al. (2022) obtained 571 RGB magnitudes for Original and VisorSat spacecraft. They analyzed the data as a function of satellite heights, ranges and other parameters. However, the results did not distinguish between Originals and VisorSats. So, no brightness comparison can be reported here.
### Post-VisorSats are intermediate in brightness
When SpaceX added lasers to Starlink satellites they stopped including visors because these structures blocked the light beams. The omission of visors would have returned the brightness of Post-VisorSat spacecraft to approximately that of Originals. However, the company added a dielectric layer (Figure 2) to the bottom of the antenna panel for brightness mitigation. This mirror-like surface directed sunlight into space rather than allowing it to scatter toward observers on the ground.
Mallama (2022b) analyzed 58 visual magnitudes for on-station Post-VisorSats and 44 for VisorSats recorded by J. Respler in 2022. After adjustment for distance the Post-VisorSat spacecraft averaged 0.5 mags brighter than VisorSat. Nevertheless, they were 0.8 mags fainter than the Original design.
### Comparison of all three models from Generation 1
Mallama and Respler (2022) analyzed a uniform set of visual magnitudes which they had recorded for on-station Original design, VisorSat and Post-VisorSat spacecraft. Figure 5 demonstrates that Original is the brightest followed by Post-VisorSat and VisorSat. A more recent set of video magnitudes for all three Gen 1 models, also shown in the figure, indicates the same ordering of brightness.
Krantz et al. (2023) reported findings similar to Mallama and Respler (2022). Their median apparent magnitudes for Original Design, VisorSat and Post-VisorSat are 5.72, 6.87 and 6.15, and the corresponding interdecile ranges span 2.58, 2.90 and 2.59 magnitudes, respectively. They point out that the brightness distribution is not merely the result of statistical randomness.
An important aspect of the phase functions shown in Figure 5 is their concave upwards curvature. High luminosity at small phase angles is expected because the satellites are nearly opposite the Sun from the observer and so are almost fully lit. However, the brightness at large phase angles occurs when the spacecraft are between the Sun and the observer. In that case the high luminosity indicates forward scattering from back-lit components.
Krantz et al. (2023) reported excess brightness for satellites 'at mid-elevations opposite the Sun with an additional hot spot at low solar elongation above the below-horizon Sun'. These areas of the sky are equivalent to low and high phase angles, respectively. The great luminosity at high phase angles is due to satellites reflecting light from the dayside of the Earth. This phenomenon is discussed more fully in the next section which describes BRDF modeling.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Mean Interval & \multicolumn{3}{c}{Time Percentage} \\ & (seconds) & \multicolumn{3}{c}{(at amplitude)} \\ \cline{3-5} & & 0.5 & 1.0 & 2.0 \\ \hline Original & 622 & 0.4 & 0.0 & 0.0 \\ VisorSat & 129 & 2.8 & 1.0 & 0.1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Flare amplitude and frequency
Figure 4: A flare of Starlink-1538 recorded by MMT9 on 2021 May 2. Illustration from Mallama (2021b).
Mallama and Respler (2023) also examined the effectiveness of roll angle adjustment in dimming different models of Gen 1 satellites. SpaceX developed this knife-edge technique which places the Sun in the plane of flat surfaces on the satellites for brightness mitigation. The company applied it during orbit-raising to VisorSat and Post-VisorSat spacecraft but not in time for Originals. Roll angle adjustment was found to reduce distance-adjusted brightness by a factor of 10 as illustrated in Figure 6.
### Gen 2 Mini satellites are largest and faintest
Mini satellites have a surface area of 116 m\({}^{2}\) which is more than 4 times that of Gen 1 spacecraft. They are called 'Mini' because regular Gen 2 spacecraft will be even larger. The increased size concerned astronomers because bigger satellites are usually brighter. However, SpaceX instituted an aggressive strategy for brightness mitigation to compensate for the larger dimensions. They improved the dielectric layer on the bottom of the antenna panel which directed more sunlight back into space (Figure 2). They also developed a new conops, similar to knife-edge, for on-station spacecraft where the planes of the solar arrays point to the Earth's limb (Figure 3) when satellites are near the terminator. Observers on the ground only see the dark sides of the arrays in this attitude.
Mallama et al. (2023) found that mitigation reduced the brightness of on-station Mini satellites when compared to spacecraft observed during early mission phases without mitigation. The means of apparent magnitudes for mitigated spacecraft along with their SDs and SDMs were 7.06 +/- 0.91 +/- 0.10 and the values for magnitudes adjusted to 1000-km distance were 7.87 +/- 0.79 +/- 0.09. The corresponding statistics for satellites recorded during early mission phases were
Fig. 5: Individual magnitudes and best-fit quadratic phase functions for the three models of Gen 1 Starlink satellites illustrate that Original design is brightest and VisorSat is faintest over most of the observed phase angles. Visual data are plotted in the panel on the top and video data are on the bottom. Subtract 1.3 magnitudes to adjust to 550-km.
Fig. 6: The knife-edge technique of brightness mitigation reduced luminosity for apparent and for distance-adjusted magnitudes. Illustration from Mallama and Respler (2023).
3.97 +/- 1.96 +/- 0.09 and 5.08 +/- 1.70 +/- 0.08. The difference of distance-adjusted means of 2.79 magnitudes indicated that mitigated satellites are more than 10 times fainter (Figure 7).
_Fig. 7. The distribution of distance-adjusted luminosity for satellites with and without brightness mitigation. Illustration from Mallama et al. (2023)._
More recently the authors have concentrated their observations on Mini satellites at small and large phase angles. These magnitudes were needed in order to fully parameterize the BRDF model discussed in Section 6. The phase function in Figure 8 demonstrates that Minis are bright at small angles and even brighter at large angles relative to mid-range angles.
On 2023 July 14 SpaceX informed our team that they were experimenting with off-pointing the solar arrays during orbit-raising for additional brightness mitigation of the Mini satellites. So, we now distinguish between on-station and orbit-raising mitigation as well as 'no mitigation'. The magnitude distribution between these three modes is shown in Figure 9. The unmitigated satellites are brightest by far, while the luminosities of on-station and orbit-raising spacecraft are much reduced.
_Fig. 8. The phase function for Mini satellites illustrates their brightness as a function of angle._
_Fig. 9. The distribution of magnitudes for on-station and orbit-raising modes as well as for no mitigation._
## 6 Brightness modeling
The physical design of a spacecraft along with the reflective properties of its surfaces account for its luminosity. Observed magnitudes or laboratory measurements may be used to parameterize a brightness model. That numerical representation can then be used to predict spacecraft luminosities for any geometry involving the satellite, the Sun and the observer. So, the spacecraft brightness model is an important tool for observation planning purposes.
Cole (2020, 2021) developed a BRDF model for VisorSat which takes account of its antenna panel and solar array. These components were in the attitude that SpaceX called shark-fin where the panel faced the Earth and the array faced the Sun as shown in Figure 11.
Cole's model considers eight angles and other factors relative to the spacecraft, the observer and the Sun. Examples are the off-base view angle measured at the spacecraft between nadir and the direction to the observer, the Sun depression angle taken between the horizontal at the satellite and the direction to the Sun, and the range measured between the spacecraft and the observer.
The model has 10 adjustable parameters such as diffuse and specular reflectivity of the antenna panel, and diffuse reflectivity of the solar array. The single output parameter is the modeled apparent magnitude.
This VisorSat model was fit to 131 magnitude records for 66 satellites at their on-station heights. Visual observations were made by the authors of the paper, and V-band measurements were obtained from the MMT9 database as well as those reported in Walker et al. (2021). The RMS residual of the model was 0.4 magnitude which Cole considered to be reasonable given the accuracy of the observations. Figure 12 illustrates the correlation between model and observed luminosity over a span of 4 magnitudes.
Several insights were gleaned from the model. For example, the solar elevation at the observer is an
Figure 11: Shark-fin configuration as illustrated by SpaceX.
Figure 12: The model is too faint above the dotted line and too bright below it. Illustration from Cole (2021).
Figure 10: The phase functions for observations recorded about six months apart. Satellites were brighter in 2021 at phase angles less than 60\({}^{\circ}\) and fainter at large angles. Illustration adapted from Mallama (2021b).
important determinant of satellite brightness with larger negative elevations leading to greater brightness. Furthermore, maps of spacecraft brightness across the sky revealed that the satellites are generally fainter when seen nearer the horizon except for those in the anti-solar direction as shown in Figure 13.
Cole also found that satellites opposite the Sun were brighter in the summer of 2021 than during the previous winter with magnitudes around 4 to 4.5 as mentioned in Section 5. The BRDF model was modified to fit these observations by changing the tilt angle of the solar array.
Fankhauser et al. (2023) modeled Starlink laser communication satellites which we refer to herein as Post-VisorSats. The physical model only required the antenna panel and the solar array. SpaceX provided BRDF functions measured in the lab for these two components. In a separate solution they constrained the BRDF parameters using magnitudes recorded by Pomenis. These models were compared to a diffuse sphere model. Both the lab BRDF and the magnitude-constrained BRDF provided a better fit to observations than the diffuse sphere model as shown in Figure 14.
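For reference, the diffuse (Lambertian) sphere baseline mentioned above can be sketched as follows; this is our illustration of that simple model, not the BRDF models of Cole or Fankhauser et al., and the zero-phase 1000-km magnitude `m_ref` is a free parameter to be fitted to data.

```python
import numpy as np

def diffuse_sphere_magnitude(m_ref, phase_angle_deg, range_km, reference_km=1000.0):
    """Lambertian-sphere brightness: phase factor [sin(a) + (pi - a) cos(a)] / pi, plus distance term."""
    a = np.radians(phase_angle_deg)
    phase_factor = (np.sin(a) + (np.pi - a) * np.cos(a)) / np.pi   # equals 1 at zero phase
    return m_ref - 2.5 * np.log10(phase_factor) + 5.0 * np.log10(range_km / reference_km)
```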
The numerical model developed by Fankhauser et al. is the first to include light reflected from the Earth to the satellite. They found that this light source causes noticeable brightness at low elevations in the solar direction as shown in Figure 15. They also point out that this excess luminosity may interfere with searches for potentially hazardous asteroids conducted during evening and morning twilight.
## 7 Discussion
This Section begins by addressing Starlink brightness in the context of observational astronomy. Then several reports concerning the impact of these satellites on specific instruments and facilities including radio telescopes are discussed. Next, an approach to mitigating satellite interference by scheduling observations to avoid them is described. Finally, the international effort aimed at protecting dark skies from bright satellite constellations is summarized.
### Impact of Starlink on astronomy
Tyson et al. (2020) established that streaks from satellites of magnitude 7 and fainter could be successfully removed from images obtained at the Rubin Observatory. This is an important criterion since their Legacy Survey of Space and Time (LSST) is highly vulnerable to satellite interference. The magnitude 7 limit of Tyson et al. is for the g-band of the Sloan photometric system but that for the V-band is generally taken to be about the same. Both apply to satellites near the 550 km height of many Starlink spacecraft. Meanwhile, amateur astronomers refer to magnitude 6 as the limit for satellite interference because fainter objects cannot usually be seen with the unaided eye. SpaceX has stated that it aims to make
Figure 14: The laboratory and observation-constrained BRDF models correlate more strongly with measured magnitudes than does a diffuse sphere model. Illustration from Fankhauser et al. (2023).
Figure 13: VisorSat brightness mapped onto the sky in Cartesian coordinates. The Sun is 15\({}^{\circ}\) below azimuth 90\({}^{\circ}\). The observations are shown by small white symbols. Note the bright patch centered at azimuth 270\({}^{\circ}\) and elevation 35\({}^{\circ}\). Illustration from Cole (2021).
Figure 15: Satellite brightness derived from the laboratory BRDF model mapped onto the sky in polar coordinates. The Sun is toward the east at the elevations indicated below each map. The top row does not include light reflected from the Earth while the bottom row does. Notice the additional component of brightness in map 6 that does not appear in map 5. This extra satellite illumination comes from the Earth’s day side. Illustration from Fankhauser et al. (2023).
on-station Starlink satellites invisible to the unaided eye. Original design Starlink satellites largely fail to meet the magnitude criteria for LSST and the unaided eye, while a larger fraction of VisorSats, Post-VisorSats and Minis do. The surface area for full-sized Gen 2 satellites will be more than 10 times that of Gen 1 spacecraft. So, they will present a greater challenge for brightness mitigation.
### The impact on specific instruments and facilities
Bassa et al. (2022) evaluate the impact of Starlink satellites on a variety of astronomical instruments including narrow and wide-field imagers along with long-slit and fiber-fed spectrographs. Their results indicate that the wide-field imagers were most seriously affected. They also examined observation scheduling as a mitigation strategy. Mroz et al. (2022) addressed the impact of Starlink satellites on survey observations at the Zwicky Transient Facility. They noted a large increase in the percentage of streaked images between 2019 and 2021 but concluded that their observations were not yet strongly affected. Williams et al. (2021) reported on the potential impact of Starlink satellites on the ESO optical telescopes located at Paranal and La Silla in Chile. They found the interference to be manageable at that time. They also addressed the effect of satellite noise on the ALMA radio astronomy facility at Llano de Chajnantor and reported that only one band was affected. Di Vruno et al. (2023) reported on radio noise from Starlink satellites recorded at the LOFAR radio telescope. They detected interference at frequencies between 110 and 188 MHz. The authors characterise this noise as 'unintended' and point out that it is not subject to existing regulations.
### The scheduling approach
Hu et al. (2022) examine the effectiveness of adjusting the scheduler algorithm for the LSST to avoid satellites at the cost of decreased efficiency in executing other science goals. They find that the need for this mitigation strategy will depend on the overall impact of satellite streaks. They further state the impact is not yet well known due to incomplete information about satellite luminosities. That knowledge is incomplete, as they said, but it is rapidly growing.
### Protecting dark skies
The observations, analyses and models described in this paper quantify satellite brightness. This research contributes to a larger effort aimed at mitigating the adverse effects of spacecraft on astronomy. We have summarized our own research and that of numerous other investigators, but the complete body of literature on satellite interference is too extensive to include here. Many more useful papers can be found in the proceedings of Dark and Quiet Skies conferences (Walker et al. 2021 and Walker and Benvenuti 2022).
The International Astronomical Union established the Centre for the Protection of Dark and Quiet Skies from Satellite Constellation Interference in 2022. This organization coordinates world-wide efforts aimed at mitigating the negative impact of satellite constellations. The CPS has 'hubs' that specialize in public policy, industry and technology, and community engagement. Their SatHub offers an astronomical data repository, an orbital solutions portal, software tools, a training curriculum and real-time collaboration.
## 8 Conclusions
The Original design of Starlink satellites concerned astronomers because their large number and great brightness were seen as a serious threat to celestial observations. SpaceX responded to these concerns by changing the physical design of their VisorSat and Post-VisorSat models and by modifying their conops for spacecraft in orbit. Meanwhile photometric observers verified that these alterations substantially mitigated brightness.
There were new concerns when SpaceX revealed that their second generation satellites would be larger. The most recent observations indicate that the first model of Gen 2 spacecraft, called Mini, is actually dimmer than those of Gen 1. The full-sized satellites to come later will present a greater challenge to the company's brightness mitigation efforts. Future observations will measure the brightness of those very large spacecraft and monitor the luminosity of earlier models.
|
2309.05739 | VisActs: Describing Intent in Communicative Visualization | Data visualization can be defined as the visual communication of information.
One important barometer for the success of a visualization is whether the
intents of the communicator(s) are faithfully conveyed. The processes of
constructing and displaying visualizations have been widely studied by our
community. However, due to the lack of consistency in this literature, there is
a growing acknowledgment of a need for frameworks and methodologies for
classifying and formalizing the communicative component of visualization. This
work focuses on intent and introduces how this concept in communicative
visualization mirrors concepts in linguistics. We construct a mapping between
the two spaces that enables us to leverage relevant frameworks to apply to
visualization. We describe this translation as using the philosophy of language
as a base for explaining communication in visualization. Furthermore, we
illustrate the benefits and point out several prospective research directions. | Keshav Dasu, Yun-Hsin Kuo, Kwan-Liu Ma | 2023-09-11T18:03:47Z | http://arxiv.org/abs/2309.05739v1 | # VisActs: Describing Intent in Communicative Visualization
###### Abstract
Data visualization can be defined as the visual communication of information. One important barometer for the success of a visualization is whether the intents of the communicator(s) are faithfully conveyed. The processes of constructing and displaying visualizations have been widely studied by our community. However, due to the lack of consistency in this literature, there is a growing acknowledgment of a need for frameworks and methodologies for classifying and formalizing the communicative component of visualization. This work focuses on intent and introduces how this concept in communicative visualization mirrors concepts in linguistics. We construct a mapping between the two spaces that enables us to leverage relevant frameworks to apply to visualization. We describe this translation as using the philosophy of language as a base for explaining communication in visualization. Furthermore, we illustrate the benefits and point out several prospective research directions.
Speech act theory, Data visualization, Designer intent, Communicative visualization
## 1 Introduction
Data visualization is a vast and growing field. In this paper, we focus on the subspace of communicative visualization, a space that is concerned with the explanatory side of data visualization and is often what the average person is exposed to. In this space, the communicative goals can vary depending on the designer's intentions and audience [7, 41, 56]. The role data visualizations play in communication varies: some designers use them to supplement their written and spoken messages, whereas others recognize them as an entirely effective mode of conveying the message [61, 74, 70]. Consequently, we are seeing wide usage of data visualization in industry and academia to communicate increasingly diverse and sophisticated messages. The complexity and diversity of interactive data visualization usage suggest that we could benefit from looking at it as a rich language. Developing frameworks from this perspective could then allow us to glean insights from naturally occurring experiments in practice and enable research that can guide future practice. Given the growing sophistication of visual communication, there is value in exploring the relevance to data visualization of frameworks developed by linguists and language philosophers. Some initial frameworks to examine are speech act and discourse theory: speech act theory because it distinguishes between what is 'said', the intentions of the speaker, and the consequences of what is said in how it is processed by listeners; discourse theory because it examines how our communication is shaped by external structures.
Here the focus is on intent. Designers are tasked with creating and evaluating visualizations for targeted audiences. These audiences have varying motivations to engage with the presentation and differing levels of prior knowledge of the subject matter. Designers have a wide array of intents. These intents range from journalists attempting to _inform_ readers, teachers trying to _explain_ concepts, scientists attempting to _discover_ relationships among variables, policymakers hoping to _persuade_ the public about the rationale for a decision, bloggers seeking to _evoke_ strong emotions and activists hoping to get volunteers to _act_. How can we classify these intents in a manner that advances our ability to visualize data? Classifying intent is a prerequisite for determining if a visualization adequately satisfies the communicative intent of designers. Thus, to build and evaluate communicative visualizations we need a refined and principled language for describing communicative intent.
Recent work in communicative visualization by Adar and Lee [2] tackles the question of "how do we formally describe communicative intent in visualizations?" Their work offers an initial taxonomy for intents and enables an additional discussion on how to communicate information using visualization. Others have also identified the importance of intent. For example, Schoenlein et al. [65] note a central problem in visual communication is understanding how people infer meaning from visual features. All this points to a need to assess and understand if our communicative goals as designers are being correctly imprinted in our visual features as intended. We posit that when considering the question of "how can we formalize intents" with regards to visualization, we can draw from the philosophy of language, particularly speech act theory [5, 27, 57, 66, 67, 73].
Our work aims to link the field of visualization to the field of linguistics and demonstrates how doing so offers a broader perspective of the space and introduces new opportunities for research that can facilitate design. We illustrate the connection between these spaces by explaining the link between a sub-space of visualization, i.e., communicative visualization, to a sub-space of linguistics, i.e., speech act theory. We show how this relationship can help grow our understanding of communicative visualization. The insights and formalization developed there can guide us in developing a formal language for intent in visualization. With VisActs, we offer a framework to assist in enhancing the overall communicative precision of a data-driven visualization. Our framework complements task-based design analysis by examining the design at a granular level, providing an approach to understanding how designer intent affects low-level design decisions.
In this paper, we (a) propose VisActs, which leverages speech act theory, as a framework for studying intent in visualization and (b) delve deeper into intents by (i) identifying a set of oft-encountered vis design intents, (ii) illustrating the relationship between the intent and visualization (examples of same content visualized differently based on the intent), (iii) showing how the mode of achievement creates a mesh of intents, and (iv) showing the impact of context and conventions on how intent is realized (or made difficult to achieve).
## 2 Data Visualization as a Language
There is an ongoing discussion on design as communication [8, 17, 24, 38] and there is a body of work [8, 17, 24] that gives credence to viewing visual design as communication. In this work, we engage with this ongoing discussion and identify the implications and research directions that emerge from viewing visualization design as a language.
Visualizations share many commonalities with language as they both express what we observe to others. The goal of visualization, like ordinary speech, often goes beyond presenting facts to achieving actions or outcomes. In any case, how data is visualized can alter how the original context is perceived, e.g., visualizing the uncertainty in data [26, 31].
Treating visualization as a language has been considered, although exploration of the value of this association and of what it affords has been limited. Purchase et al. [60] have explicitly made these connections, and briefly describe the use of linguistic theory, namely pragmatics, to provide an
over-arching framework for information visualization. They comment on the relationship between visualization and language and discuss how information visualization should rely on a multitude of theories rather than a singular theory. Hullman and Diakopoulos [32] study the application of linguistic-based rhetoric in narrative visualizations. Several others have presented theoretical visualization frameworks [37, 47, 58, 14] and implicitly imply that visualization is a language. They elegantly demonstrate how applying frameworks from spaces such as distributed cognition, information theory, an algebraic basis, or conceptual metaphor theory can contribute to the betterment and improved understanding of the use of visualization.
A vocabulary is the body of words used in a language. If we are to claim visualization is a language, then its vocabulary would be visual encodings. This association has been observed by Wilkinson [77], who identified general rules that govern the creation and presentation of data graphics and presented a structure within which these rules might be operationalized efficiently. He supposed that if the grammar is successful, it should be possible to reduce any data visualization problem into a graphic utilizing the rules outlined. Grammar-based visual encodings, as well as declarative languages [12, 28, 29, 45, 51, 62], arose out of a need to fluidly and precisely articulate a set of intents for communicating data. These works provide formalized approaches for describing tables, charts, graphs, maps, and tables and give credence to treating visualization as a language.
In summary, researchers have recognized that visualization is a language and that it would benefit from formalizing the relationships to languages. If we are to treat the totality of visualization as a language and apply linguistic frameworks, we would have common ground for discussion and understanding of the properties of expressing visualizations, thereby facilitating the development of the field.
We present an approach for translating relevant theoretical frameworks from the space of linguistics into visualization. We develop a mapping between a subspace of visualization and linguistics _to illustrate_ the potential for more work in this direction and immediate benefits. Our focus is on the intent of the designer. We propose a theoretical structure for both describing and assessing the intents of visualization designers and the respective consequences.
The motives of the designer of a visualization - to achieve actions or outcomes - and the impact of the visualization on perceptions, whether intended or not, should be considered while developing theoretical frameworks for studying visualizations. A framework to capture how we design interactive visualizations and their effects can be obtained by developments in speech act theory. Speech act theory, a sub-field of pragmatics and linguistics, studies how words are used to both present information as well as carry out actions. We describe a mapping of speech act theory into visualization and offer a theory on visualization acts, _VisActs_. This framework will be linguistically motivated using the foundation of speech act theory but made relevant for the space of visualization. That is, it must account for fundamentally how visual elements are processed and interpreted, which delves into semiotic theory. Furthermore, it must take into account both the conventional content and the underlying principles of data visualizations. Finally, such a theory should also offer testable predictions about the kinds of _VisActs_ performed across the space of visualization. Particularly, it should offer the ability to assess how our intents manifest within visualization and their respective consequences.
Next, we delve into intent in visualization. Subsequently, we explain speech act theory and how it relates to communicative visualization. This is immediately followed by our introduction of _VisActs_, a translation of speech act theory contextualized for visualization researchers. We ground the relevance and application of this translation through a series of examples, followed by a discussion about our mapping.
## 3 Communicative Intent in Visualization
Communicative visualizations are created for a broad audience and represent the majority of visualizations that the public encounters. As stated earlier, communicative visualization occurs in a range of settings including journalism, education, museums, and public policy discussions. The audience differs in terms of their backgrounds, familiarity with the subject, and initial level of interest in the topic. This is in sharp contrast to visualizations that are designed for analysts or domain experts, where the designer has an understanding of their audience's prior knowledge and has some assurance that the expert will use or attempt to use the visualization tools. Furthermore, the designer's intent is to provide visualizations that facilitate a set of specific tasks described by the experts. The diversity in the audience for communicative visualization makes it important to understand intent.
To start, we can consider intent as what the designer, or one who puts forth information, would like to be conveyed and communicated. The intent here would closely parallel the intent of a speaker in routine life. For example, if a child asks her mother while eating soup "is there any salt?", her intent could be to request some salt. However, what she stated may also be a query about the availability of salt. As this example illustrates, the intent of the speaker may not be perceived by the recipient and the desired outcome fails to occur.
Similarly, let us consider a visualization designer who creates a chart about war. One intent of such a visualization could be to terrorize [56] the audience into acting in protest of the war; for example, in Figure 1-(a) the designer produces a visualization that appears dramatic and possibly violent. However, the same data can be visualized differently under another intent, as seen in Figure 1-(b), a more "neutral" design with a possible intent to inform the public about the war.
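To make this contrast concrete, the sketch below expresses two designs over the same (invented) casualty data as Vega-Lite specifications in TypeScript: one whose choices echo the dramatic treatment of Figure 1-(a) (a reversed y-axis and blood-red bars under a charged title) and one closer to the neutral treatment of Figure 1-(b). The numbers, field names, and titles are hypothetical; only the design contrast between the two intents is the point.

```typescript
import { TopLevelSpec } from "vega-lite";

// Invented data standing in for the casualty counts behind Figure 1.
const values = [
  { year: 2003, deaths: 580 },
  { year: 2004, deaths: 900 },
  { year: 2005, deaths: 820 },
];

// Intent: provoke. Dark red bars hang downward from the top of the chart
// (a reversed y scale), echoing the "dripping" treatment of Figure 1-(a).
const dramatic: TopLevelSpec = {
  title: "A bloody toll",
  data: { values },
  mark: { type: "bar", color: "#a50f15" },
  encoding: {
    x: { field: "year", type: "ordinal", title: "Year" },
    y: {
      field: "deaths",
      type: "quantitative",
      title: "Documented deaths",
      scale: { reverse: true },
    },
  },
};

// Intent: inform. The same data and marks, but the default orientation and a
// muted color keep the tone closer to the neutral revision of Figure 1-(b).
const neutral: TopLevelSpec = {
  title: "Documented deaths per year",
  data: { values },
  mark: { type: "bar", color: "#4c78a8" },
  encoding: {
    x: { field: "year", type: "ordinal", title: "Year" },
    y: { field: "deaths", type: "quantitative", title: "Documented deaths" },
  },
};
```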
In both situations, we want the communicative intentions to be received accurately. However, the diversity of the audience may make the intent of the speaker or the designer different from what is perceived by the receiver. The similarity of the communication challenges in these domains stems from the nature of the audience. Hence, it seems fruitful to leverage the findings in speech act theory to inform design practices in communicative visualization. In the space of visualizations, intent has not been formally defined. In the following subsections, we will offer some formalism. Fundamentally, it is useful to consider intent from two perspectives, user-intents and designer-intents.
### _User Intent_
Dimara and Perin [21], when characterizing interaction in visualization, offer a definition of the term _data-oriented intent_. In their work, they describe interaction as a goal-oriented activity with a data-oriented intent. They survey the literature that describes intents from the perspective of a user [1, 25, 41, 46, 48, 61]. They find that the visualization literature classifies _intent_, from the perspective of a user, as a goal, task, or problem. User intent could be to explore high-level data, gain insights, or gain multiple perspectives of data. The intent of a user can also be to collect and correct data, identify insights, or make decisions. In this literature, the intent has been described and identified at a low operation level (e.g., altering representations and collecting data) as well as at a higher level (e.g., information foraging, sense-making, and knowledge creation).
Fig. 1: (a) depicts a visualization, by Simon Scarr [71], with a possible intention to terrorize the audience by showing how devastating the Iraq War was. (b) is a revision of the same visualization by Andy Cotgreave [4]. He adjusts the design so it is potentially received as more “neutral.”
As designers, we tend to remove ourselves and our own intentions and in a manner treat ourselves as an outside entity constructing an interface from the perspective of a user to satisfy the user's intentions. Our research field has identified a variety of ways to adapt [23] our visualizations and systems to the intentions of the user. Designers spend a lot of time describing user intentions in terms of workflows and strategies and curating systems accordingly. A goal as a designer is to create interfaces that enable users to effortlessly express their intentions through the data.
### Designer Intent
In the spaces of narrative visualization and data storytelling, there are many papers [13, 19, 42, 70, 74] that provide frameworks and methodologies for communicating narratives. Although these papers do not explicitly define or identify the designer's intent, they subsume a diffuse concept of intent.
Bako et al. [7] assessed how designers utilize and apply data visualization examples by determining example usefulness, curation practices, and design fixation. Their work gives us methods for capturing the designer's intent as they begin the process of developing their visualizations. Often designers may not be able to articulate the form of what they intend to communicate. Examples are an effective way to express and collage together this form. Another explicitly identified type of designer intent is artistic intent. Artistic intent often disregards functionality, making some works unintentionally incomprehensible. Lau and Moore [40] offer a conceptual model for incorporating these intentions formally. Intent may also have social motivations such as coordination, collaboration, or presentation to an audience.
Recently, Adar and Lee [2, 44] put forth a definition and a taxonomy for expressing intents in communicative visualization. To our best knowledge, their work is the only attempt to provide a formal classification of **intents** that are broadly applicable. They proposed a cognitive taxonomy in which they frame intents to be of the structure, "The viewer will [verb] [noun]." Verbs are selected from a specified set of cognitive constructs; nouns are from a set of specified knowledge dimensions. Their primary claim is that a good language for describing intents is the language of learning objectives. They assert that the advantages of using learning objectives are: "(1) being capable of describing objectives regardless of what the viewer wants; (2) allowing for a designer to create specific tests to validate if a visualization leads to a viewer achieving an objective; (3) finding out if the objectives are achieved both when a viewer is looking at a visualization and when it is taken away" [2]. A limitation of their work is that it restricts the intent of the designer to educate the audience. On the other hand, this is the only paper that provides some formalization of the designer's intent.
We seek to add to the discussion of designer intent by providing an alternative perspective for viewing intent in visualization and demonstrating how this perspective can assess and analyze designer intent at a granular level.
### Challenges with Intentions
Our intentions can manifest in many forms in data visualization, especially as our communicative goals evolve and become more nuanced. Through examination of research [2, 43, 56, 65] that addresses the various types of intentions in communicative visualization, we highlight the following set of intentions to illustrate these forms; however, similar to spoken word, they are not limited to this set.
1. **Inform**: the intention is to have the audience be _aware_ of a concept.
2. **Educate**: the intention is to have the audience _understand_ a concept.
3. **Emote**: the intention is to _elicit_ an emotional response (joy, anger, sadness, etc.) from the audience through the presentation of the concept.
4. **Provoke**: the intention is to get the audience to _react_ or _take action_ in response to the concept presented.
5. **Discovery**: the intention is to _obscure_ information on purpose so that the audience works for it and, through that work, _gains_ some insight that can only be gained through this process.

| **Speech Act Theory Taxonomy** | **Description** | **Translation into Visualization** |
| --- | --- | --- |
| **Fundamental Concepts** | | **A theoretical framework for describing the visualization designer's intent.** |
| Locutionary Act | The utterance of a phrase. _What is heard._ | To show data. _What is shared._ (Section 5.2) |
| Phatic Act | An utterance of words which has meaning. | The selection of data. "Data Act" |
| Propositional Act | The act of expressing the proposition, the content. | Expression of data via analysis. "Analytic Act" |
| Sentence Type | The type of sentence (e.g., declarative, exclamatory, etc.) has an impact on the force of an utterance. | The visualization type (i.e., informative, instructive, narrative, explorative, and subjective) has an effect. |
| Illocutionary Act | The utterance of a phrase with an intention. _What is intended._ | The design of a visualization with an intention. _What is seen._ (Section 5.3) |
| Perlocutionary Act | The effect the utterance had on the listener. _The consequence._ | The effect a visualization has on the viewer. _What is understood._ (Section 5.4) |
| Context | A cluster of actual states of affairs or various events related to the utterance. | The objects or entities which surround a focal event and provide resources for its appropriate interpretation. |
| Convention | Societal rules and norms govern countless behaviors. | Visualization design abides by these as well. |
| **Illocutionary Force** | The speaker's intention behind the utterance. | The designer's design rationale behind their visualization. (Section 5.4) |
| Illocutionary Point (IP) | The point or purpose of a type of illocution. | To visually state, claim, or suggest something is the case. |
| Assertive Point | Convey information. The utterance informs how things are. | To visually state, claim, or suggest something is the case. |
| Commissive Point | Make a commitment. | The guarantees of what a visualization will offer and abide by (data authenticity). |
| Directive Point | Attempts by the speaker to get the hearer to do something. | Engaging or motivating the viewer to do something via the visualization. |
| Declarative Point | Create a new state. Utterances that change the world by representing it as being changed. | Data transitions or transformations as well as predictive visualizations. |
| Expressive Point | Reveal the speaker's attitude or emotion towards a particular proposition. | Revealing personal bias or sharing personal opinions through visualization. |
| Degree of strength of IP | These points can be achieved with different degrees of strength. | Degree of the designer's effort to convey an IP through the visualization. |
| Mode of achievement of IP | The various means a speaker utilizes to achieve the IP of an utterance. | The means a designer employs to communicate the IP of the visualization. |
| Propositional Content Conditions | A limitation on the nature of the state of affairs for an IP. | Each IP has conditions that need to be met for the illocution to register. |
| Preparatory Conditions | A state of affairs that is presupposed as a necessary condition for the non-defective employment of the force. | Assumptions the designer makes about a viewer when employing a particular force. |
| Sincerity Conditions | The psychological state of the speaker concerning the IP. | The designer and the viewer take the visualization and all its content as intentional. |
| Degree of strength of sincerity conditions | The strength of the psychological state the speaker commits to when employing an IP. | The designer and the viewer take the visualization and all its content as intentional. |

Table 1: Key information on speech act theory concepts, applications, and meaning in visualization.
It is known to be challenging to derive, with absolute certainty, an individual's original intentions behind an action [5], and likewise behind a data visualization. However, through other contexts, structures, and cues, it is possible to infer a close approximation of their intent. Linguistics has spent time studying both pragmatic and semantic structures in language as a means to accurately gauge intent, which has attracted interest from law practitioners as well as the NLP (Natural Language Processing) community. We hope frameworks for analyzing data visualizations, such as the one proposed here, can help the rapidly developing communicative visualization subspace and its relationship with ML4VIS.
## 4 Speech Act Theory Fundamentals and Terms
In this section, we will review our translation process and provide additional information on the terminology. Table I contains the terminologies that we translate and contextualize for data visualization.
The field of speech act theory examines how words are not only used to convey information but also to carry out actions. Many philosophers and linguists study speech act theory to gain insights and a better understanding of how we communicate. A speech act can be described as something that is expressed by an individual that not only offers some information but also performs an action.
The initial foundation of speech act theory was introduced by J.L. Austin [5] and the theory has since been developed and expanded by several other scholars [57, 56, 27]. Austin introduced the terms _locutionary_, _illocutionary_, and _perlocutionary_ acts: the locutionary act is the utterance of the phrase, the illocutionary act is what was meant or intended by the utterance, and the perlocutionary act is the effect the utterance has upon the listener. These terms of locutionary, illocutionary, and perlocutionary can, respectively, be thought of as: what is being put forth, how it is being put forth, and what putting it forth achieves.
### _Forces_
Classical speech act theory [5, 67, 27, 68] introduces the idea that our utterances, words with some meaning that we put forth, contain a variety of forces. Grice [27] introduced the concept of speaker meaning, a speaker attempts to get the audience to believe something by relying on the audience to take the intention of the speaker as a reason for belief. Grice finds that in order for a speaker's meaning to occur, the speaker must first intend to produce an effect on an audience and also intend that this very intention be recognized by that audience. Next, the speaker must also intend this effect on the audience to be produced at least in part by their recognition of the speaker's intention. Speech act theory recognizes [69, 15, 55] that an illocutionary force contains the intent of the speaker. Namely, illocutionary force is the intended message a speaker assigns to a sentence they utter. Searle and Vanderveken [69] assert that the force in speech is comprised of 7 parts; illocutionary point (IP), degree of strength of the IP, mode of achievement, propositional content conditions, preparatory conditions, sincerity conditions (SC), strength of SC. The illocutionary point can be of the following forms: assertive, commissive, directive, declarative, and expressive.
Neo-Gricean theories modify Grice's principles to some extent. From these modifications, we are given relevance theories [72] as well as modifications to forces allowing for more focus on the speaker's intention. In this work, we use the Neo-Gricean analysis [6, 16] as a basis for our mapping between communication in visualization and speech act theory. Mapping these forces into visualization requires careful consideration of what is consistent and what additionally needs to be factored in. Murray and Starr [55] propose that the force of an utterance is its communicative function. They examined Grice's definition of communicative intention [27] and found that it did not consider how signals are used to coordinate communications. Although Murray and Starr [55] state that the approach we adopt does not address how agents use signals to coordinate, in the context of visualization, we fill this gap using semiotic theory. As visualization designers, we make use of visual signals which is explained by semiotic theory. An important takeaway of semiotics [59, 18, 35] is how social conventions influence meaning. In other words, the force of an utterance is contextual and subject to conventions.
### _Speech Act Example: Alice & Bob_
To provide a clear example of what a speech act is and what one can do with it, let us observe a conversation between Alice and Bob (Fig. 2).
Alice: _"Would it be too much trouble for me to ask you to hand me the salt?"_
Alice utters a sentence to Bob that does two things concurrently: it asks whether Bob is capable of passing the salt, and it makes an actual request for the salt. The _locutionary act_ in this case is what was said, the literal sentence. The _illocutionary act_ is what Alice means by uttering the sentence, and the _illocutionary force_ is the intent. Specifically, Alice's intention was to request that Bob give her the salt, so she issued an utterance with the illocutionary force of a command to Bob. If Bob successfully processes this utterance and its force and proceeds to acquire and hand Alice the salt, then he has performed the _perlocutionary act_. In order for Bob to identify the illocutionary force and corresponding act, he must either passively or actively factor in relevant contextual information or social conventions and practices to help determine what Alice's intents are.
## 5 VisActs
The speech act was developed for understanding and examining how we communicate, specifically with words and speech; however, we find it can be extended past words for visual communication. Visualization is becoming a dominant mode of communication. As visualization becomes more complex for expressing and communicating data, it will inherit the challenges of languages. At one level, it has the ability to express more precisely but concurrently opens itself up to more ambiguity and multiple ways to be interpreted, which can have a variety of implications.
### _Proposed Framework_
With VisActs, we borrow some concepts from linguistics to use as a foundation and then proceed to contextualize and translate how these structures apply in data visualization, specifically the communicative side of data visualization. In speech act theory, we can use three structures to frame intents and their effects: locutionary, illocutionary, and perlocutionary speech acts. With this framework, we offer varying levels of depth for the analysis of designer intent, as seen in Table I. Furthermore, we focus on a retrospective analysis of designer intentions in visualizations, and particularly data stories, to illustrate VisActs.

Fig. 2: Illustration of a speech act. Person A utters a phrase, a locution, to person B. This locution is what person B will hear. Person A simultaneously also performs an illocutionary act by applying a force with their intention. How person B responds or what they do after processing what person A has said would be classified as a perlocution.
To begin our translation, we first contextualize each of these for visualization. A locutionary VisAct is the _data_ or finding we present to the targeted user or audience. An illocutionary VisAct is the _visual representation_ or _encoding_ this finding takes when presented to the target user or audience. The illocutionary force, or VisForce, is the _design rationale_ for the representation or encoding. Lastly, the perlocutionary VisAct represents the _evaluation_ of the encoding design after the audience has viewed and processed it. Through the perlocutionary VisAct, the designer gains an understanding of whether their intended outcomes were met, that is, whether the audience decoded the encodings and understood the findings or data presented as intended by the designer.
With this framing, we have separated stages of visualization design into several bins. In the first bin, the locutionary VisAct, we focus on isolating the specific or derived data to convey to the audience. This bin is not concerned with _how_ this data is visually represented or modified but with the semantic question of _what_ part of the data is being shared. It is in the illocutionary VisAct and its accompanying VisForce that we can begin teasing apart and understanding how the design impacts the communication of the data. There are several means through which a designer can transform the visualization to reflect their intentions. The two categories this work will focus on are encoding and interaction design. However, how we communicate, design, and interpret data-driven visual content is also affected by societal conventions and other contextual information. The goal of VisActs is to provide an alternative means to assess how our intentions _visually_ shape the data we are communicating, as well as to better infer a designer's original intentions for producing a visualization.
### Locutionary VisAct
For the purpose of this work, we are only concerned with data that has been identified to be shared. VisActs does not consider data whose content is largely unknown and expected to be explored. The locutionary VisAct made by the designer is the process of selecting data, tasks, and initial analysis methods (e.g., data cleaning), as these choices reflect part of the designer's intentions. For example, in data storytelling, this is the process of identifying the "story pieces" to be communicated [42]. The data selection and modification affect the visualization design, as they may constrain which visualization options, if any, are available [22, 54, 61]. For example, hierarchy suggests depth, temporality may imply change, and spatiality could imply closeness or bonds. Thus, we may be visualizing data as a treemap, flows, or possibly on a map. By taking data types into account, such as nominal, categorical, numerical, and their pairings, we can begin to define a space of what representations are available to fulfill our communicative goals.
### Illocutionary VisAct
The illocutionary VisAct is the process of designing a visualization from the data to then be shared with an audience. This visualization may be interactive but must be data-driven. Similar to speech acts, we are not concerned with visualizations that have no data or "meaning" associated with them. The design of both interactive and static data visualization is heavily influenced by the designer and their choices. In this VisAct, the relationship between the designer's intent, their rationale, and the resulting visualization is captured. How the designer intends to communicate this data (e.g., to educate or persuade) may influence their design rationale.
The intention, or intended purpose of a design choice, is captured by an illocutionary point (IP). VisAct IPs fall under a set of five types: assertive, commissive, directive, declarative, and expressive. An assertive point is a visual element that either states, claims, or suggests the represented data has validity and is true. For example, in Figure 4(c) the solid red lines make an assertive point, visually _stating_ the current trend of Covid-19. A commissive point sets up a guarantee between the design and the audience that it will offer either an explanation, understanding, or action. A simple example would be a slider that sets the time window for a line chart, the guarantee being that the chart should update to reflect the selected window. A directive point would be design choices that attempt to engage or motivate the audience to act. Declarative points are design elements that transition the visualization into a new state or show predictions of what could transpire (i.e., animated transitions, filtering, or drill-down). Lastly, an expressive point captures the designer's personal opinions as they appear in the visualization. In Figure 1a, each choice that makes the chart look like dripping blood could be an expressive point: the color, inverting the y-axis, and the title of the poster.
The design rationale, in turn, is referred to as the VisForce: a force that guides and nudges design decisions into what is finally seen by the audience. A VisForce's influence appears in (1) the encoding design and (2) the interaction design.
**Encoding Design.** How we design visualizations, in terms of binding the data to the visual elements, greatly impacts how the data are perceived and understood by the audience [54, 57, 9, 22]. Certain visualization design choices may elicit emotional responses from the audience, which can also help better communicate the designer's intentions to the audience [39].
**Interaction Design.** Interaction design as it pertains to data visualization is heavily documented [48, 41, 46, 21, 4]. From these works we can surmise the following: (1) interaction design affects what is visually seen by the audience, (2) interaction design influences how the audience perceives the data, and (3) interaction design impacts audience engagement with the data. The designer's choice of which interactions are available to the audience can steer the audience towards their goal.
Fig. 3: VisActs consists of three core actions; Locutionary, Illocutionary, and Perlocutionary acts. Through each of these actions, the designer imbues their intent into the visualization in the hope of the audience understanding and interpreting the message as intended.
### _Perlocutionary VisAct_
A perlocutionary VisAct is performed by the audience. This action informs the designer whether or not their desired outcome has transpired. If the outcome was a _call to act_ over climate change by signing a linked petition, and they received a signature, that would be a success. However, if the outcome was to _educate_ museum visitors on metagenomics [19] via an interactive system and the majority of visitors failed to understand the system, then it was unsuccessful. The granularity of success and failure, much like with evaluation, is up to the designer to classify. In the context of data visualization and research, this stage is evaluating what was understood by the audience and how that aligns with what the designer intended to transpire [54, 52, 22].
### _Convention_
Social conventions are rules and norms that govern countless behaviors we all engage with every day. Bicchieri [10] defines conventions as descriptive norms that have endured the test of time. She states that if one's main objective is to coordinate with others, and the right mutual expectations are present, people will follow whatever convention is in place. In visualization design adhering to and following conventions is necessary for effective communication. For example, color norms differ based on culture, knowledge, and context. Rainbow-colored maps are typically associated with temperature variations [49]. If a designer applies a rainbow color map in a geospatial visualization to depict crop yields then many viewers may not properly decode the visualization.
### _Context_
In communicative visualization, several works [53, 61, 64, 52] identified challenges in the interpretation of data-driven visualizations and how different contexts affect this interpretation. Mukherjee et al. [53] proposed the use of semantic discriminability theory, a general framework for understanding conditions determining when people can infer meaning from perceptual features. There is a large body of linguistics research [63, 67, 6, 72, 5] showing how context influences the meaning of an utterance. Sbisà [63] proposes contexts are continuously shifting, but at each moment of interaction it is possible to evaluate the performed act against the context. This literature suggests context can be classified along the following dimensions: (1) given vs. constructed context, (2) limited vs. unlimited context, and (3) context change.
**Given vs. constructed context:** In a given context, the context is set once the event starts and is not mutable going forward. For example, many narrative visualizations [70, 19, 42] or analytical systems pre-determine or have a fixed context. In a constructed context, by contrast, the context of an interactional event is created by its participants as the interaction proceeds. One form of this in visualization could be highly interactive and collaborative visualizations that function off of user inputs. These visualizations evolve and change based on these interactions. A different example of this can be seen in Figure 4, where the context of a public forum influences the designer's intent. This begins with Jeffrey Shaman creating a visualization that is then shared on a public forum. The public became invested in whether the design is effective or not, how it can be improved, and what the intent of this visualization is. In response to the visualization, others were created with a different intent. As shown in Figure 4(d), Amelia Wattenberger attempted to improve on the original, Figure 4(b), with some believing she did. The constructed context in this scenario is that, initially, the context of the visualization was to forecast the Omicron variant for a period of time; however, as more individuals debated the effectiveness of the visualization, the new visualizations produced gained a constructed context of attempting to provide an improved design and convey the original message.
**Limited vs. unlimited context:** When is acquiring information to interpret what is occurring no longer necessary? Is the context finite or something that needs to be continuously provided? Context, in speech act theory, has been considered a bounded resource that only includes 'what is needed' for [36] interpretation or evaluation. Conversely, there is an argument that context is ever-changing and that there is no end to the details one might want or need. Searle [68] views context as indefinitely extensible and potentially all-inclusive. That is every speech act has meaning only against a set of background assumptions. These assumptions are indefinite and the number of processes for the analysis and development of an idea are endless. Other views [73] find context as always extensible but delimited. They believe that context is needed as background information (or what the speaker believes is, or intends to treat as background information) and is delimited on every occasion by the presuppositions a speaker happens to make. Additionally, actions typically involve results, such as bringing about a new state, referencing the past, or substituting a new state for an older one. The objective or cognitive nature of context affects the action. After an action or event occurs its content is added to the participant's presuppositions, and therefore to the cognitive context. For example, in dashboards with linked views a user may filter on a subset of the data altering the chart. An accompanying view may re-render to reflect this filter and reveal new insights on the subset reflecting that initial interaction. This change is an implicit communication to the viewer that these two views are linked and the data in one is influencing the other. The discussion of limited vs unlimited context is ongoing in speech act theory. However, the distinctions and points made, as well as points made for future works, directly apply to visualization. For example, Mantri et al. [52] examine how a variety of additional contexts impact the interpretation of communicative visualizations and synthesis of information that comes from consuming cumulative discoveries. They discuss how science and journalism often present their content as an ongoing discussion and evolution rather than a finality (i.e., discoveries that build on, contradict, contextualize, or correct prior findings).
These considerations of context clearly arise in visualization and have been defined implicitly by this classification. Several frameworks [19, 33, 42, 57] have discussed context and its influences on visualization design. For example, they describe external context as (1) an understanding of the target audience's needs and prior knowledge, (2) the type of device or medium the visualization will be expressed through, and (3) the physical setting. Context's effect on inferring and interpreting visualizations has also been examined [57, 52, 53, 61, 64]. Padilla et al. [57] identify how, after viewing the encoding, the audience mentally searches long-term memory for knowledge and context relevant to interpreting the visualization. Furthermore, context, as it pertains to visualization, has many influences on the design and the designer's intent. Consequently, subtle changes in intent can be reflected and seen in the visualization, as shown in Figure 4. With VisActs, we provide a framework to facilitate studying at a granular level how intents influence the design.

Fig. 4: This figure illustrates a disconnect between an author's intention and the audience's reception. Jeffrey Shaman wrote an article [34] predicting when the Omicron variant of Covid would peak, with an accompanying visualization. The visualization sparked an online Twitter debate (a), with some finding the original design (b) ineffective compared to (c) a line chart. Amelia Wattenberger (d) re-imagined the design with a different intent, which garnered a more positive response, and Amanda Makulec (Executive Director of the Data Visualization Society) (e) wrote a thread [3] on how visualization design can address different needs or intents.
## 6 VisActs: Application to Visualization
To ground the value of viewing visualization as a language and applying speech act theory, we provide a set of examples. The first example uses VisActs to assess a New York Times visualization. These examples use VisActs at a granular level to study intention from the perspective of two archetypes commonly observed in communicative visualization: the storyteller and the educator. As a disclaimer, we cannot know for certain the original designer's intent. However, we can, to an extent, determine what the intent could be based on design decisions and available documentation.
### Example: Storyteller
The storyteller is concerned with expressing data as a narrative. Here visualization is a means to engage an audience with the data and the visualizations are carefully sequenced to illustrate causality. We apply _VisActs_ to study a recent narrative visualization piece, _How the Virus Got Out_[78].
This narrative piece is composed of several visualizations, five of which are shown in Figure 5. The story starts with Figure 5a, a visualization that promises that the authors will explain visually _How the Virus Got Out_ of China. It also asserts that the pandemic started in China.
Let us first focus on what is being shared, not inferred from the visualization. This is defined as the locution or locutionary act. When contextualized as a _VisAct_, the locutionary act and the locution describe what the underlying content is, namely the data. Here, the **locutionary VisAct** is the data and analysis used to understand the spread of the virus. The _data act_ is the selection and curation of the datasets that represent people, their movements, and infection data. As mentioned in the article [78], their data came from Baidu, two Chinese telecoms, Fred Hutchinson Cancer Research Center, the University of Washington, and the Johns Hopkins Center. The _analytic act_ is the estimations and relevant methods applied to the dataset to help bring out the findings to then be visualized. Lastly, the _data types_ are spatio-temporal data.
The visualizations are the illocutionary **VisAct**. The **image act**, consists of the low-level visual elements. The viewer would see a web page composed of color, shapes, and text (e.g. the marks and channels). The **encoding acts** determine how marks and channels should be paired and how they are bound to the underlying data. In this example, the data contains temporal information about people's movements, location, and estimates of the percentage of the population with COVID. Individuals are represented as points, the position of a point connotes a geo-spatial location at an instance in time, and the color denotes whether an individual has COVID. The meaning provided by the encoding act is supplemented by the **semiotic act**. The semiotic act constructs the relationships between the image and other factors such as culture and society. The grouping of shapes and text together is seen as a map of China and neighboring countries. This visualization uses projections and map aesthetics that most of the populace has familiarity with from map-based applications. Therefore, the movement of points across this map is associated with transit. In Western cultures red color has negative connotations. In this case, red is used to symbolize those infected with the virus. Although the piece presents facts, because of its emphasis on temporal flow and implied causality, the **type of visualization** is a narrative. It is important to note that the type of visualization influences the meaning it will convey.
The **VisForce** is the intention underlying Figure 5a. One intention here is a promise to the readers _to educate_ them about how the virus spread. A _VisForce_ is comprised of the seven properties described in Section 5.3. To understand the _VisForces_ at play, let us first identify the set of **illocutionary points** at work. Figure 5a has a commissive point and an assertive point. It asserts that the virus started in China and it promises to provide a justification for this assertion. The **degree of strength** of the promise is moderate, as we have to infer the promise. The mode of achievement of the _VisForces_ is through the sequence of steps used to build the visualization. It starts with the text "China", which slides and snaps onto a map of China. Then a set of red dots quickly transitions into streams of red dots flowing out of China. The felicity conditions for these illocutionary points are the assumptions by the designers that they have the information and understanding of the base material and that the reader will benefit from the visualization.
Throughout the story, the visualizations make several assertive points about the spread of COVID, where it originated, and facts about specific days. In Figure 5c, the designers present a set of points depicting the number of reported cases in December. The _VisForces_ consists of two assertive points and an expressive point. The illocutionary _VisAct_ of a small cluster of red points asserts that only a few dozen cases were known to the doctors. The second assertion was that the true number of infected was closer to a thousand. The corresponding illocutionary _VisAct_ is a larger cluster. The expressive point was the emphasis the authors placed on the difference between the two assertions. The illocutionary _VisAct_ to achieve the expressive point is the animation that grows the volume of the dots. Its degree of strength is high.
In Figure 5d, volumes of varying sizes of red points are shown on a map. The designer assumes that the size of the cluster will be interpreted as the size of the infected population by the viewer. We can argue that Figure 5c introduces the context necessary for this interpretation. As we have stated earlier the _VisForce_ depends on the context, which can change or evolve. The visualization in Fig 5e opens with a declarative point. There is a state change in the _VisAct_. The overall semantic state (image, encoding, and semiotic act) of the visualization has changed. The visual narrative transitions from a geo-spatial visualization to a scale-free representation that mimics a subway map. This enables the designers to make an assertive point of Wuhan's centrality in the pandemic.
In addition to internal contexts, there are external contexts such as social conventions or norms that can passively or directly influence the _VisForce_. For a viewer to recognize the _VisForces_ in this story they must be (1) aware a pandemic occurred resulting in a global lockdown, (2) familiar with reading and interpreting maps, and (3) able to understand English. Also, as we have seen, _context change_ contributes to the intended meaning. Information necessary for interpreting visual encodings is presented and then applied in different settings. Lastly, there are some **conventions** this visualization takes advantage of. Namely, it uses the popular scrolly-telling design and assumes those viewing the page understand the convention of scrolling to reveal new content. The **sincerity** of the designers is seen on the final page of the story where they provide notes talking about limitations of what is presented as well as sources for the data and statements made.
To recap, we have organized the mapping of this narrative into two sections. The first section, **locutionary acts**, addresses what the designers put forth and what it is we see. In the second part, we focus on **illocutionary acts**. We identify (infer) the intents and examine how they are expressing their intentions. This is addressed by delving into the _VisForces_, the illocutionary points, modes of achievement, and the context. In this example, the designers use several assertive illocutionary _VisForces_ to convey to the viewer "How the Virus Got Out". We also identified and discussed declarative, expressive, and commissive points.

| **VisActs Terminology** | **Description** |
| --- | --- |
| **Locutionary VisAct** | To show data. _What is shared._ |
| Data Act | The curation & selection of a dataset(s). |
| Analytic Act | Expression of the data through analysis. |
| Data Type | The type of data (e.g., temporal, spatial, quantitative, discrete, etc.). |
| **Illocutionary VisAct** | To visualize the data. _What is seen._ |
| Image Act | The production of an image. |
| Semiotic Act | The expression of data through signs & codes. |
| Encoding Act | Visual encodings mapped to data. |
| Visualization Type | The type of visualization (i.e., informative, instructive, narrative, explorative, and subjective). |
| **VisForce** | The designer's rationale. |
| **Perlocutionary VisAct** | The effect the visualization has on the audience. _What is understood._ |

Table 2: VisAct terminology. These terms and their mappings were derived from a breadth of linguistics and data visualization research.
Finally, let us look at the **perlocutionary act**. This third and final component of _VisActs_ addresses the consequences of presenting the visualization to the viewer. The perlocutionary act captures the effect presenting the visualization had and assesses if the viewer understood the designer's intent. In our field, we conduct user studies and evaluations to determine this. To ascertain if the visualization was successful in communicating the intent of asserting the factors that led to the virus spreading across the world we would need to interview viewers.
### Example: Educator
A common use of communicative visualization is to teach or to inform. An educator uses visualization to simplify complex data to explain concepts. This can take place in a formal setting such as a classroom or in an informal setting like a museum. We examine the museum exhibit DeepTree [11, 20] using our framework.
Here, the **locutionary VisAct** is the phylogenetic tree of life. The _data act_ is the phylogenetic dataset and the corresponding timelines. The _analytic act_ could be any data formatting or association of the phylogenetic tree with the temporal information. Lastly, the _data types_ are temporal and image data.
DeepTree's **illocutionary act**, as seen in Figure 6(a), has a tree-like structure supporting a set of images. The **image act** of Figure 6(a) consists of point and line marks. DeepTree's **encoding act** maps the visual elements to the underlying data. As a result, what is seen in Figure 6(a) is a phylogenetic tree dataset that consists of images and text. The designers create a tree visualization where leaves are images of species. The visualization is composed of a line mark and takes advantage of size and position channels to convey the dataset. Images on the tree depict species along with a label with their common name. The **semiotic act** provides a tree metaphor, where the trunk symbolizes the universal common ancestor and the branches the children. The "branches" of this tree represent splits in species and portray the phylogenetic classification. This **type of visualization** is primarily explorative and is supplemented with some informative elements.
As with the prior example, to understand the illocution, we identify the _VisForces_. This visualization makes use of four of the five illocutionary points as described in Table I. It has assertive, directive, commissive, and declarative points. We will focus on the directive and commissive points.
Figure 6(a) is the entry point for this visualization. The _VisForces_ here include commissive and directive points. The _VisAct_ promises to inform the visitor about the Tree of Life. The mode of achievement is a collection of images of different species and animations indicating relationships among them. The directive point is to get the viewers to drag their hands across the screen. The mode of achievement is an animation of a hand dragging downward to instruct viewers how to engage with the application, Figure 6(d). This visualization has several other directive points that are achieved through different modes such as tapping a button, downward dragging to traverse down the tree, pushing upward to move up the tree, flicking to quickly move through the tree, single-touch pan, multi-touch zoom, pushing to select, and dragging an element onto a destination. Each directive point is achieved by using techniques such as highlighting, animating, or annotating visual elements to cue the viewer to interact. The degree of strength for a directive point depends on the visual emphasis placed on that technique.
External factors and conventions also influence the directive points. The museum setting and use of an interactive touch-screen table to display the visualization add to the _VisForce_.
The side panel in Figure 6(a) has a commissive point. The promise here is to inform the viewer of the location in the tree of each species portrayed in the side panel. In DeepTree, when a user selects an image of a species from the side panel, Figure 6(a), the image jumps to its position in the tree. If an image is pressed, a graphical element, an arrow, appears showing the viewer where to slide (Figure 6(c)). The directive point here is to get the viewer to slide the image. This directive point is weaker than the earlier directive point for getting a viewer to drag their hand onto the table.
Let us next examine the **propositional content condition** for directive points. These are the designers' beliefs that the viewer will perform an action they request. In DeepTree, the designers believe that by animating an element to grow and shrink, adding a highlight around it, and having a text annotation above it saying "learn more", the viewer will tap on it. The **preparatory conditions** for all directive points assume that the viewer is able to perform the suggested actions. The **sincerity condition** of these directive points is that the designer wants the viewer to perform the actions. The degree of strength for the sincerity condition is the importance of these actions to the designer. In DeepTree it is very important for the designers that the viewers pan and navigate the tree. This action is crucial and is evidenced by the visual emphasis placed on this _VisAct_.
The viewer can press a button that takes them to a new simplified view of the tree. The illocutionary _VisAct_ here is a tree of life contextualized to the selected species, Figure 6(b). The _VisForces_ here have a declarative and commissive point. The declarative point is the transition from the earlier state, Figure 6(a), to a simplified visualization, Figure 6(b). This declarative point's mode of achievement is an animated transition.
The commissive point the simplified view makes is that it promises the viewer that the designer will return them back to the original state, Figure 6(a). The **mode of achievement** for this commissive point is a button. The **propositional content condition** for this commissive point is that the designer will fulfill the commitment. That is, upon a viewer pressing the "X" button, seen in Figure 6(b), the simplified view will disappear and "close" and they will be returned back to the overview, Figure 6(a). The **preparatory condition** for the commissive point is that the designer is able to complete this promise within their design. The sincerity condition of this commissive point is that the designer, and therefore the visualization, intends to satisfy the promise.

Fig. 5: “_How the Virus Got Out_” [78]. (a) Title page, (b) map of Wuhan, China, (c) visualization of what was assumed to be the number of COVID cases in December compared to what it actually was, (d) overview of the virus spread, (e) scale-free representation of the world.

Fig. 6: (a) DeepTree [11] exhibit overview, (b) simplified view, (c) selection interaction, (d) zoom and pan interaction. (permission pending)
Briefly, there are many assertive points made in both the simplified and overview visualizations. These assertive points state facts about species and their ancestry. The modes of achievement the designers selected to express their assertive points include dialog/pop-up boxes, annotations, and color to highlight relationships between species.
So far, we have gone over some of the _VisForces_ present in this exhibit to illustrate how to use our framework and the structure it provides. Namely, we showed that there are assertive, declarative, commissive, and directive points and thus those respective _VisForces_. We walked through the properties of some of these forces and gave examples (i.e., we described eight directive _VisForces_, their degree of strength, conditions, and mode of achievement). However, we also have to account for how social conventions and external contexts influence the visualization design and its overall meaning.
DeepTree relies on external contexts and conventions present in an informal learning environment, specifically a museum and the considerations and conventions [19] that come with it. Furthermore, DeepTree relies on its viewers to have familiarity with touch-based devices [11] (e.g., iPads and iPhones).
Lastly, the perlocutionary _VisAct_ addresses the reaction the viewers had upon seeing the visualization. It can be used to determine if the designer was successful in conveying their intended meaning. The designers of DeepTree documented their evaluation [11] and from it we can see that their directive and commissive _VisForces_ were understood by the viewers. For example, both the commissive force of a promise to the viewer that by tapping the find button something will occur in the future and the directive force of a request to the user to tap and drag an image off the image reel were successful. Viewers would tap on the button to find a species, signifying the viewer believes a promise will be fulfilled. Additionally, they were then presented with a slot to place an image. They inferred the directive point and dragged an image onto the slot. After doing so, the promise made by the designer is fulfilled as the visualization "flies" through the tree via animation to where the species in the image is located. This _VisAct_ had an "emotional" perlocutionary response in the viewers, where the designers documented responses such as "wow, this is big" or "woah".
## 7 Discussion
With VisActs, we present a conceptual framework for inferring designer intent. This framework is not a finality but rather a foundation to be iterated and expanded upon. There is a growing need to infer and assess designer intent, as well as grow our understanding of which types of design decisions illustrate an intent (i.e., negative emotions can be conveyed via a variety of design choices [39]). With this manuscript, we provide a translation and contextualization of frameworks from linguistics to communicative visualization and offer a framework for inferring intent in interactive data-driven visualizations. In this section, we discuss the prospective values and directions VisActs can lead to.
**Biases Assessment:** One use of VisActs is to tease out design decisions that could reflect the designer's biases. For example, in Figure 7(a) we see a bar chart communicating _"America's economic growth in the 21st century"_. This chart at a glance shows the tallest bar to be for the year 2021. Upon closer review, however (Figure 7(b)), we see that the y-axis has a mistake. This mistake exaggerates the 2021 bar slightly in comparison to the other years. This mistake was shared for several hours on Twitter before a correction was made. In that span, the community responded with many complaints and comments over the chart. There were claims that it was intentionally added as a means to persuade that the 2021 administration is very effective. The public would attempt to evidence these claims by suggesting the mistake only occurred at a point on the y-axis that would only affect the 2021 bar and nothing else. Namely, the public assessed the encoding design to infer a plausible designer intent.
It is impossible to say with absolute certainty whether this case was a genuine mistake or an intentional design choice. However, there is a need to provide structures to assess and better debate the biases in charts that circulate in the wild, as these affect how society trusts data and science. One possible extension of VisActs is to apply the framework to a corpus of data visualizations and create associations between design and possible intents. Through developing richer classification models, we could develop faster or more accurate linters that could identify these discrepancies and provide community notes, Figure 7(d).
**ML4Vis application.** Bako et al. [7] call attention to how visualization examples can be used as inputs to author visualizations as a direction for future work. To help achieve this goal, we need to really understand and expand on which aspects of these examples correlate to what the designer intends to do. This could be task-based as well; however, it is possible that task-based approaches may not be granular enough to effectively capture the needs of designers and the specific design elements they may be interested in. Expanding on classification models, VisActs has an application in the ML4Vis space. Principles and frameworks from speech act and discourse theory have been studied and leveraged in NLP. Similarly, VisActs can be utilized in future works to automatically generate visualizations and their design based on designer inputs. By offering a framework to infer the designer's intentions based on design rationale and assess which features could contribute to particular intentions, we can better train machine learning models on data visualizations and build associations between visual features and particular intentions.
As with Midjourney and DALL-E prompts, VisActs can be a stepping stone to developing applications that allow users to provide their data to visualize and enter prompts to tailor the visualization to their needs. In order to achieve such automation, we need to develop rich associations between designer intents and corresponding design choices. Through VisActs, it is possible to develop these associations. This framework can be applied at a granular level, as seen in the examples for Storyteller and Educator.
## 7 Conclusion
This work takes the view that visualization is a language and can therefore benefit from applying frameworks and theories from linguistics to systematically understand and analyze how we communicate through data visualization. We provide a translation of a sub-field of linguistics and offer our framework VisActs. We then use examples applying our framework to illustrate its potential application to our field. This translation affords us a means to deconstruct the language of visualization, identify low-level communicative components, and learn how these components individually and collectively accomplish the communicative goal of the visualization. Our detailed examples demonstrate how
Figure 6: (a) Tweet by the @WhiteHouse account conveying the economic growth of the current administration [75]. (b) The y-axis label increments by a half-step rather than a whole point as it was previously. (c) Nancy Pelosi shares this figure with the error to her following [76] (8.1 million followers). (d) The Twitter community flags the discrepancy and many replies and threads are made questioning the integrity of the data. (e) @WhiteHouse updates the figure stating it was a _proofreading_ issue.
these concepts can be used to examine designer intent and describe the forces at play.
This is an initial mapping of the two spaces and future work can tighten this association and build upon its structure. We believe that our work gives credence to the relevance of linguistics frameworks for the study of visualization and supports continued efforts in translating other frameworks and theories into our domain. We hope our work enables the future integration of theories and frameworks from linguistics into visualization and grows our framework for studying visualization design intent. VisAct provides a standard way, a language, for anybody to examine/describe a visualization, its interaction design, and the designer's intent down to the granular level.
|
2309.14634 | Synchronizing Full-Body Avatar Transforms with WebRTC DataChannel on
Educational Metaverse | Full-body avatars are suggested to be beneficial for communication in virtual
environments, and consistency between users' voices and gestures is considered
essential to ensure communication quality. This paper propose extending the
functionality of a web-based VR platform to support the use of full-body
avatars and delegated avatar transforms synchronization to WebRTC DataChannel
to enhance the consistency between voices and gestures. Finally, we conducted a
preliminary validation to confirm the consistency. | Yong-Hao Hu, Kenichiro Ito, Ayumi Igarashi | 2023-09-26T03:28:09Z | http://arxiv.org/abs/2309.14634v1 | # Synchronizing Full-Body Avatar Transforms with WebRTC DataChannel on Educational Metaverse
###### Abstract
Full-body avatars are suggested to be beneficial for communication in virtual environments, and consistency between users' voices and gestures is considered essential to ensure communication quality. This paper proposes extending the functionality of a web-based VR platform to support the use of full-body avatars and delegating avatar transforms synchronization to WebRTC DataChannel to enhance the consistency between voices and gestures. Finally, we conducted a preliminary validation to confirm the consistency.
Metaverse, Real-time Communication, Web User Interface
## I Introduction
'Metaverse' is commonly defined as 3D virtual environments where interactions among users occur and are accessible via computers or Virtual Reality (VR) devices, and it has found utility across diverse areas, including education. We have started developing an educational platform in metaverse based on Mozilla Hubs, an open-source web-based VR platform [1].
'Avatar' refers to a character that represents and is controlled by a user in a virtual environment. Currently, Mozilla Hubs only supports avatars having a simplified upper body (Fig. 3a), which may reduce computation costs on the client side, ensuring Mozilla Hubs' high accessibility even on low-end devices. On the other hand, it limits the conveyance of non-verbal information through body gestures. We believe that this is also essential for effective communication, and the use of full-body avatars that possess complete body parts like a real human (Fig. 3b) can compensate for this limitation.
In fact, previous studies have shown how full-body avatars benefit communication in virtual environments [2, 3]. Similarly, full-body avatars can improve the sense of presence [4], which in turn plays an important role in learning [5]. Following these findings, we decided to integrate full-body avatars into the Mozilla-Hubs-based platform we are developing.
In addition, Mozilla Hubs currently synchronizes avatar transforms (changes in the position, orientation, or size of an avatar's bones) through WebSocket on a mesh network, while we considered WebRTC DataChannel more suitable to synchronize avatar transforms due to security concerns and consistency between users' voices and gestures. In this study, we aimed to expand Mozilla Hubs' implementation to enable the use of full-body avatars and to have the full-body avatar transforms synchronized by WebRTC DataChannel.
## II Proof of Concept
### _Implementation_
#### Ii-A1 Accommodate Full-body Avatars in Mozilla Hubs
In the original implementation in Mozilla Hubs, avatars are hard-coded to contain limited bone names and hierarchy 1, and other avatars with different skeletons, including full-body avatars, are usually neither operable nor rendered properly. To address this, we prepared a bone mapping function that maps the bones by checking the similarity of their names with their corresponding body parts (e.g., LowerArm.R is mapped to the right elbow). Then we implemented Cyclic Coordinate Descent Inverse Kinematics [6] so that a full-body avatar can still reflect its user's poses naturally with limited inputs.
Footnote 1: [https://github.com/MozillaReality/hubs-avatar-pipelines](https://github.com/MozillaReality/hubs-avatar-pipelines)
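The bone-mapping step described above could look roughly like the following sketch. This is hypothetical Python: the alias lists, canonical part names, and example bone names are illustrative assumptions, not Mozilla Hubs' actual identifiers or matching logic.

```python
# Hypothetical sketch of name-based bone mapping; alias lists and part names
# are made up for illustration.
CANONICAL_BONES = {
    "head": ["head"],
    "spine": ["spine", "chest"],
    "left_elbow": ["lowerarm.l", "lowerarm_l", "leftforearm"],
    "right_elbow": ["lowerarm.r", "lowerarm_r", "rightforearm"],
    "left_knee": ["lowerleg.l", "leftleg"],
    "right_knee": ["lowerleg.r", "rightleg"],
}

def map_bones(avatar_bone_names):
    """Map arbitrary avatar bone names to canonical body parts by name similarity."""
    mapping = {}
    for name in avatar_bone_names:
        key = name.lower().replace(" ", "")
        for part, aliases in CANONICAL_BONES.items():
            if any(alias in key for alias in aliases):
                mapping[part] = name
                break
    return mapping

print(map_bones(["Head", "LowerArm.R", "LowerArm.L", "Spine01"]))
```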
#### Ii-A2 Synchronization of Full-body Avatar Transforms by WebRTC DataChannel
WebRTC (Web Real-Time Communication) 2 enables real-time data transmission between web browsers without passing through a central server. WebRTC DataChannel is capable of transmitting text or binary data; it is implemented on top of UDP while retaining reliability similar to TCP, and it incorporates DTLS to encrypt the data transmission.
Footnote 2: [https://webrtc.org/](https://webrtc.org/)
In our previous study [1], we introduced an alternative WebRTC SFU solution into Mozilla Hubs to enhance its audio transmission, and we also delegated the transmission of avatar transforms to DataChannel. In this current study, we continued to employ this delegation and extended it to encompass full-body avatars for the following reasons.
Firstly, latency on the synchronization of voices and gestures may cause inconsistency between verbal and non-verbal cues, impacting the communication quality and effectiveness. We argued that real-time alignment between verbal and non-verbal cues with minimal latency should be prioritized, and DataChannel holds the potential to address this concern.
Moreover, within the metaverse, avatar transforms can be regarded as sensitive personal data, especially when using full-body avatars which reproduce a user's real body poses, revealing more about the user's identity [7]. Leveraging WebRTC DataChannel allows us to encode avatar transforms transmission and avoid routing through central servers, thereby enhancing higher security.
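As a rough illustration of the kind of message a DataChannel would carry, the sketch below packs each bone's position and rotation into a compact binary payload. The message layout, bone IDs, and field choices are assumptions for illustration only, not Mozilla Hubs' actual wire format.

```python
import struct

# Illustrative packing of full-body avatar transforms into a compact binary
# message for a WebRTC DataChannel; format is a made-up example.
def pack_transforms(bones):
    """bones: list of (bone_id, x, y, z, qx, qy, qz, qw)."""
    payload = struct.pack("<B", len(bones))          # bone count header
    for bone_id, *floats in bones:
        payload += struct.pack("<B7f", bone_id, *floats)
    return payload

def unpack_transforms(payload):
    count, = struct.unpack_from("<B", payload, 0)
    offset, bones = 1, []
    for _ in range(count):
        bone_id, *floats = struct.unpack_from("<B7f", payload, offset)
        bones.append((bone_id, *floats))
        offset += struct.calcsize("<B7f")
    return bones

msg = pack_transforms([(3, 0.1, 1.5, 0.0, 0.0, 0.0, 0.0, 1.0)])
assert unpack_transforms(msg)[0][0] == 3
```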
### _Preliminary Validation in Data Transmission Latency_
A preliminary validation was conducted to measure the transmission latency between audio and avatar transforms within both the original Mozilla Hubs architecture and our proposed one. We hypothesized that our implementation reduces the time difference between avatar transforms and audio, resulting in higher consistency between audio and avatar transforms.
With one Apple MacBook serving as the Sender and one Windows 10 desktop computer serving as the Observer, the two devices were connected to the same room on self-hosted Mozilla Hubs (Fig. 1 and 2). For 5 minutes, the Sender repeatedly played a 3-second audio clip and synchronized circular movements of its avatar's left hand. The avatar's left hand positions were transmitted to the Observer, while the audio clip was captured by the Sender's microphone and also transmitted. Timestamps were recorded whenever the Observer received the audio or observed changes in the Sender's avatar.
Subsequently, we calculated the transmission latency between avatar transforms and audio using the recorded timestamps and found that avatar transforms were transmitted faster than audio in both conditions (Fig. 4). In our proposed implementation, the average latency was 257.64 ms (SD = 16.10 ms), while in the original implementation, it was 184.04 ms (SD = 49.81 ms). This suggests that, compared to the original implementation, our approach results in higher dispersion between audio and avatar transforms due to either slower audio transmission or faster synchronization of avatar transforms. Further investigation is needed to clarify this.
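The latency figures above could be derived from the recorded timestamps along the following lines. This is a minimal sketch: the timestamp values are made up and the pairing of audio and transform events is simplified.

```python
from statistics import mean, stdev

# Pair each audio-arrival timestamp with the corresponding avatar-transform
# timestamp and report mean/SD of the difference (values in milliseconds).
audio_ts = [1000, 4010, 7020]        # made-up example data
transform_ts = [760, 3805, 6800]

latencies = [a - t for a, t in zip(audio_ts, transform_ts)]  # >0: transforms arrived first
print(f"mean latency: {mean(latencies):.2f} ms, SD: {stdev(latencies):.2f} ms")
```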
Higher variation was also observed in latency in the original implementation, as evident from the graph and the larger standard deviation, indicating greater stability in our proposed implementation, which can also contribute to higher consistency between audio and avatar transforms.
Regarding the limitations of this validation, it is worth noting that the avatar's movement was triggered when the audio clip was played, not precisely when the Sender started the audio data transmission. This discrepancy might have contributed to slower audio data arrivals at the Observer side.
Lastly, it is essential to acknowledge that this preliminary validation involved only two devices in the same room. In rooms accommodating more users, latency in both audio transmission and avatar transforms synchronization may become more obvious, allowing for more meaningful comparisons.
## III Conclusion
We extended the functionality of Mozilla Hubs to support the use of full-body avatars and delegated full-body avatar transforms synchronization to WebRTC DataChannel. The result of a preliminary validation failed to demonstrate a more accurate synchronization but indicated more consistent time differentials between audio and avatar transforms in our implementation. To gain a clearer understanding of the latency improvement, further investigation is required under higher client load, and we are also planning a usability assessment for our implementation within an educational context.
## Acknowledgment
This work was partially supported by the following grants: JST Grant Number JPMJPF2202.
|
2301.13779 | FLAME: A small language model for spreadsheet formulas | Spreadsheets are a vital tool for end-user data management. Using large
language models for formula authoring assistance in these environments can be
difficult, as these models are expensive to train and challenging to deploy due
to their size (up to billions of parameters). We present FLAME, a
transformer-based model trained exclusively on Excel formulas that leverages
domain insights to achieve competitive performance while being substantially
smaller (60M parameters) and training on two orders of magnitude less data. We
curate a training dataset using sketch deduplication, introduce an
Excel-specific formula tokenizer, and use domain-specific versions of masked
span prediction and noisy auto-encoding as pre-training objectives. We evaluate
FLAME on formula repair, formula completion, and similarity-based formula
retrieval. FLAME can outperform much larger models, such as the Davinci (175B)
and Cushman (12B) variants of Codex and CodeT5 (220M), in 10 of 14 evaluation
settings for the repair and completion tasks. For formula retrieval, FLAME
outperforms CodeT5, CodeBERT, and GraphCodeBERT. | Harshit Joshi, Abishai Ebenezer, José Cambronero, Sumit Gulwani, Aditya Kanade, Vu Le, Ivan Radiček, Gust Verbruggen | 2023-01-31T17:29:43Z | http://arxiv.org/abs/2301.13779v2 | # FLAME: A small language model for spreadsheet formulas
###### Abstract
The widespread use of spreadsheet environments by billions of users presents a unique opportunity for formula-authoring assistance. Although large language models, such as Codex, can assist in general-purpose languages, they are expensive to train and challenging to deploy due to their large model sizes (up to billions of parameters). Moreover, they require hundreds of gigabytes of training data. We present FLAME, a T5-based model trained on Excel formulas that leverages domain insights to achieve competitive performance with a substantially smaller model (60M parameters) and two orders of magnitude less training data. We curate a training dataset using sketch deduplication, introduce an Excel-specific formula tokenizer for our model, and use domain-specific versions of masked span prediction and noisy auto-encoding as pretraining objectives. We evaluate FLAME on formula repair, formula auto-completion, and a novel task called syntax reconstruction. FLAME (60M) can outperform much larger models, such as Codex-Davinci (175B), Codex-Cushman (12B), and CodeT5 (220M), in 6 out of 10 settings.
## 1 Introduction
Despite a much larger user base, spreadsheet environments do not have access to nearly the same range of productivity tools as available for general programming environments. The latter typically have code completion, refactoring, linting, and a wide range of extensions for additional functionality, like generating tests, inserting code snippets, and summarizing code. Many of these advanced programming assistance tools are driven by advances in large language models trained on code (LLMCs). Codex [2] is used for code completion [16] and repair [14], AlphaCode [11] solves competitive programming problems, [12] built a code review system, and many other models show great performance in code related tasks [23, 15, 16].
To capture the complexity and variety of code and comments in different languages, these models need billions of parameters--the _smallest_ variant of Codex, used by GitHub Copilot, has 12 billion parameters. As a result, these models are trained for long periods on corpora containing millions of programs. For example, Incoder 6.7B used 159GB of code over a period of 24 days on 248 V100 GPUs. In addition to training costs, inference on large models is expensive due to extensive hardware requirements. For example, using Codex-Davinci to process 1000 tokens, including the prompt, costs $0.02 USD [17]. In a spreadsheet environment used by billions, these costs quickly add up.
In this paper, we present FLAME, a Formula LAnguage Model for Excel trained exclusively on Excel formulas. FLAME is based on T5-small [16] and has only 60 millions parameters, yet it can compete with much larger models (up to 175B parameters) on three formula authoring tasks: last-mile repair, formula auto-completion and syntax reconstruction. Syntax reconstruction is a novel task where all delimiters are removed from a formula, resulting in a flat stream
Figure 1: A summary of model comparisons in the fine-tuned setting for different formula assistance tasks. We show the results under a top-5 cutoff on a public Excel benchmark. Note that all Codex-Davinci results are few-shot, and Autocompletion is zeroshot for all systems except CodeT5. For Autocompletion, results represent the fraction of benchmarks successfully (based on a sketch match metric) completed given 90% of the prefix.
of tokens, and the model must recover the original formula.
Figure 1 shows a high-level summary of results as a function of model size on a public dataset, where FLAME can outperform larger models in all three tasks. Figure 2 provides real examples, solved by FLAME, for each of these tasks.
There are three main challenges involved in training a model for Excel formulas: obtaining diverse training data, tokenizing their unique structure, and pretraining with objectives that teach the model about this distinctive structure. Spreadsheets contain many duplicate formulas due to copying down formula cells. We reduced our corpus from 927M formulas down to 6.1M by comparing formulas based on syntax, creating 540MB of training data. We combine formula insights with byte pair encoding (BPE) to train an Excel-specific tokenizer. In addition to two generic objectives (tail-masking and denoising auto-encoding), we introduce two new pretraining objectives designed for formulas: language-aware masked span prediction and user-inspired denoising.
We extensively evaluate FLAME on three downstream tasks, showing that our proposed solutions to the modeling challenges significantly improve the performance of FLAME over T5-based models and can compete with much larger models. Specifically, we find that FLAME can outperform other models in 6 out of 10 settings in our evaluation.
We make the following contributions:
* We present FLAME, the first language model designed exclusively for Excel formulas (**SS3**). To this end, we introduce domain-specific dataset curation (**SS3.2**), tokenization (**SS3.3**), and pretraining objectives (**SS3.4**).
* We extensively evaluate FLAME on three formula assistance tasks: last-mile repair, formula autocompletion, and syntax reconstruction (**SS4.3**).
* We compare our performance to two variants of Codex (latest version of Cushman and Davinci) and CodeT5, and finetune Cushman for downstream tasks (**SS4.1**). We show that FLAME can outperform larger models in 6 out of 10 settings (**SS5.1**).
* We analyze the contribution of different design choices for FLAME (**SS5.2**,**SS5.3**)
## 2 Related Work
**Language models for code**
Multiple popular language model architectures have been successfully adapted to code. CodeBERT [12] trained BERT (encoder) on natural language and code. CodeT5 [23] trained T5 (encoder-decoder) on a similar corpus. Codex [2], PolyCoder [25], or CodeGen [21] are all trained variants of GPT (decoder). These models are trained on multiple programming languages and use pretraining objectives to understand or generate code and natural language, but do not adapt them for specific languages. In contrast, FLAME exploits a single domain to use domain-specific objectives, such as span masking that respects programming language tokens, to learn a better representation.

**Evaluating code models**
Many tasks have been presented to evaluate code models, and CodeXGLUE [15] bundles most of these. These tasks are categorized by the modality (text/code) of their input and output. FLAME is trained on formulas exclusively and is focused on formula tasks. We now describe related work for these tasks.

**Formula repair**
A popular code authoring task is repairing small mistakes. DeepFix [13], BIFI [26], Dr.Repair [27], and TFix [1] use deep learning to perform syntax, compilation, or diagnostics repair in general-purpose programming languages. LaMirage [1] generates repair engines for low-code languages and coins the term _last-mile repair_ for these types of fixes. RING [14] uses Codex to fix last-mile errors across multiple languages, but it requires additional information, such as examples of repairs and compiler messages.

**Formula autocompletion**
The generative nature of LLMCs makes them serve as code-completion engines. This feature has been shipped in commercial products, such as GitHub Copilot in Visual Studio Code [13] and IntelliCode in Visual Studio [20]. Spreadsheet-Coder [2] is a model designed for predicting simple formulas from context in the spreadsheet.

**Syntax reconstruction**
Syntax reconstruction, where all delimiters in a formula are removed, resembles component-based program synthesis, where partial programs are combined into a program that satisfies a specification. Components are provided by a user [15], generated by a model [16], or defined by an API [12].
## 3 FLAME: Approach
We now describe the FLAME architecture and how it overcomes the three key challenges (data, tokenization, and training) in pretraining a general language model for formulas.
### Architecture
To facilitate both formula understanding and generation, FLAME follows an encoder-decoder architecture based on T5 [19]. Encoder models like CodeBERT [12] show remarkable code understanding capabilities. Decoder models like CodeGen [21] and
Figure 2: We consider three downstream tasks: Last Mile Repair, Formula Autocompletion, and Syntax Reconstruction. Red and green colors denote the input and the expected output, respectively. Yellow text denotes the buggy part of the formula in the repair task, where the user has swapped the correct order of arguments resulting in a type error. Each task shows a case that FLAME successfully solves.
Codex [3] perform well on code generation. Encoder-decoder models seek to blend these strengths.
### Training Data
We start from a dataset of 927M formulas drawn from a corpus of 1.8M publicly available Excel workbooks.1 Each workbook contains one or more _worksheets_, and each worksheet contains zero or more formulas. Formulas in spreadsheets are often repeated with minor cell reference changes across rows or columns. For example, a user can drag a formula to another cell to repeat a computation on neighboring cell values.
Footnote 1: These workbooks were collected as part of a large Excel corpus planned for public release by a separate group of authors.
We compute formula sketches to preserve a single instance of each unique formula per _workbook_. In a formula sketch, numeric constants, string constants and cell references are replaced by their token type. For example, the sketch of =SUM(A1:A10) is =SUM(cell:cell). After applying sketch deduplication, we are left with 6.1M formulas. Note that applying this globally to the corpus, rather than per workbook, results in only 591K formulas. We found this globally deduplicated corpus to be insufficient for training as it skews the distribution of formulas --see evaluation (**SS5.2**) for details.
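A rough sketch of this per-workbook deduplication might look as follows. The regular expressions are simplified guesses that do not cover full Excel syntax (e.g., sheet references), and the placeholder names are illustrative.

```python
import re

# Simplified formula "sketching": strings, cell references, and numbers are
# replaced by placeholder token types before deduplication.
STR  = re.compile(r'"[^"]*"')
CELL = re.compile(r"\$?[A-Z]{1,3}\$?\d+")
NUM  = re.compile(r"\b\d+(\.\d+)?\b")

def sketch(formula: str) -> str:
    s = STR.sub("str", formula)
    s = CELL.sub("cell", s)
    s = NUM.sub("num", s)
    return s

def dedupe_workbook(formulas):
    seen, kept = set(), []
    for f in formulas:
        k = sketch(f)
        if k not in seen:
            seen.add(k)
            kept.append(f)
    return kept

print(sketch("=SUM(A1:A10)"))                         # =SUM(cell:cell)
print(dedupe_workbook(["=SUM(A1:A10)", "=SUM(A2:A11)"]))
```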
### Tokenizing Formulas
Tokenization is an essential part of language models [13]. A popular method for tokenization is byte pair encoding (BPE) [14]. BPE iteratively joins consecutive tokens that appear together most frequently until a target vocabulary size is reached. However, this procedure can have adverse effects on formulas. For example, SUM and ( are combined to get SUM(, which can reduce expressiveness and hurt performance for tasks like repair.
Our tokenizer considers punctuation, whitespace, built-in function names, and digits as individual tokens [15] and applies BPE [1] to the remaining parts of formulas, like string constants. Excel is case insensitive (with the exception of string contents) so we convert all input tokens to lowercase to map differently capitalized tokens to a single token. For example, without lowercasing, the same function SUM and sum will map to different tokens.
**Example 1**.: _A formula_

=SUMIF(B1:B5, "Not available", A1:A5)

_is tokenized as_

= sumif ( b 1 : b 5 , \(\cup\) " not \(\cup\) available " , \(\cup\) a 1 : a 5 )

_with space tokens denoted by \(\cup\)._
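A minimal sketch of such a tokenizer, without the BPE stage and with an illustrative (incomplete) punctuation set, could reproduce the example above roughly as follows:

```python
import re

# Punctuation, whitespace, and digits become single tokens; identifiers are
# lowercased. Characters not matched by the pattern are simply skipped here;
# the real tokenizer would additionally apply BPE to rare pieces.
PATTERN = re.compile(r'\s|\d|[=+\-*/(),:;!&<>%^{}"]|[A-Za-z_.]+')

def tokenize(formula: str):
    tokens = []
    for m in PATTERN.finditer(formula):
        tok = m.group(0)
        tokens.append("␣" if tok == " " else tok.lower())
    return tokens

print(tokenize('=SUMIF(B1:B5, "Not available", A1:A5)'))
```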
### Pretraining Objectives for Training
In this section, we describe the combination of generic and Excel-specific pretraining objectives, as summarized in Figure 3, that we use to train FLAME.
#### Masking objectives
We use two forms of masking to pre-train FLAME, an Excel-specific variant of masked span prediction (MSP), and a generic tail masking objective.
**Language-aware masked span prediction**
In contrast to traditional MSP, spans _must_ respect Excel lexer token boundaries. For example, when an Excel cell reference BC18 is divided into four tokens B C 1 8, we ensure that either all or none of its constituent tokens is masked. Consecutive masked tokens are represented with a single <mask> token. Inspired by Mixture-of-Denoisers [15], we mask spans of tokens using combinations of high (35%) and low (15%) masking rates, and big (6 tokens) and small (2 tokens) average span lengths.
**Generic tail masking**
We perform tail masking at the character level and allow partial masks of complete tokens. We keep the leading \(\{30\%,40\%,\cdots,70\%\}\) tokens of the input sequence and append a <mask> token.
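The language-aware masking could be sketched as follows. The span-sampling details here are simplified assumptions; the point is only that masking operates on whole lexer tokens and collapses each span to a single <mask>.

```python
import random

# Minimal sketch of span masking over *lexer* tokens, so a multi-piece item
# such as a cell reference is masked entirely or not at all.
def mask_spans(lexer_tokens, mask_rate=0.15, mean_span=2, seed=0):
    rng = random.Random(seed)
    out, i = [], 0
    while i < len(lexer_tokens):
        if rng.random() < mask_rate / mean_span:
            span = max(1, round(rng.gauss(mean_span, 1)))
            out.append("<mask>")          # the whole span collapses to one token
            i += span
        else:
            out.append(lexer_tokens[i])
            i += 1
    return out

print(mask_spans(["=", "sum", "(", "a1", ":", "a10", ")"]))
```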
#### Noisy Auto-encoding
Previous work in natural language processing has used denoising auto-encoding during pretraining [11]. We incorporate two such objectives in FLAME.
**Random noise**
We introduce generic noise by randomly inserting, deleting, or updating tokens in the input sequence. The insertion and update operators randomly sample a token from the vocabulary.
**Excel-specific user-inspired noise**
We introduce noise operators that mirror mistakes that real users might make when writing Excel formulas. For example, users often write formulas with the incorrect function arity for in-built functions such as SUMIF. We implement 17 noise operators (Appendix A)
Figure 3: Four pretraining objectives used by FLAME. For each batch, we randomly (with weighted probability) choose one of the four objectives. Generic objectives (tail masking and random noise) are shown with a yellow header, while formula-specific variants (language-aware span masking and user-inspired noise) are shown with a green header. We depict inserted tokens with red and deleted tokens with blue.
based on a combination of help forum and code analysis. We randomly choose one of these noise operators when introducing noise into an input sequence.
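Two illustrative noise operators are sketched below. These specific corruptions are plausible guesses at common user mistakes, not the actual 17 operators listed in the paper's appendix.

```python
import random

# Hypothetical "user-inspired" noise operators applied to a well-formed formula.
def drop_closing_paren(formula):
    i = formula.rfind(")")
    return formula if i < 0 else formula[:i] + formula[i + 1:]

def swap_argument_separator(formula):
    return formula.replace(",", ";", 1)   # e.g. a locale-style separator mix-up

NOISE_OPS = [drop_closing_paren, swap_argument_separator]

def corrupt(formula, seed=0):
    return random.Random(seed).choice(NOISE_OPS)(formula)

print(corrupt('=SUMIF(B1:B5, "x", A1:A5)'))
```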
Note that for all pretraining objectives, FLAME needs to generate a _complete_ formula (rather than just mask values).
**Combining pretraining objectives**
Rather than applying all pretraining objectives on every batch and then combining losses, we pick a single objective for each batch. We use the following probabilities {MSP: 50%, tail masking: 20%, user-inspired denoising: 20%, random denoising: 5%} for choosing the objective to be applied, and with a 5% probability, we leave the sequence intact.
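Per-batch objective selection with the probabilities above could be as simple as the following sketch (the objective names are illustrative labels):

```python
import random

# Sample one pretraining objective per batch with the weights given in the text.
OBJECTIVES = {
    "masked_span_prediction": 0.50,
    "tail_masking":           0.20,
    "user_inspired_noise":    0.20,
    "random_noise":           0.05,
    "identity":               0.05,   # leave the sequence intact
}

def pick_objective(rng=random):
    names, weights = zip(*OBJECTIVES.items())
    return rng.choices(names, weights=weights, k=1)[0]

print(pick_objective(random.Random(0)))
```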
## 4 Experimental Setup
We now describe our experimental setup. We start with the baseline models we compare against **(SS4.1)**, the training setup **(SS4.2)**, and then detail each downstream task in our evaluation, along with their corresponding datasets **(SS4.3)**.
### Baselines and Configurations
We compare FLAME to the following much larger language models, summarized in Table 1:
* CodeT5: a **220 million** parameter T5-based encoder-decoder model trained on both natural language and code. We present fine-tuned results.
* Codex-Cushman: a **12 billion** parameter autoregressive, decoder-only, GPT-3-based model trained on both natural language and code. We present both zeroshot and fine-tuned results.
* Codex-Davinci: a **175 billion** parameter autoregressive, decoder-only, GPT-3-based model trained on both natural language and code. We present zeroshot and few-shot results. We do not have resources to fine-tune Davinci.
For Codex-based baselines, we use nucleus sampling [1] (temperature=0.7) and sample 50 sequences per task. We sort these sequences based on their average token log probabilities following [14]. We detail the prompts in Appendix B. For CodeT5, we use beam search with a beam width of 50, and we consider the top 50 sequences.
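The ranking of sampled candidates by average token log-probability amounts to the following sketch; the candidate formulas and log-probabilities are made up for illustration.

```python
from math import fsum

# Sort sampled sequences by their average token log-probability (higher first).
candidates = [
    ("=SUM(A1:A10)", [-0.1, -0.2, -0.05]),
    ("=SUM(A1,A10)", [-0.3, -0.9, -0.40]),
]

def avg_logprob(token_logprobs):
    return fsum(token_logprobs) / len(token_logprobs)

ranked = sorted(candidates, key=lambda c: avg_logprob(c[1]), reverse=True)
print([formula for formula, _ in ranked][:5])   # top-5 cutoff
```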
### Training Details
We pretrain FLAME for 10 epochs and finetune CodeT5 and FLAME on a cluster with 16 AMD MI200s, 96 cores and 900 GB RAM. We finetune FLAME for 2 epochs for each downstream task and finetune CodeT5 for 25 epochs with a patience of 5 epochs. We carry out all Codex experiments on a cluster with 8 V100s, 40 cores, and 672 GB RAM. For Codex finetuning we use low-rank adaptation (LoRA) [13]. Refer to Appendix C for more details.
### Downstream Tasks
We consider three different downstream tasks.
**Last-mile Repair**
Last-mile repair refers to repairs that require few edits and fix syntax and simple semantic errors, such as wrong function call arity. In this setting, FLAME is given the buggy formula as the input sequence, and the task is to generate the user's intended (and syntactically correct) formula without any last-mile error.
**Example 2**.: _The user has used the wrong call arity for ISERROR. Red highlights the error in the buggy formula, and green denotes the required edit to match the groundtruth._
_Buggy Formula: =IF(ISERROR(GG *1.2, "# ) )_

_Groundtruth Formula: =IF(ISERROR(GG *1.2, ") )_

**Fine Tuning**
We create a finetuning dataset for all systems by taking 200K well-formed formulas from Excel help forums. We then randomly apply our user-inspired noise operators to generate broken versions.
**Evaluation Metric**
We compute an exact match with respect to the ground truth repair. We consider the top 1 and top 5 candidates produced by each system per formula and report the exact match fraction.
**Benchmarks**
We evaluate all systems on two benchmarks. We use the collection of 273 labeled Excel formulas used in recent last-mile repair literature [15]. The authors sourced these formulas from Excel help forums. We refer to this benchmark set as **Forum**.
We also reserve a split of randomly sampled 500 formulas derived using the same procedure as our finetuning dataset to create a **Test** benchmark set.
**Autocompletion**
Code completion is a popular task for language models trained on code, both due to its autoregressive nature and the practical value of code completion as a feature in developers' workflows. In this setting, FLAME is given a formula prefix, and the task is to generate the complete formula.
**Example 3**.: _Formula Autocompletion_
_Formula Prefix: =B2\(\leftarrow\)EDATE(_
_Formula Completion: =B2\(\leftarrow\)EDATE(TODAY(),-33)_
**Fine Tuning**
We curated a finetuning dataset for autocompletion by splitting 189k formulas and sampling a prefix length of \(\{0.2,\cdots,0.7,0.8\}\) fraction of tokens.
**Evaluation Metric**
When completing formulas, some parts can be hard to predict due to lack of context [11], such as cell references, sheet names, string literals, and numerics. Therefore, in addition to **exact match**, we also consider **sketch match** for autocompletion with respect to the ground truth. Precisely, for sketch match, we use the same sketch procedure described in **SS3**. This uses the Excel lexer
\begin{table}
\begin{tabular}{l l r} \hline \hline System & Architecture & Number of parameters \\ \hline Codex-Cushman & Decoder & 12 billion \\ Codex-Davinci & Decoder & 175 billion \\ CodeT5 (base) & Encoder-Decoder & 220 million \\ FLAME (ours) & Encoder-Decoder & 60 million \\ \hline \hline \end{tabular}
\end{table}
Table 1: Architecture and size comparison of baselines and FLAME
to tokenize a formula and preserves built-in function names but replaces all other tokens with their token type. We then compare the sketches of the formulas for a match. For instance, in Example 3, predicting the numeric \(-33\) is highly contextual, so in a sketch we match with its token type, Numeric.
**Benchmarks**
We evaluate autocompletion on a single benchmark, consisting of the 273 ground truth formulas from the Forum last-mile repair benchmark. For each formula, given the exact match or sketch match metric, we predict completions at 0.25, 0.5, 0.75, 0.90 and 0.99 fractions of the formula prefix.

**Syntax Reconstruction**
We introduce a new task that we term syntax reconstruction. The input to this task consists of Excel formulas which we have processed to remove any delimiters, resulting in a flat stream of lexer tokens. Excel delimiters are defined to be the following set of tokens: ( ) ! , ; { }. The model is then tasked with generating the original formula with appropriate delimiters.
Example 4: _Syntax Reconstruction given the excel tokens._
_Tokens: MAX 0 MOD C10 - B10 1 - D10 Reconstruction: MAX(0,MOD(C10-B10,1)-D10)_
Since, by definition, syntax reconstruction cannot introduce tokens into the output that are not delimiters or not in the original input token stream, FLAME employs constrained decoding to greedily remove invalid candidates from the search space. Our tokenizer design, particularly splitting on punctuation, makes this decoding strategy easier to implement.
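A greedy filter in the spirit of this constrained decoding could look like the sketch below. The delimiter set and the token bookkeeping are simplified assumptions rather than FLAME's actual decoder logic.

```python
from collections import Counter

# A candidate token is kept only if it is a delimiter or still "available"
# from the input token stream.
DELIMITERS = {"(", ")", "!", ",", ";", "{", "}"}

def allowed_next_tokens(input_tokens, emitted_tokens, vocabulary):
    remaining = Counter(input_tokens) - Counter(t for t in emitted_tokens
                                                if t not in DELIMITERS)
    return {t for t in vocabulary if t in DELIMITERS or remaining[t] > 0}

vocab = {"max", "(", ")", ",", "0", "c10", "-", "b10", "1", "d10", "sum"}
print(allowed_next_tokens(["max", "0", "c10"], ["max", "("], vocab))
# 'sum' and the already-used 'max' are filtered out; delimiters and the
# unused input tokens remain allowed.
```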
**Fine Tuning**
We curate a finetuning dataset by sampling 200k formulas from the publicly available Excel corpus that we used for FLAME's pretraining. We keep the subset that contains at least one delimiter (139k) and remove all delimiters.

**Evaluation Metric**
We compute an exact match with respect to the ground truth and consider the top 1 and top 5 candidates produced by each system per formula.

**Benchmarks**
We derive a benchmark set from the last-mile repair benchmarks by removing the delimiters for every groundtruth formula. We refer to this benchmark as **Forum**.
Finally, we also consider a **Test** split that reflects the same preparation as the fine tuning dataset.
## 5 Evaluation
We explore the following research questions in our evaluation:
* **RQ1:** How does FLAME perform on formula intelligence tasks compared to substantially larger language models?
* **RQ2:** How do pretraining design decisions such as data curation, model size, pretraining objectives, and tokenizer affect FLAME's downstream performance?
* **RQ3:** How do various decoding strategies affect different downstream-task performances for FLAME?
### RQ1: Larger Language Models
We now compare FLAME to substantially larger language models on our three formula intelligence tasks.
**Last Mile Repair and Syntax Reconstruction**
We finetune FLAME, CodeT5, and Codex-Cushman for last-mile repair and syntax reconstruction, and use few-shot prompts with three shots for Codex Davinci. Although one of
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{3}{*}{Model} & \multicolumn{4}{c}{Last Mile Repair} & \multicolumn{4}{c}{Syntax Reconstruction} \\ \cline{2-9} & \multicolumn{2}{c}{Forum} & \multicolumn{2}{c}{Test} & \multicolumn{2}{c}{Forum} & \multicolumn{2}{c}{Test} \\ \cline{2-9} & T@1 & T@5 & T@1 & T@5 & T@1 & T@5 & T@1 & T@5 \\ \hline Cushman & **0.79** & 0.88 & **0.87** & **0.93** & 0.70 & 0.80 & 0.84 & **0.91** \\ Davinci (FS) & 0.76 & 0.89 & 0.54 & 0.77 & 0.62 & 0.77 & 0.61 & 0.73 \\ CodeT5 (220M) & 0.70 & 0.84 & 0.84 & 0.90 & 0.70 & 0.84 & 0.82 & 0.89 \\ CodeT5 (60M) & 0.72 & 0.83 & 0.82 & 0.89 & 0.65 & 0.81 & 0.83 & 0.89 \\ FLAME & 0.76 & **0.89** & 0.83 & 0.91 & **0.75** & **0.89** & **0.84** & 0.89 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Fine-tuned performance for Last Mile Repair and Syntax reconstruction tasks. Codex-Davinci uses few-shots and is denoted by an FS suffix). FLAME outperforms larger models at last-mile repair in the Forum benchmark at top-5, and comes in second at top-1. In syntax reconstruction, FLAME outperforms all models at both cutoffs in the Forum benchmark. **Bold** denotes best performing model and Underline represents second best.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{4}{c}{**Exact Match**} & \multicolumn{4}{c}{**Sketch Match**} \\ \cline{2-11} & **0.25** & **0.50** & **0.75** & **0.90** & **0.99** & **0.25** & **0.50** & **0.75** & **0.90** & **0.99** \\ \hline Cushman & 0.0 & 0.04 & 0.27 & 0.61 & 0.86 & **0.12** & **0.26** & 0.47 & 0.71 & 0.86 \\ Davinci & 0.0 & 0.03 & 0.31 & 0.64 & 0.85 & 0.10 & 0.25 & 0.53 & 0.76 & 0.85 \\ CodeT5 & 0.0 & 0.02 & 0.10 & 0.27 & 0.21 & 0.03 & 0.09 & 0.20 & 0.39 & 0.22 \\ FLAME & **0.01** & **0.06** & **0.34** & **0.70** & **0.93** & 0.10 & 0.24 & **0.55** & **0.84** & **0.94** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Zeroshot autocompletion performance of FLAME, Codex-Cushman and Codex-Davinci, and fine-tuned CodeT5 (as denoted by FT suffix). Given \(\{0.25,0.50,0.75,0.90,0.99\}\) fraction of formula prefix, we report the proportion of formulas completed in the top 5. We observe that FLAME outperforms all the large language models in the exact match setting and most (3/5) of the sketch match settings. **Bold** denotes best performing model and Underline represents second best.
our pretraining objectives closely resembles last-mile repair (noisy auto-encoding) we find that finetuning FLAME helps direct it towards a particular task.
We summarize the results in Table 2 and observe that on the Forum last-mile repair benchmark FLAME outperforms all models at top-5, and is second best to Codex-Cushman at top-1. In the Test benchmark, we find that FLAME is second-best to Codex-Cushman at top-5 and is close to CodeT5's second-best performance at top-1. In the Test benchmark, Davinci's performance is substantially worse than the fine-tuned models.
On further analysis, we found that all models solve 73% of the Forum benchmark. FLAME solves 4% of the benchmarks that no other model solves and fails on 1% of the benchmarks that all other models fix. FLAME also generates syntactically correct formulas for 98% of the benchmarks in top 5. In Figure 4, we show examples where FLAME gets the correct fix, and other models do not, and vice versa. We note that in some cases, FLAME's fixes appear to be more natural, but fail to match the user's ground truth repair.
For syntax reconstruction Forum, we find that FLAME outperforms other models across the top-1 and top-5. Interestingly, CodeT5 also solves more syntax reconstruction tasks than both Codex models. We hypothesize that since syntax reconstruction is a new task, as compared to the more traditional repair problem, after fine-tuning, encoder-decoder models perform better than decoder-only models, as shown by [14]. In Test, we find that FLAME performs similar to Codex-Cushman (same at top-1 and -2 points lower at top-5).
We find that 54% of the Forum syntax reconstruction benchmarks are solved by all the models, 1% is solved only by FLAME, and there are no benchmarks that all other models solve but FLAME doesn't. We attribute this performance to our pretraining design choices. First, FLAME learns to generate syntactically correct code as a result of its noisy auto-encoding pretraining objective. Second, FLAME learns the natural distribution of formulas by generating complete sequences during pretraining, rather than just mask values and sentinel tokens.
**Zeroshot Performance**
FLAME's pretraining objectives allow us to consider zeroshot performance for both last-mile repair and syntax reconstruction. In Table 4, we observe that FLAME outperforms Codex models for last-mile repair across all benchmarks. We attribute this to the closeness of our noisy auto-encoding pretraining objectives and the last-mile repair task. We find that in the syntax reconstruction task, FLAME outperforms Codex-Cushman. We believe this is because syntax reconstruction can be considered an extreme case of repair.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{3}{*}{Model} & \multicolumn{6}{c}{Last Mile Repair} & \multicolumn{6}{c}{Syntax Reconstruction} \\ \cline{2-10} & \multicolumn{3}{c}{Forum} & \multicolumn{3}{c}{Test} & \multicolumn{3}{c}{Forum} & \multicolumn{3}{c}{Test} \\ \cline{2-10} & T@1 & T@5 & T@1 & T@5 & T@1 & T@5 & T@1 & T@5 \\ \hline Cushman & 0.55 & 0.85 & 0.41 & 0.63 & 0.27 & 0.53 & 0.23 & 0.46 \\ Davinci & 0.60 & 0.82 & 0.51 & 0.75 & **0.51** & **0.65** & 0.31 & 0.45 \\ FLAME & **0.71** & **0.88** & **0.74** & **0.85** & 0.41 & 0.53 & **0.50** & **0.58** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Zeroshot last-mile repair and syntax reconstruction performance of FLAME and Codex models. FLAME outperforms all the larger models in Last Mile Repair task and solves more benchmarks than Codex-Cushman for the Syntax Reconstruction task. **Bold** denotes best performing model and Underline represents second best.
Figure 4: Repair tasks with diverging performance. In Example 1, the user did not use the AND function and missed double quotes around string literals yes and no. FLAME fixes this (in top-5), while other models fail. In Example 2 FLAME’s top candidate is syntactically valid but does not match the user’s fix, while other models’ predictions do.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{3}{*}{Model} & \multicolumn{6}{c}{Zeroshot} & \multicolumn{6}{c}{Finetuned} \\ \cline{2-10} & LMR & SR & AC (EM) & AC (SM) & \multicolumn{3}{c}{LMR} & SR \\ \cline{2-10} & Forum & Test & Forum & Test & 0.75 & 0.90 & 0.75 & 0.90 & Forum & Test & Forum & Test \\ \hline FLAME (60M) & **0.71** & **0.74** & **0.41** & **0.50** & **0.34** & **0.70** & **0.55** & **0.84** & **0.76** & **0.83** & **0.75** & **0.84** \\ \hline FLAME (16M) & 0.68 & 0.64 & 0.23 & 0.42 & 0.24 & 0.59 & 0.54 & 0.76 & 0.73 & 0.78 & 0.73 & 0.78 \\ Global Deduplication & 0.57 & 0.56 & 0.16 & 0.2 & 0.15 & 0.45 & 0.41 & 0.59 & 0.68 & 0.76 & 0.73 & 0.81 \\ T5 (Generic objectives and tokenizer) & 0.11 & 0.12 & 0.02 & 0.05 & 0.07 & 0.22 & 0.25 & 0.37 & 0.62 & 0.82 & 0.49 & 0.74 \\ \hline \hline \end{tabular}
\end{table}
Table 5: We compare multiple pretraining design decisions: model size, pretraining data curation, domain-specific pretraining objectives and tokenizer. We consider at top-1 for Last-Mile Repair (LMR) and Syntax Reconstruction (SR) and top-5 for Autocompletion (AC) with Exact Match (EM) and Sketch Match (SM). For details refer to Appendix D. Smaller model performs worse across the board. Curating data with global deduplication reduces performance by up to 30 points. Removing domain-specific objectives and tokenizer impacts performance most.
### Formula Autocompletion
The autoregressive nature of Codex models and FLAME's pretraining objectives allows us to evaluate their zeroshot performance2 for formula auto-completion. Note that we fine-tune CodeT5 for this task as it is pretrained on smaller span lengths (1 to 5 tokens) and generates special mask tokens (e.g., <MASK1>) in a zeroshot setting. We compute exact match and sketch match metrics with top-5 results.
Footnote 2: We finetuned Codex-Cushman and FLAME but observed worse performance, possibly from over-fitting.
In Table 3, we observe that FLAME performs better than all the larger models on the exact match metric and 3 out of 5 prefix lengths for sketch match. We note that Codex-Cushman and Codex-Davinci fail to complete 14% and 15% of the benchmarks with 0.99 fraction of the prefix, respectively, whereas FLAME fails to complete 6% of the benchmarks. We observe significantly lower performance by CodeT5, likely due to the lack of longer masks spans during pretraining. Surprisingly, Codex-Davinci performs slightly worse than the smaller Codex-Cushman for 3 out of 5 prefix lengths. Inspection of completions shows that Codex-Davinci tends to generate more tokens than required when completing these benchmark tasks. We also observe cases where models succeed with a shorter prefix but fail given a longer prefix.
### RQ2: Pretraining design decisions
We investigate FLAME's data curation, model size, the use of domain-specific pretraining objectives, and domain-specific tokenizer, and present results in Table 5.
#### 5.2.1 Training data curation
Previous work [10, 11] has shown that deduplication can improve the performance of language models and reduce the memorization of training data. Therefore, we curate a pretraining dataset by performing workbook-level sketch-based formula deduplication. Alternatively, one might consider performing global (pooled across all workbooks) sketch-based deduplication. This alternative results in a pretraining set of 591K formulas. Table 5 shows that training on this smaller corpus results in a lower-performance model. We find that FLAME's zeroshot performance falls by 14 points and finetuned performance falls by 18 points for last-mile repair in Forum benchmarks.
#### 5.2.2 Model size
We trained two variants of FLAME with 16M and 60M parameters. Table 5 compares FLAME-16M and FLAME-60M. We find that performance declines slightly across tasks/benchmarks when we reduce the model size to 16M. However, note that FLAME-16M can still outperform larger models such as Codex in 5 out of 10 zeroshot and finetuned settings, highlighting the efficacy of our design choices for FLAME.
#### 5.2.3 Pretraining objectives and Tokenizer
To evaluate the effectiveness of our domain-specific pretraining objectives and tokenizer, we pretrained a 60M parameter T5 model with generic pretraining objectives and tokenizer. Specifically, this model uses tail-masking, masked span prediction without accounting for lexer token boundaries, and random denoising objectives. Additionally, it uses the CodeT5 tokenizer trained on our pretraining data. Table 5 shows that this variant performs worse across all tasks and benchmarks, both in a zeroshot and finetuned setting. We attribute the huge drop, up to 62 points, in last-mile repair tasks in zeroshot to our user-inspired denoising pretraining objective. Moreover, we hypothesize that FLAME's good syntax reconstruction performance can be attributed to the domain-specific tokenizer. Figure 5 illustrates how the generic tokenizer treats tokens with different capitalizations, resulting in incorrect generation.
### RQ3: Decoding strategy
In Table 6, we evaluate FLAME using four different decoding strategies, Beam Search, Group Beam Search [13], Nucleus Sampling [15] and Top K Sampling [14]. We find FLAME to perform better with group beam search decoding (group size of 2) for all the formula intelligence tasks. However, for autocompletion with sketch match, nucleus sampling showed superior performance. We believe this is because autocompletion requires more diverse results, particularly at shorter prefixes. Refer to Appendix E for autocompletion table.
## 6 Conclusions and Future Work
We present FLAME, a small (60M parameter) language model for spreadsheet formulas, which captures domain-specific properties in its data curation, tokenization, and pretraining objectives. We implemented FLAME for Excel formulas and evaluate on three downstream tasks: last-mile repair, autocompletion, and a novel task that we term syntax reconstruction. We compare with the much larger models CodeT5, Codex-Cushman, and Codex-Davinci. When fine-tuned, FLAME can achieve top performance in 6 of our 10 experimental settings, despite having two orders of magnitude fewer parameters.
Future work will explore downstream tasks that require additional spreadsheet context (e.g. tables). To tackle such tasks we will explore extending our pretraining objectives to incorporate context and the extent to which FLAME can integrate with existing table encoder models.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Decoding Method**} & \multicolumn{2}{c}{**LMR (Forum)**} & \multicolumn{2}{c}{**SR (Forum)**} \\ \cline{2-5} & **T@1** & **T@5** & **T@1** & **T@5** \\ \hline Beam Search & 0.76 & 0.88 & **0.75** & **0.89** \\ Group Beam & **0.76** & **0.89** & 0.75 & 0.89 \\ Nucleus Sampling & 0.72 & 0.85 & 0.7 & 0.84 \\ Top K & 0.67 & 0.86 & 0.67 & 0.84 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance by decoder strategy for last mile repair (LMR) and syntax reconstruction (SR). Beam and Grouped Beam Search have similar performance, and outperform Nucleus, Top K Sampling.
Figure 5: Failing case of syntax reconstruction. Due to the different capitalization of Sum and SUM, the model treats them as different tokens, converting them to an identifier and a function, respectively.
## Acknowledgments
We thank Microsoft Research Cambridge for sharing the Excel corpus used for pretraining FLAME. We thank OCTO at Microsoft (in particular Gopi Kumar and the AMD vTeam) for providing us with compute resources. We also thank the Excel team for their feedback and encouragement in pursuing this work.
|
2309.10452 | Essential cohomology modules | In this article, we give a generalization to injective modules by using
$e$-exact sequences introduced by Akray in [1] and name it $e$-injective
modules and investigate their properties. We reprove both Baer criterion and
comparison theorem of homology using $e$-injective modules and $e$-injective
resolutions. Furthermore, we apply the notion $e$-injective modules into local
cohomology to construct a new form of the cohomology modules call it essential
cohomology modules (briefly $e$-cohomology modules). We show that the torsion
functor $\Gamma_a ( - )$ is an $e$-exact functor on torsion-free modules. We
seek about the relationship of $e$-cohomology within the classical cohomology.
Finally, we conclude that they are different on the vanishing of their $i_{th}$
cohomology modules. | Runak H. Mustafa, Ismael Akray | 2023-09-19T09:12:22Z | http://arxiv.org/abs/2309.10452v1 | # Essential cohomology modules
###### Abstract.
In this article, we give a generalization to injective modules by using \(e\)-exact sequences introduced by Akray in [1] and name it \(e\)-injective modules and investigate their properties. We reprove both Baer criterion and comparison theorem of homology using \(e\)-injective modules and \(e\)-injective resolutions. Furthermore, we apply the notion \(e\)-injective modules into local cohomology to construct a new form of the cohomology modules call it essential cohomology modules (briefly \(e\)-cohomology modules). We show that the torsion functor \(\Gamma_{a}(-)\) is an \(e\)-exact functor on torsion-free modules. We seek about the relationship of \(e\)-cohomology within the classical cohomology. Finally, we conclude that they are different on the vanishing of their \(i_{th}\) cohomology modules.
Key words and phrases:essential exact sequence, homology module, essential injective module, local cohomology module 2020 Mathematics Subject Classification: 13C11, 46M18, 13D45
## 1. Introduction
Exact sequences play an important roles in module theory and homological algebra. Some notions such as injectivity, projectivity, flatness and derived functors have been defined and analyzed by exact sequence approach. Generalizing exact sequences gives possibilities to generalize some related notions which are defined by exact sequences. An exact sequence \(0\to A\xrightarrow{i}B\xrightarrow{p}C\to 0\) is split if there exists a morphism \(j:C\to B\) (or \(f:B\to A\)) such that \(pj=I_{C}\) (or \(fi=I_{A}\)). In 1972, R. S. Mishra introduced a generalization for split sequence where a semi-sequence \(M_{i-1}\xrightarrow{f_{i-1}}M_{i}\xrightarrow{f_{i}}M_{i+1}\) is called semi-split if \(Ker(f_{i})\) is a direct summand of \(M_{i}\)[7]. So a semi-split is split if and only if it is exact. In 1999, Davvaz and parnian-Goramaleky introduced a generalization for exact sequences called it a \(U\)-exact sequence, where a sequence of \(R\)-modules and \(R\)-homomorphisms \(\cdots\to M_{i-1}\xrightarrow{f_{i-1}}M_{i}\xrightarrow{f_{i}}M_{i+1}\to\dots\) is a \(U_{i+1}\)-exact at \(M_{i}\) if \(Im(f_{i})=f_{i+1}^{-1}(U_{i+1})\), where \(U_{i+1}\) is a submodule of \(M_{i+1}\)[5]. Also, a short sequence \(0\to A\xrightarrow{f_{i}}B\xrightarrow{g}C\to 0\) of \(R\)-modules is a \(U\)-exact if it is \(0\)-exact at \(A\), \(U\)-exact at \(B\) and \(0\)-exact at \(C\), equivalently, if \(f\) is monic, \(g\) is epic and \(Imf=g^{-1}(U)\) for a submodule \(U\) of
\(C\).
Let \(G\) be a group of \(n\)-dimensional algebraic groups. Let \(G\) be a group of \(n\)-dimensional algebraic groups.
Section three devoted for discussing the application of \(e\)-exact sequence to local cohomology modules. We construct cohomology modules by using \(e\)-injective resolutions and called the \(i_{th}\)-local cohomology of an \(R\)-module \(M\) by \({}_{e}H^{i}_{a}(M)\). We prove that \({}_{e}H^{0}_{a}()\) is naturally equivalent to \(r\Gamma_{a}(\ )\) on torsion-free \(R\)-modules (Theorem 3.7). We study some cases of vanishinig of \(e\)-cohomology modules (Theorem 3.4 and Theorem 3.5). Furthermore, we show that by an example that not necessary an \(a\)-torsion \(R\)-module \(M\) has zero \(e\)-cohomology modules \({}_{e}H^{i}_{a}(M)=0\), for \(i>0\) as this is the case in local cohomology (Example 3.11).
## 2. Essential injective modules
In this section we introduce essential injective module and investigate some properties and results on such topic. We begin with their definition.
**Definition 2.1**.: We call an \(R\)-module \(E\) essential injective briefly e-injective if it satisfies the following condition: for any monic \(f_{1}:A_{1}\to A_{2}\) and any map \(f_{2}:A_{1}\to E\) of \(R\)-modules, there exist \(0\neq r\in R\) and \(f_{3}:A_{2}\to E\) such that \(f_{3}f_{1}=rf_{2}\).
(Diagram: \(0\to A_{1}\xrightarrow{f_{1}}A_{2}\), together with \(f_{2}:A_{1}\to E\) and the essential extension \(f_{3}:A_{2}\to E\).)
In this case, we say that the map \(f_{3}\) essentially extends the map \(f_{2}\).
Now, we give one of our main results in this section which is the Baer criterion for \(e\)-injectives.
**Theorem 2.2**.: _[e-Baer Criterion] An \(R-\)module \(E\) is e-injective if and only if for a nonzero ideal \(\boldsymbol{a}\) of \(R\), every \(R\)-map \(f:\boldsymbol{a}\to E\) can be extended essentially to \(g:R\to E\) in the sense that there exists \(0\neq r\in R\) with \(gi=rf\), where \(i:\boldsymbol{a}\to R\) is an inclusion map as in the
following diagram._
Proof.: Since any ideal of \(R\) can be considered as a submodule of the \(R\)-module \(R\), the existence of an extension \(g\) of \(f\) is just a special case of the definition of e-injectivity of \(E\). Suppose we have the following diagram, where \(A\) is a submodule of \(B\) and \(i\) is the inclusion map:
Let \(X\) be the set of all ordered pairs \((A^{\prime},g^{\prime})\), where \(A\subseteq A^{\prime}\subseteq B\) and \(g^{\prime}:A^{\prime}\to E\) essentially extends \(f\): that is, \(g^{\prime}|_{A}=rf\) for some nonzero \(r\) in \(R\). Partially order \(X\) by defining \((A^{\prime},g^{\prime})\preceq(A^{\prime\prime},g^{\prime\prime})\) to mean \(A^{\prime}\subseteq A^{\prime\prime}\) and \(g^{\prime\prime}\) essentially extends \(g^{\prime}\). Chains in \(X\) have upper bounds in \(X\), hence by Zorn's Lemma there exists a maximal element \((A_{0},g_{0})\) in \(X\). If \(A_{0}=B\), we are done; if not, there is \(b\in B\) with \(b\notin A_{0}\). Define \(\mathbf{a}=\{r\in R:rb\in A_{0}\}\); it is clear that \(\mathbf{a}\) is an ideal in \(R\). Define \(h:\mathbf{a}\to E\) by \(h(r)=g_{0}(rb)\). By hypothesis, there is a map \(h^{*}:R\to E\) that essentially extends \(h\). Finally, define \(A_{1}=A_{0}+<b>\) and \(g_{1}:A_{1}\to E\) by \(g_{1}(a_{0}+rb)=g_{0}(a_{0})+rh^{*}(1)\), where \(a_{0}\in A_{0}\) and \(r\in R\). Let us show that \(g_{1}\) is well-defined. If \(a_{0}+rb=a_{0}^{\prime}+r^{\prime}b\), then \((r-r^{\prime})\in\mathbf{a}\). Therefore, \(g_{0}((r-r^{\prime})b)\) and \(h(r-r^{\prime})\) are defined, and we have \(g_{0}(a_{0}^{\prime}-a_{0})=g_{0}((r-r^{\prime})b)=h(r-r^{\prime})=h^{*}(r-r^{\prime})=(r-r^{\prime})h^{*}(1)\). Thus \(g_{0}(a_{0}^{\prime})-g_{0}(a_{0})=rh^{*}(1)-r^{\prime}h^{*}(1)\) and \(g_{0}(a_{0}^{\prime})+r^{\prime}h^{*}(1)=g_{0}(a_{0})+rh^{*}(1)\), as desired. Clearly, \(g_{1}(a_{0})=g_{0}(a_{0})\) for all \(a_{0}\in A_{0}\), so that the map \(g_{1}\) essentially extends \(g_{0}\). We conclude that \((A_{0},g_{0})\prec(A_{1},g_{1})\), contradicting the maximality of \((A_{0},g_{0})\). Therefore \(A_{0}=B\), the map \(g_{0}\) essentially extends \(f\), and \(E\) is e-injective.
The following example shows that an \(e\)-injective module may not be injective.
**Example 2.3**.: The \(\mathbb{Z}\)-module \(\mathbb{Z}\) is an \(e\)-injective module, but it is not injective. We can show that \(\mathbb{Z}\) is \(e\)-injective by using the \(e\)-Baer Criterion. Let \(f_{1}:n\mathbb{Z}\rightarrow\mathbb{Z}\) be an inclusion map defined as \(f_{1}(x)=sx;s\in\mathbb{Z}\) and \(f_{2}:n\mathbb{Z}\rightarrow\mathbb{Z}\) defined as \(f_{2}(x)=mx;m\in\mathbb{Z}\) where \((m,s)=1\). Then by taking \(f_{3}=f_{2}\), we have \(f_{3}\circ f_{1}(x)=f_{3}(f_{1}(x))=f_{3}(sx)=msx=sf_{2}(x)\)
It's easy to see that every submodule of the \(\mathbb{Z}\)-module \(\mathbb{Z}\) is \(e\)-injective. On an integral domain, every injective module is divisible, but this is not the case for \(e\)-injectives, since \(\mathbb{Z}\) as a \(\mathbb{Z}\)-module is \(e\)-injective while not divisible.
**Definition 2.4**.: [1] An \(e\)-exact sequence \(0\to A\overset{i}{\rightarrow}B\overset{p}{\rightarrow}C\to 0\) is \(e\)-split if there exist \(0\neq s\in R\) and a morphism \(j:C\to B\) (or \(f:B\to A\)) such that \(pj=sI_{C}\) (or \(fi=sI_{A}\)).
**Proposition 2.5**.: _If an e-exact sequence \(0\to A\overset{i}{\rightarrow}B\overset{p}{\rightarrow}C\to 0\) is \(e\)-split, then there exists \(0\neq r\in R\) such that \(rB\cong A\bigoplus C\)._
Proof.: Since the sequence \(e\)-splits, there exist \(0\neq r\in R\) and \(j:C\to B\) with \(pj=rI_{C}\); we show that \(rB=Imi\bigoplus Imj\). For any \(b\in B\), \(rb-jp(b)\in Kerp\) and, by \(e\)-exactness, there exist \(a\in A\) and \(0\neq s\in R\) such that \(i(a)=s(rb-jp(b))\), that is, \(srb=i(a)+sjp(b)\). Hence \(srB=Imi+Imj\). Now, if \(i(x)=rz=j(y)\) for \(x\in A\) and \(y\in C\), then \(0=pi(x)=p(rz)=pj(y)=ry\), and so \(rz=j(y)=0\), which implies that \(Imi\cap Imj=0\). Therefore \(rB=Imi\bigoplus Imj\cong A\bigoplus C\).
In the following proposition we generalize [8, Proposition 3.38] to \(e\)-injective \(R\)-modules.
**Proposition 2.6**.: _A direct product \(E=\prod E_{i}\) of \(R\)-modules is e-injective if and only if each \(E_{i}\) is an e-injective \(R\)-module._
Proof.: Suppose that \(E=\prod E_{i}\), \(k_{i}:E_{i}\to E\) is the injection and \(p_{i}:E\to E_{i}\) is the projection, so that \(p_{i}k_{i}=I_{E_{i}}\). Assume first that \(E\) is e-injective, and let \(j:A\to C\) be a monomorphism and \(f_{i}:A\to E_{i}\) an \(R\)-map. Since \(E\) is e-injective, there exist a homomorphism \(h:C\to E\) and \(0\neq r\in R\) such that \(hj=r(k_{i}\circ f_{i})\). Now, \((p_{i}h)j=p_{i}(r(k_{i}\circ f_{i}))=r((p_{i}\circ k_{i})\circ f_{i})=rf_{i}\), so \(p_{i}h:C\to E_{i}\) essentially extends \(f_{i}\), and hence each \(E_{i}\) is e-injective.
Recall from [1] that a triangle diagram of \(R\)-modules and \(R\)-morphisms such as the following \(e\)-commutes if and only if there exists \(0\neq r\in R\) such that \(g\circ i=rf\).
Similarly, a square diagram such as the following \(e\)-commutes if and only if there exists \(0\neq r\in R\) such that \(qf=rgt\).
**Proposition 2.7**.: _Let \(E\) be a torsion-free \(R\)-module. Then \(E\) is an e-injective if and only if \(Hom(\quad,E)\) is a contravariant e-exact functor._
Proof.: Suppose that \(0\to A\stackrel{{ i}}{{\to}}B\stackrel{{ p}}{{\to}}C\to 0\) is a short e-exact sequence. Since \(E\) is torsion-free, \(Hom(\quad,E)\) is a left e-exact functor by [1, Theorem 2.7]. It remains to show that \(Hom(\quad,E)\) is right e-exact, which means that \(Hom(B,E)\stackrel{{ i^{*}}}{{\to}}Hom(A,E)\to 0\) is e-exact. For this purpose, let \(0\neq f\in Hom(A,E)\). The \(e\)-injectivity of \(E\) implies that there exist \(g:B\to E\) and \(0\neq r\in R\) such that \(i^{*}(g)=gi=rf\). Thus we have \(Imi^{*}\leqslant_{e}Hom(A,E)\). Therefore \(Hom(\quad,E)\) is an e-exact functor. Conversely, if \(Hom(\quad,E)\) is an e-exact contravariant functor, then the sequence \(0\to Hom(C,E)\stackrel{{ p^{*}}}{{\to}}Hom(B,E)\stackrel{{ i^{*}}}{{\to}}Hom(A,E)\to 0\) is e-exact. Then for every \(0\neq f\in Hom(A,E)\) there exist \(0\neq r\in R\) and \(g\in Hom(B,E)\) such that \(i^{*}g=rf\), which implies that \(gi=rf\); that is, the diagram below
is e-commutative and \(E\) is e-injective.
**Proposition 2.8**.: _Let \(E\) be a torsion-free \(e\)-injective \(R\)-module. Then any e-exact sequence \(0\to E\to B\to C\to 0\)\(e\)-splits._
Proof.: Let \(E\) be an \(e\)-injective \(R\)-module and let the sequence \(0\to E\xrightarrow{i}B\xrightarrow{p}C\to 0\) be e-exact. Then, by Proposition 2.7, \(0\to Hom(C,E)\xrightarrow{p^{*}}Hom(B,E)\xrightarrow{i^{*}}Hom(E,E)\to 0\) is an e-exact sequence. Since \(I_{E}\in Hom(E,E)\), there exist \(g\in Hom(B,E)\) and \(0\neq r\in R\) such that \(i^{*}(g)=gi=rI_{E}\), and so the given sequence \(e\)-splits.
**Theorem 2.9**.: _Given an e-commutative diagram of \(R\)-modules having \(e\)-exact rows and torsion-free module \(C^{\prime\prime}\):_
_there exists a unique map \(h:A^{\prime\prime}\to C^{\prime\prime}\) making the augmented diagram \(e\)-commute._
Proof.: If \(a^{\prime\prime}\in A^{\prime\prime}\), then there exist \(a\in A\) and \(0\neq r\in R\) such that \(p(a)=ra^{\prime\prime}\). Define \(h(a^{\prime\prime})=rqg(a)\). We must show that \(h\) is well-defined, that is, that the definition does not depend on the choice of \(a\): suppose \(u\in A\) also satisfies \(p(u)=ra^{\prime\prime}\). Now, \(p(a)=p(u)\) implies \(p(a-u)=0\), so \(a-u\in Kerp\); by \(e\)-exactness, there exist \(0\neq s\in R\) and \(a^{\prime}\in A^{\prime}\) such that \(i(a^{\prime})=s(a-u)\). Thus, \(rsqg(a-u)=rqg(i(a^{\prime}))=qjf=0\), because \(qj=0\). Therefore, \(h\) is well-defined. Suppose that \(h^{\prime}:A^{\prime\prime}\to C^{\prime\prime}\) is another map satisfying \(rh^{\prime}p=qg\), and let \(a\in A\). Then \(rh^{\prime}p(a)=qg(a)=rhp(a)\), and so \(h\) is unique.
**Theorem 2.10**.: _Given an e-commutative diagram of \(R\)-modules having \(e\)-exact rows_
_there exists a unique map \(f:A^{\prime}\to C^{\prime}\) making the augmented diagram \(e\)-commute. Moreover, if \(g\) and \(h\) are isomorphisms and \(A\) and \(A^{\prime\prime}\) are torsion-free, then \(A^{\prime}\cong rC^{\prime}\) for some non-zero element \(r\in R\)._
Proof.: Let \(a^{\prime},u^{\prime}\in A^{\prime}\) and define \(f(a^{\prime})=rqg(i(a^{\prime}))\). We must show that \(f\) is well-defined: if \(i(a^{\prime})=i(u^{\prime})\), then \(qg(i(a^{\prime}-u^{\prime}))=0\), so \(g(i(a^{\prime}-u^{\prime}))\in Ker\;q\); by \(e\)-exactness, there exist \(0\neq r\in R\) and \(c^{\prime}\in C^{\prime}\) such that \(j(c^{\prime})=rg(i(a^{\prime}-u^{\prime}))\). Thus, \(rqg(i(a^{\prime}-u^{\prime}))=qj(c^{\prime})=0\), because \(qj=0\). Therefore, \(f\) is well-defined. Suppose that \(f^{\prime}:A^{\prime}\to C^{\prime}\) is
another map satisfying \(jf^{\prime}=rgi\). Then \(jf^{\prime}(a^{\prime})=rgi(a^{\prime})=jf(a^{\prime})\); this implies that \((f^{\prime}(a^{\prime})-f(a^{\prime}))\in Kerj=0\), and so \(f\) is unique.
Let \(a^{\prime}\in Ker\:f\). Then \(f(a^{\prime})=0\) and, by \(e\)-commutativity, there exists \(0\neq r\in R\) such that \(jf(a^{\prime})=0=rgi(a^{\prime})\). Thus \(r\,i(a^{\prime})\in Ker\:g=0\) and, since \(A\) is torsion-free, \(i(a^{\prime})=0\), which implies \(a^{\prime}\in Ker\:i=0\). Suppose that \(c^{\prime}\) is a non-zero element of \(C^{\prime}\). Since \(j(c^{\prime})\in C\) and \(g\) is onto, there exists \(a\in A\) such that \(g(a)=j(c^{\prime})\); \(e\)-commutativity gives \(qg(a)=rhp(a)\) for some \(0\neq r\in R\). Now, \(qg(a)=qj(c^{\prime})=0=rhp(a)\), so \(rp(a)\in Ker\:h=0\); torsion-freeness of \(A^{\prime\prime}\) gives \(a\in Ker\:p\), so there exist \(0\neq s\in R\) and \(a^{\prime}\in A^{\prime}\) such that \(i(a^{\prime})=sa\). By \(e\)-commutativity there exists \(0\neq t\in R\) such that \(jf(a^{\prime})=tgi(a^{\prime})\), so \(jf(a^{\prime})=tg(i(a^{\prime}))=tsg(a)=tsj(c^{\prime})\). We obtain \((f(a^{\prime})-tsc^{\prime})\in Ker\:j=0\), and so \(f(a^{\prime})=tsc^{\prime}\). Therefore, \(A^{\prime}\cong rC^{\prime}\) for some non-zero element \(r\in R\).
**Lemma 2.11**.: _Consider the commutative diagram of \(R\)-modules and \(R\)-morphisms, where \(R\) is a domain, \(B\) and \(B^{\prime\prime}\) are torsion-free \(R\)-modules_
_If the columns and the first and the third rows are \(e\)-exact, then the middle row is also \(e\)-exact._
Proof.: To prove that the middle row is \(e\)-exact, we have to check the following three conditions:
1) \(Ker(g)=0\). Take \(b\in Ker(g)\). Then \(j^{\prime}g(b)=0=hi^{\prime}(b)\), so \(i^{\prime}(b)\in Ker(h)\). Since \(Ker(h)=0\), we get \(i^{\prime}(b)=0\) and \(b\in Ker(i^{\prime})\); as \(Im(i)\leqslant_{e}Ker(i^{\prime})\), there exist a non-zero element \(r\) of \(R\) and \(a\in A\) such that \(i(a)=rb\), and so \(gi(a)=g(rb)=rg(b)=0=jf(a)\), so \(f(a)\in Ker(j)\). Since \(Ker(j)=0\), \(f(a)=0\) and \(a\in Ker(f)=0\), which means \(a=0\); this implies \(rb=0\) and hence \(b=0\), since \(B\) is torsion-free. Therefore, \(g\) is monic.
2) \(Im(g)\leqslant_{e}Ker(g^{\prime})\). We first prove \(Im(g)\subseteq Ker(g^{\prime})\). Let \(b^{\prime}\in Im(g)\). Then there exists \(b\in B\) such that \(g(b)=b^{\prime}\) and \(j^{\prime}g(b)=hi^{\prime}(b)=j^{\prime}(b^{\prime})\), which means that \(j^{\prime}(b^{\prime})\in Im(h)\subseteq Ker(h^{\prime})\), and so \(p^{\prime}g^{\prime}(b^{\prime})=h^{\prime}j^{\prime}(b^{\prime})=0\). Hence \(g^{\prime}(b^{\prime})\in Ker(p^{\prime})\) and, as \(Im(p)\leqslant_{e}Ker(p^{\prime})\), there exist \(a^{\prime\prime}\in A^{\prime\prime}\) and a non-zero element \(s\) of \(R\) such that \(p(a^{\prime\prime})=sg^{\prime}(b^{\prime})=s(g^{\prime}g(b))=0\), so \(sg^{\prime}(b^{\prime})=0\) and \(g^{\prime}(b^{\prime})=0\), since \(B^{\prime\prime}\) is torsion-free. Thus \(b^{\prime}\in Ker(g^{\prime})\), and so \(Im(g)\subseteq Ker(g^{\prime})\). Now, for essentiality, take \(b^{\prime}\) to be a non-zero element of \(Ker(g^{\prime})\), say \(b^{\prime}=j(a^{\prime})\). Then \(0=g^{\prime}(b^{\prime})=g^{\prime}j(a^{\prime})=pf^{\prime}(a^{\prime})\), so \(f^{\prime}(a^{\prime})\in Ker(p)\). Since \(Ker(p)=0\), \(f^{\prime}(a^{\prime})=0\), so \(a^{\prime}\in Ker(f^{\prime})\) and, as \(Im(f)\leqslant_{e}Ker(f^{\prime})\), there exist a non-zero element \(r\) of \(R\) and \(a\in A\) such that \(f(a)=ra^{\prime}\). Now, \(jf(a)=j(ra^{\prime})=rj(a^{\prime})=rb^{\prime}=gi(a)\in Im(g)\). Therefore, \(Im(g)\leqslant_{e}Ker(g^{\prime})\).
3) \(Im(g^{\prime})\leqslant_{e}B^{\prime\prime}\). Let \(b^{\prime\prime}\) be a non-zero element of \(B^{\prime\prime}\), so that \(p^{\prime}(b^{\prime\prime})\in C^{\prime\prime}\). Then there exist \(c^{\prime}\in C^{\prime}\) and a non-zero element \(r\) of \(R\) such that \(h^{\prime}(c^{\prime})=rp^{\prime}(b^{\prime\prime})\) and, as \(Im(j^{\prime})\leqslant_{e}C^{\prime}\), there exist a non-zero element \(s\) of \(R\) and \(b^{\prime}\in B^{\prime}\) such that \(j^{\prime}(b^{\prime})=sc^{\prime}\). Now, we have \(p^{\prime}g^{\prime}(b^{\prime})=h^{\prime}j^{\prime}(b^{\prime})=h^{\prime}(sc^{\prime})=sh^{\prime}(c^{\prime})=srp^{\prime}(b^{\prime\prime})\), so \(p^{\prime}(g^{\prime}(b^{\prime})-srb^{\prime\prime})=0\), which means \((g^{\prime}(b^{\prime})-srb^{\prime\prime})\in Ker(p^{\prime})\); as \(Im(p)\leqslant_{e}Ker(p^{\prime})\), there exist a non-zero element \(k\in R\) and \(a^{\prime\prime}\in A^{\prime\prime}\) such that \(p(a^{\prime\prime})=k(g^{\prime}(b^{\prime})-srb^{\prime\prime})\). Also, we have \(f^{\prime}(a^{\prime})=ta^{\prime\prime}\) for a non-zero element \(t\) of \(R\) and some \(a^{\prime}\in A^{\prime}\), because \(Im(f^{\prime})\leqslant_{e}A^{\prime\prime}\). Thus \(pf^{\prime}(a^{\prime})=p(ta^{\prime\prime})=tp(a^{\prime\prime})=tk(g^{\prime}(b^{\prime})-srb^{\prime\prime})=g^{\prime}j(a^{\prime})\), which means \(g^{\prime}(tkb^{\prime}-j(a^{\prime}))=tksrb^{\prime\prime}\). Therefore, \(Im(g^{\prime})\leqslant_{e}B^{\prime\prime}\).
**Lemma 2.12**.: _Consider the commutative diagram of \(R\)-modules and \(R\)-morphisms, where \(R\) is a domain, \(C\), \(C^{\prime}\) and \(C^{\prime\prime}\) are torsion-free
_If the columns and the first two rows are \(e\)-exact, then the third row is also \(e\)-exact._
Proof.: To prove that the third row is \(e\)-exact, we have to check the following three conditions:
1) \(Ker(h)=0\). Take \(c\in Ker(h)\), so \(h(c)=0\). As \(Im(i^{\prime})\leqslant_{e}C\), there exist \(b\in B\) and a non-zero element \(r\) of \(R\) such that \(i^{\prime}(b)=rc\). Then \(hi^{\prime}(b)=h(rc)=rh(c)=0=j^{\prime}g(b)\), so \(g(b)\in Ker(j^{\prime})\) and, as \(Im(j)\leqslant_{e}Ker(j^{\prime})\), there exist a non-zero element \(s\) of \(R\) and \(a^{\prime}\in A^{\prime}\) such that \(j(a^{\prime})=sg(b)\), and so \(g^{\prime}j(a^{\prime})=sg^{\prime}g(b)=0=pf^{\prime}(a^{\prime})\), so \(f^{\prime}(a^{\prime})\in Ker(p)\). Since \(Ker(p)=0\), \(f^{\prime}(a^{\prime})=0\) and \(a^{\prime}\in Ker(f^{\prime})\); also, by essentiality, there exist \(a\in A\) and a non-zero element \(t\) of \(R\) such that \(f(a)=ta^{\prime}\). Then \(jf(a)=j(ta^{\prime})=tj(a^{\prime})=tsg(b)=gi(a)\), and so \(g(tsb-i(a))=0\), which means \(tsb-i(a)\in Ker(g)=0\); this implies \(tsb=i(a)\), and so \(tsi^{\prime}(b)=i^{\prime}i(a)=0\), hence \(i^{\prime}(b)=0\), since \(C\) is torsion-free. Thus \(rc=0\) and so \(c=0\). Therefore, \(h\) is monic.
2) \(Im(h)\leqslant_{e}Ker(h^{\prime})\). First to prove \(Im(h)\subseteq Ker(h^{\prime})\). Let \(c^{\prime}\in Im(h)\). Then there exists \(c\in C\) such that \(h(c)=c^{\prime}\) and from essentiality there exist \(b^{\prime}\in B^{\prime}\) and a non-zero element \(r\) belongs to \(R\) such that \(j^{\prime}(b^{\prime})=rc^{\prime}\) and so \(h^{\prime}j^{\prime}(b^{\prime})=h^{\prime}(rc)=rh^{\prime}(c^{\prime})=rh^{ \prime}(h(c))=p^{\prime}g^{\prime}(b^{\prime})\) and also by commutativity and essentiality \(g^{\prime}j(a^{\prime})=g^{\prime}(tb^{\prime})=pf^{\prime}(a^{\prime})\) for a non-zero \(t\) belongs to \(R\) and \(b^{\prime}\in Ker(j^{\prime})\), which means \(Im(g^{\prime})\subseteq Im(p)\subseteq Ker(p^{\prime})\) and so \(p^{\prime}g^{\prime}(tb^{\prime})=0\). Therefore \(h^{\prime}(rc^{\prime})=rh^{\prime}(h(c))=tp^{\prime}g^{\prime}(b^{\prime})=0\) and so \(rh^{\prime}(h(c))=0\) so \(h^{\prime}(h(c))=0\), since \(C^{\prime\prime}\) is torsion-free. Hence \(c^{\prime}=h(c)\in Ker(h^{\prime})\). Now, for essentiality, take \(c^{\prime}\) to be a non-zero element of \(Ker(h^{\prime})\). Then \(h^{\prime}j^{\prime}(b)=p^{\prime}g^{\prime}(b)\)
and as \(Im(j^{\prime})\leqslant_{e}C^{\prime}\) there exist a non-zero element \(r\) belongs to \(R\) and \(b^{\prime}\in B^{\prime}\) such that \(j^{\prime}(b^{\prime})=rc^{\prime}\). Hence \(hi^{\prime}(b)=j^{\prime}g(b)\), since \(Im(i^{\prime})\leqslant_{e}C\) and \(Im(g)\leqslant_{e}Ker(g^{\prime})\in B^{\prime}\) so there are non-zero elements \(c\in C,b\in B\) and a non-zero elements \(s\) and \(k\) belongs to \(R\) such that \(hi^{\prime}(b)=h(sc)=sh(c)=j^{\prime}(kb^{\prime})=krc^{\prime}\). Therefore, \(sh(c)=kr(c^{\prime})\) put \(k=s\) so \(s(h(c)-rc^{\prime})=0\) and so \(h(c)=rc^{\prime}\), since \(C^{\prime}\) is torsion-free. Then \(Im(h)\leqslant_{e}Ker(h^{\prime})\).
3) \(Im(h^{\prime})\leqslant_{e}C^{\prime\prime}\). Let \(c^{\prime\prime}\) be a non-zero element of \(C^{\prime\prime}\) and as \(Im(p^{\prime})\leqslant_{e}C^{\prime\prime}\), there exist \(b^{\prime\prime}\in B^{\prime\prime}\) and a non-zero element \(r\) belong to \(R\) such that \(p^{\prime}(b^{\prime\prime})=rc^{\prime\prime}\) and as \(Im(g^{\prime})\leqslant_{e}B^{\prime\prime}\), then there exist a non-zero element \(s\) belong to \(R\) and \(b^{\prime}\in B^{\prime}\) such that \(g^{\prime}(b^{\prime})=sb^{\prime\prime}\) and also we have \(Im(j^{\prime})\leqslant_{e}C^{\prime}\), there exist a non-zero element \(t\) belong to \(R\) and \(b^{\prime}\in B^{\prime}\) such that \(j^{\prime}(b^{\prime})=tc^{\prime}\). Hence \(h^{\prime}j^{\prime}(b^{\prime})=h^{\prime}(tc^{\prime})=th^{\prime}(c^{ \prime})=p^{\prime}g^{\prime}(b^{\prime})=p^{\prime}(sb^{\prime\prime})=sp^{ \prime}(b^{\prime\prime})=src^{\prime\prime}\), put \(s=t\), so \(t(h^{\prime}(c^{\prime}-rc^{\prime\prime}))=0\) this implies \(h^{\prime}(c^{\prime}-rc^{\prime\prime})=0\), since \(C^{\prime\prime}\) is torsion-free. Thus, \(h^{\prime}(c^{\prime})=rc^{\prime\prime}\) and \(Im(h^{\prime})\leqslant_{e}C^{\prime\prime}\).
Recall from [1] that an e-injective resolution of an \(R\)-module \(A\) is an e-exact sequence \(0\to A\stackrel{{\eta}}{{\rightarrow}}E^{0}\stackrel{{ d^{0}}}{{\rightarrow}}E^{1}\stackrel{{ d^{1}}}{{\rightarrow}}...\to E^{n}\stackrel{{ d^{n}}}{{\rightarrow}}E^{n+1}\rightarrow...\) where each \(E^{i}\) is an e-injective \(R\)-module. Let \(f,g:X\to E\) be two chain maps. Then \(f\) is \(e\)-homotopic to \(g\) if there are maps \(s^{n+1}:X^{n+1}\to E^{n}\) and non-zero elements \(s\) and \(r\) in \(R\) such that \(r(g^{n}-f^{n})=s^{n+1}d^{n}+sd^{n-1}s^{n}\) for all \(n\). Now, we are in a position to prove the new form of the comparison theorem by using \(e\)-injectivity and \(e\)-exact sequences.
**Theorem 2.13**.: _[Comparison Theorem for e-injectives] Suppose that we have the following diagram:_
_where the rows are complexes. If each \(E^{n}\) in the top row is e-injective and the bottom row is e-exact, then there exists a chain map \(f:X^{A^{\prime}}\to E^{A}\) (the dashed arrows) making the completed diagram e-commute. Furthermore, any two such chain maps are e-homotopic._
Proof.: We prove the existence of \(f^{n}\) by induction on \(n\geqslant 0\). For the base step \(n=0\), consider the following diagram:
Since \(\varepsilon^{\prime}\) is monic and \(E^{0}\) is e-injective, there exist \(f^{0}:X^{0}\to E^{0}\) and \(0\neq r\in R\) such that \(f^{0}\varepsilon^{\prime}=r(\varepsilon\circ f)\). For the inductive step, suppose we have \(f^{n-1}\) and \(f^{n}\), and consider the following diagram:
Since \(d^{n}f^{n}(d^{\prime\,n-1}X^{n-1})=r(d^{n}d^{n-1}f^{n-1}(X^{n-1}))=0\), we have \(Imd^{\prime\,n-1}\subseteq Kerd^{n}f^{n}\) and, as \(Imd^{\prime\,n-1}\leqslant_{e}Kerd^{\prime\,n}\), we have the following diagram
in which \(d^{\prime n}\) is monic and \(E^{n+1}\) is e-injective, so there exist \(0\neq r_{n}\in R\) and \(f^{n+1}:X^{n+1}\to E^{n+1}\) such that \(f^{n+1}d^{\prime\,n}=r_{n}d^{n}f^{n}\). We now show the uniqueness of \(f\) up to e-homotopy. If \(h:X^{A^{\prime}}\to E^{A}\) is another chain map with \(h^{0}\varepsilon^{\prime}=r\varepsilon f\), we construct the terms \(s^{n}:X^{n+1}\to E^{n}\) of an e-homotopy \(s=\{s^{n}\}\) by induction on \(n\geq 0\), showing that \(s(h^{n}-f^{n})=s^{n+1}d^{\prime\,n}+r^{\prime}d^{n-1}s^{n}\) for suitable \(s,r^{\prime},p\) and \(q\) in \(R\). We define \(s^{0}:X^{0}\to A\) to be the zero map. To show that \(Im(r(h^{n}-f^{n})-r^{\prime}d^{n-1}s^{n})\subseteq Kerd^{n}\), we compute
\[\begin{aligned} d^{n}\big(r(h^{n}-f^{n})-r^{\prime}d^{n-1}s^{n}\big)&=rd^{n}(h^{n}-f^{n})-r^{\prime}d^{n}(d^{n-1}s^{n})\\ &=rd^{n}(h^{n}-f^{n})-r^{\prime}d^{n}\big(p(h^{n}-f^{n})-qs^{n+1}d^{\prime\,n}\big)\\ &=rd^{n}(h^{n}-f^{n})-r^{\prime}pd^{n}(h^{n}-f^{n})-r^{\prime}qd^{n}(s^{n+1}d^{\prime\,n})\\ &=rd^{n}(h^{n}-f^{n})-r^{\prime}pd^{n}(h^{n}-f^{n})-r^{\prime}qd^{n}(md^{n-1}s^{n})=0,\end{aligned}\]
putting \(r^{\prime}p=r\).
Therefore, we have the following diagram:
Since \(E^{n}\) is e-injective and \(d^{\prime n}\) is monic, there exist a map \(s^{n+1}:X^{n+1}\to E^{n}\) and \(0\neq s\in R\) such that \(s^{n+1}d^{\prime\,n}=s(r(h^{n}-f^{n})-r^{\prime}d^{n-1}s^{n})\). Therefore, \(sr(h^{n}-f^{n})=s^{n+1}d^{\prime\,n}+sr^{\prime}d^{n-1}s^{n}\). Hence \(f\) and \(h\) are e-homotopic.
**Proposition 2.14**.: _Let \(I^{\prime\bullet}\) and \(I^{\prime\prime\bullet}\) be two e-injective resolutions of \(M^{\prime}\) and \(M^{\prime\prime}\) respectively. Suppose that \(0\to M^{\prime}\to M\to M^{\prime\prime}\to 0\) is an e-exact sequence. Then there exists an e-injective resolution \(I^{\bullet}\) such that the following diagram_
_is e-commutative in which the bottom row is an e-exact sequence of complexes._
Proof.: Consider the following diagram in which the rows are e-exact
By the comparison theorem for e-injectives (Theorem 2.13), there exist maps (dashed arrows) that make all squares e-commute. Now, we define \(I^{n}=I^{\prime n}\bigoplus I^{\prime\prime n},\delta^{-1}:M\to I^{0}\) by \(\delta^{-1}(m)=(-f^{0}(m),\delta^{\prime\prime-1}\circ p(m))\) and \(\delta^{n}:I^{n}\to I^{n+1}\) by \(\delta^{n}(a,b)=(\delta^{\prime n}(a)+(-1)^{n}f^{n+1}(b),\delta^{\prime\prime n }(b))\),
since \(Ker\delta^{\prime\prime-1}=0\) and \(Ker\delta^{\prime-1}=0\), \(Ker\delta^{-1}=0\). Thus \(\delta^{-1}\) is monic and from Proposition 2.13, we get \(I^{n}\) is e-injective for each \(n\geqslant 0\). To prove that \(Im\delta^{-1}\leqslant_{e}Ker\delta^{0}\) we must show first that \(Im\delta^{-1}\subseteq Ker\delta^{0}\). Thus \(\delta^{0}(\delta^{-1}(m))=\delta^{0}(-f^{0}(m),\delta^{\prime\prime-1}\circ p (m))=\delta^{\prime}(-f^{0}(m)+(-1)^{0}f^{1}(\delta^{\prime\prime-1}\circ p(m)),\delta^{\prime\prime 0}(\delta^{\prime\prime-1}\circ p(m)))=0\), which implies that \(Im\delta^{-1}\subseteq Ker\delta^{0}\). Now, for essentiality, let \(0\neq(a,b)\in Ker\delta^{0}\). Then \(0=\delta^{0}(a,b)=(\delta^{\prime 0}(a)+(-1)^{0}f^{1}(b),\delta^{\prime\prime 0}(b))\), so \(\delta^{\prime 0}(a)+f^{1}(b)=0\) and \(\delta^{\prime\prime 0}(b)=0\). Since \(Im\delta^{\prime\prime-1}\leqslant_{e}Ker\delta^{\prime\prime 0}\), there exists \(0\neq r\in R\) such that \(\delta^{\prime\prime-1}(m^{\prime\prime})=rb\) for some \(m^{\prime\prime}\) in \(M^{\prime\prime}\) and \(\delta^{\prime\prime-1}(p(m))=rb\) for some \(m\in M\). So \(f^{1}(\delta^{\prime\prime-1}(p(m)))=f^{1}(rb)=rf^{1}(b)=-r\delta^{\prime 0}(a)\) and by e-commutity there exists \(0\neq q\in R\) such that \(-r\delta^{\prime 0}(a)=q\delta^{\prime 0}f^{0}(m)\) and this implies that \(r\delta^{\prime 0}(a)+q\delta^{\prime 0}f^{0}(m)=0=\delta^{\prime 0}(ra+qf^{0}(m))\). Thus we get \(ra+qf^{0}(m)\in Ker\delta^{\prime 0}\), and since \(Im\delta^{\prime-1}\leqslant_{e}Ker\delta^{\prime 0}\), there exists \(0\neq t\in R\) such that \(\delta^{\prime-1}(m^{\prime})=tra+tqf^{0}(m)\) for some \(m^{\prime}\in M^{\prime}\). Now, \(\delta^{-1}(tqm-i(m^{\prime}))=(-f^{0}(tqm-i(m^{\prime})),\delta^{\prime\prime- 1}\circ p(tqm-i(m^{\prime}))=(-f^{0}(tqm)+f^{0}(i(m^{\prime}))),\delta^{\prime \prime-1}p(tqm)-\delta^{\prime\prime-1}p(i(m^{\prime})))=tra-\delta^{\prime-1} (m^{\prime})+p\delta^{\prime-1}(m^{\prime}),tqrb)=s(a,b)\), put \(q=p=1\) which proves that \(Im\delta^{-1}\leqslant_{e}Ker\delta^{0}\) for \(n\geqslant 0\) and \(\delta^{n+1}(\delta^{n}(a,b))=\delta^{n+1}(\delta^{\prime\prime n}(a)+(-1)^{n} f^{n+1}(b)),\delta^{\prime\prime n}(b))\)
\(=\delta^{\prime n+1}(\delta^{\prime\prime n}(a)+(-1)^{n}f^{n+1}(b)+(-1)^{n+1}f^ {n+2}(\delta^{\prime\prime n}(b)),\delta^{\prime\prime n+1}(\delta^{\prime \prime n}(b)))=0\). This proves that \(Im\delta^{n}\subseteq Ker\delta^{n+1}\). Now let \((a,b)\in Ker\delta^{n+1}\). Then \(\delta^{\prime\prime n+1}(b)=0\) and \(\delta^{\prime n+1}(a)+(-1)^{n+1}f^{n+2}(b)=0\). It follows that \(\delta^{\prime\prime n}(c)=rb\), \(0\neq r\in R\) for some \(c\in I^{\prime\prime n}\). Then \((-1)^{n}y\delta^{\prime n+1}f^{n+1}(c)=(-1)^{n}f^{n+2}(\delta^{\prime\prime n }(c))=r\delta^{\prime n+1}(a)\) so that \(\delta^{\prime n+1}(ra-(-1)^{n+1}yf^{n+1}(c))=0\) and \(ra-(-1)^{n}yf^{n+1}(c)\in Ker\delta^{m+1}\). Since \(Im\delta^{\prime n}\leqslant_{e}Ker\delta^{\prime n+1}\), there exists \(0\neq q\in R\) such that \(\delta^{\prime\prime}(d)=q(ra-(-1)^{n}yf^{n+1}(c))\) for some \(d\in I^{\prime n}\). Thus \(\delta^{n}(d,c)=(\delta^{\prime\prime}(d)+(-1)^{n}f^{n+1}(c),\delta^{\prime \prime n}(c))=r(a,b)\), put \(q=y=1\), which proves that \(Im\delta^{n}\leqslant_{e}Ker\delta^{n+1}\).
## 3. The cohomology with respect to \(e\)-exact sequences
In this section all rings are Noetherian domains and all modules are unitary \(R\)-modules. We want to describe the right derived functors of the additive covariant functor (the torsion functor) \(\Gamma_{a}\) of local cohomology using e-injective resolutions, and then we present new forms of some theorems of cohomology with \(e\)-exact sequences. Let \(a\) be an ideal of \(R\). The following useful result shows that the functor \(\Gamma_{a}(\ )\) is \(e\)-exact under a suitable condition.
**Lemma 3.1**.: _Let \(0\to L\xrightarrow{f}M\xrightarrow{g}N\to 0\) be an e-exact sequence of \(R\)-modules and \(R\)-homomorphisms. Then \(0\to\Gamma_{a}(L)\xrightarrow{\Gamma_{a}(f)}\Gamma_{a}(M)\xrightarrow{\Gamma_{ a}(g)}\Gamma_{a}(N)\) is an e-exact sequence. Furthermore, if \(N\) is a torsion-free module, then the functor \(\Gamma_{a}(\ )\) is an \(e\)-exact._
Proof.: It is clear that \(ker(\Gamma_{a}(f))=0\). To show that \(Im(\Gamma_{a}(f))\leqslant_{e}Ker(\Gamma_{a}(g))\), let \(0\neq y\in Ker(\Gamma_{a}(g))\). Then \(y\in\Gamma_{a}(M)\) and \(g(y)=0\); that is, there exists \(n\in\mathbb{N}\) such that \(a^{n}y=0\). Now, since \(Imf\leqslant_{e}Kerg\), there exists \(0\neq r\in R\) such that \(f(l)=ry\) for some \(l\in L\). Since \(f(a^{n}l)=a^{n}f(l)=a^{n}(ry)=0\), we get \(a^{n}l=0\) and \(l\in\Gamma_{a}(L)\). Now, to show that \(Im(\Gamma_{a}(g))\leqslant_{e}\Gamma_{a}(N)\), let \(y\in Im(\Gamma_{a}(g))\cap Rx\) for \(0\neq x\in\Gamma_{a}(N)\). Then \(y=g(m)=rx\) for some \(m\in\Gamma_{a}(M)\) and \(r\in R\). By hypothesis, \(rx\neq 0\) and so \(y\neq 0\), which guarantees the \(e\)-exactness of the sequence \(0\rightarrow\Gamma_{a}(L)\stackrel{{\Gamma_{a}(f)}}{{\rightarrow}}\Gamma_{a}(M)\stackrel{{\Gamma_{a}(g)}}{{\rightarrow}}\Gamma_{a}(N)\to 0\).
**Definition 3.2**.: The right e-derived functors \(R^{n}T\) of a covariant functor \(T\) are defined on an \(R\)-module \(A\) by \((R^{n}T)(A)=H^{n}(TE^{A})=\frac{ker(Td^{n})}{Im(Td^{n-1})}\), where \(E:0\to A\to E^{0}\stackrel{{ d^{0}}}{{\rightarrow}}E^{1} \stackrel{{ d^{1}}}{{\rightarrow}}\dots\) is an e-injective resolution of \(A\) and \(E^{A}\) is its deleted \(e\)-injective resolution.
Define the \(n_{th}\)\(e\)-cohomology module \({}_{e}H^{n}_{a}(M)\) of \(M\) with respect to an ideal \(\mathbf{a}\) as the \(n_{th}\) right e-derived functor of the torsion functor \(\Gamma_{a}()\), that is, \({}_{e}H^{n}_{a}(M)=R^{n}\Gamma_{a}(M)=\frac{ker(\Gamma_{a}(d^{n}))}{Im(\Gamma_{a}(d^{n-1}))}\). To calculate \({}_{e}H^{n}_{a}(M)\) from an e-injective resolution \(0\to M\stackrel{{\alpha}}{{\rightarrow}}I^{0}\stackrel{{ d^{0}}}{{\rightarrow}}I^{1}\rightarrow\dots\to I^{n}\stackrel{{ d^{n}}}{{\rightarrow}}I^{n+1}\rightarrow\dots\) of an \(R\)-module \(M\), we do the following: apply the functor \(\Gamma_{a}\) to the deleted complex of \(M\), \(I^{M}:0\stackrel{{ d^{-1}}}{{\rightarrow}}I^{0}\stackrel{{ d^{0}}}{{\rightarrow}}I^{1}\rightarrow\dots\to I^{n}\stackrel{{ d^{n}}}{{\rightarrow}}\dots\), to obtain \(0\rightarrow\Gamma_{a}(I^{0})\stackrel{{\Gamma_{a}(d^{0})}}{{\rightarrow}}\Gamma_{a}(I^{1})\rightarrow\dots\rightarrow\Gamma_{a}(I^{n})\rightarrow\dots\), and then take the \(n_{th}\)\(e\)-cohomology module of this complex; the result is \({}_{e}H^{n}_{a}(M)=\frac{Ker(\Gamma_{a}(d^{n}))}{Im(\Gamma_{a}(d^{n-1}))}=(R^{n}T)(M)=R^{n}(\Gamma_{a}(M))=H^{n}(TE^{M})\).
**Theorem 3.3**.: _The right e-derived functors for \(\Gamma_{a}\) are additive functors for every integer \(n\)._
Proof.: Let \(f:M\to M^{{}^{\prime}}\) be a morphism. Then by Theorem 2.13, there is a chain map \(\breve{f}:E^{M}\to E^{M^{{}^{\prime}}}\) over \(f\), where \(E^{M}\) and \(E^{M^{{}^{\prime}}}\) are deleted e-injective resolutions for \(M\) and \(M^{\prime}\) respectively. Then \(\Gamma_{a}\breve{f}:\Gamma_{a}E^{M}\rightarrow\Gamma_{a}E^{M^{{}^{\prime}}}\) is also a chain map, and so there is a well-defined map \({}_{e}H^{n}_{a}(f)=(R^{n}\Gamma_{a})f:H^{n}(\Gamma_{a}E^{M})\to H^{n}( \Gamma_{a}E^{M^{{}^{\prime}}})\), defined by \({}_{e}H^{n}_{a}(f)=(R^{n}\Gamma_{a})f=H^{n}(\Gamma_{a}\breve{f})=(\Gamma_{a} \breve{f})\) and \({}_{e}H^{n}_{a}=R^{n}\Gamma_{a}\) is also an additive covariant functor, because \({}_{e}H^{n}_{a}(f+g)=(R^{n}\Gamma_{a})(f+g)=H^{n}(\Gamma_{a}(f+g))=H^{n}( \Gamma_{a}(f)+\Gamma_{a}(g))=H^{n}(\Gamma_{a}(f))+H^{n}(\Gamma_{a}(g))=_{e}H^{ n}_{a}(f)+_{e}H^{n}_{a}(g)\). Therefore the right e-derived functors are additive functors for every integer n.
**Proposition 3.4**.: _Let \(A\) be any \(R\)-module. Then \({}_{e}H^{n}_{a}(A)=(R^{n}\Gamma_{a})A=0\), for all negative integers \(n\)._
Proof.: Let \(E:0\to A\to E^{0}\stackrel{{ d^{0}}}{{\to}}E^{1}\to\dots\) be an e-injective resolution of \(A\). Then the deleted complex of \(A\) is \(E^{A}:0\to E^{0}\stackrel{{ d^{0}}}{{\to}}E^{1}\to\dots\). After applying \(\Gamma_{a}\) to the deleted complex, we get \(\Gamma_{a}E^{n}=0\) for all negative integers \(n\), because the \(n_{th}\) term of \(E^{A}\) is zero when \(n\) is negative. Hence \({}_{e}H_{a}^{n}(A)=R^{n}\Gamma_{a}(A)=0\) for all negative integers \(n\).
**Corollary 3.5**.: _Let \(E\) be an e-injective \(R\)-module. Then \({}_{e}H_{a}^{n}(E)=(R^{n}\Gamma_{a})(E)=0\), for all \(n\geq 1\)._
Proof.: Since \(E\) is an e-injective module, the e-injective resolution of \(E\) is \(0\to E\stackrel{{ 1_{E}}}{{\to}}E\to 0\). The corresponding deleted e-injective resolution \(E^{E}\) is the complex \(0\to E\to 0\). Hence, the \(n_{th}\) term of \(\Gamma_{a}(E^{E})\) is \(0\) for all \(n\geq 1\) and so \({}_{e}H_{a}^{n}(E)=(R^{n}\Gamma_{a})(E)=H^{n}(\Gamma_{a}E^{E})=0\) for all \(n\geq 1\).
**Theorem 3.6**.: _Let \(0\to L\stackrel{{ f}}{{\to}}M\stackrel{{ g}}{{\to}}N\to 0\) be an e-exact sequence of \(R\)-modules and \(R\)-homomorphisms with \(N\) torsion-free. Then, for each \(i\in\mathbb{N}_{0}\), there is a connecting homomorphism \({}_{e}H_{a}^{i}(N)\stackrel{{\sigma}}{{\to}}{}_{e}H_{a}^{i+1}(L)\), and these connecting homomorphisms make the resulting long sequence \(0\to{}_{e}H_{a}^{0}(L)\stackrel{{{}_{e}H_{a}^{0}(f)}}{{\to}}{}_{e}H_{a}^{0}(M)\stackrel{{{}_{e}H_{a}^{0}(g)}}{{\to}}{}_{e}H_{a}^{0}(N)\to{}_{e}H_{a}^{1}(L)\to\dots\to{}_{e}H_{a}^{i}(L)\stackrel{{{}_{e}H_{a}^{i}(f)}}{{\to}}{}_{e}H_{a}^{i}(M)\to{}_{e}H_{a}^{i}(N)\stackrel{{\sigma^{*}}}{{\to}}{}_{e}H_{a}^{i+1}(L)\to\dots\)\(e\)-exact._
Proof.: By applying \(\Gamma_{a}\) to an e-exact sequence \(0\to L\stackrel{{ f}}{{\to}}M\stackrel{{ g}}{{\to}}N\to 0\) we obtain an \(e\)-exact sequence \(0\to\Gamma_{a}(L)\stackrel{{\Gamma_{a}(f)}}{{\to}}\Gamma_{a}(M) \stackrel{{\Gamma_{a}(g)}}{{\to}}\Gamma_{a}(N)\to 0\) by Lemma 3.1 and by [2, Theorem 3.1] there is a connecting homomorphism \(\sigma_{n}:H^{n}(\Gamma_{a}(N))\to H^{n+1}(\Gamma_{a}(L))\) and by [2, Theorem 3.2] there is a long \(e\)-exact sequence of \(R\)-modules and \(R\)-morphisms \(0\to{}_{e}H_{a}^{0}(L)\stackrel{{\epsilon H_{a}^{0}(f)}}{{\to}}{ {}_{e}}H_{a}^{0}(M)\stackrel{{\epsilon H_{a}^{0}(g)}}{{\to}}{{}_{ e}}H_{a}^{0}(N)\to{}_{e}H_{a}^{1}(L)\to\dots\to{}_{e}H_{a}^{i}(L)\stackrel{{ \epsilon H_{a}^{i}(f)}}{{\to}}{{}_{e}}H_{a}^{i}(M)\to{}_{e}H_{a}^{i}(N) \stackrel{{\sigma^{*}}}{{\to}}{{}_{e}}H_{a}^{i+1}(L)\to\dots\)
**Theorem 3.7**.: _For any torsion-free \(R\)-module \(M\), \({}_{e}H_{a}^{0}(M)\) is naturally equivalent to \(r\Gamma_{a}(M)\) for some \(r\neq 0\in R\)._
Proof.: Let \(E:0\to A\stackrel{{\sigma}}{{\to}}E^{0}\stackrel{{ d^{0}}}{{\to}}E^{1}\stackrel{{ d^{1}}}{{\to}}E^{2}\to\dots\) be an e-injective resolution of an \(R\)-module \(A\) and let \(E^{A}:0\to E^{0}\stackrel{{ d^{0}}}{{\to}}E^{1}\stackrel{{ d^{1}}}{{\to}}E^{2}\to\dots\) be the deleted \(e\)-injective resolution of \(A\). When we apply \(\Gamma_{a}(\,)\) to the deleted \(e\)-injective resolution we get \(0\to\Gamma_{a}(E^{0})\stackrel{{ d^{0^{*}}}}{{\to}}\Gamma_{a}(E^{1})\stackrel{{ d^{1^{*}}}}{{\to}}\Gamma_{a}(E^{2})\to\dots\). Then, by the definition of e-cohomology, we have \({}_{e}H_{a}^{0}(A)=H^{0}(\Gamma_{a}(E^{A}))=Kerd^{0*}\). But the left e-exactness of \(\Gamma_{a}(\,)\) gives an e-exact sequence \(0\to\Gamma_{a}(A)\to\Gamma_{a}(E^{0})\stackrel{{ d^{0^{*}}}}{{\to}}\Gamma_{a}(E^{1})\stackrel{{ d^{1^{*}}}}{{\to}}\Gamma_{a}(E^{2})\to\dots\)
We define \(\sigma^{*}:\Gamma_{a}(A)\to Kerd^{0*}\). Since \(Im\sigma^{*}\leqslant_{e}Kerd^{0*}\), the map \(\sigma^{*}\) is well-defined, and since \(\Gamma_{a}(\,)\) is a left e-exact functor, \(\sigma^{*}\) is monic. Now we want to prove that \(\sigma^{*}\) is epic. Let \(x\in Kerd^{0*}\). Then \(d^{0*}(x)=d^{0}(x)=0\), and therefore \(x\in Kerd^{0}\). By the e-exactness of the e-injective resolution we have \(Im\sigma\leqslant_{e}Kerd^{0}\), so there exist \(a^{\prime}\in A\) and \(0\neq r\in R\) such that \(\sigma(a^{\prime})=rx\neq 0\). Now, we define \(f:r\Gamma_{a}(A)\to A\) by \(f(ry)=a^{\prime}\). Let \(y_{1},y_{2}\in\Gamma_{a}(A)\) with \(ry_{1}=ry_{2}\). Then \(\sigma(a^{\prime}_{1})=\sigma(a^{\prime}_{2})\), and by the monicness of \(\sigma\) we have \(a^{\prime}_{1}=a^{\prime}_{2}\), so \(f\) is well-defined. Now we have \(rx=\sigma(a^{\prime})=\sigma(rf(y))=r\sigma(f(y))=r\sigma^{*}(f(y))\), which is equivalent to \(\sigma^{*}(f(y))=x\). Hence \(\sigma^{*}\) is an isomorphism, and since \({}_{e}H^{0}_{a}(A)=Kerd^{0*}\), it follows that \({}_{e}H^{0}_{a}(\,)\) is isomorphic to \(r\Gamma_{a}(\,)\) for some nonzero \(r\) in \(R\).
**Corollary 3.8**.: _If \(0\to L\xrightarrow{f}M\xrightarrow{g}N\to 0\) is an e-exact sequence of \(R\)-modules where \(N\) is torsion-free, then there is a long e-exact sequence \(0\to{}_{e}H^{0}_{a}(L)\stackrel{{\Gamma_{a}(f)}}{{\to}}{}_{e}H^ {0}_{a}(M)\stackrel{{\Gamma_{a}(g)}}{{\to}}{}_{e}H^{0}_{a}(N) \stackrel{{\sigma}}{{\to}}{}_{e}H^{1}_{a}(L)\stackrel{{ eH^{1}_{a}(f)}}{{\to}}{}_{e}H^{1}_{a}(M)\stackrel{{ eH^{1}_{a}(g)}}{{\to}}{}_{\dots}\). In addition, if \(L\), \(M\) and \(N\) are torsion-free modules, then there is a long e-exact sequence \(0\to r_{1}\Gamma_{a}(L)\stackrel{{\Gamma_{a}(f)}}{{\to}}{}_{r_{2 }}\Gamma_{a}(M)\stackrel{{\Gamma_{a}(g)}}{{\to}}{}_{r_{3}}\Gamma _{a}(N)\stackrel{{\sigma}}{{\to}}{}_{e}H^{1}_{a}(L)\stackrel{{ eH^{1}_{a}(f)}}{{\to}}{}_{e}H^{1}_{a}(M)\stackrel{{ eH^{1}_{a}(g)}}{{\to}}{}_{\dots}\) for some nonzero \(r_{1},r_{2},r_{3}\) in \(R\)._
Proof.: Directly follows from Theorem 3.6 and Theorem 3.7.
**Theorem 3.9**.: _Given an e-commutative diagram of R-modules having e-exact rows where \(A^{\prime\prime}\) and \(C^{\prime\prime}\) are torsion-free:_
_Then there is an e-commutative diagram with e-exact rows,_
_Proof._: By Proposition 2.14, we have an e-exact sequence of deleted complexes \(0\to E^{A^{\prime}}\to E^{A}\to E^{A^{\prime\prime}}\to 0\). If \(T=\Gamma_{a}\), then \(0\to TE^{A^{\prime}}\to TE^{A}\to TE^{A^{\prime\prime}}\to 0\) is still an e-exact by Lemma 3.1. By [2, Remark
3.3] there is an \(e\)-commutative diagram of \(R\)-modules and \(R\)-morphisms
and the rest will be achieved by applying the definition of \(e\)-cohomology \({}_{e}H^{n}_{a}(\,)=H^{n}(\Gamma_{a}(E^{A}))\).
**Corollary 3.10**.: _Let \(M\) be an \(a\)-torsion \(R\)-module. Then there exists an \(e\)-injective resolution of \(M\) in which each term is an \(a\)-torsion \(R\)-module._
In local cohomology, every \(a\)-torsion \(R\)-module \(M\) has zero \(i_{th}\) local cohomology modules, that is, \(H^{i}_{a}(M)=0\) for all \(i>0\), while in \(e\)-cohomology this is not true in general. To be more precise, we present the following example.
**Example 3.11**.: Consider the \(e\)-injective resolution \(0\to\frac{\mathbb{Z}}{2\mathbb{Z}}\xrightarrow{f}\frac{\mathbb{Z}}{8\mathbb{Z}}\xrightarrow{g}\frac{\mathbb{Z}}{16\mathbb{Z}}\to 0\) of the \(\mathbb{Z}\)-module \(\frac{\mathbb{Z}}{2\mathbb{Z}}\), where \(f(1+2\mathbb{Z})=4+8\mathbb{Z}\) and \(g(n+8\mathbb{Z})=8n+16\mathbb{Z}\). Each term of the resolution is \(2\mathbb{Z}\)-torsion, while the \(e\)-cohomology module \(H^{1}_{2\mathbb{Z}}(\frac{\mathbb{Z}}{2\mathbb{Z}})=\frac{\mathbb{Z}}{8\mathbb{Z}}\) is non-zero.
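As a quick check of this value, using only the maps given above: both terms of the deleted resolution are \(2\mathbb{Z}\)-torsion, so applying \(\Gamma_{2\mathbb{Z}}\) leaves them unchanged, and therefore
\[H^{1}_{2\mathbb{Z}}\Big(\frac{\mathbb{Z}}{2\mathbb{Z}}\Big)=\frac{\mathbb{Z}/16\mathbb{Z}}{Im\,g},\qquad Im\,g=\{0+16\mathbb{Z},\;8+16\mathbb{Z}\},\]
and the quotient of \(\mathbb{Z}/16\mathbb{Z}\) by this subgroup of order two is isomorphic to \(\mathbb{Z}/8\mathbb{Z}\).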
As future work, one can investigate the \(e\)-cohomology dimension and its connection with the cohomology dimension.
**Acknowledgments**: We would like to thank the referees for their thoughtful comments and efforts towards improving the manuscript.
|
2309.11994 | Enhancing SAEAs with Unevaluated Solutions: A Case Study of Relation
Model for Expensive Optimization | Surrogate-assisted evolutionary algorithms (SAEAs) hold significant
importance in resolving expensive optimization problems~(EOPs). Extensive
efforts have been devoted to improving the efficacy of SAEAs through the
development of proficient model-assisted selection methods. However, generating
high-quality solutions is a prerequisite for selection. The fundamental
paradigm of evaluating a limited number of solutions in each generation within
SAEAs reduces the variance of adjacent populations, thus impacting the quality
of offspring solutions. This is a frequently encountered issue, yet it has not
gained widespread attention. This paper presents a framework using unevaluated
solutions to enhance the efficiency of SAEAs. The surrogate model is employed
to identify high-quality solutions for direct generation of new solutions
without evaluation. To ensure dependable selection, we have introduced two
tailored relation models for the selection of the optimal solution and the
unevaluated population. A comprehensive experimental analysis is performed on
two test suites, which showcases the superiority of the relation model over
regression and classification models in the selection phase. Furthermore, the
surrogate-selected unevaluated solutions with high potential have been shown to
significantly enhance the efficiency of the algorithm. | Hao Hao, Xiaoqun Zhang, Aimin Zhou | 2023-09-21T12:09:55Z | http://arxiv.org/abs/2309.11994v2 | # Enhancing SAEAs with Unevaluated Solutions: A Case Study of Relation Model for Expensive Optimization
###### Abstract
Surrogate-assisted evolutionary algorithms (SAEAs) hold significant importance in resolving expensive optimization problems (EOPs). Extensive efforts have been devoted to improving the efficacy of SAEAs through the development of proficient model-assisted selection methods. However, generating high-quality solutions is a prerequisite for selection. The fundamental paradigm of evaluating a limited number of solutions in each generation within SAEAs reduces the variance of adjacent populations, thus impacting the quality of offspring solutions. This is a frequently encountered issue, yet it has not gained widespread attention. This paper presents a framework using unevaluated solutions to enhance the efficiency of SAEAs. The surrogate model is employed to identify high-quality solutions for direct generation of new solutions without evaluation. To ensure dependable selection, we have introduced two tailored relation models for the selection of the optimal solution and the unevaluated population. A comprehensive experimental analysis is performed on two test suites, which showcases the superiority of the relation model over regression and classification models in the selection phase. Furthermore, the surrogate-selected unevaluated solutions with high potential have been shown to significantly enhance the efficiency of the algorithm.
Keywords: Expensive optimization, unevaluated solutions, relation model, surrogate-assisted evolutionary algorithm

Hao Hao, Xiaoqun Zhang, Aimin Zhou

Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China; Shanghai Frontiers Science Center of Molecule Intelligent Synthesis, Shanghai 200062, China; School of Computer Science and Technology, East China Normal University, Shanghai 200062, China
## 1 Introduction
Valued for their global search capability and adaptability, evolutionary algorithms (EAs) are extensively utilized in various fields [1]. Although EAs commonly presume that a method is available to assess each solution's fitness, expensive optimization problems often present significant challenges due to the extensive computational resources or costly experiments they require [2, 3]. Addressing such practical limitations, surrogate-assisted evolutionary algorithms (SAEAs) have gained prominence. By integrating the robust global search of EAs with cost-effective surrogate model estimation, SAEAs have emerged as a mainstream method for solving these resource-intensive problems [4]. The SAEA framework is depicted in Figure 1, with 'reproduction operators' and 'surrogate-assisted selection operators' being the pivots around which the framework revolves. It is the duty of the reproduction operators to generate innovative trial solutions, thus expanding the exploration of the search space. Concurrently, the surrogate-assisted selection operators strategically select prospective high-quality solutions for real fitness evaluation. These two operators alternate execution to drive the population toward the optimal solution region.
An important question is how the aforementioned two modules, 'reproduction operators' and 'surrogate-assisted selection operators', cooperate in SAEAs. Given the limited evaluation budget, the choice of which solutions, and how many of them, to evaluate with the real function will impact SAEAs' preferences in terms of exploration and exploitation. Simultaneously, the decision of which solutions to use as parent solutions for generating new ones also dictates the SAEAs' search direction. The answer to this issue depends on the algorithm's
selection strategy, which can also be referred to as the model management strategy [2, 5]. There are two typical methods, 'select \(N\)' and 'select 1'. In 'select \(N\)' [6, 7], a number of trial solutions exceeding the population size (\(N\)) are generated using the reproduction operator. Then, a surrogate model is utilized to select \(N\) solutions as offspring solutions for real evaluation. This methodology has the benefit of augmenting the quantity of superior solutions in each generation and allowing them to enter the following generation, which promotes population mobility. However, it aggravates the depletion of the real function evaluation budget. On the contrary, in the 'select 1' method [8, 9], the benefit is a noteworthy reduction in the real evaluation budget. However, ideally, only one solution in the subsequent generation's population will be altered. As a result, there is a possibility that the distribution of the population may not be substantially enriched, and newly generated solutions may remain confined to the current search area (a simple visualization experiment in Section 2.4 confirms this). The acquisition function is used to balance the search preferences and improve the diversity of the population to a certain extent. However, its ability to improve the current population distribution is limited. Therefore, when only one solution is sampled, it is difficult for the new solutions in the next generation to escape the current local optimum.
The aforementioned 'select \(N\)' and 'select 1' strategies both present unique advantages and challenges. This prompts the question: Can we devise a simple method that amalgamates the strengths of both 'select \(N\)' and 'select 1' strategies? A method where, in the current population, the solution deemed best by the surrogate model is evaluated with the real function, and the external archive and surrogate model are updated accordingly. At the same time, certain high-quality solutions identified by the model, without real evaluations, are chosen to directly contribute to the generation of solutions for the following iteration. Even though these solutions may not necessarily be optimal, their potential to surpass some of the parent solutions in quality is plausible. Implementing such a method would not escalate the algorithm's evaluation cost, but could augment the population's diversity and accelerate the algorithm's progression towards the optimal region.
The successful implementation of the aforementioned proposal is contingent upon a pivotal prerequisite of dependable prediction results from surrogate models. A variety of regression [10, 11] and classification [12, 13] models can be employed to ascertain solution quality [7]. Despite the significant contributions of existing models, our goal in this paper is to develop surrogate models that are better aligned with the specific needs of the problem at hand. Considering the accomplishments of widely-used regression and classification models, we believe there's still room to create even more reliable surrogate models. To that end, we introduce the relation model, a new surrogate model variant that we previously proposed [7]. The relation model diverges from traditional regression and classification models in its learning objective: it doesn't target the quality of a single solution (as in regression-based models) or the category of a solution (as in classification-based models), but rather the superiority relationship between two solutions. The relation model exploits the comparative nature of evolutionary algorithms [14] and has demonstrated remarkable performance in single-objective [7] and multi-objective problems [15, 16, 17, 18, 19].
In this study, we strive to customize the construction strategy of the relation model to fulfill the framework's demand for model selection accuracy amidst the requirement for potential quality solutions. Therefore, we propose a dual relation models-assisted single-objective optimization algorithm (DRSO) and design two methods for constructing relation models. These methods respectively select the optimal solution (\(\mathcal{Q}_{best}\)) and high-quality unevaluated solutions (\(\mathcal{P}_{u}\)). We employ the estimation of distribution algorithm (EDA) to study the population's distribution information and generate offspring solutions. While the strategy of utilizing unevaluated solutions has been implemented for multi-objective optimization [20], our current work specifically focuses on designing a relation model for the selection of unevaluated solutions in single-objective optimization, instead of using a classifier. The main contributions of this paper can be summarized as follows:
* Illumination of the issue of offspring quality degradation in SAEAs when only a single offspring per generation is selected. In response, we propose a simple and universal method fueled by unevaluated solutions.

Figure 1: Flowchart of surrogate-assisted evolutionary algorithms.
* Proposal of two methods for constructing relation models, known as the fitness-based and category-based criteria. These methods leverage data relationships to construct surrogate models.
* Introduction of a novel strategy, based on the EDA, for generating solutions by integrating evaluated and unevaluated solutions. The efficacy of this novel algorithm is validated on two test suites, highlighting both the effectiveness of the relation model and the significance of incorporating unevaluated solutions.
The rest of the article unfolds as follows. Section 2 presents some preliminaries. Section 3 outlines the unevaluated solutions driven SAEAs framework, covering the construction of the relation model and the generation of trial solutions. Section 4 showcases an empirical evaluation of the proposed method and compares it with other methods across two test suites. Finally, Section 5 provides a summary of the paper and explores potential directions for future research.
## 2 Preliminaries
This section provides the preliminary knowledge related to this work. Section 2.1 presents the basic concepts of EOPs. Section 2.2 introduces different strategies for offspring selection. Section 2.3 provides an overview of surrogate models, particularly focusing on relation models. Section 2.4 discusses the impact of population variance on the efficiency of SAEAs.
### Expensive optimization problems
An unconstrained expensive optimization problem (for minimization) can be formulated as follows:
\[\min_{\mathbf{x}\in\Omega}\ f(\mathbf{x}) \tag{1}\]
where \(\mathbf{x}=(x_{1},\ldots,x_{n})^{T}\) is a decision variable vector and \(\Omega\subseteq R^{n}\) defines the feasible region of the search space. The objective function \(f:R^{n}\to R\) is essentially a black box, since its internal workings cannot be tracked, and evaluating it in real-world applications can be quite costly. In fact, the lack of a closed-form objective function and the expensive nature of evaluating \(f(\cdot)\) pose significant challenges to both numerical and heuristic optimization techniques that are traditionally employed.
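To make the setting concrete, the following is a minimal sketch (ours, not from any cited work) of how such a black-box objective with a hard evaluation budget might be wrapped; the Ellipsoid test function and the budget value are illustrative assumptions only.

```python
import numpy as np

class ExpensiveProblem:
    """Black-box objective with a hard budget on real evaluations (illustrative)."""

    def __init__(self, n_var=20, max_fes=500):
        self.n_var = n_var        # dimension of the decision vector x
        self.max_fes = max_fes    # maximum number of real evaluations FE_max
        self.fes = 0              # evaluations consumed so far

    def evaluate(self, x):
        if self.fes >= self.max_fes:
            raise RuntimeError("evaluation budget exhausted")
        self.fes += 1
        # stand-in for a costly simulation/experiment: the Ellipsoid function
        i = np.arange(1, self.n_var + 1)
        return float(np.sum(i * np.asarray(x) ** 2))

problem = ExpensiveProblem()
print(problem.evaluate(np.zeros(20)))  # 0.0, and one unit of budget is consumed
```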
### Offspring Selection Methods
The purpose of offspring selection is to guide population movement toward the optimal regions while ensuring a certain level of diversity in the distribution. Depending on the offspring selection strategy, representative works in SAEAs can be categorized into three groups as follows:
\(\bullet\) select \(N\): This strategy is employed during the iterative process of several algorithms such as BCPS [6], FCPS [21], RCPS [7] and SAMFEO [22]. With the use of the reproduction operator, it generates a significant number of trial solutions surpassing the population size (\(N\)). Following this, a surrogate model is applied to select \(N\) solutions for real evaluation, creating the offspring solutions.
\(\bullet\) select 1: In every generation, only the top solution is chosen for real function evaluation and preserved in an archive. Acquisition functions are employed to enhance the exploratory capability of the algorithm (a generic sketch of two such criteria is given after this list). Specifically, GPEME [8] and SADE-Sammon [9] utilize the lower confidence bound (LCB) [23] to guide the search, EGO [24] adopts the expected improvement (EI) method [25], and SA-EDA [26] integrates multiple acquisition strategies using the GP-Hedge method [27] to enhance the robustness of the selection.
\(\bullet\) others: Customized approaches have been proposed in SAMSO [28] and SACOSO [29], where multiple particle swarms are utilized to increase diversity through mutual interactions between swarms. In LLSO [30] and DFC-MOEA [20], a hierarchical strategy is employed for solution selection. In addition, LLSO enhances population diversity by introducing random solutions, while DFC-MOEA selects solutions with medium membership degrees using a classifier.
Each of the aforementioned methods has its own advantages, with the core consideration being the balance of interests under a limited computation budget of EOPs.
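For reference, the two acquisition criteria mentioned in the 'select 1' item can be written compactly. The snippet below is a generic sketch for a minimization problem, given a surrogate's predictive mean and standard deviation; the trade-off constant \(\kappa\) and the use of SciPy are our own illustrative choices rather than details of the cited algorithms.

```python
import numpy as np
from scipy.stats import norm

def lcb(mu, sigma, kappa=2.0):
    # lower confidence bound: prefer small predicted mean and large uncertainty
    return mu - kappa * sigma

def expected_improvement(mu, sigma, f_best):
    # expected improvement over the best objective value found so far (minimization)
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```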
### Surrogate model
In SAEAs, surrogate models typically fall into two main categories [7]: regression and classification models. In regression-based SAEAs, the original function is replaced with a curve fitting the data points' distribution. Examples of such models include polynomial regression [31], radial basis function (RBF) networks [10], and Gaussian processes (GPs) [11]. Classification-based SAEAs, on the other hand, label solutions based on their quality, using models such as support vector machines (SVM) [12], artificial neural networks (ANN) [13], and fuzzy K-nearest neighbor (KNN) [32].
A newer category in SAEAs is relation learning [7, 15, 16, 17, 18], where the model is trained on the relationships between solutions, as opposed to using a single solution in regression or classification-based SAEAs. This approach shows promise in single-objective optimization, as it leverages the superiority and inferiority relationships between solutions for pre-selection operations on offspring solutions, resulting in improved performance [7]. In multi-objective optimization, methods like REMO [15] and CREMO [17] use a penalty-based boundary intersection (PBI) [33] approach to categorize solutions in the multi-objective space. A relation dataset is constructed based on the belonging relationship between samples, and a neural network is trained to learn the sample features. This process has proven effective in creating reliable surrogates for both continuous and discrete optimization problems. Methods like \(\theta\)-DEA-DP [18] directly apply the dominance relationship as the relationship definition for solutions, focusing on the dominance relationship learning and prediction.
Previous studies have demonstrated the advantages of using relation models in SAEAs. The construction of the relation model can generally be divided into three steps: data preparation, model training, and model usage. In data preparation, a certain criterion is used to construct relationship samples, \(\mathcal{D}=\{(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle,l)\,|\,\mathbf{x}_{i},\mathbf{x}_{j}\in\mathcal{P}\}\), where \(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle\) is a feature vector composed of each pair of solutions, and \(l\) is the label of \(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle\). Machine learning methods are then used to learn from the relation data, and a prediction method based on the relation model is designed to select solutions. In this work, we address the specific needs of selecting the best solution (\(\mathcal{Q}_{best}\)) and high-quality unevaluated solutions (\(\mathcal{P}_{u}\)) and propose new methods for constructing relationship models.
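As an illustration of the data-preparation step, the sketch below builds the relation samples \(\mathcal{D}\) from a set of evaluated solutions, labelling a pair \(+1\) when its first member has the smaller objective value; this is a minimal example of the idea, not the implementation used later in the paper.

```python
import numpy as np

def build_relation_data(X, y):
    """X: (m, n) array of solutions, y: (m,) objective values (minimization)."""
    feats, labels = [], []
    m = len(X)
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            feats.append(np.concatenate([X[i], X[j]]))  # feature vector <x_i, x_j>
            labels.append(+1 if y[i] < y[j] else -1)    # +1: x_i is the better solution
    return np.array(feats), np.array(labels)
```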
### Impact of variance among adjacent generations' populations
When confronted with the EOP in Eq. (1), it is expedient to evaluate only the estimated best solution in order to conserve the evaluation budget. However, this paradigm reduces the variance between adjacent populations, so the new solutions generated in the subsequent iteration remain confined to the present search region and the algorithm becomes inefficient. Figure 2 presents the results obtained from five successive generations of search on a 2-dimensional Ellipsoid function [8] using the genetic algorithm (GA) [34]. The first row shows the selection of \(N\) solutions per generation, whereas the second row illustrates the selection of only the optimal solution for the next generation. The outcomes indicate that utilizing a single solution to update the population can lower the search efficiency of the original GA algorithm. This is because selecting only the best solution can result in a loss of diversity in the population and hinder the exploration of the search space.
Additionally, we carried out 30 independent runs of GA, differential evolution (DE) [35], and EDA [36] (three fundamental EAs) on the LZG test suite [8]. According to the experimental results shown in Table 1 and analyzed using the Wilcoxon rank sum test [37], selecting a single solution leads to a decrease in the performance of all three algorithms. It can be seen that the performance degradation caused by selecting only the optimal solution is common to various evolutionary algorithms.
The aforementioned toy studies demonstrate that a decrease in inter-population variance can lead to a decline in the performance of some fundamental algorithm operators. Therefore, adding some unevaluated
\begin{table}
\begin{tabular}{c c c c c c} \hline Alg & method & Ellipsoid & Rosenbrock & Ackley & Griewank \\ \hline GA & select N & 3.97e-01(1.37e-01) & 5.17e+01(2.68e+01) & 2.25e+00(3.57e-01) & 1.01e+00(1.79e-03) \\ select 1 & 1.45e+01(7.57e+00)(\(-\)) & 9.27e+01(4.04e+01)(\(-\)) & 8.33e+00(2.84e+00)(\(-\)) & 1.22e+00(1.02e-01)(\(-\)) \\ \hline \multirow{2}{*}{DE} & select N & 2.64e-01(8.02e-02) & 3.36e+01(1.51e+01) & 2.60e+00(4.07e-01) & 1.01e+00(2.39e-03) \\ & select 1 & 3.27e+01(2.50e-01)(\(-\)) & 1.29e+02(4.29e+01)(\(-\)) & 9.42e+001(1.51e+00)(\(-\)) & 1.57e+00(3.91e-01)(\(-\)) \\ \hline \multirow{2}{*}{EDA} & select N & 4.62e-02(7.29e-02) & 1.92e+01(3.08e+00) & 3.69e-01(1.64e-01) & 9.61e+01(1.27e-02) \\ & select 1 & 1.00e+01(2.82e+00)(\(-\)) & 5.72e+01(1.18e+01)(\(-\)) & 7.31e+00(5.81e-01)(\(-\)) & 1.21e+00(4.18e-02)(\(-\)) \\ \hline \end{tabular}
\end{table}
Table 1: Statistics of mean and standard deviation results obtained by GA, DE and EDA on the LZG test suite with \(n=20\).
solutions to supplement diversity can be a direct and simple method to improve the performance of the SAEAs algorithm.
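The effect described above is easy to reproduce. The following sketch contrasts the two update schemes on the Ellipsoid function using a simple Gaussian sampling operator; the operator, population size, and budget are our own illustrative choices and differ from the GA/DE/EDA settings behind Table 1, so the numbers will not match the table.

```python
import numpy as np

def ellipsoid(x):
    i = np.arange(1, x.shape[-1] + 1)
    return np.sum(i * x ** 2, axis=-1)

def run(select_n, n=20, pop=30, gens=50, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(-5.12, 5.12, size=(pop, n))
    fit = ellipsoid(P)
    for _ in range(gens):
        # sample offspring around the current population (EDA-like Gaussian model)
        mu, sigma = P.mean(axis=0), P.std(axis=0) + 1e-12
        Q = rng.normal(mu, sigma, size=(pop, n))
        qfit = ellipsoid(Q)
        if select_n:
            # 'select N': keep the best pop individuals among parents + offspring
            allP, allf = np.vstack([P, Q]), np.concatenate([fit, qfit])
            idx = np.argsort(allf)[:pop]
            P, fit = allP[idx], allf[idx]
        else:
            # 'select 1': only the single best offspring replaces the worst parent
            b, w = np.argmin(qfit), np.argmax(fit)
            if qfit[b] < fit[w]:
                P[w], fit[w] = Q[b], qfit[b]
    return fit.min()

print("select N :", run(True))
print("select 1 :", run(False))
```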
## 3 Proposed method
In this section, we begin by introducing a basic framework for surrogate-assisted selection and unevaluated solution driven reproduction. Then, we will present two innovative approaches for constructing relation models. Finally, we will provide a detailed explanation of the reproduction process within this framework.
### Main Framework
```
0:\(N\) (population size); \(FE_{max}\) (maximum number of FEs); \(\alpha\) (size of training data set).
0:\(\mathcal{A}_{best}\) (the optimum solution).
1:\(\mathcal{P}_{e}\leftarrow\mathbf{Initialization}(N)\). /* initialize population*/
2:\(\mathcal{A}\leftarrow\mathcal{P}_{e}\). /* update archive*/
3:\(fes\gets N\). /* update evaluation counter*/
4:\(\mathcal{P}_{u}\leftarrow\emptyset\). /* initialize an empty set*/
5:while\(fes\leqslant FE_{max}\)do
6:\(\mathcal{Q}\leftarrow\mathbf{Reproduction}(\mathcal{P}_{e},\mathcal{P}_{u},N)\). /* generate new solutions*/
7:\(\mathcal{M}\leftarrow\mathbf{Training}(\mathcal{A}_{1:\alpha})\). /* train surrogate model*/
8:\([\mathcal{Q}_{best},\mathcal{P}_{u}]\leftarrow\mathbf{SA\_selection}(\mathcal{Q},\mathcal{M})\). /* surrogate-assisted selection*/
9:\(\mathcal{A}\leftarrow\mathcal{A}\cup\mathbf{Evaluation}(\mathcal{Q}_{best})\). /* evaluate new solution and update archive*/
10:\(\mathcal{P}_{e}\leftarrow\mathcal{A}_{1:N}\). /* update population*/
11:\(fes\gets fes+1\). /* update evaluation counter*/
12:endwhile
```
**Algorithm 1** Main Framework
Algorithm 1 presents a basic framework put forward in this article, comprising surrogate-assisted selection and unevaluated solution driven reproduction. The specifics are succinctly summarized as follows.
\(\bullet\)**Initialization (lines 1-4)**: A set of \(N\) initial solutions are sampled from \(\Pi_{i=1}^{n}\left[a_{i},b_{i}\right]\) by means of the Latin hypercube sampling method (LHS) [38], with each of these solutions undergoing an evaluation by the real function and subsequently being stored in the archive \(\mathcal{A}\). The fitness evaluation count of these evaluations, denoted by the \(fes\), is updated accordingly. Eventually, an empty set \(\mathcal{P}_{u}\) needs to be initialized to store the unevaluated solutions selected by the surrogate model in the subsequent steps.
\(\bullet\)**Stop condition (line 5)**: The algorithm halts once the \(fes\) surpasses the designated maximum number of evaluations (\(FE_{max}\)).
\(\bullet\)**Generate new solutions (line 6)**: Based on the current evaluated population \(\mathcal{P}_{e}\) and unevaluated population \(\mathcal{P}_{u}\), an offspring population \(\mathcal{Q}\) containing \(N\) individuals is generated utilizing various heuristic
Figure 2: Distribution of the population during continuous evolution.
operators, such as DE, GA, EDA, among others. In this study, an approach combining a variable-width histogram (VWH) model and local search will be employed to generate new solutions [36].
\(\bullet\)**Train surrogate model (line 7)**: The best \(\alpha\) solutions are selected from the archive \(\mathcal{A}\) to train the surrogate models. In this work, two customized methods for constructing relation models are provided.
\(\bullet\)**Surrogate-assisted selection (line 8)**: The surrogate model is utilized to evaluate the solutions in the offspring population \(\mathcal{Q}\), with the optimal solution being selected as \(\mathcal{Q}_{best}\). A portion of the high-quality solutions in \(\mathcal{Q}\) will be selected as unevaluated solutions and stored in \(\mathcal{P}_{u}\).
\(\bullet\)**Update archive (line 9)**: The solution \(\mathcal{Q}_{best}\) will be evaluated by the real objective function and saved in archive \(\mathcal{A}\).
\(\bullet\)**Select solution for next generation (line 10)**: \(N\) evaluated solutions are selected from the archive \(\mathcal{A}\) based on their objective function values to constitute the population \(\mathcal{P}_{e}\).
\(\bullet\)**Update the counter (line 11)**: Since only one solution, \(\mathcal{Q}_{best}\), undergoes real evaluation during each iteration, \(fes\) is incremented by one.
In order to facilitate the model-assisted selection (line 8), it is necessary to devise surrogate models that can accurately select the optimal solution \(\mathcal{Q}_{best}\) from \(\mathcal{Q}\), as well as identify a subset of potentially good solutions that have not been evaluated but meet a certain threshold to form \(\mathcal{P}_{u}\). Additionally, we need to design a method to generate offspring solutions using these unevaluated solutions. Therefore, in the following sections, we will provide a detailed description of the design of the surrogate model and the generation of new solutions.
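To make the control flow of Algorithm 1 concrete, a minimal Python rendering of the main loop is sketched below. The helper functions (`reproduce`, `train_relation_models`, `sa_selection`) are placeholders for the operators described in the following sections, and the use of `scipy.stats.qmc.LatinHypercube` for initialization is only one possible realization of the LHS step; none of this is taken from a released implementation.

```python
import numpy as np
from scipy.stats import qmc

def drso_sketch(f, bounds, N=50, FE_max=500, alpha=150):
    """Skeleton of Algorithm 1 (helpers `reproduce`, `train_relation_models`,
    and `sa_selection` are stubs standing in for Sections 3.2-3.3)."""
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    X = qmc.scale(qmc.LatinHypercube(d=len(bounds)).random(N), lows, highs)
    archive = [(x, f(x)) for x in X]            # evaluated solutions (lines 1-2)
    fes = N                                     # evaluation counter (line 3)
    P_u = []                                    # unevaluated population (line 4)

    while fes <= FE_max:                        # line 5
        P_e = sorted(archive, key=lambda s: s[1])[:N]        # current population
        Q = reproduce(P_e, P_u, N)                           # line 6
        M1, M2 = train_relation_models(sorted(archive, key=lambda s: s[1])[:alpha])
        q_best, P_u = sa_selection(Q, M1, M2)                # line 8
        archive.append((q_best, f(q_best)))                  # line 9, one real FE
        fes += 1                                             # line 11
    return min(archive, key=lambda s: s[1])                  # best evaluated solution
```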
### Relation model
This subsection proposes two relation-based methods for constructing surrogate models, which are referred to as the fitness-based criterion (C1) and the category-based criterion (C2), respectively, and are used for two specific applications. The C1 criterion is used for selecting \(\mathcal{Q}_{best}\), while the C2 criterion is used for selecting \(\mathcal{P}_{u}\). Each model consists of three components: data preparation, model training, and model usage. The following sections will provide a detailed description of the implementation details of each component.
#### 3.2.1 Data preparation
Data preparation refers to how to construct relation pairs from the original training data \(\mathcal{D}\). We have designed two data construction methods for C1 and C2 criteria.
```
0:\(\mathcal{D}=\{(\mathbf{x}_{1},f(\mathbf{x}_{1})),\cdots,(\mathbf{x}_{\alpha},f(\mathbf{x}_{\alpha}))\}\) (Training Data).
0:\(\mathcal{D}_{r}=\{(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle,l)\,|\,i,j\in[1,\alpha],l\in\{-1,+1\}\}\) (Relation Data).
1:\(\mathcal{D}_{r}\leftarrow\{(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle,l)\,|\,\mathbf{x}_{i},\mathbf{x}_{j}\in\mathcal{D},i\neq j\}\), where the label \(l\) is assigned as follows: \[l(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle)=\begin{cases}+1,&f(\mathbf{x}_{i})<f(\mathbf{x}_{j})\\ -1,&f(\mathbf{x}_{i})\geqslant f(\mathbf{x}_{j})\end{cases}\]
```
**Algorithm 2** Data preparation in fitness criterion (C1)
\(\bullet\) Fitness-based criterion (C1): To determine the superiority or inferiority between any given pairs of relations \(\langle x_{i},x_{j}\rangle\), their corresponding fitness values (i.e., objective function values) are used as a pivotal criterion. This allows for the assignment of a label to each pair. The process is elaborated in Algorithm 2, which generates a labeled training dataset \(\mathcal{D}_{r}\) consisting of two classes. Here, \(\alpha\) denotes the total number of elements present in the dataset \(\mathcal{D}\).
\(\bullet\) Category-based criterion (C2): First, a threshold is set based on the distribution of fitness values in the current population. Then, by comparing the solutions in \(\mathcal{D}\) against this threshold, they are classified into different categories (\(\mathbf{X}_{good}\) and \(\mathbf{X}_{bad}\)). Finally, labels are assigned to the relation pairs \(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle\) based on the categories of the two solutions that make up each pair. The specific details are shown in Algorithm 3. In lines 1-2, based on the classification threshold \(t\), the top \(t\) fraction of solutions in the data set \(\mathcal{D}\), ordered from best to worst fitness, are selected as \(\mathbf{X}_{good}\) samples, while the rest are assigned as \(\mathbf{X}_{bad}\) samples. In line 3, the relation pair \(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle\) is assigned a label according to the
categories to which they belong. Since \(t\) is not necessarily equal to 50%, the labels of the pairs in dataset \(\mathcal{D}_{r}\) may not be balanced. To address this, we have further implemented the balancing strategy (line 4) proposed in [15].
The label balancing strategy is described as follows. Let \(L(+1)\), \(L(-1)\), \(L(+0)\), and \(L(-0)\) represent the sets of pairs labeled as '\(+1\)', '\(-1\)', '\(+0\)' and '\(-0\)' respectively. The symbol \(|\cdot|\) denotes the cardinality of a set. It is apparent that \(|L(+1)|=|L(-1)|\), and \((|L(+0)|+|L(-0)|)>|L(+1)|\). In order to balance the training data sets, certain points from \(L(+0)\cup L(-0)\) must be removed. Let \(|L(+1)|=|L(-1)|=\theta\). There exist three situations.
* If \(|L(+0)|>0.5\theta\) and \(|L(-0)|>0.5\theta\), \(0.5\theta\) points are arbitrarily retained from both \(L(+0)\) and \(L(-0)\).
* If \(|L(+0)|>0.5\theta\) and \(|L(-0)|<0.5\theta\), \(L(-0)\) is retained, and \(\theta-|L(-0)|\) points are randomly selected from \(L(+0)\).
* If \(|L(+0)|<0.5\theta\) and \(|L(-0)|>0.5\theta\), \(L(+0)\) is retained, and \(\theta-|L(+0)|\) points are randomly selected from \(L(-0)\). By following this method, the three label classes of the training data each have a size of \(\theta\).
After employing two data preparation strategies and customizing the training data based on the C1 and C2 criteria, we have generated a 2-class dataset for \(\mathcal{D}_{r}\) using the C1 strategy and a 3-class dataset using the C2 strategy. In the following section, we will introduce the model training process.
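To make the two data-preparation schemes concrete, the sketch below builds the relation pairs in NumPy. It is our own illustration: the label convention for C2 ('good' listed first yields \(+1\)) and the simplified balancing of the '0'-labelled pairs are assumptions, since Algorithm 3 is only summarized above; for minimization, smaller objective values are treated as better.

```python
import numpy as np

def build_c1_pairs(X, y):
    """C1: every ordered pair <x_i, x_j>, i != j, becomes the feature [x_i, x_j]
    with label +1 if f(x_i) < f(x_j) and -1 otherwise (Algorithm 2)."""
    feats, labels = [], []
    for i in range(len(X)):
        for j in range(len(X)):
            if i != j:
                feats.append(np.concatenate([X[i], X[j]]))
                labels.append(+1 if y[i] < y[j] else -1)
    return np.asarray(feats), np.asarray(labels)

def build_c2_pairs(X, y, t=0.5, rng=np.random.default_rng(0)):
    """C2: label pairs by the categories of their members (top-t fraction = 'good')
    and roughly balance the '0'-labelled pairs against the '+1'/'-1' ones."""
    good = set(np.argsort(y)[: max(1, int(t * len(X)))].tolist())
    pos, neg, zero_gg, zero_bb = [], [], [], []
    for i in range(len(X)):
        for j in range(len(X)):
            if i == j:
                continue
            pair = np.concatenate([X[i], X[j]])
            if i in good and j not in good:
                pos.append(pair)           # 'good' before 'bad' -> +1
            elif i not in good and j in good:
                neg.append(pair)           # 'bad' before 'good' -> -1
            elif i in good:
                zero_gg.append(pair)       # both 'good' -> 0  (L(+0))
            else:
                zero_bb.append(pair)       # both 'bad'  -> 0  (L(-0))

    def subsample(pairs, k):               # keep at most k pairs of a subset
        if not pairs or k <= 0:
            return []
        idx = rng.choice(len(pairs), size=min(k, len(pairs)), replace=False)
        return [pairs[i] for i in idx]

    theta = len(pos)                        # |L(+1)| = |L(-1)|
    zeros = subsample(zero_gg, theta // 2) + subsample(zero_bb, theta - theta // 2)
    feats = np.asarray(pos + neg + zeros)
    labels = np.asarray([+1] * len(pos) + [-1] * len(neg) + [0] * len(zeros))
    return feats, labels
```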
#### 3.2.2 Model training
Extreme Gradient Boosting (XGBoost) [39] is a machine learning algorithm that is widely used in various data-driven applications. XGBoost is based on the concept of gradient boosting, where weak models are combined to create a strong model. In this work, XGBoost was used to learn the data features of \(\mathcal{D}_{r}\).
The relation pair samples \(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle\) are the data features of \(\mathcal{D}_{r}\), and the label \(l\) indicates the relationship between two solutions in a set of pairs. The model \(\mathcal{M}\) is trained using the XGBoost algorithm, as shown in Eq. (2).
\[l=\mathcal{M}(\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle) \tag{2}\]
In line 7 of Algorithm 1, two models need to be trained for the two criteria, hence we differentiate them as \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\). The next step is to explain how to select the potential solutions based on the two models.
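Assuming the pair-construction helpers sketched above, training the two relation models with the scikit-learn interface of XGBoost could look as follows; remapping the labels to consecutive non-negative integers is an implementation detail required by `XGBClassifier`, not part of the method description.

```python
from xgboost import XGBClassifier

def train_relation_models(X, y, t=0.5):
    """Train M1 (C1 criterion, 2 classes) and M2 (C2 criterion, 3 classes)
    with default XGBoost parameters, as used in this work."""
    f1, l1 = build_c1_pairs(X, y)                       # labels in {-1, +1}
    m1 = XGBClassifier().fit(f1, (l1 > 0).astype(int))  # -> classes {0, 1}

    f2, l2 = build_c2_pairs(X, y, t=t)                  # labels in {-1, 0, +1}
    m2 = XGBClassifier().fit(f2, l2 + 1)                # -> classes {0, 1, 2}
    return m1, m2
```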
#### 3.2.3 Model usage
For selecting appropriate solutions based on the two models \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), we propose two different model usage strategies, corresponding to the selection of the \(\mathcal{Q}_{best}\) and \(\mathcal{P}_{u}\). Specifically, we adopt the basic idea of 'voting-scoring' used in previous works [15] and redesign the rules for its implementation.
The term 'vote' pertains to the prediction process of labeling \(\langle\mathbf{u},\mathbf{x}\rangle\) and \(\langle\mathbf{x},\mathbf{u}\rangle\), where an unknown solution \(\mathbf{u}\) and an evaluated solution \(\mathbf{x}\) are combined. This procedure can be regarded as an assessment of the unknown solution's quality based on the quality of the known solution \(\mathbf{x}\). As such, we refer to this process as a 'voting' mechanism. The 'score' is determined based on the voting outcomes of all solutions \(\mathbf{x}\) in the training dataset \(\mathcal{D}\), and a specific rule is employed for statistical analysis. The rule's configuration
necessitates consideration of the position between \(\mathbf{x}\) and \(\mathbf{u}\), as well as \(\mathbf{x}\)'s fitness or category. Next, we will introduce the 'vote-score' strategies that are devised based on the C1 and C2 criteria.
* Fitness-based criterion (C1): For a newly generated solution \(\mathbf{u}\in\mathcal{Q}\), it combines all the evaluated solutions \(\mathbf{x}\in\mathcal{D}\). Based on the positions of the two solutions, two sets of relation pairs can be obtained, e.g., \(\langle\mathbf{x},\mathbf{u}\rangle\) and \(\langle\mathbf{u},\mathbf{x}\rangle\). Thus, utilizing Eq. (3), two sets of predicted outcomes, \(l^{I}\) and \(l^{II}\), can be derived.
\[\begin{split} l^{I}&=\{\mathcal{M}_{1}(\langle \mathbf{x},\mathbf{u}\rangle),\mathbf{x}\in\mathbf{X}\}\\ l^{II}&=\{\mathcal{M}_{1}(\langle\mathbf{u}, \mathbf{x}\rangle),\mathbf{x}\in\mathbf{X}\}\end{split} \tag{3}\]
The scoring rules are defined by Eq. (4).
\[S_{1}(\mathbf{u})=c(l^{II}_{+1})+c(l^{I}_{-1})-c(l^{I}_{+1})-c(l^{II}_{-1}) \tag{4}\]
Here, the function \(c(\cdot)\) returns the cardinality of elements present in the input set. The subscript of \(l\) denotes the labels of the relation pairs that constitute the current subset. For example, \(l^{I}_{+1}\) denotes a set that encompasses all elements in the set \(l^{I}\) whose predicted label equals \(+1\). The quality of solution \(\mathbf{u}\) can be assessed by utilizing Eq. (4), where a higher value indicates superior quality of \(\mathbf{u}\). Under the C1 criterion, the ultimate learning outcome can be perceived as a regression process for the original data distribution.
* Category-based criterion (C2): Under the C2 criterion, the 'voting' rule is formulated as Eq. (5). As \(\mathbf{x}\) possesses a categorical attribute ('good', 'bad'), the voting outcomes are classified into four categories based on the position and category of \(\mathbf{x}\). The relation model \(\mathcal{M}_{2}\) forecasts the outcomes of the four groups of relation pairs, denoted by set \(l^{I}\), \(l^{II}\), \(l^{III}\), and \(l^{IV}\), respectively.
\[\begin{split} l^{I}=&\{\mathcal{M}_{2}(\langle \mathbf{x},\mathbf{u}\rangle),\mathbf{x}\in\mathbf{X}_{good}\}\\ l^{II}=&\{\mathcal{M}_{2}(\langle\mathbf{u}, \mathbf{x}\rangle),\mathbf{x}\in\mathbf{X}_{good}\}\\ l^{III}=&\{\mathcal{M}_{2}(\langle\mathbf{x}, \mathbf{u}\rangle),\mathbf{x}\in\mathbf{X}_{bad}\}\\ l^{IV}=&\{\mathcal{M}_{2}(\langle\mathbf{u}, \mathbf{x}\rangle),\mathbf{x}\in\mathbf{X}_{bad}\}\end{split} \tag{5}\]
The scoring rules are defined by Eq.(6).
\[\begin{split} S_{2}(\mathbf{u})=\frac{1}{|\mathbf{X}|}\times(c(l ^{II}_{+1})+c(l^{IV}_{+1})+c(l^{I}_{0})+c(l^{II}_{0})+c(l^{I}_{-1})+c(l^{III}_{ -1})\\ -c(l^{I}_{+1})-c(l^{III}_{+1})-c(l^{III}_{0})-c(l^{IV}_{0})-c(l^{ II}_{-1})-c(l^{IV}_{-1}))\end{split} \tag{6}\]
In Eq. (6), the symbolism is similar to that of Eq. (4), but with a focus on the processing of the '0' label. According to the definition of relation pairs in the C2 criterion (Algorithm 3), the '0' label indicates that the two solutions in the pair belong to the same category. Therefore, based on the category of \(\mathbf{x}\), the contribution to the scoring can be determined. For instance, \(l^{II}_{+1}\) denotes the prediction result of \(\langle\mathbf{u},\mathbf{x}\rangle\) as '\(+1\)', indicating that \(\mathbf{u}\) is considered better than \(\mathbf{x}\). As a result, the score \(c(l^{II}_{+1})\) has a positive impact on the quality of \(S_{2}(\cdot)\). \(S_{2}(\cdot)\) can be scaled to \([-1,+1]\) by multiplying it with \(\frac{1}{|\mathbf{X}|}\). When \(S_{2}(\mathbf{u})>0\), it indicates that the relation model considers the current solution \(\mathbf{u}\) to be in the 'good' category, whereas when \(S_{2}(\mathbf{u})<0\), it indicates that the relation model considers the current solution \(\mathbf{u}\) to be in the 'bad' category. Moreover, the larger the \(|S_{2}(\mathbf{u})|\) value, the greater the likelihood of belonging to either of the two categories. Under the C2 criterion, the final learning outcome can be viewed as a classification process for the original data distribution.
After processing the features of the original training data (data preparation) and training the models, we obtain two models \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) based on the relation of the solutions. These models can be used to select solutions in line 8 of Algorithm 1. Specifically, each solution in the offspring population \(\mathcal{Q}\) will be predicted by \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), and then based on the C1 criterion, the solution with the maximum \(S_{1}\) value will be selected as \(\mathcal{Q}_{best}\), and based on the C2 criterion, all solutions that satisfy \(S_{2}>0\) will be selected as the \(\mathcal{P}_{u}\) population.
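A direct transcription of the voting-scoring rules in Eqs. (3)-(6) and of the final selection step is given below. The class encodings (0/1 for C1 and 0/1/2 for C2) follow the training sketch above, and `X`, `X_good`, `X_bad` denote the evaluated training solutions and their C2 categories; this is an illustrative reading of the rules, not the authors' code.

```python
import numpy as np

def score_s1(m1, u, X):
    """Eq. (4): S1(u) = c(l^II_{+1}) + c(l^I_{-1}) - c(l^I_{+1}) - c(l^II_{-1}).
    Class 1 stands for the original label +1, class 0 for -1."""
    lI = m1.predict(np.asarray([np.concatenate([x, u]) for x in X]))   # <x, u>
    lII = m1.predict(np.asarray([np.concatenate([u, x]) for x in X]))  # <u, x>
    return int((lII == 1).sum() + (lI == 0).sum()
               - (lI == 1).sum() - (lII == 0).sum())

def score_s2(m2, u, X_good, X_bad):
    """Eq. (6), scaled by 1/|X|. Classes 0/1/2 stand for labels -1/0/+1."""
    def votes(S, u_first):
        if len(S) == 0:
            return np.empty(0, dtype=int)
        pairs = [np.concatenate([u, x]) if u_first else np.concatenate([x, u])
                 for x in S]
        return m2.predict(np.asarray(pairs))
    lI, lII = votes(X_good, False), votes(X_good, True)
    lIII, lIV = votes(X_bad, False), votes(X_bad, True)
    plus = ((lII == 2).sum() + (lIV == 2).sum() + (lI == 1).sum()
            + (lII == 1).sum() + (lI == 0).sum() + (lIII == 0).sum())
    minus = ((lI == 2).sum() + (lIII == 2).sum() + (lIII == 1).sum()
             + (lIV == 1).sum() + (lII == 0).sum() + (lIV == 0).sum())
    n = len(X_good) + len(X_bad)
    return float(plus - minus) / n if n else 0.0

def sa_selection(Q, m1, m2, X, X_good, X_bad):
    """Pick Q_best by the C1 score and P_u as all offspring with S2 > 0."""
    s1 = [score_s1(m1, u, X) for u in Q]
    q_best = Q[int(np.argmax(s1))]
    P_u = [u for u in Q if score_s2(m2, u, X_good, X_bad) > 0]
    return q_best, P_u
```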
### Reproduction
This work employs the EDA/LS, proposed by Zhou et al [36], as the fundamental method for generating new solutions, while incorporating information from the unevaluated solutions in population \(\mathcal{P}_{u}\) to generate offspring population \(\mathcal{Q}\). The EDA/LS algorithm includes two key operators, namely the variable-width histogram (VWH) and the local search method that combines global statistical information with individual location information to improve the performance of an EDA. First, a brief introduction to the VWH is presented, followed by an explanation of the local search method. Finally, the method for integrating unevaluated solutions to generate offspring population \(\mathcal{Q}\) is described.
#### 3.3.1 Variable-width histogram model
An Estimation of Distribution Algorithm (EDA) is an evolutionary algorithm variant that uses a probabilistic model to guide the search toward promising regions of the search space. It replaces traditional crossover or mutation operators with this model. A specific type, Variable-width histograms (VWH) [36], assumes no cross-dimensional correlations and uses a histogram model to track the population distribution. VWH emphasizes promising areas, reducing probabilities in other regions to prevent premature convergence, making it ideal for enhancing convergence in EOPs.
Fig. 3 illustrates the process of VWH. For the \(j\)-th variable, the search space \([a_{j},b_{j}]\) is partitioned into \(M\) bins, where the M-2 bins in the middle correspond to the regions with solutions in the current population \(\mathcal{P}\). The values of the bins are determined by the number of solutions in each bin's interval, while the first and the last bins are assigned a lower value. To generate a new solution, a bin is randomly selected based on its value, and then a uniform random value is sampled from the selected bin's interval as the value of the new solution for the j-th variable. This process is repeated \(n\) times to obtain a complete solution in the probability model VWH. By repeating this process \(N\) times, \(N\) offspring solutions are generated. For details on the modeling and sampling process, please refer to [36]. It is worth noting that the modeling and sampling stages of VWH only use the distribution information in the decision space, making it suitable to incorporate unevaluated solutions to update VWH.
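The following is a rough NumPy sketch of such a histogram sampler. It only conveys the idea of one independent \(M\)-bin histogram per variable with down-weighted boundary bins; the exact bin placement, bin values, and default \(M\) of VWH are those of [36] and are not reproduced here, and the boundary weight `eps` is an arbitrary illustrative choice.

```python
import numpy as np

def sample_vwh(P, bounds, n_samples, M=10, eps=0.1, rng=np.random.default_rng(0)):
    """Simplified VWH-style sampling: per-dimension histogram over M bins,
    where the first/last bins cover [a_j, min] and [max, b_j] with a low value."""
    P = np.asarray(P)
    out = np.empty((n_samples, P.shape[1]))
    for j in range(P.shape[1]):
        a, b = bounds[j]
        lo, hi = P[:, j].min(), P[:, j].max()
        edges = np.concatenate(([a], np.linspace(lo, hi, M - 1), [b]))  # M bins
        counts, _ = np.histogram(P[:, j], bins=edges)
        w = counts.astype(float)
        w[0] += eps                      # boundary bins keep a small probability
        w[-1] += eps
        w /= w.sum()
        k = rng.choice(M, size=n_samples, p=w)            # pick a bin per sample
        out[:, j] = rng.uniform(edges[k], edges[k + 1])   # uniform within the bin
    return out
```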
#### 3.3.2 Local search
In order to compensate for the lack of local solution information, EDA/LS [36] proposes incorporating the results of local search into the offspring generated by the VWH model, given that the EDA model only uses the global information of the population to generate new solutions. In particular, a local model is constructed based on some of the best solutions from the current population \(\mathcal{P}\), which is then utilized to generate a portion of the new solutions. Afterward, these solutions are randomly combined with the solutions sampled from VWH to form the final offspring population \(\mathcal{Q}\). For more details, please refer to EDA/LS. Only the evaluated solutions are used for local search in this work, as they are driven by objective values.
#### 3.3.3 Unevaluated solutions driven reproduction
In each iteration, the process of generating the offspring population, using a combination of VWH and local search with both \(\mathcal{P}_{e}\) and \(\mathcal{P}_{u}\), will be executed according to the flowchart illustrated in Fig. 4.
Figure 3: Illustration of VWH model for population on early and late search stage.
The two populations, one consisting of evaluated solutions and the other consisting of unevaluated solutions, will be merged and modeled using the VWH model to capture their distribution. The resulting distribution will be sampled to generate a new population. Since the VWH only utilizes information about the search space, whether a solution has been evaluated or not does not affect the operation of the model. The local search method only uses the population \(\mathcal{P}_{e}\) to generate a new population, which is then randomly merged with the new population generated by the VWH model to obtain the final offspring population \(\mathcal{Q}\). The implementation details and parameter settings of the VWH model, as well as the local search method and the ratio for merging the two temporary populations, will be set to the default values specified in EDA/LS [36].
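Putting the pieces together, offspring generation with both populations can be sketched as below. The split between VWH samples and local-search samples (`ls_ratio`) and the `local_search` stub are placeholders for the EDA/LS defaults [36] rather than documented values.

```python
import numpy as np

def reproduce(P_e, P_u, N, bounds, ls_ratio=0.2, rng=np.random.default_rng(0)):
    """VWH is fit on P_e ∪ P_u (decision vectors only); local search uses P_e."""
    pool = np.vstack([P_e, P_u]) if len(P_u) else np.asarray(P_e)
    n_ls = int(ls_ratio * N)
    Q = np.vstack([sample_vwh(pool, bounds, N - n_ls, rng=rng),
                   local_search(P_e, n_ls)])   # stub for the EDA/LS local model
    rng.shuffle(Q)                             # random merge of the two subsets
    return Q
```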
## 4 Experimental study
In this section, we will evaluate the performance of the proposed algorithm and the relation model through comprehensive numerical experiments. Specifically, these experiments encompass comparative studies, ablation studies, and further analyses of the relation model and unevaluated solutions.
### Experimental Settings
#### 4.1.1 Test suites
In the empirical study, we utilized two well-known test suites. The first test suite, LZG [8], consists of four test functions: Ellipsoid, Rosenbrock, Ackley, and Griewank. These functions exhibit a range of landscapes, including unimodal, gully, and multi-modal. The second test suite used was the YLL test suite [40], which contains functions F1 through F4 with unimodal landscapes, and F5, F8 through F13 with multimodal landscapes. Function F6 has a step landscape, while function F7 has random noise. We evaluated the problems in both test suites in dimensions \(n=20\) for small-scale and \(n=50\) for medium-scale problems.
#### 4.1.2 Algorithms in study
For the empirical study, seven algorithms have been selected, namely CMA-ES [41], FCPS-CoDE [32], EDA/LS [36], SAMSO [28], Skopt 1, GPEME [8], and DRSO. These algorithms can be classified into three categories.
Footnote 1: [https://github.com/scikit-optimize/scikit-optimize](https://github.com/scikit-optimize/scikit-optimize)
* Basic EAs: CMA-ES and EDA/LS are two generic EAs, not explicitly tailored for expensive optimization.
* Bayesian optimization: Skopt is an effective global optimization algorithm that operates within the Bayesian optimization framework. It employs GPs as the surrogate model.
* Surrogate-assisted EAs: FCPS-CoDE utilizes a fuzzy K-nearest neighbor-based classification model for evaluating candidate solutions. GPEME employs GPs for the evaluation of candidate solutions. SAMSO is a surrogate-assisted PSO algorithm that incorporates RBFs. DRSO, proposed in this work, is a dual relation-model-assisted EDA that incorporates unevaluated solutions to generate new candidate solutions.
Figure 4: Flowchart for generating new solutions.
Due to the high computational complexity of Gaussian processes in high-dimensional spaces, GPEME and Skopt were only compared in the experiments for \(n=20\).
#### 4.1.3 Parameters setting
To ensure a fair comparison in the empirical study, we employ the recommended parameters specified in the original literature for each algorithm 2. The specifics of these parameters are outlined below.
Footnote 2: CMA-ES and SAMSO are implemented in PlatEMO [42]; Skopt: [https://github.com/scikit-optimize/scikit-optimize](https://github.com/scikit-optimize/scikit-optimize); FCPS-CoDE and GPEME are implemented by us based on the original reports.
* Termination condition: The maximum number of function evaluations (\(FE_{max}\)) is employed as the termination condition, set at 500 for all instances.
* Population size: \(N=30\) for CMA-ES, EDA/LS, and FCPS-CoDE; \(N=40\) for SAMSO (the default in PlatEMO [42]); and \(N=50\) for GPEME and DRSO.
* DRSO employs \(t=50\%\) for the C2 criterion to choose \(\mathcal{P}_{u}\).
* Parameters of compared algorithms: default setting according to the original version.
Each algorithm is executed on each test instance for 30 independent runs to account for randomness. The Friedman test [43] is first applied to check whether the results of the algorithms on each test instance differ significantly. The Wilcoxon rank sum test [37] is then employed to compare pairs of results. In the tables, the symbols '+', '-', and '\(\sim\)' signify that the value achieved by an algorithm is smaller than, greater than, or similar to the value obtained by DRSO, at a significance level of 0.05.
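For reference, both tests are available in SciPy; the snippet below shows how such a comparison could be computed on placeholder result arrays (the data and algorithm names are dummies).

```python
import numpy as np
from scipy.stats import friedmanchisquare, ranksums

rng = np.random.default_rng(0)
# Placeholder: 30 final objective values per algorithm on one test instance.
results = {name: rng.lognormal(mean=i, size=30)
           for i, name in enumerate(["DRSO", "Skopt", "GPEME", "EDA/LS"])}

_, p_friedman = friedmanchisquare(*results.values())     # overall difference test
z, p = ranksums(results["Skopt"], results["DRSO"])        # pairwise comparison
mark = "+" if (p < 0.05 and z < 0) else "-" if (p < 0.05 and z > 0) else "~"
print(f"Friedman p = {p_friedman:.2e}; Skopt vs. DRSO: {mark} (p = {p:.2e})")
```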
### Comparison study
Table 2 presents the statistical results of seven optimization algorithms evaluated on two test suites. The results are presented in terms of p-values obtained from the Friedman test, mean ranks, and the
\begin{table}
\begin{tabular}{c c c c c c c c} \hline problem & p-value & CMA-ES & FCPS-CoDE & EDA/LS & Skopt & GPEME & SAMSO & DRSO \\ \hline \multirow{2}{*}{Ellipsoid} & \multirow{2}{*}{1.80e-33} & 1.50e+02[7](\(-\)) & 1.30e+02[6](\(-\)) & 7.17e+01[5](\(-\)) & 8.20e-02[16](\(-\)) & 3.20e-01[2](\(+\)) & 1.87e+01[4](\(\approx\)) & 6.17e+00[3] \\ & & & (4.25e+01) & (3.13e+01) & (1.56e+01) & (1.91e-02) & (1.77e-01) & (2.52e+01) & (4.57e+00) \\ \hline \multirow{2}{*}{Rosenbrock} & \multirow{2}{*}{4.94e-32} & 3.03e+02[6](\(-\)) & 3.22e+02[7](\(-\)) & 2.37e+02[2](\(-\)) & 2.27e+01[2](\(+\)) & 1.27e+02[4](\(-\)) & 8.57e+01[16](\(+\)) & 1.02e+02[3] \\ & & & (7.50e+01) & (1.05e-02) & (4.02e+01) & (1.21e+01) & (4.32e-01) & (2.47e+01) & (2.94e+01) \\ \hline \multirow{2}{*}{Ackley} & \multirow{2}{*}{2.45e-34} & 1.59e+01[6](\(-\)) & 1.48e+01[5](\(-\)) & 1.33e+01[4](\(-\)) & 7.16e+00[3](\(-\)) & 3.77e+00[16](\(-\)) & 1.83e+01[7](\(-\)) & 6.08e+00[2] \\ & & & (1.28e+00) & (1.00e+00) & (7.37e-01) & (3.27e-01) & (5.52e-01) & (1.33e+00) & (1.05e+00) \\ \hline \multirow{2}{*}{Griewank} & \multirow{2}{*}{6.07e-34} & 5.09e+01[6](\(-\)) & 5.46e+01[7](\(-\)) & 2.96e+01[5](\(-\)) & 1.02e+00[16](\(-\)) & 1.17e+00[2](\(+\)) & 2.06e+01[4](\(-\)) & 3.08e+00[3] \\ & & & (1.06e-01) & (1.33e+01) & (7.62e+00) & (1.52e-02) & (8.7e-02) & (1.33e+01) & (1.54e+00) \\ \hline \multirow{2}{*}{YLLF01} & \multirow{2}{*}{1.12e-34} & 6.32e+03[7](\(-\)) & 5.37e+03[6](\(-\)) & 3.17e+03[5](\(-\)) & 2.07e+00[16](\(+\)) & 1.67e+01[2](\(+\)) & 6.73e+02[4](\(-\)) & 2.47e+02[3] \\ & & & (1.98e+03) & (1.86e+03) & (5.92e+02) & (1.36e+00) & (1.09e+01) & (6.43e+02) & (1.37e+02) \\ \hline \multirow{2}{*}{YLLF02} & \multirow{2}{*}{6.34e-26} & 1.15e+02[6](\(-\)) & 2.44e+01[3](\(-\)) & 2.67e+01[4](\(-\)) & 4.30e+18[18](\(\approx\)) & 5.03e+02[7](\(-\)) & 3.27e+01[5](\(-\)) & 4.06e+00[2] \\ & & & (2.00e-02) & (3.67e+00) & (0.39e+00) & (3.55e+18) & (1.04e+03) & (1.72e+01) & (1.60e+00) \\ \hline \multirow{2}{*}{YLLF03} & \multirow{2}{*}{9.88e-23} & 2.09e+04[5](\(-\)) & 1.26e+04[2](\(+\)) & 2.32e+04[6](\(-\)) & 5.79e+01[1](\(+\)) & 2.99e+04[7](\(-\)) & 1.71e+04[4](\(\approx\)) & 1.53e+04[3] \\ & & & (5.68e+03) & (3.48e+03) & (4.21e+03) & (2.57e+03) & (6.03e+03) & (1.38e+04) & (3.94e+03) \\ \hline \multirow{2}{*}{YLLF04} & \multirow{2}{*}{5.24e-23} & 4.03e+01[5](\(-\)) & 4.17e+01[6](\(-\)) & 3.28e+01[4](\(-\)) & 3.28e+01[18](\(-\)) & 3.09e+01[3](\(-\)) & 7.06e+01[7](\(-\)) & 2.62e+01[2] \\ & & & (1.25e+01) & (6.55e+00) & (3.26e+00) & (1.12e+01) & (1.89e+00) & (7.40e+00) & (7.09e+00) \\ \hline \multirow{2}{*}{YLLF05} & \multirow{2}{*}{4.69e-31} & 3.11e+06[6](\(-\)) & 4.71e+06[7](\(-\)) & 1.13e+06[5](\(-\)) & 1.65e+05[3](\(-\)) & 3.05e+05[4](\(-\)) & 1.12e+05[2](\(-\)) & 5.52e+041[18](\(-\)) & 5.52e+041[19](\(-\)) & 5.52e+041[18](\(-\)) \\ & & & (1.34e+06) & (2.84e+06) & (5.76e+05) & (7.14e+04) & (1.60e+05) & (8.27e+04) & (8.19e+04) \\ \hline \multirow{2}{*}{YLLF06} & \multirow{2}{*}{1.00e-34} & 5.79e+03[6](\(-\)) & 2.66e+03[7](\(-\)) & 3.20e+03[3](\(-\)) & 2.27e+01[4](\(+\)) & 3.23e+01[21](\(+\)) & 8.59e+04[2](\(-\)) & 2.48e+02[3] \\ & & & (1.20e+03) & (1.28e+03) & (3.05e+02) & (1.49e+00) & (1.06e+01) & (1.12e+03) & (1.42e+02) \\ \hline \multirow{2}{*}{YLLF07} & \multirow{2}{*}{3.93e-28} & 1.62e+00[6](\(-\)) & 2.09e+00[7](\(-\)) & 6.67e-01[5](\(-\)) & 2.39e-01[2](\(\approx\)) & 2.90e-01[3
corresponding Wilcoxon rank sum test. The highest rank in each row is denoted by grey shading, along with the corresponding ranks enclosed in brackets of each result. The p-value obtained from the Friedman test is considerably lower than 0.05, signifying a substantial difference between the outcomes. The analysis demonstrates that DRSO achieves the best mean rank of 2.29 out of seven algorithms across 17 test instances. The Skopt algorithm secured second position, and GPEME ranked third. Although EDA/LS is not primarily designed for expensive optimization, it still displays a strong competitive performance due to the use of VWH and local search methods. The FCPS-CoDE algorithm selects \(N\) solutions for evaluation at each iteration, but its advantages are limited by the 500 evaluations allowed, nevertheless, it still outperforms the CMA-ES algorithm. According to the Wilcoxon rank sum test results, compared to DRSO, the most competitive algorithm, Skopt, achieved 7 better results, 7 worse results, and 3 results roughly equivalent. In low-dimensional problems, DRSO, driven by the relational model, has achieved statistical results similar to the most advanced BO algorithms, and DRSO even has an advantage in mean rank. Therefore, based on the aforementioned analysis, the DRSO algorithm demonstrates the best overall performance in the 20-dimensional search space.
The statistical results in the 50-dimensional problem are shown in Table 3, and DRSO still shows the best performance, achieving an average rank of 1.47 among the five algorithms. Based on the results of the Wilcoxon rank sum test, the four comparative algorithms have 15, 14, 13, and 13 results inferior to DRSO out of 17 problems, respectively.
### Ablation study
In this section, we will conduct ablation experiments on several important components of the DRSO, including the offspring selection strategy, the relation model, and the generation of new solutions. The details of the algorithm variants are shown in Table 4.
\begin{table}
\begin{tabular}{c c c c c c c} \hline problem & p-value & CMA-ES & EDA/LS & SAMSO & FCPS-CoDE & DRSO \\ \hline Ellipsoid & 4.04e-22 & 1.91e+03[5](\(-\)) & 1.52e+03[3](\(-\)) & 1.31e+03[2](\(\approx\)) & 1.61e+03[4](\(-\)) & 6.66e+02[1] \\ & (2.77e+02) & (2.23e+02) & (1.13e+03) & (3.39e+02) & (1.19e+02) \\ \hline Rosenbrock & 1.88e-17 & 2.09e+03[5](\(-\)) & 1.78e+03[3](\(-\)) & 1.58e+03[2](\(\approx\)) & 1.99e+03[4](\(-\)) & 8.81e+02[1] \\ & & (4.12e+02) & (2.84e+02) & (1.70e+03) & (5.11e+02) & (1.66e+02) \\ \hline Ackley & 5.24e-29 & 1.85e+01[4](\(-\)) & 1.76e+01[3](\(-\)) & 1.86e+01[5](\(-\)) & 1.75e+01[2](\(-\)) & 1.34e+01[1] \\ & & (9.68e-01) & (4.24e-01) & (1.18e+00) & (6.00e-01) & (5.64e+01) \\ \hline Griewank & 3.37e-22 & 2.38e+02[2](\(-\)) & 2.41e+02[3](\(-\)) & 6.19e+02[5](\(-\)) & 2.69e+02[4](\(-\)) & 1.81e+02[2](\(-\)) \\ & & (3.80e+01) & (2.86e+01) & (3.66e+02) & (6.72e+01) & (2.26e+01) \\ \hline YLLF01 & 7.52e-24 & 2.61e+04[2](\(-\)) & 2.77e+04[3](\(-\)) & 6.69e+04[5](\(-\)) & 2.95e+04[4](\(-\)) & 2.92e+04[4](\(-\)) & 2.92e+04[4](\(-\)) \\ & & (3.48e-03) & (3.92e+03) & (3.70e+04) & (5.86e+03) & (2.56e+03) \\ \hline YLLF02 & 3.17e-33 & 8.62e+13[4](\(-\)) & 2.79e+04[3](\(-\)) & 1.05e+18[5](\(-\)) & 8.92e+01[2](\(-\)) & 8.54e+01[1] \\ & & (3.40e+14) & (9.04e+04) & (3.77e+18) & (8.24e+00) & (7.20e+00) \\ \hline YLLF03 & 4.41e-21 & 1.36e+05[3](\(-\)) & 1.58e+05[4](\(-\)) & 2.97e+05[3](\(-\)) & 8.26e+04[1](\(-\)) & 1.34e+05[2] \\ & & (2.57e+04) & (2.51e+04) & (9.13e+04) & (8.16e+04) & (2.25e+04) \\ \hline YLLF04 & 5.21e-22 & 9.16e+01[5](\(-\)) & 5.80e+01[1](\(\approx\)) & 8.74e+01[4](\(-\)) & 5.88e+01[2](\(\approx\)) & 5.96e+01[3] \\ & & (1.26e+01) & (2.78e+00) & (8.06e+00) & (5.58e+00) & (4.95e+00) \\ \hline YLLF05 & 1.86e-25 & 3.78e+07[3](\(-\)) & 2.93e+07[2](\(-\)) & 1.55e+08[5](\(-\)) & 4.49e+07[4](\(-\)) & 1.16e+07[1] \\ & & (1.25e+07) & (5.30e+06) & (8.04e+07) & (1.82e+07) & (3.71e+06) \\ \hline YLLF06 & 1.67e-28 & 2.82e+04[3](\(-\)) & 2.74e+04[2](\(-\)) & 7.82e+04[5](\(-\)) & 2.92e+04[4](\(-\)) & 1.19e+04[1] \\ & & (5.82e+03) & (4.14e+03) & (3.06e+04) & (7.35e+03) & (2.11e+03) \\ \hline YLLF07 & 1.42e-25 & 3.14e+04[1](\(-\)) & 2.17e+01[2](\(-\)) & 1.31e+02[5](\(-\)) & 2.96e+01[3](\(-\)) & 8.80e+001[1] \\ & & (9.15e+00) & (4.75e+00) & (7.91e+01) & (1.28e+01) & (3.10e+00) \\ \hline YLLF08 & 2.68e-28 & 1.65e+04[4](\(-\)) & 1.51e+04[2](\(-\)) & 1.66e+04[5](\(-\)) & 1.63e+04[3](\(-\)) & 1.38e+04[1] \\ & & (5.98e+02) & (5.84e+02) & (6.62e+02) & (6.11e+02) & (6.19e+02) \\ \hline YLLF09 & 6.58e-26 & 6.21e+02[5](\(-\)) & 5.19e+04[2](\(-\)) & 4.97e+02[3](\(\approx\)) & 3.80e+02[1](\(\approx\)) & 4.82e+02[2] \\ & & (4.52e+01) & (1.78e+01) & (1.11e+02) & (3.15e+01) & (2.39e+01) \\ \hline YLLF10 & 2.76e-30 & 1.89e+01[4](\(-\)) & 1.75e+01[3](\(-\)) & 1.94e+01[5](\(-\)) & 1.72e+01[2](\(-\)) & 4.78e+001[3] \\ & & (1.45e+00) & (3.88e-01) & (2.95e+00) & (8.67e+01) & (2.54e+01) \\ \hline YLLF11 & 1.13e-21 & 2.47e+02[3](\(-\)) & 2.42e+02[2](\(-\)) & 6.31e+02[5](\(-\)) & 2.52e+02[4](\(-\)) & **1.10e+02[1]** \\ & & (3.82e+01) & (2.55e+01) & (3.41e+02) & (6.26e+01) & (2.07e+01) \\ \hline YLLF12 & 5.66e-222 & 4.07e+07[3](\(-\)) & 1.64e+07[2](\(-\)) & 5.10e+08[5](\(-\)) & 5.44e+07[4](\(-\)) & **.61e+06[1]** \\ & & (2.32e+07) & (5.98e+06) & (3.16e+08) & (3.49e+07) & (4.32e+06) \\ \hline YLLF13 & 8.00e-29 & 1.03e+08[2](\(+\)) & 5.38
DRSO-Sel-1 and DRSO-Sel-2 are used to verify the importance of reliably selecting \(\mathcal{P}_{u}\) and \(\mathcal{Q}_{best}\). DRSO-Gen-1 and DRSO-Gen-2 serve to verify the significance of \(\mathcal{P}_{u}\) and the local search algorithm. DRSO-Mod is utilized to validate the effectiveness of the relation model. Experiments were independently conducted 30 times on the LZG test suite with 20 and 50 dimensions. The experimental design and result statistics are consistent with Section 4.2. The experimental results are presented in Table 5.
Broadly speaking, all the algorithmic variants that omit a particular module are inferior to the original algorithm in terms of results. Specifically, the results of DRSO-GEN-1 and DRSO-GEN-2 are significantly inferior to the original version on 7 problems, indicating the importance of \(\mathcal{P}_{u}\) and local search in generating solutions. The results of DRSO-SEL-2 are also poor, being inferior to the original algorithm on all problems, highlighting the importance of the reliable selection of \(\mathcal{Q}_{best}\). The performance of DRSO-MODEL is significantly worse than DRSO on 7 problems, demonstrating the significance of the relation model. The performance deterioration of DRSO-SEL-1 is not obvious, as it is inferior to the original algorithm on only one problem, but its mean rank is still worse than the DRSO, indicating that the contribution of \(\mathcal{P}_{u}\) to the algorithm is not as significant as \(\mathcal{Q}_{best}\). In summary, each component of DRSO is effective, and their synergistic effect is also effective.
### Analysis of relation model
To analyze the fitting capacity of the relation model, four representative functions from the LZG test suite were chosen. The fitting ability of the relation model was visualized in a two-dimensional search space. Additionally, a comparison was made between the relation model's capability and that of regression and classification models in selecting \(\mathcal{Q}_{best}\) and \(\mathcal{P}_{u}\) in 20-dimensional problems. The results demonstrate the advantages of the relation model in model-assisted selection.
#### 4.4.1 Visualization analysis
In the first row of Figure 5, the contour distributions of the four test functions are depicted in their corresponding search spaces. Following this, LHS was utilized to generate 100 points from the space as the original training data. Subsequently, a relation model (\(\mathcal{M}_{1}\)) was constructed on this training data using the C1 criterion. Based on the predicted values from the \(\mathcal{M}_{1}\) model, the contour distribution is displayed in the second row of Figure 5. It is apparent that the relation model, under the C1 criterion, resembles a regression-like process and is capable of capturing the original
\begin{table}
\begin{tabular}{c|c} \hline algorithm & details \\ \hline DRSO & Utilizes default settings \\ DRSO-Sel-1 & Selects \(\mathcal{P}_{u}\) at random, \(\mathcal{M}_{2}\) model excluded \\ DRSO-Sel-2 & Selects \(\mathcal{Q}_{best}\) at random, \(\mathcal{M}_{1}\) model excluded \\ DRSO-Gen-1 & Employs \(\mathcal{P}_{e}\) for new solutions in EDA, without \(\mathcal{P}_{u}\) \\ DRSO-Gen-2 & Excludes local search in the improvement of solution quality \\ DRSO-Mod & Excludes relation model, employs only XGBoost for classification and regression, selects \(\mathcal{P}_{u}\) and \(\mathcal{Q}_{best}\) \\ \hline \end{tabular}
\end{table}
Table 4: Design of algorithm variants.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline n & problem & p-value & DRSO-GEN-1 & DRSO-GEN-2 & DRSO-SEL-1 & DRSO-SEL-2 & DRSO-MODEL & DRSO \\ \hline \multirow{5}{*}{20} & \multirow{2}{*}{Ellipsoid} & \multirow{2}{*}{1.13e-15} & 3.36e+01[5](\(-\)) & 1.18e+01[3](\(-\)) & 7.24e+00[2](\(\approx\)) & 4.09e+01[6](\(-\)) & 1.39e+01[4](\(-\)) & 6.17e+001[0] \\ & & & (3.73e+01) & (5.15e+00) & (4.82e+00) & (1.33e+01) & (7.24e+00) & (4.57e+00) \\ \cline{2-9} & \multirow{2}{*}{Rosenbrock} & \multirow{2}{*}{5.04e-05} & 1.27e+02[5](\(-\)) & 1.05e+02[3](\(\approx\)) & 9.54e+01[16](\(-\)) & 1.39e+02[6](\(-\)) & 1.24e+02[4](\(\approx\)) & 1.02e+02[2] \\ & & & (4.79e+01) & (3.19e+01) & (3.26e+01) & (3.35e+01) & (4.50e+01) & (2.94e+01) \\ \cline{2-9} & \multirow{2}{*}{Ackley} & \multirow{2}{*}{6.18e-21} & 8.17e+00[4](\(-\)) & 7.86e+00[3](\(-\)) & 8.88e+00[10](\(\approx\)) & 1.12e+01[6](\(-\)) & 9.94e+00[5](\(-\)) & 6.08e+00[02] \\ & & & (2.52e-00) & (9.18e-01) & (9.91e-01) & (9.96e-01) & (1.39e+00) & (1.05e+00) \\ \cline{2-9} & \multirow{2}{*}{Griewank} & \multirow{2}{*}{1.06e-19} & 1.31e+01[5](\(-\)) & 6.45e+00[3](\(-\)) & 6.56e+00[2](\(\approx\)) & 1.65e+01[6](\(-\)) & 8.57e+00[4](\(-\)) & 8.08e+00[01] \\ & & & (1.67e+01) & (2.09e+00) & (1.49e+00) & (4.59e+00) & (3.01e+00) & (1.54e+00) \\ \hline \multirow{5}{*}{50} & \multirow{2}{*}{Ellipsoid} & \multirow{2}{*}{9.52e-22} & 7.61e+02[3](\(-\)) & 1.28e+03[6](\(-\)) & 6.97e+02[2](\(\approx\)) & 1.05e+04[3](\(-\)) & 1.13e+03[5](\(-\)) & 6.66e+02[4] \\ & & & (2.08e-02) & (2.01e+02) & (9.68e+01) & (1.65e+02) & (2.08e+02) & (1.19e+02) \\ \cline{2-9} & \multirow{2}{*}{Rosenbrock} & \multirow{2}{*}{1.51e-21} & 1.02e+03[3](\(-\)) & 2.01e+03[6](\(-\)) & 9.69e+02[2](\(\approx\)) & 1.13e+03[4](\(-\)) & 1.54e+03[5](\(-\)) & 8.81e+02[1] \\ & & & (2.10e+02) & (3.02e+02) & (1.99e+02) & (2.18e+02) & (2.88e+02) & (1.66e+02) \\ \cline{2-9} & \multirow{2}{*}{Ackley} & \multirow{2}{*}{4.03e-23} & 1.50e+01[3](\(-\)) & 1.73e+01[6](\(-\)) & 1.49e+01[21](\(-\)) & 1.59e+01[4](\(-\)) & 1.71e+01[5](\(-\)) & 1.45e+01[1] \\ & & & (6.13e-01) & (4.31e-01) & (5.92e-01) & (6.57e-01) & (5.49e-01) & (5.64e-01) \\ \cline{2-9} & \multirow{2}{*}{Griewank} & \multirow{2}{*}{4.37e-22} & 1.26e+02[3](\(\approx\)) & 2.06e+02[6](\(-\)) & 1.16e+02[2](\(\approx\)) & 1.70e+02[4](\(-\)) & 1.98e+02[5](\(-\)) & 1.12e+02[1] \\ & & & (3.32e+01) & (2.90e+01) & (1.58e+01) & (2.61e+01) & (2.95e+01) & (2.26e+01) \\ \hline \multirow{5}{*}{50} & \multirow{2}{*}{mean rank} & \multirow{2}{*}{3.875} & 4.50 & 1.7 & 5.00 & 4.625 & 1.25 \\ & + / - / \(\approx\) & & 0/7/1 & 0/7/1 & 0/1/7 & 0/8/0 & 0/7/1 \\ \hline \end{tabular}
\end{table}
Table 5: Ablation study results comparing DRSO and its five variants on LZG test suites with n=20,50.
function's landscape. For instance, in the case of the Ellipsoid function, distinguished by its unimodal landscape, and the Rosenbrock function, identified by a gully, the relation model does not intentionally fit the distribution of local extrema; however, it can still effectively represent the overall landscape of these two functions, which is vital for model-assisted search.
In the first row of Figure 6, the distribution of data based on the true objective function and the threshold is depicted, where cases Figure 6(a)-6(d) correspond to a classification threshold of \(t=10\%\), and cases Figure 6(e)-6(h) correspond to a threshold of \(t=30\%\). A smaller threshold indicates a narrower focus area. LHS is utilized to extract 100 data points from the decision space. Subsequently, a relation model \(\mathcal{M}_{2}\) is trained based on the C2 criterion and the specified threshold. The prediction outcomes are presented in the second row of Figure 6. Notably, the C2 criterion resembles a classification-like model, proficient in recognizing the classification boundaries of the original data and modifying the range of the model fitting boundary as per the threshold \(t\). Additionally, the relation data's label balance strategy ensures that the model training remains unaffected by imbalanced class proportions, even when \(t=10\%\).
#### 4.4.2 Accuracy analysis
The relation model showcases properties akin to both classification and regression models. This raises a valid question - why not directly employ either a classification or regression model? In the subsequent analysis, we will explore the advantages of utilizing the relation model over classification and regression models in the context of model-assisted selection.
To accentuate the importance of the data preparation and model usage stages in the relation model, we have excluded the differences in the learning abilities of the machine learning algorithms. We have opted for XGBoost with default parameters as the fundamental method for regression, classification, and the two relation models. These models are denoted as XGBR, XGBC, R-C1, and R-C2, respectively. To eliminate the randomness in the operation of EAs, the parent population \(\mathcal{P}\) and offspring population \(\mathcal{Q}\) generated by GA in 50 consecutive generations on the 20-dimensional LZG test suite are
Figure 5: Contour plot of predicted results for the C1 criterion relation model in 2-dimensional space. The first row shows results based on real function values, while the second row shows predicted results.
Figure 6: Contour plot of predicted results for the C2 criterion relation model in 2-dimensional space. The first row shows results based on true function values, while the second row shows predicted results. Fig (a)-(d) show results for \(t=10\%\), while Fig (e)-(h) show results for \(t=30\%\).
stored. The population size \(N\) is set to 50. The parent population is used as the training data for the model, while the offspring population is used as the testing data. To uniformly evaluate the capabilities of each model, two accuracy indicators, \(acc_{1}\) and \(acc_{2}\), are used to evaluate the performance on \(\mathcal{Q}_{best}\) and \(\mathcal{P}_{u}\), respectively. The calculation methods for \(acc_{1}\) and \(acc_{2}\) are as follows:
\[acc_{1}=R(Q^{\prime}_{best},\mathcal{Q}) \tag{7}\]
where \(\mathcal{Q}\) refers to both the offspring population and the test data. \(\mathcal{Q}^{\prime}_{best}\) denotes the best solution that is selected by the model within \(\mathcal{Q}\). The function \(R(\cdot)\) returns the ranking of \(\mathcal{Q}^{\prime}_{best}\) within the \(\mathcal{Q}\) based on the real objective values. A smaller value of \(acc_{1}\) indicates a higher effectiveness of the model in selecting the best solution.
\[acc_{2}=\frac{|\mathcal{P}_{u}\cap\mathcal{P}^{\prime}_{u}|}{|\mathcal{P}_{u}|} \tag{8}\]
\(\mathcal{P}_{u}\) represents the top \(t\) fraction of solutions selected based on the real objective values, \(\mathcal{P}^{\prime}_{u}\) denotes the selection made by the model, and \(acc_{2}\) represents the proportion of the model's selection that matches the actual result.
A higher value of \(acc_{2}\) indicates a stronger ability of the model to make accurate selections.
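Both indicators are straightforward to compute; the sketch below assumes a minimization problem and that larger model scores (S1 or S2) indicate better predicted quality.

```python
import numpy as np

def acc_1(scores, true_values):
    """Eq. (7): true rank (1 = best) of the model-selected solution within Q."""
    chosen = int(np.argmax(scores))                    # model's Q'_best
    order = np.argsort(true_values)                    # true ranking, best first
    return int(np.where(order == chosen)[0][0]) + 1

def acc_2(scores, true_values, t=0.5):
    """Eq. (8): overlap between the model-selected and the true top-t sets."""
    k = max(1, int(t * len(scores)))
    pred_top = set(np.argsort(-np.asarray(scores))[:k].tolist())
    true_top = set(np.argsort(true_values)[:k].tolist())
    return len(pred_top & true_top) / k
```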
Based on the results shown in Figure 7, which presents the bar chart of the \(acc_{1}\) metric for selecting \(\mathcal{Q}_{best}\) over 50 generations, it can be observed that R-C1 performs the best across all problems, with the smallest average rank value and error bar. This suggests that R-C1 is more suitable for scenarios where the top-1 solution needs to be selected from a set. R-C2 performs worse than the regression model XGBR, but better than the classification model XGBC.
Figure 8 shows the ability of different models to select \(\mathcal{P}_{u}\), with \(t=50\%\). It can be observed that the R-C1 and R-C2 criteria exhibit better performance than XGBC and XGBR. Among them, the interquartile range of R-C1 is more concentrated, although there are some outliers, while the maximum value range of R-C2 is better.
Based on the analysis above, it can be concluded that the relation model can provide more accurate and detailed partitioning than classification models, while avoiding overfitting of data in regression models and losing the order of ranks between test points. Therefore, it is more suitable for use in model-assisted scenarios.
### Importance of unevaluated solutions
Another key aspect to consider is whether the algorithm's efficiency is truly improved by the unevaluated solutions. In order to investigate this, we conducted an ablation experiment by designing a variant
Figure 8: Box chart of the \(acc_{2}\) statistics of \(\mathcal{P}_{u}\) selections for different surrogate models.
Figure 7: Bar chart of the \(acc_{1}\) statistics of \(\mathcal{Q}_{best}\) selections for different surrogate models.
DRSO', which removes the population \(\mathcal{P}_{u}\). Moreover, we designed the variants DRSO-10%, DRSO-30%, and DRSO-50%, corresponding to different values of \(t\), to examine the influence of the parameter \(t\) on the algorithm's efficacy.
Figure 9 depicts the runtime curve of DRSO under various parameters on the LZG test suite, with a search space dimension of 20. Other parameters remain consistent with Section 4.1. The figure reveals that the DRSO', lacking unevaluated solutions, converges at a slower pace than DRSO, indicating that high-quality unevaluated solutions can effectively enhance the algorithm's convergence speed. With regards to the performance of DRSO under different values of \(t\), it can be observed that DRSO performs better under \(t=30\%\) and \(50\%\) than under \(t=10\%\) on the Ellipsoid and Griewank test functions. On the Ackley function, the performance of all three values of \(t\) is comparable. On the Rosenbrock function, the performance of \(t=10\%\) is intermediate to that of \(t=30\%\) and \(t=50\%\). We surmise that when \(t=10\%\), the algorithm is excessively focused on the optimal region, leading to inadequate diversity provided by unevaluated solutions in population \(\mathcal{P}\) and resulting in a decrease in performance. Therefore, we recommend \(t=50\%\) as the default parameter for the algorithm in this context.
## 5 Conclusions
This paper highlights an objective but often overlooked issue in SAEAs, which is the lack of variance between adjacent populations \(\mathcal{P}\) due to the limited number of solutions selected for real evaluation in each iteration. To tackle this problem, this work proposes a simple method of generating new solutions based on unevaluated solutions. Employing surrogate models, the best solution in the offspring population is selected for real evaluation, and the archive and surrogate model are updated accordingly. Additionally, some potentially 'good' solutions are directly used to generate offspring without evaluation. We have designed customized relation-based surrogate models for SAEAs. Two specific relation model construction methods, namely the fitness criterion (C1) and the category criterion (C2), are proposed to address two selection scenarios. The C1 criterion constructs a relation pair based on relative fitness, while the C2 criterion divides the data into categories and constructs relation pairs based on category. XGBoost is utilized for data learning, and 'voting-scoring' strategies are designed to enhance the model's predictive ability. Some reproduction methods in EDA/LS are employed to generate offspring solutions, and unevaluated solutions are utilized to update the VWH model. Ultimately, a dual relation models-assisted single-objective optimization algorithm (DRSO) is designed.
To verify the effectiveness of the relation model and to demonstrate the search capability of DRSO, this work conducted experiments on the LZG and YLL test suites in 20- and 50-dimensional search spaces. The DRSO algorithm was compared with EAs, SAEAs, and BOs, and it showed strong competitiveness. Through ablation experiments, the efficacy of each module was verified. Furthermore, the paper also scrutinized the fitting ability of the relation model to the function landscape and the predictive ability for new solution quality. The effectiveness of unevaluated solutions in the algorithm search process was affirmed through experiments and analysis of algorithm hyperparameters. Overall, the results of the experiments testify to the effectiveness of the relation model and the competitiveness of the proposed DRSO with unevaluated solutions.
In future work, it is worth exploring more detailed strategies for using unevaluated solutions to improve the quality of new solutions. The relation model can be tried on more algorithm frameworks and problem types.
Figure 9: The runtime performance of DRSO and its variant on LZG test suite over 30 independent runs. |
2309.16023 | Q-REG: End-to-End Trainable Point Cloud Registration with Surface
Curvature | Point cloud registration has seen recent success with several learning-based
methods that focus on correspondence matching and, as such, optimize only for
this objective. Following the learning step of correspondence matching, they
evaluate the estimated rigid transformation with a RANSAC-like framework. While
it is an indispensable component of these methods, it prevents a fully
end-to-end training, leaving the objective to minimize the pose error
non-served. We present a novel solution, Q-REG, which utilizes rich geometric
information to estimate the rigid pose from a single correspondence. Q-REG
allows to formalize the robust estimation as an exhaustive search, hence
enabling end-to-end training that optimizes over both objectives of
correspondence matching and rigid pose estimation. We demonstrate in the
experiments that Q-REG is agnostic to the correspondence matching method and
provides consistent improvement both when used only in inference and in
end-to-end training. It sets a new state-of-the-art on the 3DMatch, KITTI, and
ModelNet benchmarks. | Shengze Jin, Daniel Barath, Marc Pollefeys, Iro Armeni | 2023-09-27T20:58:53Z | http://arxiv.org/abs/2309.16023v1 | # Q-REG: End-to-End Trainable Point Cloud Registration with Surface Curvature
###### Abstract
Point cloud registration has seen recent success with several learning-based methods that focus on correspondence matching, and as such, optimize only for this objective. Following the learning step of correspondence matching, they evaluate the estimated rigid transformation with a RANSAC-like framework. While it is an indispensable component of these methods, it prevents a fully end-to-end training, leaving the objective to minimize the pose error non-served. We present a novel solution, Q-REG, which utilizes rich geometric information to estimate the rigid pose from a single correspondence. Q-REG allows to formalize the robust estimation as an exhaustive search, hence enabling end-to-end training that optimizes over both objectives of correspondence matching and rigid pose estimation. We demonstrate in the experiments that Q-REG is agnostic to the correspondence matching method and provides consistent improvement both when used only in inference and in end-to-end training. It sets a new state-of-the-art on the 3DMatch, KITTI and ModelNet benchmarks.
## 1 Introduction
Point cloud registration is the task of estimating the rigid transformation that aligns two partially overlapping point clouds. It is commonly solved by establishing a set of tentative correspondences between the two point clouds, followed by estimating their rigid transformation. The field has seen substantial progress in recent years with methods that introduce a learning component to solve the task.
Most learning methods focus on solving the correspondence task [6, 12, 18, 21], where a feature extractor is trained to extract point correspondences between two input point clouds. Once the learning step is over, they use the estimated correspondences for computing the rigid pose. Due to the low inlier ratio in putative correspondences, these methods strongly rely on hypothesize-and-verify frameworks, _e.g_. RANSAC [15] to compute the pose in a robust manner. Recent methods [28, 42] employ advances in the field of transformers to improve the final estimated set of correspondences and remove the dependency on RANSAC, achieving close-to-RANSAC performance. However, in these methods too, the objective in the learning process remains to find the best and cleanest matches, ignoring the objective to estimate the rigid pose. In addition, they do not achieve end-to-end differentiable training since they still employ robust estimation (_e.g_., [21, 28]) combined with the Kabsch-Umeyama algorithm [42].
Other learning-based methods, such as [35, 36, 41], directly solve the registration problem by incorporating the pose estimation in their training pipeline. Since RANSAC is non-differentiable due to the random sampling, they choose to estimate the alignment using soft correspondences that are computed from local feature similarity scores. In contrast to these methods, we employ the aforementioned works on estimating hard correspondences and develop a robust solution to replace RANSAC, that allows for end-to-end differentiable training.
In general, RANSAC-like robust estimation is non-differentiable only due to the employed randomized sampling function. Such a sampler is essential to cope with the combinatorics of the problem via selecting random subsets of \(m\) correspondences (_e.g_., \(m=3\) for rigid pose estimation). This allows to progressively explore the \(\binom{n}{m}\) possible combinations, where \(n\) is the total number of matches. Actually testing all of them is unbearably expensive in practice, which is what methods like [28, 42] try to avoid. This computation bottleneck would be resolved if \(m=1\). Hence, we design a 1-point solution, _Q-REG_, that utilizes rich geometric cues extracted from local surface patches
Figure 1: _Q-REG solver._ Given (a) two partially overlapping point clouds as input and (b) the estimated correspondences of a matching method, (c) _Q-REG_ leverages the rich local geometry to estimate the rigid pose from a single correspondence, hence enabling end-to-end training of the matcher. _(Best viewed on screen.)_
estimated from observed points (Figure 1). Specifically, we utilize rich geometric information by fitting quadrics (_e.g_., an ellipsoid) locally to the neighborhoods of an estimated correspondence. Moreover, such a solution allows quick outlier rejection by filtering degenerate surfaces and rigid poses inconsistent with motion priors (_e.g_., to avoid unrealistically large scaling). _Q-REG_ is designed to be deterministic, differentiable, and it replaces RANSAC for point cloud registration. It can be used together with any feature-matching or correspondence-matching method.
Since _Q-REG_ is fully differentiable, we achieve end-to-end training that optimizes both the correspondence matching and final pose objectives. As such, any learning-based matcher can be extended to being end-to-end trainable. In our experiments, we demonstrate how _Q-REG_ consistently improves the performance of state-of-the-art matchers on the _3DMatch_[44], _KITTI_[16] and _ModelNet_[38] datasets. It sets new state-of-the-art results on all benchmarks.
Our contributions can be summarized as follows:
* We develop _Q-REG_, a solution for point cloud registration, estimating the pose from a single correspondence via leveraging local surface patches. It is agnostic to the correspondence matching method. _Q-REG_ allows for quick outlier rejection by filtering degenerate solutions and assumption inconsistent motions (_e.g_., related to scaling).
* We extend the above formulation of _Q-REG_ to a differentiable setting that allows for end-to-end training of correspondence matching methods with our solver. Thus, we optimize not only over the correspondence matching but also over the final pose.
* We demonstrate the effectiveness of _Q-REG_ with different baselines on several benchmarks and achieve new state-of-the-art performance across all of them.
## 2 Related Work
**Correspondence-based Registration Methods.** The 3D point cloud registration field is well-established and active. Approaches can be grouped into two main categories: _feature-based_ and _end-to-end_ registration. Feature-based methods comprise two steps: local feature extraction and pose estimation using robust estimators, like RANSAC [15]. Traditional methods use hand-crafted features [22, 29, 30, 32, 33] to capture local geometry and, while having good generalization abilities across scenes, they often lack robustness against occlusions. Learned local features have taken over in the past few years, and, instead of using heuristics, they rely on deep models and metric learning [20, 31] to extract dataset-specific discriminative local descriptors. These learned descriptors can be divided into patch-based and fully convolutional methods depending on the input. Patch-based ones [3, 17] treat each point independently, while fully convolutional methods [6, 12, 21] extract all local descriptors simultaneously for the whole scene in a single forward pass.
**Direct Registration Methods.** Recently, end-to-end methods have appeared replacing RANSAC with a differentiable optimization algorithm that targets to incorporate direct supervision from ground truth poses. The majority of these methods [35, 36, 41] use a weighted Kabsch solver [5] for pose estimation. Deep Closest Point (DCP) [35] iteratively computes soft correspondences based on features extracted by a dynamic graph convolutional neural network [37], which are then used in the Kabsch algorithm to estimate the transformation parameters. To handle partially overlapping point clouds, methods relax the one-to-one correspondence constraint with keypoint detection [36] or optimal transport [14, 41]. Another line of work replaces local with global feature vectors that are used to regress the pose. PointNetLK [4] registers point clouds by minimizing the distance of their latent vectors, in an iterative fashion that resembles the Lucas-Kanade algorithm [25]. In [39], an approach is proposed for rejecting non-overlapping regions via masking the global feature vector. However, due to the weak feature extractors, there is still a large performance gap compared to hard matching methods. These direct registration methods primarily work on synthetic shape datasets [38] and often fail in large-scale scenes [21]. _Q-REG_ uses _hard_ correspondences while still being differentiable, via introducing an additional loss component that minimizes the pose error. In addition, as demonstrated in Sec. 4, it works for both real-world indoor [44] and outdoor [16] scene large-scale point clouds and synthetic object-level datasets [38], by setting a new state-of-the-art.
**Learned Robust Estimators.** To address the fact that RANSAC is non-differentiable, other methods either modify it [9] or learn to filter outliers followed by a hypothesize-and-verify framework [7] or a weighted Kabsch optimization [13, 19, 27]. In the latter case, outliers are filtered by a dedicated network, which infers the correspondence weights to be used in the weighted Kabsch algorithm. Similarly, we employ the correspondence confidence predicted by a feature extraction network (_e.g_., by [21, 28, 42]) as weights in the pose-induced loss.
## 3 Point Cloud Registration with Quadrics
We first describe the definition of the point cloud registration problem followed by ways of extracting local surface patches that can be exploited for point cloud registration.
**Problem Definition.** Suppose that we are given two 3D point clouds \(\mathcal{P}=\{\mathbf{p}_{i}\in\mathbb{R}^{3}\mid i=1,...,N\}\) and \(\mathcal{Q}=\{\mathbf{q}_{i}\in\mathbb{R}^{3}\mid i=1,...,M\}\), and a set of 3D-3D point correspondences \(\mathcal{C}=\{(p_{i},q_{i})\mid p_{i}\in\mathcal{P},q_{i}\in\mathcal{Q},\;i\in[1,K]\}\) extracted, _e.g_., by the state-of-the-art matchers [21, 28, 42]. The objective is to estimate the rigid transformation \(\mathbf{T}=\{\mathbf{R},\mathbf{t}\}\) that aligns the point clouds as follows:
\[\min_{\mathbf{R},\mathbf{t}}\sum\nolimits_{(\mathbf{p}_{x}^{*},\mathbf{q}_{y}^{* })\in\mathcal{C}^{*}}\|\mathbf{R}\mathbf{p}_{x}^{*}+\mathbf{t}-\mathbf{q}_{y}^ {*}\|_{2}^{2}, \tag{1}\]
where \(\mathbf{R}\in\text{SO}(3)\) is a 3D rotation and \(\mathbf{t}\in\mathbb{R}^{3}\) is a translation vector, and \(\mathcal{C}^{*}\) is the set of ground truth correspondences between \(\mathcal{P}\) and \(\mathcal{Q}\). In practice, we use putative correspondences instead of ground truth matches, and the set of correspondences often contains a large number of incorrect matches, _i.e_., outliers. Therefore, the objective is formulated as follows:
\[\min_{\mathbf{R},\mathbf{t}}\sum\nolimits_{(\mathbf{p}_{x},\mathbf{q}_{y}) \in\mathcal{C}}\rho(\|\mathbf{R}\mathbf{p}_{x}+\mathbf{t}-\mathbf{q}_{y}\|_{2 }^{2}), \tag{2}\]
where \(\rho:\mathbb{R}\rightarrow\mathbb{R}\) is a robust loss, _e.g_., the Huber loss. The problem is solved by a RANSAC-like [15] hypothesize-and-verify framework combined with the Kabsch-Umeyama algorithm [5]. We argue in the next sections that, when employing higher-order geometric information, RANSAC can be replaced by an exhaustive search, improving both performance and run-time. Figure 2 illustrates the developed approach, called _Q-REG_.
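For reference, the non-robust core of such a pipeline, the closed-form Kabsch-Umeyama alignment of a set of putative correspondences, can be sketched as follows (an illustrative NumPy sketch under standard assumptions, not the implementation used here):

```python
import numpy as np

def kabsch_umeyama(P, Q):
    """Closed-form rigid alignment minimizing sum ||R @ p_i + t - q_i||^2.
    P, Q: (N, 3) arrays of corresponding 3D points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)          # centroids
    H = (P - cp).T @ (Q - cq)                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```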
### Local Surface Patches
The main goal in this section is to determine a pair of local coordinate systems \((\mathbf{R}_{\mathbf{p}},\mathbf{R}_{\mathbf{q}})\) for each correspondence \((\mathbf{p},\mathbf{q})\in\mathcal{C}\), where \(\mathbf{R}_{\mathbf{p}},\mathbf{R}_{\mathbf{q}}\in\text{SO}(3)\). These coordinate systems will then be used to determine the rotation \(\mathbf{R}\) between the point clouds as \(\mathbf{R}=\mathbf{R}_{\mathbf{q}}\mathbf{R}_{\mathbf{p}}^{\text{T}}\). We describe the method for calculating \(\mathbf{R}_{\mathbf{p}}\); the same method applies to \(\mathbf{R}_{\mathbf{q}}\). Note that determining the translation \(\mathbf{t}\) is straightforward as \(\mathbf{t}=\mathbf{q}-\mathbf{p}\).
Suppose that we are given a point \(\mathbf{p}\in\mathcal{P}\) and its \(k\)-nearest-neighbors \(\mathcal{N}\subseteq\mathcal{P}\) such that there exists a correspondence \((\mathbf{p},\mathbf{q})\in\mathcal{C}\), \(k\in\mathbb{N}^{+}\). One possible solution is to fit a general quadratic surface to the given point and the points in \(\mathcal{N}\) and find the principal directions via the first and second-order derivatives at point \(\mathbf{p}\). These directions can give us a local coordinate system that is determined by the translation and rotation of the local surface in the canonical coordinate system. Even though this algorithm is widely used in practice, it can suffer from degenerate cases and slow computation time. To address these limitations, we develop the following approach.
The approach which we adopt in this paper is based on fitting a local quadric, _e.g_. ellipsoid, to the point \(\mathbf{p}\) and the points in its neighborhood \(\mathcal{N}\). The general constraint that a 3D quadric surface imposes on a 3D homogeneous point \(\hat{\mathbf{p}}^{\text{T}}=(x,y,z,1)\in\mathcal{N}\) lying on the surface is
\[\hat{\mathbf{p}}^{\text{T}}\mathbf{Q}\hat{\mathbf{p}}=0, \tag{3}\]
where \(\mathbf{Q}\) is the quadric parameters in matrix form [26] as:
\[\mathbf{Q}=\begin{pmatrix}A&D&E&G\\ D&B&F&H\\ E&F&C&I\\ G&H&I&J\end{pmatrix}. \tag{4}\]
We can rewrite constraint (3) into the form \(\mathbf{k}^{\text{T}}\mathbf{w}=d\), where
\[\mathbf{k}^{\text{T}} = (x^{2}+y^{2}-2z^{2},x^{2}+z^{2}-2y^{2},2xy,\] \[2xz,2yz,2x,2y,2z,1),\] \[\mathbf{w}^{\text{T}} = (A^{\prime},B^{\prime},D,E,F,G,H,I,J),\] \[d = x^{2}+y^{2}+z^{2},\] \[A^{\prime} = \frac{2A+B}{3}+1,\] \[B^{\prime} = \frac{A-B}{3}.\]
By imposing the constraints on all points, we have:
\[\sum_{i=1}^{|\mathcal{N}|}\mathbf{k}_{i}\mathbf{k}_{i}^{\text{T}}\mathbf{w}= \sum_{i=1}^{|\mathcal{N}|}\mathbf{k}_{i}d_{i}. \tag{5}\]
Figure 2: **Overview of _Q-REG_.** During inference, given (a) an input pair of partially overlapping point clouds and (b) the output of a correspondence matcher, we (c) perform quadric fitting for each estimated correspondence from which (d) we estimate the rigid pose and (e) compute the inliers given this pose. We iterate over all estimated correspondences, and choose the estimated pose that yields the most inliers. We further improve the result with (f) the local optimization and output the final estimated pose. During training, we back-propagate the gradients to the correspondence matcher and, in addition to its standard loss formulation, we minimize the proposed loss (\(L_{\text{pose}}\)) based on the single-correspondence pose estimation. _(Best viewed on screen.)_
\(|\mathcal{N}|\) is the number of neighbors to which the quadric is fitted (set to 50 in our experiments); \(d_{i}\) is the squared distance of the \(i\)th neighbor from the origin. By solving the above linear equation, we get the coefficients of the quadric surface \(\mathbf{Q}\).
As we are interested in finding \(\mathbf{Q}\) such that the observed point \(\mathbf{p}\) lies on its surface, we express coefficient \(J\) (_i.e._, the offset from the origin) in Eq. 4 with formula
\[\mathbf{p}^{\mathsf{T}}\mathbf{Q}\mathbf{p}=0. \tag{6}\]
Thus, \(J\) will not be estimated from the neighborhood but it is preset to ensure that the quadric lies on point \(\mathbf{p}\).
In order to find a local coordinate system, we calculate the new coefficient matrix \(\mathbf{P}\) as follows:
\[\mathbf{P}=\frac{1}{J}\begin{pmatrix}A&D&E\\ D&B&F\\ E&F&C\end{pmatrix}. \tag{7}\]
Matrix \(\mathbf{P}\) can be decomposed by Eigen-decomposition into \(\mathbf{P}=\mathbf{V}\boldsymbol{\Sigma}\mathbf{V}^{\mathsf{T}}\), where \(\mathbf{V}=(\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3})\), projecting the fitted points to canonical coordinates, and \(\boldsymbol{\Sigma}=\text{diag}(\lambda_{1},\lambda_{2},\lambda_{3})\) comprises the eigenvalues.
Matrix \(\mathbf{V}\) contains the three main axes that map the quadric, fitted to point \(\mathbf{p}\) and its local neighborhood \(\mathcal{N}\), to canonical form. It is easy to see that, due to its local nature, the local surface is invariant to the rigid translation and rotation of the point cloud. Thus, it is a repeatable feature under rotations and translations of the underlying 3D point cloud. \(\boldsymbol{\Sigma}\) contains the three eigenvalues, which are proportional to the reciprocals of the squared lengths \(l_{1}\), \(l_{2}\), \(l_{3}\) of the three axes.
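A compact sketch of this fitting step, under our reading of Eqs. (3)-(7) (the recovery of \(A\), \(B\), \(C\) from \(A^{\prime}\), \(B^{\prime}\) is our own inversion of the reparameterization, and all names are illustrative), could be:

```python
import numpy as np

def local_quadric_frame(p, neighbors):
    """Fit the constrained quadric of Eqs. (3)-(5) around point p and return
    the local frame V and the eigenvalues of P (Eq. (7))."""
    x, y, z = neighbors[:, 0], neighbors[:, 1], neighbors[:, 2]
    k = np.stack([x**2 + y**2 - 2*z**2, x**2 + z**2 - 2*y**2,
                  2*x*y, 2*x*z, 2*y*z, 2*x, 2*y, 2*z,
                  np.ones_like(x)], axis=1)                  # rows k_i^T of Eq. (5)
    d = x**2 + y**2 + z**2
    w = np.linalg.lstsq(k, d, rcond=None)[0]                 # w = (A', B', D, E, F, G, H, I, J)
    Ap, Bp, D, E, F, G, H, I, _ = w
    A, B, C = Ap + Bp - 1, Ap - 2*Bp - 1, -2*Ap + Bp - 1     # undo the A', B' substitution (our derivation)
    px, py, pz = p                                           # preset J so that p lies on the quadric (Eq. (6))
    J = -(A*px**2 + B*py**2 + C*pz**2 + 2*D*px*py + 2*E*px*pz + 2*F*py*pz
          + 2*G*px + 2*H*py + 2*I*pz)
    P = np.array([[A, D, E], [D, B, F], [E, F, C]]) / J      # Eq. (7)
    evals, V = np.linalg.eigh(P)                             # P = V Sigma V^T
    return V, evals
```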
### Rigid Transformation from Surface Matches
Suppose that we are given sets of local coordinate systems \(\mathcal{V}^{\mathcal{P}}\) and \(\mathcal{V}^{\mathcal{Q}}\) associated with points on the found 3D-3D point correspondences, estimated as explained in the previous section. Given correspondence \((\mathbf{p},\mathbf{q})\in\mathcal{C}\), we know the local coordinates systems \(\mathbf{V}_{\mathbf{p}}^{\mathcal{P}}\in\mathcal{V}^{\mathcal{P}}\) and \(\mathbf{V}_{\mathbf{q}}^{\mathcal{Q}}\in\mathcal{V}^{\mathcal{Q}}\) at, respectively, points \(\mathbf{p}\) and \(\mathbf{q}\). Due to the local surfaces being translation and rotation invariant, the coordinate systems must preserve the rigid transformation applied to the entire point cloud. Thus, \(\mathbf{R}=\mathbf{V}_{\mathbf{q}}^{\mathcal{Q}}\mathbf{P}(\mathbf{V}_{ \mathbf{p}}^{\mathcal{P}})^{\mathsf{T}}\in\text{SO}(3)\) is the rotation between the point clouds, where \(\mathbf{P}\) is an unknown permutation matrix assigning the axes in the first coordinate system to the axes in the second one.
There are three cases that we have to account for. Ideally, the lengths of the three axes \(\mathbf{L}^{a}=(l_{1}^{a},l_{2}^{a},l_{3}^{a})^{\mathsf{T}}\) have a distinct ordering such that \(l_{1}^{a}>l_{2}^{a}>l_{3}^{a}\), \(a\in\{\mathcal{P},\mathcal{Q}\}\). In this case, the permutation matrix can be determined such that it assigns the longest axis in \(\mathbf{V}_{\mathbf{q}}^{\mathcal{Q}}\) to the longest one in \(\mathbf{V}_{\mathbf{p}}^{\mathcal{P}}\), and so on. This procedure builds on the assumption that there is no or negligible anisotropic scaling in the point clouds and thus, the relative axis lengths remain unchanged. Also, having this assignment allows us to do the matching in a scale-invariant manner while enabling us to calculate a uniform scaling of the point clouds - or to early reject incorrect matches that imply unrealistic scaling. In this case, the problem is solved from a single correspondence.
The second case is when two axes have the same lengths, _e.g._, \(l_{1}^{a}\approx l_{2}^{a}\), and \(l_{3}^{a}\) is either shorter or longer than them. In this scenario, only \(l_{3}^{a}\) can be matched between the point clouds. This case is equivalent to having a corresponding oriented point pair. It gives us an additional constraint for estimating the rotation matrix. However, the rotation around axis \(l_{3}^{a}\) is unknown and has to be estimated from another point correspondence. While this is a useful solution to reduce the required number of points from three to two, it does not allow solving from a single correspondence.
In the third case, when \(l_{1}^{a}\approx l_{2}^{a}\approx l_{3}^{a}\), we basically are given a pair of corresponding spheres that provide no extra constraints on the unknown rotation.
In the proposed algorithm, we keep only those correspondences from \(\mathcal{C}\) where the local surface patches are of the first type, _i.e._, they lead to enough constraints to estimate the rigid transformation from a single correspondence. Specifically, we keep only those correspondences, where \(l_{1}^{a}\neq l_{2}^{a}\neq l_{3}^{a}\) with \(10^{-3}\) tolerance. Next, we will discuss how this approach can be used for training 3D-3D correspondence matching algorithms with robust estimation in an end-to-end manner.
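Concretely, the retained first-case solver could be sketched as below (illustrative only; the eigenvector sign/handedness handling is simplified, and the translation is chosen so that the correspondence is mapped exactly):

```python
import numpy as np

def pose_from_one_match(p, q, V_p, V_q, len_p, len_q, tol=1e-3):
    """Pose from a single correspondence with local frames V_p, V_q whose
    columns correspond to axes of lengths len_p, len_q."""
    for lengths in (len_p, len_q):                 # keep only the unambiguous first case
        if np.min(np.diff(np.sort(lengths))) < tol:
            return None
    Pp = V_p[:, np.argsort(-np.asarray(len_p))]    # match axes longest-to-longest
    Pq = V_q[:, np.argsort(-np.asarray(len_q))]
    R = Pq @ Pp.T
    if np.linalg.det(R) < 0:                       # keep R in SO(3)
        Pq[:, 2] *= -1.0
        R = Pq @ Pp.T
    t = q - R @ p                                  # so that R p + t = q
    return R, t
```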
### End-to-End Training
Benefiting from the rich information extracted via local surfaces (described in the prev. section), the presented solver estimates the rigid pose from a single 3D quadric match. This unlocks end-to-end differentiability, where the gradients of the matcher network can be propagated through the robust estimator to a loss directly measuring the pose error per correspondence. This enables using test-time evaluation metrics to optimize the end-to-end training.
**Loss.** In order to calculate a pose-induced loss from each correspondence, we first fit quadrics to local neighborhoods. This step has to be done only once, prior to the loss calculation, as the point clouds do not change. Given a set of correspondences \(\mathcal{C}=\{(\mathbf{p},\mathbf{q},\mathbf{V}_{\mathbf{p}}^{\mathcal{P}},\mathbf{V}_{\mathbf{q}}^{\mathcal{Q}})\mid\mathbf{p}\in\mathcal{P},\ \mathbf{q}\in\mathcal{Q},\ \mathbf{V}_{\mathbf{p}}^{\mathcal{P}}\in\mathcal{V}^{\mathcal{P}},\ \mathbf{V}_{\mathbf{q}}^{\mathcal{Q}}\in\mathcal{V}^{\mathcal{Q}}\}\) equipped with their local quadrics and a solver \(\phi:\mathcal{P}\times\mathcal{Q}\times\mathcal{V}^{\mathcal{P}}\times\mathcal{V}^{\mathcal{Q}}\rightarrow\text{SE}(3)\), as described in Sec. 3.2, we can estimate the rigid transformation \(\mathbf{T}=(\mathbf{R},\mathbf{t})\in\text{SE}(3)\) from a single correspondence. Given a correspondence \((\mathbf{p},\mathbf{q},\mathbf{V}_{\mathbf{p}}^{\mathcal{P}},\mathbf{V}_{\mathbf{q}}^{\mathcal{Q}})\) and the pose estimated from it, \(\mathbf{T}_{\mathbf{p},\mathbf{q}}=\phi(\mathbf{p},\mathbf{q},\mathbf{V}_{\mathbf{p}}^{\mathcal{P}},\mathbf{V}_{\mathbf{q}}^{\mathcal{Q}})\), the error is formalized as follows:
\[\epsilon(\mathbf{T}_{\mathbf{p},\mathbf{q}})=\sqrt{\frac{1}{|\mathcal{C}|}\sum_ {(\mathbf{p}_{i},\mathbf{q}_{i},\ldots)\in\mathcal{C}}\|\mathbf{T}_{\mathbf{p}, \mathbf{q}}\mathbf{p}_{i}-\mathbf{q}_{i}\|_{2}^{2}}, \tag{8}\]
where the RMSE of the pose is calculated by transforming
the correspondences. The loss obtained by iterating through all correspondences is as follows:
\[L_{\text{pose}}=\sum_{(\mathbf{p},\mathbf{q},\mathbf{V}_{\mathbf{p}}^{\text{P}}, \mathbf{V}_{\mathbf{q}}^{\text{Q}})\in\mathcal{C}}\left(1-\frac{\min(\epsilon( \mathbf{T}_{\mathbf{p},\mathbf{q}}),\gamma)}{\gamma}-s\right), \tag{9}\]
where \(\gamma\in\mathbb{R}\) is a threshold and \(s\) is the score of the point correspondence predicted by the matching network. The proposed \(L_{\text{pose}}\) can be combined with any of the widely used loss functions, _e.g_., registration loss. It bridges the gap between correspondence matching and registration and unlocks the end-to-end training.
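For illustration, Eqs. (8)-(9) could be written with PyTorch-style tensors roughly as follows (variable names and shapes are our assumptions, not the authors' exact implementation):

```python
import torch

def pose_loss(poses, corr_P, corr_Q, scores, gamma=0.1):
    """Eq. (9). poses: (K, 4, 4) rigid transforms, one per correspondence;
    corr_P, corr_Q: (K, 3) matched points; scores: (K,) matcher confidences."""
    P_h = torch.cat([corr_P, torch.ones_like(corr_P[:, :1])], dim=1)         # (K, 4) homogeneous
    # Transform every correspondence by every per-match pose: (K, K, 3).
    transformed = torch.einsum('kij,nj->kni', poses, P_h)[..., :3]
    rmse = torch.sqrt(((transformed - corr_Q[None]) ** 2).sum(-1).mean(-1))  # Eq. (8), shape (K,)
    return (1.0 - torch.clamp(rmse, max=gamma) / gamma - scores).sum()
```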
**Inference Time.** While the proposed _Q-REG_ propagates the gradients at training time, during inference, we equip it with components that ensure high accuracy but are non-differentiable. _Q-REG_ iterates through the poses calculated from all tentative correspondences, by the proposed single-match solver, in an exhaustive manner. For each match, the pose quality is calculated as the cardinality of its support, _i.e_., the number of inliers. After the best model is found, we apply local optimization similar to [23], a local re-sampling and re-fitting of inlier correspondences based on their normals (coming from the fitted quadrics) and positions.
## 4 Experiments
We evaluate _Q-REG_ with three state-of-the-art matchers (Predator [21], RegTR [42], and GeoTr [28]) on real, indoor point cloud datasets _3DMatch_[44] and _3DLoMatch_[21]. We also evaluate _Q-REG_ with Predator and GeoTr on the real, outdoor dataset _KITTI_[16] and on the synthetic, object-centric datasets _ModelNet_[38] and _ModelLoNet_[21]. We compare _Q-REG_ with other estimators that predict rigid pose from correspondences on the _3DLoMatch_ datasets. Furthermore, we evaluate the importance of different _Q-REG_ components on the best-performing matcher on _3DLoMatch_, as well as run-time during inference.
**3DMatch & 3DLoMatch.** The _3DMatch_[44] dataset contains 62 scenes in total, with 46 used for training, 8 for validation, and 8 for testing. We use the training data preprocessed by Huang et al. [21] and evaluate on both _3DMatch_ and _3DLoMatch_[21] protocols. The point cloud pairs in _3DMatch_ have more than 30% overlap, whereas those in _3DLoMatch_ have a low overlap of 10% - 30%. Following prior work [28, 42], we evaluate the following metrics: (i) Registration Recall (RR), which measures the fraction of successfully registered pairs, defined as having a correspondence RMSE below 0.2 m; (ii) Relative Rotation Error (RRE); and (iii) Relative Translation Error (RTE). Both (ii) and (iii) measure the accuracy of successful registrations. Additionally, we report the mean RRE, RTE, and RMSE. In this setting, we evaluate over all valid pairs1 instead of only those with an RMSE below 0.2 m, and we provide a simple average over all valid pairs instead of the median value of each scene followed by the average over all scenes. These metrics will show how consistently well (or not) a method performs in registering scenes.
Footnote 1: According to [44], a valid pair is a pair of non-consecutive frames.
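For clarity, the RRE and RTE used in these evaluations are the standard pose-error measures, computed for instance as (a generic sketch, not specific to _Q-REG_):

```python
import numpy as np

def registration_errors(R_est, t_est, R_gt, t_gt):
    """Relative rotation error (degrees) and relative translation error."""
    cos_angle = np.clip((np.trace(R_gt.T @ R_est) - 1.0) / 2.0, -1.0, 1.0)
    rre = np.degrees(np.arccos(cos_angle))
    rte = np.linalg.norm(t_est - t_gt)
    return rre, rte
```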
We report several learned correspondence-based algorithms on the two datasets. For [6, 12, 18], we tabulate the results as reported in their original papers. For [21, 28, 42], we evaluate them with and without the _Q-REG_ solver on all metrics. We also report methods that do not employ RANSAC [10, 13, 39] - results are taken from [42].
The results for _3DLoMatch_ and _3DMatch_ are tabulated in Tables 1 and 2 respectively. Note that, unless stated otherwise, hereafter the best values per group are in **bold** and the absolute best is **underlined**. Also, _Q-REG_ means that the solver is used only in inference, and _Q-REG_* means it is used in both end-to-end training and inference. In the latter case, we train the correspondence matching network from scratch with the addition of the pose-induced loss. We use the default training procedure and parameters specified for each particular matcher when retraining. \(50K\) refers to the number of RANSAC iterations. Lastly, if nothing is added next to a method, the standard formulation is used.
In all three matchers, incorporating _Q-REG_ at inference time yields an increase in RR that ranges from 1.0 to 6.2% in _3DLoMatch_ and from 0.9 to 1.6% in _3DMatch_. The range difference between the two datasets is expected, since _3DMatch_ is more saturated and the gap for improvement is small. Using _Q-REG_ for inference achieves the second-best results overall (GeoTr + Q-REG). Even in the case of RegTR, where the putative correspondence set is smaller than those of the other two methods and applying RANSAC decreases performance [42], _Q-REG_ can still provide a boost in all metrics. When training the best-performing matcher, GeoTr, end-to-end, we gain a further boost and achieve the best results overall on both datasets, setting a new benchmark (GeoTr + _Q-REG*_). We observe this behavior not only on the standard metrics (RR, RRE, RTE), but also on the Mean RRE, RTE, and RMSE. As expected, _Q-REG_ results in smaller errors regardless of the matcher. Additional results for Inlier Ratio (IR) and Feature Matching Recall (FMR) can be found in the supplementary material.
Qualitative results are shown in Figure 3. In the first and second row, GeoTr+_Q-REG*_ achieves a good alignment of the point clouds when all other methods fail. This means that using _Q-REG_ in end-to-end training can provide additional improvements in performance by learning how to better match correspondences together with the objective of rigid pose estimation, and not in isolation as it happens in all other cases. In the third row, the standard formulation already produces well-aligned point clouds, and the addition of _Q-REG_ slightly refines this output. However, in the case of RegTR, we can see the most improvement. The standard
formulation fails to achieve a good alignment and _Q-REG_ is able to recover a good pose. This means that our method is able to identify robust correspondences and remove spurious ones. When RegTR finds a correct pose, _Q-REG_ can further optimize it, as shown in the fourth row. In the same example, although GeoTR fails to infer a good pose, both _Q-REG_ and _Q-REG*_ are able to recover it. Additional qualitative results and plots can be found in the supp. material.
**KITTI.** The _KITTI_ odometry dataset [16] contains 11 sequences of LiDAR-scanned outdoor driving scenarios. We follow [6, 12, 21, 28] and split it into train/val/test sets as follows: sequences 0-5 for training, 6-7 for validation and 8-10 for testing. As in [6, 12, 21, 28], we refine the provided ground truth poses using ICP [8] and only use point cloud pairs that are captured within 10m range of each other. Following prior work [21, 28], we evaluate the following metrics: (1) Registration Recall (RR), which is the fraction of point cloud pairs with RRE and RTE both below certain thresholds (_i.e._, RRE\(<\)5 and RTE\(<\)2m), (2) Relative Rotation Error (RRE), and (3) Relative Translation Error (RTE).
We report several recent algorithms on the _KITTI_ odometry dataset [16]. For [6, 12, 40], results are taken from [28]. For [21, 28], we evaluate them with and without _Q-REG_, similarly to the Sec. 4. The results on the _KITTI_ dataset are in Table 3. Here as well, we observe a similar trend in the results, with _Q-REG_ boosting the performance of all matchers. Despite saturation of methods on _KITTI_, using the _Q-REG_ solver during inference provides improvements in both RRE and RTE. Predator with _Q-REG_ achieves the best results overall on both datasets (Predator + Q-REG). In addition, when _Q-REG_ is used for both inference and end-to-end training, the results of GeoTr are also improved with respect to its standard formulation (GeoTr + _Q-REG*_). This indicates that _Q-REG_ has similar behavior in point clouds of lower density and different distribution. Additional qualitative results and plots are provided in the supp. material.
**ModelNet & ModelLoNet.** The _ModelNet_[38] dataset contains 12,311 3D CAD models of man-made objects from 40 categories, with 5,112 used for training, 1,202 for validation, and 1,266 for testing. We use the partial scans created by Yew et al. [41] for evaluating on _ModelNet_ and those created by Huang et al. [21] for evaluating on _ModelLoNet_. The point cloud pairs in _ModelNet_ have 73.5% overlap on average, whereas those in _ModelLoNet_ have 53.6%. Following prior work [21, 42], we evaluate the following metrics: (1) Chamfer Distance (CD) between registered point clouds; (ii) Relative Rotation Error (RRE); and (iii) Relative Translation Error (RTE). We report several recent algorithms on the two datasets. For [4, 21], we tabulate the results as reported in their original papers. For [35, 39, 41], results are taken from [42]. For [28, 42], we evaluate them with and without _Q-REG_, similarly as before.
The results for _ModelNet_[38] and _ModelLoNet_[21] are tabulated in Tables 4 and 5, respectively. Here as well, we observe a similar trend in the results, with _Q-REG_ boosting the performance of all matchers. RegTR with _Q-REG_
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline Model & RR & RRE & RTE & \multicolumn{3}{c}{_Mean_} \\ & (\%)\(\uparrow\) & (\({}^{\circ}\))\(\downarrow\) & (cm)\(\downarrow\) & RRE \(\downarrow\) & RTE \(\downarrow\) & RMSE (cm)\(\downarrow\) \\ \hline
3DSN [18] & 78.4 & 2.20 & 7.1 & - & - & - \\ FCGF [12] & 85.1 & 1.95 & 6.6 & - & - & - \\ DSFNet [6] & 81.6 & 2.16 & 6.7 & - & - & - \\ OMNet [39] & 35.9 & 4.17 & 10.5 & - & - & - \\ DCR [13] & 85.3 & 2.10 & 6.7 & - & - & - \\ PCFM [10] & **85.5** & **1.81** & **5.9** & - & - & - \\ \hline Predator [21] + 50K & 89.3 & 1.98 & 6.5 & 6.80 & 20.2 & 18.3 \\ Predator [21] + Q-REG & **90.6** & **1.74** & **5.7** & **6.78** & **20.0** & **18.1** \\ \hline RegTR [42] & 92.0 & **1.57** & **4.9** & 5.31 & 17.0 & 13.8 \\ RegTR [42] + 50K & 91.3 & 1.72 & 5.9 & 5.26 & 17.5 & 14.7 \\ RegTR [42] + Q-REG & **92.1** & **1.57** & **4.9** & **5.13** & **16.5** & **13.6** \\ \hline GeoTr [28] & 92.5 & 1.54 & **5.1** & 7.04 & 19.4 & 17.6 \\ GeoTr [28] + 50K & 92.2 & 1.66 & 5.6 & 6.85 & 18.7 & 17.1 \\ GeoTr [28] + Q-REG & 93.8 & 1.57 & 5.3 & 4.74 & 15.0 & 12.8 \\ GeoTr [28] + _Q-REG*_ & **95.2** & **1.53** & 5.3 & **3.70** & **12.5** & **10.7** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation of state-of-the-art matchers on the _3DMatch_[44] dataset. The best values are **bold** in each group. The absolute best are **underlined**.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline Method & RR (\%)\(\uparrow\) & RRE (\({}^{\circ}\))\(\downarrow\) & RTE (cm)\(\downarrow\) \\ \hline
3DFeat-Net [40] & 96.0 & **0.25** & 25.9 \\ FCGF [12] & 96.0 & 0.30 & 9.5 \\ D3Feat [6] & **99.8** & 0.30 & **7.2** \\ \hline Predator [21] + 50K & **99.8** & 0.27 & 6.8 \\ Predator [21] + Q-REG & **99.8** & **0.16** & **3.9** \\ \hline GeoTr [28] & **99.8** & 0.24 & 6.8 \\ GeoTr [28] + 50K & **99.8** & 0.26 & 7.5 \\ GeoTr [28] + Q-REG & **99.8** & 0.20 & 6.0 \\ GeoTr [28] + _Q-REG*_ & **99.8** & **0.18** & **5.4** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation of state-of-the-art matchers on the _KITTI_[16] dataset. The best values are **bold** in each group. The absolute best are **underlined**.
achieves the best results overall on both datasets (RegTR + Q-REG). In addition, when _Q-REG*_ is used for both inference and end-to-end training, the results of GeoTr are also improved with respect to its standard formulation.
### Comparison with Other Estimators
We compare _Q-REG_ with other estimators that predict rigid pose from correspondences on the _3DLoMatch_ dataset, using the state-of-the-art matcher GeoTr [28] as the correspondence extractor (best performing on this dataset). We evaluate the following estimators: (i) **GeoTr + WK**: weighted variant of the Kabsch-Umeyama algorithm [34]. (ii) **GeoTr + ICP**: Iterative closest point (ICP) [8] initialized with 50K RANSAC. (iii) **GeoTr + PointDSC**: PointDSC [7] with their pre-trained model from [2]. (iv) **GeoTr + SC2-PCR**: GeoTr with SC2-PCR [11]. (v) **GeoTr + Q-REG w/ PCA**: PCA instead of our quadric fitting to determine the local coordinate system. (vi) **GeoTr + Q-REG w/ PCD**: Use principal direction as explained in Sec. 3.1. (vii) **GeoTr + Q-REG**: Our quadric-fitting solver is used only in inference. (viii) **GeoTr + _Q-REG*_**: Our solver is used in both end-to-end training and inference.
The results are in Table 6 (_best results in **bold**, second best are underlined_). Among all methods, _Q-REG*_ performs best in the majority of the metrics.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{ModelNet [38]} \\ & CD \(\downarrow\) & RRE (\({}^{\circ}\))\(\downarrow\) & RTE (cm)\(\downarrow\) \\ \hline PointNetLK [4] & 0.02350 & 29.73 & 29.7 \\ OMNet [39] & 0.00150 & 2.95 & 3.2 \\ DCP-v2 [35] & 0.01170 & 11.98 & 17.1 \\ RPM-Net [41] & **0.00085** & **1.71** & **1.8** \\ Predator [21] & 0.00089 & 1.74 & 1.9 \\ \hline RegTR [42] & 0.00078 & 1.47 & 1.4 \\ RegTR [42] + 50K & 0.00091 & 1.82 & 1.8 \\ RegTR [42] + Q-REG & **0.00074** & **1.35** & **1.3** \\ \hline GeoTr [28] & 0.00083 & 2.16 & 2.0 \\ GeoTr [28] + 50K & 0.00095 & 2.40 & 2.2 \\ GeoTr [28] + Q-REG & 0.00078 & 1.84 & 1.7 \\ GeoTr [28] + _Q-REG*_ & **0.00076** & **1.73** & **1.5** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Evaluation of state-of-the-art matchers on the _ModelNet_[38] dataset. The best values are **bold** in each group. The absolute best are **underlined**.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{ModelLoNet [21]} \\ & CD \(\downarrow\) & RRE (\({}^{\circ}\))\(\downarrow\) & RTE (cm)\(\downarrow\) \\ \hline PointNetLK [4] & 0.0367 & 48.57 & 50.7 \\ OMNet [39] & 0.0074 & 6.52 & 12.9 \\ DCP-v2 [35] & 0.0268 & 6.50 & 30.0 \\ RPM-Net [41] & **0.0050** & 7.34 & **12.4** \\ Predator [21] & 0.0083 & **5.24** & 13.2 \\ \hline RegTR [42] & 0.0037 & 3.93 & 8.7 \\ RegTR [42] + 50K & 0.0039 & 4.23 & 9.2 \\ RegTR [42] + Q-REG & **0.0034** & **3.65** & **8.1** \\ \hline GeoTr [28] & 0.0050 & 4.49 & 7.6 \\ GeoTr [28] + 50K & 0.0050 & 4.27 & 8.0 \\ GeoTr [28] + Q-REG & 0.0044 & 3.87 & 7.0 \\ GeoTr [28] + _Q-REG*_ & **0.0040** & **3.73** & **6.5** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Evaluation of state-of-the-art matchers on the _ModelLoNet_[21] dataset. The best values are **bold** in each group. The absolute best are **underlined**.
Figure 3: **Qualitative Results.** We showcase registration examples of RegTR [42] and GeoTr [28] with and without _Q-REG_ for the _3DLoMatch_ (first and third rows) and _3DMatch_ (second and fourth rows) datasets. _(Best viewed on screen.)_
GeoTr + WK shows a large performance gap with respect to the other methods since it utilizes soft correspondences and is not robust enough to outliers. GeoTr + ICP relies heavily on the initialization and, thus, often fails to converge to a good solution. GeoTr + PointDSC and GeoTr + SC2-PCR have results comparable to LGR but are noticeably worse than _Q-REG_. GeoTr + Q-REG w/ PCA leads to less accurate results than with quadric fitting, which demonstrates the superiority of _Q-REG_ utilizing quadrics. GeoTr + Q-REG w/ PCD shows performance comparable to our _Q-REG_. This is expected since both methods have the same geometric meaning. However, there is a substantial difference in runtime (seconds): 0.166 for quadric fitting versus 0.246 for PCD. Similar comparison results on the _3DMatch_ dataset are provided in the supplementary material.
### Ablation Studies
We evaluate the contribution of each component in the _Q-REG_ solver to the best-performing matcher on the _3DLoMatch_ dataset, the state-of-the-art GeoTr [28]. We evaluate the following self-baselines (i, ii and iii are inference only): (i) **GeoTr + Q**: Our quadric-fitting 1-point solver. (ii) **GeoTr + QL**: We extend the quadric fitting with local optimization (LO) as discussed in Sec. 3.3. (iii) **GeoTr + RL**: We replace quadric fitting with RANSAC 50K in (ii). (iv) **GeoTr + QT**: Our quadric-fitting solver is used in end-to-end training - during inference we do not employ LO; and (v) **GeoTr + QTL**: Our quadric-fitting 1-point solver is used in end-to-end training followed by inference with LO.
The results are reported in Table 7 (_best results in **bold**, second best are underlined_). _Q-REG_* performs the best in the majority of the metrics. Specifically, we observe that there is a substantial increase in RR by 4.2%. When our solver is used only during inference, we still see a 3.0% increase in RR. Even though the performance of our _Q-REG_ decreases by 1.6% in RR without LO, it provides a good initial pose and the performance gap can be easily bridged with a refinement step. For RANSAC 50K, the increase in RR is only 0.4% after applying local optimization, indicating that many of the initially predicted poses are unreasonable and cannot be improved with further refinement. We can also observe a noticeable difference in performance between GeoTr + RL and GeoTr + QL, which further highlights the superiority of our quadric fitting approach. When considering the mean RRE, RTE, and RMSE, we observe that our self-baselines provide consistently more robust results over all valid pairs versus the standard GeoTr (standard GeoTr corresponds to the top two rows in Table 7). The ablation on the _3DMatch_ dataset is provided in the supplementary material.
**Run-time.** We compute the average run-time in seconds per component in Table 8 (evaluated with GeoTr on _3DLoMatch_). Compared to RANSAC 50K, which yields at least 2% lower RR, _Q-REG_ provides better results while being an order of magnitude faster. On average, GeoTr's correspondence matcher runs for \(0.134\)s. The overall inference time of each method can be obtained by adding this to the times in Table 8. These experiments were run on 8 Intel Xeon Gold 6150 CPUs and an NVIDIA GeForce RTX 3090 GPU.
## 5 Conclusion
We present a novel solution for point cloud registration, _Q-REG_, that utilizes rich geometric information to estimate the rigid pose from a single correspondence. It allows us to formalize the robust estimation as an exhaustive search, iterating through all candidate correspondences and selecting the best rigid pose among them. It performs quick outlier rejection by filtering degenerate solutions and assumption-inconsistent motions (_e.g_., related to scale). _Q-REG_ is agnostic to matching methods and consistently improves their performance on all reported datasets, setting a new state-of-the-art on these benchmarks.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & RR & RRE & RTE & \multicolumn{3}{c}{_Mean_} \\ & (\%)\(\uparrow\) & (\({}^{\circ}\))\(\downarrow\) & (cm)\(\downarrow\) & RRE \(\downarrow\) & RTE \(\downarrow\) & RMSE \(\downarrow\) \\ \hline GeoTr + LGR & 74.1 & 2.99 & 7.3 & 23.15 & 88.3 & 57.8 \\ GeoTr + 50K & 75.0 & 2.54 & 7.7 & 22.69 & 57.8 & 57.3 \\ \hline i) GeoTr + WK [34] & 58.6 & 3.01 & 8.8 & 33.74 & 84.7 & 76.1 \\ ii) GeoTr + ICP [8] & 75.1 & 2.43 & 8.1 & 22.68 & 66.5 & 66.5 \\ iii) GeoTr + PointDSC [7] & 74.0 & 2.55 & 7.2 & 23.95 & 61.6 & 60.7 \\ iv) GeoTr + SC2-PCR [11] & 74.2 & 2.58 & 7.5 & 22.90 & 59.1 & 58.4 \\ v) GeoTr + Q-REG w/ PCA & 75.1 & 2.44 & 7.6 & 22.66 & 57.4 & 56.9 \\ vi) GeoTr + Q-REG w/ PCD & 76.5 & 2.47 & 7.5 & 16.81 & 46.4 & 44.6 \\ vii) GeoTr + Q-REG & 77.1 & 2.44 & 7.7 & 16.70 & **44.6** & 44.6 \\ viii) GeoTr + _Q-REG_* & **78.3** & **2.38** & **7.2** & **15.65** & 46.3 & **42.5** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results on the _3DLoMatch_[21] dataset of GeoTr [28] with different estimators. The best values are **bold** and the 2nd best are underlined.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & RR & RRE & RTE & \multicolumn{3}{c}{_Mean_} \\ & (\%)\(\uparrow\) & (\({}^{\circ}\))\(\downarrow\) & (cm)\(\downarrow\) & RRE \(\downarrow\) & RTE \(\downarrow\) & RMSE \(\downarrow\) \\ \hline GeoTr + LGR & 74.1 & 2.99 & 7.3 & 23.15 & 88.3 & 57.8 \\ GeoTr + 50K & 75.0 & 2.54 & 7.7 & 22.69 & 57.8 & 57.3 \\ \hline i) GeoTr + Q & 75.5 & 2.47 & 7.6 & 22.38 & 57.6 & 57.3 \\ ii) GeoTr + QL (= Q-REG) & 77.1 & 2.44 & 7.7 & 16.20 & **46.0** & 44.6 \\ iii) GeoTr + RL & 75.4 & 2.46 & 7.6 & 22.86 & 58.5 & 68.0 \\ iv) GeoTr + QT & 72.2 & **2.37** & 7.5 & 17.32 & 50.3 & 47.4 \\ v) GeoTr + QTL (_Q-REG_*) & **78.3** & 2.38 & **7.2** & **15.65** & 46.3 & **42.5** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Ablation results on the _3DLoMatch_[21] dataset of GeoTr [28] with different aspects of the _Q-REG_ solver. The best values are **bold** and the 2nd best are underlined.
\begin{table}
\begin{tabular}{l c c|c c} \hline \hline LGR & +1K & +50K & +Q & +QL (Q-REG) \\ \hline
0.016 & 0.053 & 1.809 & 0.085 & 0.166 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Run-time evaluation in seconds during inference using GeoTr [28] on the _3DLoMatch_ dataset. Times shown for LGR, RANSAC running 1K and 50K iterations, Quadric solvers (Sec. 3.2), and with the entire _Q-REG_ algorithm. |
2305.00594 | The MCC approaches the geometric mean of precision and recall as true
negatives approach infinity | The performance of a binary classifier is described by a confusion matrix
with four entries: the number of true positives (TP), true negatives (TN),
false positives (FP), and false negatives (FN).
The Matthew's Correlation Coefficient (MCC), F1, and Fowlkes--Mallows (FM)
scores are scalars that summarize a confusion matrix. Both the F1 and FM scores
are based on only three of the four entries in the confusion matrix (they
ignore TN). In contrast, the MCC takes into account all four entries of the
confusion matrix and thus can be seen as providing a more representative
picture.
However, in object detection problems, measuring the number of true negatives
is so large it is often intractable. Thus we ask, what happens to the MCC as
the number of true negatives approaches infinity? This paper provides insight
into the relationship between the MCC and FM score by proving that the
FM-measure is equal to the limit of the MCC as the number of true negatives
approaches infinity. | Jon Crall | 2023-04-30T22:36:47Z | http://arxiv.org/abs/2305.00594v2 | # The MCC approaches the geometric mean of precision and recall as true negatives approach infinity.
###### Abstract
The performance of a binary classifier is described by a confusion matrix with four entries: the number of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN).
The Matthew's Correlation Coefficient (MCC), F1, and Fowlkes-Mallows (FM) scores are scalars that summarize a confusion matrix. Both the F1 and FM scores are based on only three of the four entries in the confusion matrix (they ignore TN). In contrast, the MCC takes into account all four entries of the confusion matrix and thus can be seen as providing a more representative picture.
However, in object detection problems, measuring the number of true negatives is so large it is often intractable. Thus we ask, what happens to the MCC as the number of true negatives approaches infinity? This paper provides insight into the relationship between the MCC and FM score by proving that the FM-measure is equal to the limit of the MCC as the number of true negatives approaches infinity.
Confusion Matrix \(\cdot\) Binary Classification \(\cdot\) Fowlkes-Mallows Index \(\cdot\) Matthew's Correlation Coefficient \(\cdot\) F1
## 1 Introduction
Evaluation of binary classifiers is central to the quantitative analysis of machine learning models [1]. Given a finite set of examples with known real labels, the quality of a set of corresponding predicted labels can be quantified using a \(2\times 2\) confusion matrix. A confusion matrix counts the number of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) a model predicts with respect to the real labels. The confusion matrix is often written as:
\[\begin{bmatrix}\texttt{TP}&\texttt{FP}\\ \texttt{FN}&\texttt{TN}\end{bmatrix} \tag{1}\]
This matrix provides a holistic view of classifier quality; however, it is often desirable to summarize performance using fewer numbers. Two popular metrics defined on a confusion matrix are precision and recall.
Precision -- also known as the positive-predictive-value (PPV) -- is the fraction of positive predictions that are correct.
\[\texttt{PPV}=\frac{\texttt{TP}}{\texttt{TP}+\texttt{FP}} \tag{2}\]
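To make the relationship between these summary scores concrete, the following small sketch uses the standard definitions of the MCC and the Fowlkes-Mallows score with illustrative counts, and previews the limiting behaviour stated in the abstract:

```python
import math

def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def fm(tp, fp, fn):
    # Fowlkes-Mallows: geometric mean of precision and recall.
    return math.sqrt((tp / (tp + fp)) * (tp / (tp + fn)))

tp, fp, fn = 80, 10, 20
for tn in (10, 1_000, 1_000_000):
    print(tn, round(mcc(tp, tn, fp, fn), 4), round(fm(tp, fp, fn), 4))
# As tn grows, the MCC approaches FM = sqrt(PPV * TPR) ~= 0.8433 for these counts.
```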
Recall -- also known as the true positive rate (TPR), sensitivity, or probability of detection (PD) -- is the fraction of real positive cases that are correct. |
2309.08209 | Attitude Control and Low Cost Design of UAV Bicopter | This paper present a control system for the attitude and low cost design of a
Bicopter. The control system uses a PID controller that receives feedback from
an IMU to calculate control inputs that adjust the Bicopters attitude (roll,
pitch and yaw angles) which is resistant to disturbances (wind noise) on a test
bed. The control system is implemented on a hardware platform consisting of a
Bicopter, an IMU sensor, and a microcontroller with low cost design. In
mechanical design, the Bicopter is designed to more closely resemble the letter
"V" so that the distribution of the centre of mass (CoM) of the Bicopter can be
such that the servomotor torque reaction is parallel to the axis of rotation of
the Bicopter during the movement of the pitch angle attitude. In electronic
design, the Bicopter was developed using the ATmega328P microcontroller. | Fahmizal, Hanung Adi Nugroho, Adha Imam Cahyadi, Igi Ardiyanto | 2023-09-15T07:19:06Z | http://arxiv.org/abs/2309.08209v1 | # Attitude Control and Low Cost Design of UAV Bicopter
###### Abstract
This paper presents a control system for the attitude and a low-cost design of a Bicopter. The control system uses a PID controller that receives feedback from an IMU to calculate control inputs that adjust the Bicopter's attitude (roll, pitch and yaw angles), and it is resistant to disturbances (wind noise) on a test bed. The control system is implemented on a hardware platform consisting of a Bicopter, an IMU sensor, and a microcontroller with a low-cost design. In the mechanical design, the Bicopter is designed to more closely resemble the letter "V" so that the distribution of the centre of mass (CoM) of the Bicopter can be such that the servomotor torque reaction is parallel to the axis of rotation of the Bicopter during the movement of the pitch angle attitude. In the electronic design, the Bicopter was developed using the ATmega328P microcontroller.
## I Introduction
Unmanned aerial vehicles (UAV) Bicopters are becoming increasingly popular due to their ability to perform a wide range of tasks, such as aerial photography, monitoring, and surveying [1]. The unique design of Bicopters, which combines the features of both fixed-wing and rotary-wing aircraft, makes them well suited to many applications [2, 3]. However, controlling the attitude of a Bicopter can be challenging due to the complex and nonlinear dynamics involved.
The control of attitude is a critical issue in the design and operation of UAVs. In particular, for a UAV Bicopter, which has a hybrid design combining the advantages of a helicopter and a fixed-wing aircraft, controlling the attitude is essential for stable flight and manoeuvrability. Conventional control approaches, such as proportional-integral-derivative (PID) controllers, have been widely used for Bicopter attitude control [4, 5, 6, 7, 8]. These controllers are easy to implement and have been shown to be effective in many cases.
Advanced model-based controllers, such as linear quadratic regulators (LQR), have been proposed as an alternative to PID controllers [9, 10, 11, 12, 13]. These controllers use a mathematical model of the Bicopter to predict its behavior and adjust the control inputs accordingly. While LQR controllers can be more effective than PID controllers in some cases, they are also more complex and require more computational resources.
Youming Qin et al. [4] detailed a Bicopter concept called Gemini, focusing on how the Bicopter can be used in enclosed environments. Starting with the selection of the optimal propeller and continuing through aerodynamic analysis, system design, optimisation, and control implementation, that study details the full process of creating the Gemini platform. In practice, cascaded PID controllers are used for Gemini's attitude control. However, that work relies on a high-cost flight controller.
In 2022, research on a Bicopter mechanical design without servomotors was carried out by Youming Qin et al. [14]. In that study, the servomotors are replaced by cyclic blade pitch control, but this requires a magnetic encoder sensor on the cyclic blade system, which again results in a high-cost flight controller.
This paper makes the following contributions: 1) the design and implementation of a PID controller that maintains a stable UAV Bicopter attitude, resistant to disturbances (wind noise), on a test bed; and 2) the design and manufacture of the mechanical (tilt mechanism) and electronic (flight controller) parts of the UAV Bicopter following a low-cost principle.
This paper's remaining sections are organized as follows: Section II covers the methodology of mechanical design, electronics design, attitude sensor, and attitude modelling. Section III describes the design of the attitude control using the PID controller. Experimental results are presented in Section IV to demonstrate the value and effectiveness of the proposed methodologies. Section V concludes the paper.
## II Materials and Methods
### Mechanical Design
Bicopters are a type of UAV with two rotors in a parallel configuration, which allows them to perform vertical take-off and landing (VTOL) and hover like a helicopter, as well as fly forward like a fixed-wing aircraft. Designing the mechanics of a Bicopter involves several considerations, including the size and weight of the vehicle, the choice of materials, the design of the rotors, and the placement of the motors. The size and weight of the Bicopter determine the amount of lift required and the power needed to achieve flight, and they also determine the maximum payload capacity of the Bicopter. The rotors are a critical component of the Bicopter, as they provide lift and control the vehicle's attitude.
The Bicopter is constructed with two driving rotors and two servomotors for tilting the two rotors in opposite directions. Figure 1 shows the right rotor thrust (\(F_{R}\)) and left rotor thrust (\(F_{L}\)) created by the rotors and propellers, and their components along the \(x\) and \(z\) axes. By altering the magnitudes of the rotor thrusts \(F_{R}\) and \(F_{L}\), the rolling movement may be adjusted. This paper develops the mechanical design of the Bicopter using the Autodesk Inventor Student version, and the parts are printed on a UP2 3D printer. The mechanical design of the UAV Bicopter consists of an arm, rotor holders and a body. As shown in Fig. 1, the Bicopter is designed to more closely resemble the letter "V" so that the distribution of the centre of mass (CoM) of the Bicopter can be such that the servomotor torque reaction is parallel to the axis of rotation of the Bicopter during the pitch angle motion.
The test bed rig is a device used as an evaluation platform for the stability of the rotational motion of roll, pitch, and yaw on a Bicopter. Without needing to fly the Bicopter, the stability of its attitude can be verified with this test bed rig. The Bicopter's attitude when disturbed can also be observed with this test bed rig. Figure 2 illustrates the design of the test bed rig that will be used for Bicopter attitude testing.
### Electronics Design
The ATmega328P serves as the primary microcontroller in the Bicopter electronics system. The actuators consist of two MG90S servomotors and two Sunnysky X2216 800 KV rotors (left and right), and attitude feedback is provided by one MPU6050 IMU sensor. Figure 3 shows the printed circuit board (PCB) designed for the Bicopter electronic system; this PCB serves as the Bicopter's flight controller. The electronic system is also coupled to a personal computer (PC) via serial communication, and the graphical user interface (GUI) automatically shows the sensor readings in real time, as presented in Fig. 4.
### Attitude Sensor
The _motion processing unit_ (MPU) 6050 is a type of inertial measurement unit (IMU) that is commonly used in small UAVs and other electronic devices that require accurate motion sensing. It is a small and affordable sensor that combines both accelerometer and gyroscope functionality. The accelerometer measures acceleration along the \(x\), \(y\), and \(z\) axes and is used to determine the orientation of the device. It senses both static and dynamic acceleration due to gravity and motion respectively. It is able to measure acceleration in the range of \(\pm 2g\), \(\pm 4g\), \(\pm 8g\), or \(\pm 16g\).
In this paper, the IMU MPU 6050 is used in the Bicopter orientation sensor system. This sensor's configuration is as follows: it has six degrees of freedom and is made up of two
Figure 1: Mechanical design of Bicopter.
Figure 2: Test bed rig for attitude testing of Bicopter.
types of sensors, namely an accelerometer and a gyroscope, with a data transmission protocol based on the inter-integrated circuit (I2C) bus [15]. The IMU MPU 6050 sensor provides a roll angle about the x-axis called _phi_ (\(\phi\)), a pitch angle about the y-axis called _theta_ (\(\theta\)), and a yaw angle about the z-axis called _psi_ (\(\psi\)).
Noise is an important disturbance that must be considered in any measurement process. Besides that, noise can also interfere with the operation of a closed-loop control system. Filtering techniques are therefore needed to separate the actual signal from the noise. This paper uses a complementary filter (CF) to remove noise [16; 17]. The configuration of the CF for the pitch angle is presented in Fig. 5. The raw accelerometer data are filtered in the low-frequency region, and the gyroscope data in the high-frequency region [18]. Therefore, to combine the sensor data, a low-pass filter (LPF) and a high-pass filter (HPF) are applied.

Theoretically, an LPF is described by Eq. (1).
\[V_{in}(t)-V_{out}(t)=RC\frac{dV_{out}}{dt} \tag{1}\]
Equation 1 can be discretized into Eq. (2). Furthermore, for simplicity, it is assumed that the input and output are sampled at the same time interval, namely \(\Delta_{T}\). Input \(V_{in}\) is defined by \(x_{i}\) and output \(V_{out}\) is defined by \(y_{i}\).
\[x_{i}-y_{i}=RC\frac{y_{i}-y_{i-1}}{\Delta_{T}}\] \[y_{i}=x_{i}\left(\frac{\Delta_{T}}{RC+\Delta_{T}}\right)+y_{i-1 }\left(\frac{RC}{RC+\Delta_{T}}\right)\] \[y_{i}=\alpha x_{i}+\left(1-\alpha\right)y_{i-1} \tag{2}\]
Here \(\alpha=\frac{\Delta_{T}}{RC+\Delta_{T}}\), \(RC=\frac{1}{2\pi f_{c}}\), \(f_{c}\) is the cutoff frequency, \(\Delta_{T}\) is the sampling period, and the smoothing factor lies between \(0\leq\alpha\leq 1\). The HPF is defined analogously in Eq. (3), where the corresponding smoothing factor is \(\alpha=\frac{RC}{RC+\Delta_{T}}\).
\[y_{i}=RC\left(\frac{x_{i}-x_{i-1}}{\Delta_{T}}-\frac{y_{i}-y_{i-1}}{\Delta_{T}}\right)\] \[y_{i}=\left(\frac{RC}{RC+\Delta_{T}}\right)y_{i-1}+\left(\frac{RC}{RC+\Delta_{T}}\right)(x_{i}-x_{i-1})\] \[y_{i}=\alpha y_{i-1}+\alpha\left(x_{i}-x_{i-1}\right) \tag{3}\]
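In discrete time the complementary filter of Fig. 5 reduces to a single blending step per sample; a minimal sketch, assuming the accelerometer-derived angle and the gyroscope rate are already available and using a gyro weight close to one, is:

```python
import math

def accel_pitch(ax, ay, az):
    """Pitch angle (rad) estimated from the accelerometer alone."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

def complementary_filter(pitch_prev, gyro_rate, acc_angle, dt, alpha=0.98):
    """Blend the integrated gyro rate (high-pass path) with the accelerometer
    angle (low-pass path); here alpha weights the gyro path."""
    return alpha * (pitch_prev + gyro_rate * dt) + (1.0 - alpha) * acc_angle
```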
In addition to implementing the IMU MPU 6050 with the CF, this paper also uses a Quaternion-based sensor fusion technique. Quaternions were introduced to improve computational efficiency and to avoid the gimbal lock and singularity [19; 20] problems of the Euler angle representation [21; 22; 23]. Quaternions have four dimensions: one real dimension (the scalar element) and three imaginary dimensions (the vector part). Each imaginary dimension has a unit equal to a square root of -1, and these distinct square roots of -1 are mutually perpendicular; they are denoted \(i\), \(j\)
Figure 4: Graphical user interface (GUI) of Bicopter.
Figure 5: Complementary filter block diagram.
and \(k\). A Quaternion can thus be represented as \(q=w+ix+jy+kz\). A Quaternion can be converted into 3D rotation representations such as the Euler angles and the yaw-pitch-roll (YPR) angles [24], [25], which makes it easier to visualise and describe rotations about the \(x\), \(y\), and \(z\) axes. First, the gravity vector \(g=\begin{bmatrix}g_{x}&g_{y}&g_{z}\end{bmatrix}\) is extracted from the Quaternion as defined by Eq. (4).
\[\begin{bmatrix}g_{x}\\ g_{y}\\ g_{z}\end{bmatrix}=\begin{bmatrix}2\left(q_{x}q_{z}-q_{w}q_{y}\right)\\ 2\left(q_{w}q_{x}+q_{y}q_{z}\right)\\ q_{w}q_{w}-q_{x}q_{x}-q_{y}q_{y}+q_{z}q_{z}\end{bmatrix} \tag{4}\]
Then YPR and Euler can be obtained by conversion in Eq. (5) and Eq. (6).
\[\begin{bmatrix}yaw\\ pitch\\ roll\end{bmatrix}=\begin{bmatrix}\arctan 2\left(\frac{2q_{x}q_{y}-2q_{w}q_{z}}{2q_{w}q_{w}+2q_{x}q_{x}-1}\right)\\ \arctan\left(\frac{g_{x}}{\sqrt{g_{y}g_{y}+g_{z}g_{z}}}\right)\\ \arctan\left(\frac{g_{y}}{\sqrt{g_{x}g_{x}+g_{z}g_{z}}}\right)\end{bmatrix} \tag{5}\]
\[\begin{bmatrix}\psi\\ \theta\\ \phi\end{bmatrix}=\begin{bmatrix}\arctan 2\left(\frac{2q_{x}q_{y}-2q_{w}q_{z}}{2q_{w}q_{w}+2q_{x}q_{x}-1}\right)\\ -\arcsin\left(2q_{x}q_{z}+2q_{w}q_{y}\right)\\ \arctan 2\left(\frac{2q_{y}q_{z}-2q_{w}q_{x}}{2q_{w}q_{w}+2q_{z}q_{z}-1}\right)\end{bmatrix} \tag{6}\]
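A direct transcription of Eqs. (4) and (5), as reconstructed here following the widely used MPU6050 DMP conventions, could read (illustrative sketch):

```python
import math

def gravity_from_quat(qw, qx, qy, qz):
    """Gravity vector of Eq. (4)."""
    gx = 2.0 * (qx * qz - qw * qy)
    gy = 2.0 * (qw * qx + qy * qz)
    gz = qw * qw - qx * qx - qy * qy + qz * qz
    return gx, gy, gz

def ypr_from_quat(qw, qx, qy, qz):
    """Yaw-pitch-roll angles of Eq. (5), in radians."""
    gx, gy, gz = gravity_from_quat(qw, qx, qy, qz)
    yaw = math.atan2(2*qx*qy - 2*qw*qz, 2*qw*qw + 2*qx*qx - 1)
    pitch = math.atan(gx / math.sqrt(gy*gy + gz*gz))
    roll = math.atan(gy / math.sqrt(gx*gx + gz*gz))
    return yaw, pitch, roll
```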
### Attitude Modelling
The right rotor thrust (\(F_{R}\)) and left rotor thrust (\(F_{L}\)) are generated by the propellers and rotors; their components in the \(x\) and \(z\) directions are shown in Fig. 6, with the parameters described in Table 1. Using Newton's second law, the force equations in the \(x\), \(y\) and \(z\) directions are defined as given in Eq. (7) - (9).
\[\sum F_{x} =F_{R}\sin\gamma_{R}+F_{L}\sin\gamma_{L} \tag{7}\] \[\sum F_{y} =0\] (8) \[\sum F_{z} =F_{R}\cos\gamma_{R}+F_{L}\cos\gamma_{L} \tag{9}\]
The total lift (thrust) and moment of force from the Bicopter can be obtained from the input \(u\) which is written in Eq. (10) - (14). Where, \(C_{T}\) is the thrust coefficient of the propeller. \(\Omega_{R}\) and \(\Omega_{L}\) are the rotational speeds of the right and left rotors, \(\gamma_{R}\) and \(\gamma_{L}\) are the tilt angles of the right and left rotors.
\[u =\begin{bmatrix}u_{1}&u_{2}&u_{3}&u_{4}\end{bmatrix}^{T} \tag{10}\] \[u_{1} =C_{T}\left(\Omega_{R}^{2}\cos\gamma_{R}+\Omega_{L}^{2}\cos\gamma _{L}\right)\] (11) \[u_{2} =C_{T}\left(\Omega_{R}^{2}\cos\gamma_{R}-\Omega_{L}^{2}\cos\gamma _{L}\right)\] (12) \[u_{3} =C_{T}\left(\Omega_{R}^{2}\sin\gamma_{R}+\Omega_{L}^{2}\sin \gamma_{L}\right)\] (13) \[u_{4} =C_{T}\left(\Omega_{R}^{2}\sin\gamma_{R}-\Omega_{L}^{2}\sin \gamma_{L}\right) \tag{14}\]
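For illustration, Eqs. (11)-(14) map the two rotor speeds and tilt angles directly to the control inputs; a small sketch using the thrust coefficient from Table 1 is:

```python
import math

def bicopter_inputs(omega_R, omega_L, gamma_R, gamma_L, C_T=0.1222):
    """Control inputs u1..u4 of Eqs. (11)-(14)."""
    u1 = C_T * (omega_R**2 * math.cos(gamma_R) + omega_L**2 * math.cos(gamma_L))
    u2 = C_T * (omega_R**2 * math.cos(gamma_R) - omega_L**2 * math.cos(gamma_L))
    u3 = C_T * (omega_R**2 * math.sin(gamma_R) + omega_L**2 * math.sin(gamma_L))
    u4 = C_T * (omega_R**2 * math.sin(gamma_R) - omega_L**2 * math.sin(gamma_L))
    return u1, u2, u3, u4
```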
Bicopter dynamic movement can be divided into two subsystems, namely the rotation subsystem (_roll_, _pitch_ and _yaw_) as the inner loop and the translation subsystem (\(x\), \(y\) position and \(z\) (altitude) position) as the outer loop. Based on the dynamic model derived using the Newton-Euler approach [5], [6], [26], the equations of translational motion are given in Eq. (15) and those of rotational motion in Eq. (16), where \(s=\sin\) and \(c=\cos\).
\[\ddot{x} =-\frac{1}{m}\left(s\phi s\psi+c\phi s\theta c\psi\right)u_{1}- \frac{c\theta c\psi}{m}u_{3}\] \[\ddot{y} =-\frac{1}{m}\left(-s\phi c\psi+c\phi s\theta s\psi\right)u_{1}+ \frac{c\theta s\psi}{m}u_{3}\] \[\ddot{z} =g-\frac{1}{m}\left(c\phi c\theta\right)u_{1}-\frac{s\theta}{m}u _{3} \tag{15}\]
\begin{table}
\begin{tabular}{l c c c} \hline Parameter & Symbols & Value & Unit \\ \hline Mass of the UAV Bicopter & \(m\) & 0.725 & \(kg\) \\ Gravitational acceleration & \(g\) & 9.81 & \(m.s^{-2}\) \\ Vertical distance between CoG and center of the rotor & \(h\) & 0.042 & \(m\) \\ Horizontal distance CoG and rotor center & \(L\) & 0.225 & \(m\) \\ Thrust coefficient & \(C_{T}\) & 0.1222 & - \\ The Moment of Inertia along x axis & \(I_{xx}\) & \(0.116\times 10^{-3}\) & \(kg.m^{2}\) \\ The Moment of Inertia along y axis & \(I_{yy}\) & \(0.0408\times 10^{-3}\) & \(kg.m^{2}\) \\ The Moment of Inertia along z axis & \(I_{zz}\) & \(0.105\times 10^{-3}\) & \(kg.m^{2}\) \\ \hline \end{tabular}
\end{table}
Table 1: Bicopter dynamic model parameters.
Figure 6: Bicopter reference frames.
\[\ddot{\phi} = \frac{L}{I_{xx}}u_{2}\] \[\ddot{\theta} = \frac{h}{I_{yy}}u_{3}\] \[\ddot{\psi} = \frac{L}{I_{zz}}u_{4} \tag{16}\]
In this paper, the design of the attitude control is the main focus. Based on the illustration in Fig. 8, the rolling motion occurs when there is a difference between the lift forces produced by the right and left rotors while both servomotor tilt angles are zero (\(\cos(0)=1\)), so the rolling case reduces to Eq. (17). The pitching and yawing cases are given in Eq. (18) and Eq. (19), with the parameters described in Table 1.
\[\ddot{\phi} = \frac{L}{I_{xx}}C_{T}\left(\Omega_{R}^{2}-\Omega_{L}^{2}\right)=\frac{L}{I_{xx}}\left(F_{R}-F_{L}\right) \tag{17}\] \[\ddot{\theta} = \frac{h}{I_{yy}}C_{T}\left(\Omega_{R}^{2}\sin\gamma_{R}+\Omega_{L}^{2}\sin\gamma_{L}\right) \tag{18}\] \[\ddot{\psi} = \frac{L}{I_{zz}}C_{T}\left(\Omega_{R}^{2}\sin\gamma_{R}-\Omega_{L}^{2}\sin\gamma_{L}\right) \tag{19}\]
## III Attitude Control Design
The block diagram of the closed-loop control system for the attitude stability of the Bicopter is presented in Fig. 7. From this block diagram, it can be seen that there are four closed loops. The first is the altitude control loop of the Bicopter, while the second, third and fourth loops control the Bicopter's attitude in the roll, pitch and yaw angle orientations.
### PID Attitude Roll Control
The attitude roll is a rotational movement of the Bicopter about the x-axis, which means this attitude movement will cause the translational displacement of the Bicopter on the y-axis to shift to the right and left. An illustration of the rolling motion of the Bicopter is shown in Fig. 8.
By providing an input reference, the set point (SP) signal, in the form of a Bicopter roll angle of 0 degrees, the deviation of the roll angle \((\phi)\) from the reference roll angle \((\phi_{r})\) is defined as the error in Eq. (20). Furthermore, once the error value \((e)\) is known, the error derivative \(\left(\frac{de(t)}{dt}\right)\) can be calculated as shown in Eq. (21).
\[e_{\phi}(t)=\phi-\phi_{r} \tag{20}\]
\[\frac{de_{\phi}(t)}{dt}=\dot{\phi}-\dot{\phi_{r}} \tag{21}\]
Discrete PID control is a form of the analog (continuous) PID control in Eq. (22) that is programmed and executed on a computer or microcontroller. The analog PID control must first be converted to a discrete form before it can be implemented on a computer or microcontroller [27]. The formulation of discrete PID control is given in Eq. (22) - (25).
\[u(t)=K_{p}e(t)+K_{i}\int_{0}^{t}e(t)dt+K_{d}\frac{de(t)}{dt} \tag{22}\]
With \(K_{i}=\frac{1}{\tau_{i}}\) and \(K_{d}=\tau_{d}\), the integral and differential terms can be written in discrete form as in Eq. (23) and Eq. (24), which yields the discrete PID control law in Eq. (25). \(e(k)\) is the current error, \(e(k-1)\) is the previous error and \(T\) is the sampling time.
\[\int_{0}^{t}e(t)dt\approx T\sum_{0}^{k}e(k) \tag{23}\] \[\frac{de(t)}{dt}\approx\frac{e(k)-e(k-1)}{T} \tag{24}\]
\[u(k)=K_{p}e(k)+K_{i}T\sum_{0}^{k}e(k)+\frac{K_{d}}{T}\left(e(k)-e(k-1)\right) \tag{25}\]
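As an illustration of Eq. (23)-(25), a minimal, platform-agnostic sketch of the discrete PID update is shown below (Python is used only for readability; the class, variable names and example call are illustrative and are not taken from the actual firmware).

```python
class DiscretePID:
    """Discrete PID controller following Eq. (25): the integral term is a running
    sum of errors scaled by the sampling time T, and the derivative term is the
    backward difference of Eq. (24)."""

    def __init__(self, kp, ki, kd, T):
        self.kp, self.ki, self.kd, self.T = kp, ki, kd, T
        self.err_sum = 0.0   # running sum of e(k)
        self.prev_err = 0.0  # e(k-1)

    def update(self, setpoint, measurement):
        err = measurement - setpoint             # e(k), sign convention of Eq. (20)
        self.err_sum += err
        deriv = (err - self.prev_err) / self.T   # (e(k) - e(k-1)) / T
        self.prev_err = err
        # Eq. (25): u(k) = Kp*e(k) + Ki*T*sum(e) + Kd*(e(k) - e(k-1))/T
        return self.kp * err + self.ki * self.T * self.err_sum + self.kd * deriv


# Illustrative use: roll loop with the gains of Table 2 and a 2.8 ms sampling time.
pid_roll = DiscretePID(kp=3.3, ki=0.030, kd=23, T=2.8e-3)
u_phi = pid_roll.update(setpoint=0.0, measurement=1.5)  # 1.5 degree roll error
```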
In the case of controlling the attitude roll of the Bicopter using PID control, the output value of the PID roll controller in Eq. (26) will be added or subtracted from the given throttle value depending on the roll angle error condition, and this condition can be explained by looking at the illustration in Fig. 8.
\[u_{\phi}(k)=K_{p\phi}\,e_{\phi}(k)+K_{i\phi}\,T\sum_{0}^{k}e_{\phi}(k)+\frac{K_{d\phi}}{T}\left(e_{\phi}(k)-e_{\phi}(k-1)\right) \tag{26}\]
### PID Attitude Pitch Control
The attitude pitch is the rotational movement of the Bicopter about the _y-axis_, which means this attitude will cause the translational displacement of the Bicopter on the _x-axis_ to shift forwards and/or backwards. In the case of controlling the attitude pitch of the Bicopter using PID control, the output value of the PID pitch controller in Eq. (27) will be added or subtracted from the given CenterServo value depending on the pitch angle error condition; this condition can be explained by looking at the illustration in Fig. 9.
\[u_{\theta}(k)=K_{p\theta}\,e_{\theta}(k)+K_{i\theta}\,T\sum_{0}^{k}e_{\theta}(k)+\frac{K_{d\theta}}{T}\left(e_{\theta}(k)-e_{\theta}(k-1)\right) \tag{27}\]
### PID Attitude Yaw Control
The attitude yaw is a rotational movement of the Bicopter about the z-axis, which means that this attitude movement will cause the rotational movement of the Bicopter to rotate
clockwise (CW) or counterclockwise (CCW). In the case of controlling the attitude yaw Bicopter using PID control, the output value of the yaw PID controller in Eq. (28) will be added or subtracted from the given CenterServo value depending on the yaw angle error condition, and this condition can be explained by looking at the illustration in Fig. 10.
\[u_{\psi}(k)=K_{p\psi}\,e_{\psi}(k)+K_{i\psi}\,T\sum_{0}^{k}e_{\psi}(k)+\frac{K_{d\psi}}{T}\left(e_{\psi}(k)-e_{\psi}(k-1)\right) \tag{28}\]
### PID Controller Tuning
Before applying the PID control parameters to stabilize the attitude of the Bicopter, the PID attitude roll controller is tuned by simulating a dynamic model of the Bicopter attitude roll. Based on Eq. (16), the dynamics of the Bicopter rolling motion take the form of a double integrator. With the moment of inertia \(I_{xx}\) known to be \(0.116\times 10^{-3}\,kg.m^{2}\) and \(L\) equal to \(0.225\,m\), the dynamic attitude roll transfer function is obtained in Eq. (29).
\[\frac{\Phi}{U_{2}}=\frac{1939.7}{s^{2}} \tag{29}\]
Converting Eq. (29) into state-space form, with \(\phi=y\), \(u_{2}=u\), \(x_{1}=y\), \(x_{2}=\dot{y}\), \(\dot{x}_{1}=\dot{y}\) and \(\dot{x}_{2}=\ddot{y}\), gives:
\[\begin{split}\dot{x}_{1}=x_{2}\\ \dot{x}_{2}=1939.7u\end{split} \tag{30}\]
Arranging Eq. (30) in the form \(\dot{x}=Ax+Bu\) gives the matrices \(A\) and \(B\) as follows:
\[\dot{x}=Ax+Bu,\qquad A=\begin{bmatrix}0&1\\ 0&0\end{bmatrix},\qquad B=\begin{bmatrix}0\\ 1939.7\end{bmatrix} \tag{31}\]
It is known that the closed loop characteristic equation of the system in Eq. (31) can be obtained using the formula _det_\((sI-(A-BK))=0\), with the following description:
Figure 7: Control system block diagram for Bicopter flight stability.
\[\begin{split} det\left(sI-\left(A-BK\right)\right)&=0\\ \left|s\begin{bmatrix}1&0\\ 0&1\end{bmatrix}-\left(\begin{bmatrix}0&1\\ 0&0\end{bmatrix}-\begin{bmatrix}0\\ 1939.7\end{bmatrix}\begin{bmatrix}K_{1}&K_{2}\end{bmatrix}\right)\right|&=0\\ \left|\begin{bmatrix}s&0\\ 0&s\end{bmatrix}-\begin{bmatrix}0&1\\ -1939.7K_{1}&-1939.7K_{2}\end{bmatrix}\right|&=0\\ \left|\begin{bmatrix}s&-1\\ 1939.7K_{1}&s+1939.7K_{2}\end{bmatrix}\right|&=0\\ s^{2}+\left(1939.7K_{2}\right)s+1939.7K_{1}&=0\end{split} \tag{32}\]
It is known that the characteristic equation of a second-order closed-loop system can in general be defined as in Eq. (33), where \(\zeta\) is the damping ratio and \(\omega_{n}\) is the natural frequency. Using the substitution method, the gains \(K_{1}\) and \(K_{2}\) can be determined from \(\zeta\) and \(\omega_{n}\) based on the desired system performance.
\[s^{2}+2\zeta\omega_{n}s+\omega_{n}^{2}=0 \tag{33}\]
The closed-loop transfer function (CLTF) of the dynamic attitude roll in Eq. (29) is designed with a proportional-derivative (PD) controller, as shown in Fig. 11. The resulting CLTF is given in Eq. (34).
\[\begin{split}\frac{y(s)}{r(s)}&=\frac{\frac{1939.7K_{d}s+1939.7K_{p}}{s^{2}}}{1+\frac{1939.7K_{d}s+1939.7K_{p}}{s^{2}}}\\ &=\frac{1939.7K_{d}s+1939.7K_{p}}{s^{2}+1939.7K_{d}s+1939.7K_{p}}\end{split} \tag{34}\]
From Eq. (32) and Eq. (34) it can be seen that \(K_{d}=K_{2}\) and \(K_{p}=K_{1}\); therefore, if the system is to have characteristics of the form of Eq. (33), then \(1939.7K_{d}s=2\zeta\omega_{n}s\) and \(1939.7K_{p}=\omega_{n}^{2}\). If the planned closed-loop system in Eq. (34) is to have the characteristic equation \(s^{2}+331s+1950=0\), \(K_{d}\) and \(K_{p}\) are obtained in Eq. (35) and Eq. (36).
\[K_{d}=\frac{331}{1939.7}=0.1706 \tag{35}\]
\[K_{p}=\frac{1950}{1939.7}=1.0053 \tag{36}\]
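The pole-placement computation in Eq. (32)-(36) can be verified numerically. The sketch below (plain Python; the Euler integration of the closed loop is only our illustration, not part of the design procedure) recovers \(K_{p}\) and \(K_{d}\) from the desired characteristic polynomial and simulates the PD-controlled double integrator of Eq. (29) for a unit roll step.

```python
b = 1939.7                      # plant gain from Eq. (29): phi'' = b * u2
kd = 331.0 / b                  # Eq. (35), desired polynomial s^2 + 331 s + 1950
kp = 1950.0 / b                 # Eq. (36)
print(f"Kd = {kd:.4f}, Kp = {kp:.4f}")       # 0.1706 and 1.0053

# Quick closed-loop check: simulate phi'' = b*(Kp*e + Kd*de/dt) for a unit step.
dt, t_end = 1e-4, 0.1
phi, dphi, ref = 0.0, 0.0, 1.0
for _ in range(int(t_end / dt)):
    e = ref - phi
    de = -dphi                  # constant reference, so d(e)/dt = -phi_dot
    ddphi = b * (kp * e + kd * de)
    dphi += ddphi * dt
    phi += dphi * dt
print(f"phi after {t_end} s: {phi:.3f} (settles near the reference of 1.0)")
```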
Figure 11: CLTF with PD controller.
Figure 8: Bicopter attitude roll condition; (a) roll angle with an error (\(+\)) value produces a translational movement on the y-axis, which is shifted to the right, (b) roll angle with an error (\(-\)) value produces a translational movement on the y-axis, which is shifted to the left.
Figure 9: Bicopter attitude pitch condition; (a) pitch angle with error (\(+\)) value produces translational movement on the x-axis, which is shifted forward, (b) pitch angle with error (\(-\)) value produces translational movement on the x-axis, which is shifted backwards.
## IV Results and Discussion
A PID controller was implemented in the experiment to maintain a stable attitude in the roll, pitch and yaw angles of the Bicopter. The test setup is shown in Fig. 12. With the help of the GUI presented in Fig. 4, the PID control parameters were searched through an experimental fine-tuning process by comparing the response results. This tuning process produces the PID controller parameters described in Table 2.
Furthermore, the PID control test is carried out under disturbance, given in the form of wind blown by a fan. The wind is regulated into three modes, with the wind speed measured by an anemometer.
The disturbance tests were carried out in three modes, i.e., three wind strengths. In the first experiment, shown in Fig. 13, the wind strength is set at a speed of about 8 Knots; at about the 300th sample, i.e., at time (300 x 2.8 ms = 8.4 s), the attitude pitch angle of the Bicopter increases to \(5^{\circ}\), meaning that the Bicopter experienced a nose-up condition of \(5^{\circ}\). Over the 28 seconds of the experiment, the attitude pitch of the Bicopter can be maintained. For the attitude roll, the Bicopter experiences shocks with a change of \(\pm 3^{\circ}\).
In the second experiment, the wind strength is increased to 9 Knots. In Fig. 14 it can be seen that from around the 250th sample, at time (250 x 2.8 ms = 7.0 s), the attitude pitch angle of the Bicopter undergoes a nose-up process; by the 1000th sample, at time (1000 x 2.8 ms = 28.0 s), the attitude pitch angle of the Bicopter is around \(7^{\circ}\), and the Bicopter experiences attitude roll shocks with a change of \(\pm 4^{\circ}\).
In the third experiment, the wind strength was set at around 10 Knots. As seen in Fig. 15, the attitude pitch reached \(11^{\circ}\) and the attitude roll shocks also increased, with a change of \(\pm 6^{\circ}\). The RMSE values of the attitude roll and pitch when tested on the test bed are presented in Table 3.
Table 3 shows the RMSE attitude of the Bicopter when tested on a test bed with three variations of static disturbance. The RMSE attitude measures how much the Bicopter's attitude (roll and pitch) deviates from the desired attitude; the higher the RMSE, the larger the deviation. The RMSE attitude of the Bicopter increases as the wind power from the fan increases, because the wind gusts cause the Bicopter to wobble, which increases the deviation of its attitude from the setpoint. Table 3 also shows that the RMSE is higher for pitch than for roll, consistent with the observation that the wind mainly pushes the nose up while the roll axis only shows smaller oscillations; roll is actuated directly by the differential thrust of the two rotors lying in the same plane, whereas pitch relies on tilting the rotors with the servomotors, which likely makes the pitch axis more sensitive to the wind disturbance. Overall, the RMSE attitude of the Bicopter grows with the wind power from the fan, and this information can be used to design a Bicopter that is more stable in windy conditions.
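For reference, the RMSE values in Table 3 correspond to the usual root-mean-square deviation of the logged attitude angle from its setpoint; a short sketch is given below (the sample values are illustrative, not the actual flight log).

```python
import numpy as np

def attitude_rmse(angles_deg, setpoint_deg=0.0):
    """Root-mean-square error of a logged attitude angle with respect to its setpoint."""
    angles = np.asarray(angles_deg, dtype=float)
    return float(np.sqrt(np.mean((angles - setpoint_deg) ** 2)))

roll_log = [0.5, -1.2, 2.0, -2.5, 1.8, -0.7]   # illustrative roll samples in degrees
print(attitude_rmse(roll_log))
```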
In subsequent tests, a PID controller was implemented to maintain stable roll, pitch and yaw angle attitudes on the
\begin{table}
\begin{tabular}{l l l l} \hline \hline Parameter & Roll & Pitch & Yaw \\ \hline \(K_{p}\) & 3.3 & 3.3 & 6.8 \\ \(K_{i}\) & 0.030 & 0.030 & 0.045 \\ \(K_{d}\) & 23 & 23 & 0 \\ \hline \hline \end{tabular}
\end{table} TABLE 2: PID control parameters on attitude Bicopter using the test bed rig.
Figure 10: Bicopter attitude yaw condition; (a) yaw angle with error (\(+\)) value produces CCW rotational movement, (b) yaw angle with error (\(-\)) value produces CW rotational movement.
\begin{table}
\begin{tabular}{l l l l} \hline \hline \multirow{2}{*}{Attitude} & \multicolumn{3}{c}{Wind power from the fan} \\ \cline{2-4} & 8 Knots & 9 Knots & 10 Knots \\ \hline Roll & 1.8868 & 2.7628 & 3.9183 \\ Pitch & 3.6764 & 4.2332 & 9.9868 \\ \hline \hline \end{tabular}
\end{table} TABLE 3: RMSE attitude of Bicopter when tested on a test bed with three variations of static noise.
Bicopter when flying indoors. The indoor flight test setup is shown in Fig. 18. Tuning the PID controller parameters with the experimental fine-tuning process yielded the parameters presented in Table 4. Figure 16 shows the attitude roll and pitch responses during the flight test, and Figure 17 shows the yaw attitude response. The RMSE values of the attitude roll and pitch in the flight tests are presented in Table 5.
The sampling period of 2.8 ms indicates that the data was collected at a rate of 1000/2.8 = 357 Hz. The attitude response shows that the Bicopter is able to track the desired attitude with a reasonable degree of accuracy.
## V Conclusion
The PID controller implemented on the attitude of the Bicopter has been tested for robustness using a test bed with several variations of wind strength. In the static disturbance tests, when the wind strength is set at a speed of 10 Knots, the RMSE value for attitude roll is 3.9183 and the RMSE value for attitude pitch is 9.9868. During the flight test, the PID controller maintained a stable attitude of the UAV Bicopter with an RMSE of 2.3728 for attitude roll and 4.4219 for attitude pitch.
The mechanical design of the UAV Bicopter was developed using the concept of a "V"-shaped frame so that the center of mass (CoM) of the UAV Bicopter lies in a position that keeps the servomotor torque reaction parallel to the axis of rotation of the Bicopter when the attitude pitch angle moves. The electronic design of the UAV Bicopter was developed on the principle of low cost using the ATmega328P microcontroller.
## Acknowledgment
This work was supported by the Indonesian Postgraduate Domestic Education Scholarship (BPPDN) with contract number 2974/UN1.P.IV/KPT/DSDM/2019.
|
2309.17329 | Efficient Anatomical Labeling of Pulmonary Tree Structures via Implicit
Point-Graph Networks | Pulmonary diseases rank prominently among the principal causes of death
worldwide. Curing them will require, among other things, a better understanding
of the many complex 3D tree-shaped structures within the pulmonary system, such
as airways, arteries, and veins. In theory, they can be modeled using
high-resolution image stacks. Unfortunately, standard CNN approaches operating
on dense voxel grids are prohibitively expensive. To remedy this, we introduce
a point-based approach that preserves graph connectivity of tree skeleton and
incorporates an implicit surface representation. It delivers SOTA accuracy at a
low computational cost and the resulting models have usable surfaces. Due to
the scarcity of publicly accessible data, we have also curated an extensive
dataset to evaluate our approach and will make it public. | Kangxian Xie, Jiancheng Yang, Donglai Wei, Ziqiao Weng, Pascal Fua | 2023-09-29T15:40:58Z | http://arxiv.org/abs/2309.17329v2 | # Efficient Anatomical Labeling of Pulmonary Tree Structures via Implicit Point-Graph Networks
###### Abstract
Pulmonary diseases rank prominently among the principal causes of death worldwide. Curing them will require, among other things, a better understanding of the many complex 3D tree-shaped structures within the pulmonary system, such as airways, arteries, and veins. In theory, they can be modeled using high-resolution image stacks. Unfortunately, standard CNN approaches operating on dense voxel grids are prohibitively expensive. To remedy this, we introduce a point-based approach that preserves graph connectivity of tree skeleton and incorporates an implicit surface representation. It delivers SOTA accuracy at a low computational cost and the resulting models have usable surfaces. Due to the scarcity of publicly accessible data, we have also curated an extensive dataset to evaluate our approach and will make it public.
pulmonary tree labeling, graph, point cloud, implicit function, 3D deep learning.
## I Introduction
In recent years, since pulmonary diseases [1, 2, 3] have become the leading causes of global mortality [4], pulmonary research has gained increasing attention. In studies related to pulmonary disease, understanding pulmonary anatomies through medical imaging is important due to the known association between pulmonary disease and inferred metrics from lung CT images [5, 6, 7, 8, 9, 10].
The tree-shaped pulmonary structures--airways, arteries, and veins, as depicted by Fig. 1--have high branching factors and play a crucial role in the respiratory system in the lung. Multi-class semantic segmentation of the pulmonary trees, where each class represents a specific division or branch of tree according to the medical definition of the pulmonary segments, is an effective approach to modeling their intricacies. In pulmonary tree labeling, the derived quantitative characteristics [6, 8, 11] not only associate with lung diseases and pulmonary-related medical applications [9, 10] but are also crucial for surgical navigation [7]. This work focuses on methodologies for efficient and accurate anatomical labeling of pulmonary airways, arteries, and veins.
Among deep learning approaches, convolutional neural networks (CNN) have become the _de facto_ standard approach to semantic segmentation [12, 13]. One of their strengths is that they yield volumes with well-defined surfaces. However, they are computationally demanding when processing large 3D volumes and often deliver unsatisfactory results when operating at a reduced resolution (Fig. 2 (a)) or local patches (Fig. 2 (b)), either leading to a lack of details or global context. In contrast, point-cloud representations [14, 15] have lower computational requirements while preserving global structures (Fig. 2 (c)). Besides, considering the inherent tree structures of pulmonary airways, arteries and veins, graph modeling (Fig. 2 (d)) that preserves the connectivity and structural topology is also viable [16, 17, 18]. Nevertheless, extracting usable surfaces from point clouds or graphs remains non-trivial.
To be computationally efficient while enabling continuous surface definition and tree topology preservation, as illustrated in Fig. 2 (e), we introduce an approach that combines skeleton graph and point representations, with an implicit surface representation to yield a feature field. Connectivity constraints, based on the skeleton graphs, are imposed on the surfaces reconstructed from the feature field, achieved at a low computational cost without sacrificing accuracy. The proposed _Implicit Point-Graph Network (IPGN)_ includes backbone point and graph networks, _Point-Graph Fusion_ layers for deep feature fusion and an _Implicit Point Module_ to model the implicit surface in 3D, allowing for fast classification inference at arbitrary locations. Furthermore, thanks to the flexibility of implicit representations, the IPGN trained for pulmonary tree labeling can be extended to pulmonary
Fig. 1: **Pulmonary Tree Labeling.** (a) The pulmonary tree consists of three anatomic structures (airway, artery and vein). (b) Given a binary volume representing a tree structure as input, we attempt to label each voxel into one of 19 classes based on the branching regions, _i.e._, the pulmonary segments.
by simply modifying the inference method (Sec. VI).
As illustrated by Fig. 7, our approach produces high-quality dense reconstructions of pulmonary structures at an acceptable computational cost. To evaluate it quantitatively, we compiled the Pulmonary Tree Labeling (PTL) dataset illustrated in Fig. 1. It contains manual annotations for 19 different components of pulmonary airways, arteries, and vein, which will be made publicly available as a multi-class semantic segmentation benchmark for pulmonary trees1. Our method achieves SOTA performance on this dataset while being the most computationally efficient.
Footnote 1: An URL with data and code will be provided in the final paper version.
## II Related Works
### _Pulmonary Anatomy Segmentation_
In pulmonary-related studies, image-based understanding of pulmonary anatomies is important as metrics inferred from lung imaging have shown to be related to severity, development and therapeutic outcome of pulmonary diseases [5, 6, 7, 8, 9, 10].
Previously, CNN-based methods have been tailored for comprehension of different pulmonary anatomies, such as pulmonary lobe [19], airway [20], artery-vein [21, 22] and fissure [23].
Previous works specifically on pulmonary tree segmentation either apply graph-only modeling or leverage graph-CNN multi-task learning. A Tree-Neural Network [16] leverages handcrafted features on a tailored hyper-graph for pulmonary bronchial tree segmentation to address the inherent problem of overlapping distribution of features. A recent [18] work on airway segmentation proposes graph modeling to incorporate structural context to local CNN feature from each airway branch. Yet, both works merely provide labeling to pulmonary tree branches that are pre-defined at pixel level, and thus are not for semantic segmentation, where defining accurate borders between neighboring branches remains a challenge. Applicable to binary or raw images of pulmonary structures, SG-Net [17] employs CNN features for detecting of landmarks, constructing graphs for CNN-graph multi-task learning.
Although these methods are graph-based, the graph construction procedure varies. While one treats each pre-defined branch as a node [18], disrupting the original spatial structure, another is parameter-based [16], making the quality of the constructed tree highly dependent on the parameter selection, and finally, SG-Net [17] establishes its graph nodes by learned landmark prediction, whose structural quality cannot be ensured. In our setup, the skeleton graphs are based on a thinning algorithm [24], with no modeling or hyper-parameter tuning involved, and all spatial and connection relationships between tree segments are acquired directly from the original dense volume (Fig. 1). Additionally, as CNN methods incur a memory footprint that grows cubically, they are expensive when facing large 3D volumes.
### _3D Deep Learning_
Deep learning on 3D geometric data is a specialized field that focuses on extending neural networks to handle non-Euclidean domains such as point clouds, graphs (or meshes), and implicit surfaces.
Point-based methods have emerged as a novel approach for 3D geometric modeling. As sparse 3D data, point cloud can be modeled in a variety of ways, such as multi-layer perception [25, 26], convolution [27, 28], graph [29] and transformer [30, 31]. Point-based methods have also been validated in the medical image analysis domain [14, 15].
Since the initial introduction [32], graph learning has become a powerful tool for analyzing graph-structured data, being widely applied to medical images analysis [16, 17, 18] or bioinformatics [33]. Meshes represent a specialized form of graph, typically characterized by a collection of triangular faces to depict the object surfaces, and are extensively employed in the field of graphics. Additionally, they also maintain some applications in the domain of medical imaging [34, 35].
While point-based methods enable computation-efficient modeling on volumetric data, graph learning is lightweight and learns structural context within geometric data. Combining point and graph methods is advantageous in our task. However, extracting surfaces (or dense volumes) from sparse point-based or graph-based prediction is non-trivial, which leads us to introduce implicit surface representation to address this issue.
Deep implicit functions have been successful at modeling 3D shapes [36, 37, 38, 39, 40, 41, 42]. Typically, the implicit function predicts occupancy or signed distance at continuous locations, thus capable of reconstructing 3D shapes at arbitrary resolutions [43]. Moreover, implicit networks could be trained with randomly sampled points, which lowers the computation burden during training. These characteristics suggest that implicit functions have inherent advantages when reconstructing the sparse prediction of pulmonary tree labeling into dense.
Fig. 2: **A Comparison of Data Representation for Pulmonary Tree Labeling.** The CNN-based methods are either low-resolution (down-sampled) or local (sliding-window). The standard sparse representation like point and graph is global but it is not-trivial to reconstruct high-quality dense volume. Our method that combines point, graph, and implicit functions produces high-quality dense reconstruction efficiently.
## 3 Problem Formulation
### Pulmonary Tree Labeling
In this study, we address the pulmonary tree labeling problem in 3D CT images. Specifically, given a binary volumetric image of a pulmonary tree, our objective is to provide accurate 19-class segmentation of the pulmonary airway, artery, and vein trees into segmental-level branches and components, demonstrated in Fig. 1 (b). Through this process, each foreground pixel will be assigned to its respective semantic class.
When evaluating the segmentation performance, we consider the presence of intersegmental veins and the potential ambiguity in their class assignment. Intersegmental veins are veins that lie along the border between two neighboring pulmonary segments [44], highlighted in Fig. 3. As pulmonary tree branches are involved in the boundary definition of pulmonary segments [43], intersegmental veins pose an inherent challenge in their class definition. To address this issue, we mask out the intersegmental veins during the evaluation and only focus on segmentation of the pulmonary airway, artery, and veins within each individual segment.
### Dataset
#### 3.2.1 Overview
From multiple medical centers, we compile the Pulmonary Tree Labeling (PTL) dataset containing annotated pulmonary tree structures for 800 subjects. For each subject, there are 3 volumetric images containing its pulmonary airway, artery, and vein, illustrated in Fig. 1 (b). In each volume, the annotations consist of 19 classes, where labels 1 to 10 represent branches located in the left lung, labels 11 to 18 represent those in the right lung, and class 19 represents extra-pulmonary structures while 0 is background.
All 3D volumes have shapes \(N\times 512\times 512\), where \(512\times 512\) represents the CT slice dimension, and \(N\) denotes the number of CT slices, ranging from 181 to 798. Z-direction spacings of these scans range from 0.5mm to 1.5mm. Manual annotations are produced by a junior radiologist and verified by a senior radiologist. During the modeling process, the images are converted to binary as input, and the original annotated image is the prediction target. The dataset is randomly split into 70% train, 10% validation, and 20% test samples.
#### 3.2.2 Skeleton Graph
We utilize the software application VesselVio [45], which applies a thinning algorithm [24], to pre-process the volumetric data and derive a skeleton graph in 3D space for each pulmonary structure, illustrated by Fig. 2 (d). The graphs consist of nodes that represent branch bifurcation points and edges that represent anatomical connections. In this manually derived graph dataset, the label for each graph node and edge is recovered from the dense volume. Therefore, the task within this graph dataset involves performing a 19-class classification for both graph nodes and edges.
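The sketch below gives a rough, simplified illustration of this pre-processing step (our own example built on NumPy, SciPy and NetworkX, not the VesselVio implementation, which additionally prunes the skeleton and contracts chains of degree-2 voxels into branch edges): a binary 3D skeleton, e.g. produced by a thinning algorithm such as `skimage.morphology.skeletonize`, is turned into a voxel-adjacency graph in which bifurcation points and endpoints are flagged.

```python
import numpy as np
import networkx as nx
from scipy import ndimage

def skeleton_to_graph(skeleton: np.ndarray) -> nx.Graph:
    """Build a voxel-adjacency graph from a 3D binary skeleton. Voxels whose
    26-neighbour count differs from 2 are flagged as endpoints/bifurcations;
    a full pipeline would contract the degree-2 chains between them into
    single branch edges to obtain a compact skeleton graph."""
    skel = skeleton.astype(bool)
    kernel = np.ones((3, 3, 3), dtype=int)
    kernel[1, 1, 1] = 0
    n_nb = ndimage.convolve(skel.astype(int), kernel, mode="constant")

    graph = nx.Graph()
    coords = [tuple(c) for c in np.argwhere(skel)]
    voxels = set(coords)
    for c in coords:
        graph.add_node(c, is_branch_point=bool(n_nb[c] != 2))
    offsets = [(dz, dy, dx)
               for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dz, dy, dx) != (0, 0, 0)]
    for z, y, x in coords:
        for dz, dy, dx in offsets:
            nb = (z + dz, y + dy, x + dx)
            if nb in voxels:
                graph.add_edge((z, y, x), nb)
    return graph
```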
### Evaluation Metrics
For segmentation performance evaluation, classification accuracy and micro-averaged dice scores are used as metrics. For point-level results on dense volume, the classification accuracy measures the percentage of correctly classified points in the dense volume. The dice score, on the other hand, assesses the overlap between the predicted segment and the ground truth segment, providing a measure of similarity. For graph-level node and edge classification, the same metrics can be applied. The classification accuracy measures the percentage of correctly classified nodes and edges in the pre-processed skeleton graph dataset (Sec. 3.2.2). The dice score can also be used to assess the similarity of the predicted graph structure with the ground truth graph structure.
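One plausible implementation of these two metrics is sketched below (label 0 is treated as background and excluded, consistent with the evaluation protocol; the function is a simplified illustration rather than the exact evaluation code).

```python
import numpy as np

def accuracy_and_micro_dice(pred, target, num_classes=19):
    """Foreground classification accuracy and micro-averaged Dice over classes 1..num_classes."""
    pred, target = np.asarray(pred).ravel(), np.asarray(target).ravel()
    fg = target > 0                      # evaluate foreground voxels/points/nodes only
    acc = float((pred[fg] == target[fg]).mean())

    tp = fp = fn = 0
    for c in range(1, num_classes + 1):
        p, t = pred[fg] == c, target[fg] == c
        tp += int(np.logical_and(p, t).sum())
        fp += int(np.logical_and(p, ~t).sum())
        fn += int(np.logical_and(~p, t).sum())
    dice = 2.0 * tp / (2.0 * tp + fp + fn + 1e-8)
    return acc, dice
```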
## 4 Methodology
Our objective is centered around the anatomical labeling of pulmonary trees. That is to say, given an input of binary volumes of a pulmonary tree--derivable from manual annotations or model predictions--we aim to execute a 19-class semantic segmentation on every non-zero voxel. However, diverging from standard methodologies reliant on CNNs, we down-sample the raw dense volume to a point cloud while concurrently extracting a skeleton graph. Our approach (Sec. 4.1) engages in representation learning individually on the sparse representations of both the point cloud and the skeleton graph, and a fusion module is employed to perform deep integration of point-graph representations (Sec. 4.2). Ultimately, to reconstruct the predictions based on sparse representations back to dense, we introduce implicit functions to facilitate efficient reconstruction (Sec. 4.3).
### Implicit Point-Graph Network Architecture
Given a binary volumetric image of a pulmonary tree (Fig. 4 (a)), a graph (Fig. 4 (b)) is constructed with VesselVio [45] from the original volume and a set of points are randomly sampled from the tree voxels to construct a point cloud (Fig. 4 (c)). While the point cloud is a sparse representation of the volume, the graph represents a skeleton of the pulmonary tree.
We first introduce the general notation rule for both point and graph elements. While the coordinates of the \(M\) points and \(N\) graph nodes are represented as **P** and **G**, a single point or graph element is expressed as **p** and **g**, where \(\textbf{P}=\{\textbf{p}_{1},\textbf{p}_{2},...,\textbf{p}_{M}\}\), \(\textbf{G}=\{\textbf{g}_{1},\textbf{g}_{2},...,\textbf{g}_{N}\}\). The superscript notation \(\textbf{p}^{(i)}\) represents an element's feature at the \(i\)-th network layer.
At input, the 3-dimensional \(\{x,y,z\}\) point coordinates, **P**\(\in\mathbb{R}^{M\times 3}\) and graph nodes, \(\textbf{G}\in\mathbb{R}^{N\times 3}\) are utilized as initial feature. We use a point neural network, and a graph neural
Figure 3: **Visualization of Pulmonary Tree and Pulmonary Segment Anatomy. Each pulmonary tree branch corresponds to a pulmonary segment. The intersegmental vein, which lies along the pulmonary segment border, is highlighted in red.**
network as initial feature encoders, from which we extract a 128-dimensional intermediate feature for each point and graph node, expressed as \(\textbf{P}^{(0)}\in\mathbb{R}^{M\times 128}\) and \(\textbf{G}^{(0)}\in\mathbb{R}^{N\times 128}\).
Subsequently, the initial features from both branches, \(\textbf{P}^{(0)}\) and \(\textbf{G}^{(0)}\), are incorporated within one or multiple _Point-Graph Fusion_ layers, which allow for two-way feature integration based on feature propagation [46] and ball-query&grouping [26]. Let the input to a Point-Graph Fusion layer be defined as \(\textbf{P}^{(i-1)}\) and \(\textbf{G}^{(i-1)}\); the features out of the fusion layer are \(\textbf{P}^{(i)}\) and \(\textbf{G}^{(i)}\). The last Point-Graph Fusion layer outputs \(\textbf{P}^{(l)}\) and \(\textbf{G}^{(l)}\) after \(l\) Point-Graph Fusion layers of deep feature fusion. Finally, a lightweight MLP network and a GNN project the fused features to 19-dimensional vectors for graph (Fig. 4 (d)) and point predictions (Fig. 4 (e)).
An _Implicit Point Module_ is further introduced to reconstruct the dense volumes; it consists of a feature propagation process and an MLP network. As features are extracted by the Point-Graph Network, the Implicit Point Module leverages the extracted multi-stage point features for fast dense volume segmentation. Given a query point \(\textbf{p}_{q}\) at arbitrary coordinates, the module locates \(\textbf{p}_{q}\)'s \(k\)-nearest point elements from the point cloud, \(\{\textbf{p}_{1},\textbf{p}_{2},...,\textbf{p}_{k}\}\), and extracts their multi-stage features \(\{\textbf{z}_{1},\textbf{z}_{2},...,\textbf{z}_{k}\}\) from the backbone network for feature propagation into a multi-stage representation \(\textbf{z}_{q}\) of the query point \(\textbf{p}_{q}\). After propagating the point feature \(\textbf{z}_{q}\), the MLP network \(\mathcal{H}\) is utilized to make class predictions. By applying this process to all foreground points, we can efficiently generate a dense volume reconstruction (Fig. 4 (f)).
To avoid naming ambiguity, we refer to the aforementioned complete network as _Implicit Point-Graph Network (IPGN)_, and that sans the implicit part as _Point-Graph Network (PGN)_.
### Point-Graph Feature Fusion
The essence of our point-graph fusion learning approach lies in leveraging coordinate information as a basis for querying neighboring elements in the opposite branch. To achieve feature integration, we adopt ball-query & grouping for point-to-graph feature merging and feature propagation for graph-to-point feature merging.
For the ball-query & grouping method in the \(i\)-th Point-Graph Fusion layer, a graph node **g** searches for all opposite point elements within a given ball with radius \(r\), denoted \(\{\textbf{p}_{1},\textbf{p}_{2},...,\textbf{p}_{b}\}\). Then, an MLP module \(\mathcal{F}_{1}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) independently projects the point feature vectors to updated representations of all queried points. A feature-wise max-pooling layer then aggregates all updated point features into the point representation of the node **g**, expressed as:
\[\textbf{g}^{(i)}_{bg}=\max_{j}\left(\mathcal{F}_{1}(\textbf{p}^{(i-1)}_{j})\right) \tag{1}\]
Subsequently, the ball-queried feature \(\textbf{g}^{(i)}_{bg}\) is combined with the current feature \(\textbf{g}^{(i-1)}\) before using a graph neural network \(\mathcal{G}:\mathbb{R}^{2D}\rightarrow\mathbb{R}^{D_{next}}\) to perform graph convolution for feature fusion, resulting in an updated feature representation of the node, \(\textbf{g}^{(i)}\in\mathbb{R}^{D_{next}}\), as input to the next Point-Graph Fusion layer.
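A compact PyTorch sketch of this point-to-graph step is given below (our illustration only: tensors are unbatched for brevity, the fallback for an empty ball is our own choice, and the graph update is written as a per-node MLP instead of the GAT layer used in the actual network).

```python
import torch
import torch.nn as nn

class PointToGraphFusion(nn.Module):
    """Ball-query & grouping (Eq. 1): each graph node max-pools the MLP-projected
    features of the points inside a ball of radius r around it, then the pooled
    feature is concatenated with the node feature and projected again."""

    def __init__(self, dim=128, radius=0.1, max_points=24):
        super().__init__()
        self.radius, self.max_points = radius, max_points
        self.f1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, p_xyz, p_feat, g_xyz, g_feat):
        # p_xyz: (M, 3), p_feat: (M, D), g_xyz: (N, 3), g_feat: (N, D)
        proj = self.f1(p_feat)                                    # F1 applied per point
        dist = torch.cdist(g_xyz, p_xyz)                          # (N, M) distances
        pooled = []
        for n in range(g_xyz.shape[0]):
            idx = torch.nonzero(dist[n] < self.radius).flatten()[: self.max_points]
            if idx.numel() == 0:                                  # empty ball: zero feature
                pooled.append(torch.zeros_like(g_feat[n]))
            else:
                pooled.append(proj[idx].max(dim=0).values)        # feature-wise max pooling
        pooled = torch.stack(pooled)                              # (N, D), Eq. (1)
        return self.update(torch.cat([g_feat, pooled], dim=-1))   # updated node features


# Toy usage with random data: 6000 points, 300 graph nodes, 128-dim features.
p_xyz, p_feat = torch.rand(6000, 3), torch.rand(6000, 128)
g_xyz, g_feat = torch.rand(300, 3), torch.rand(300, 128)
g_new = PointToGraphFusion()(p_xyz, p_feat, g_xyz, g_feat)        # (300, 128)
```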
For feature fusion from graph to point, feature propagation is utilized. In the process, each query point **p** with feature \(\textbf{p}^{(i-1)}\in\mathbb{R}^{D}\) at the \(i\)-th fusion layer locates its \(k\)-nearest graph nodes \(\{\textbf{g}_{1},\textbf{g}_{2},...,\textbf{g}_{k}\}\) in coordinate space. With these \(k\)-nearest neighbors, the query point **p** acquires a summarized graph feature \(\textbf{p}^{(i)}_{fp}\) by weighted summation (Eq. 2) of the \(k\) node features \(\{\textbf{g}^{(i-1)}_{1},\textbf{g}^{(i-1)}_{2},...,\textbf{g}^{(i-1)}_{k}\}\in\mathbb{R}^{D}\), where the weights are based on the normalized reciprocal distances. Letting the distances between the query point and the \(k\) neighbor nodes be \(\{d_{1},d_{2},...,d_{k}\}\), the propagation can be expressed as:
Figure 4: **Overview of the proposed Implicit Point-Graph Network (IPGN) for Pulmonary Tree Labeling. The pipeline pre-processes dense volume to graph and point cloud input for feature fusion learning. The _Point-Graph Fusion_ layers enhance point features with graph context, and the _Implicit Point Module_ produces dense prediction efficiently.**
Figure 5: **Point-Graph Fusion Layer. The details of the Point-Graph Fusion layer are presented. Here, \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) are MLP networks while \(\mathcal{G}\) represents a graph neural network.**
\[\mathbf{p}_{fp}^{(i)}=\frac{\sum_{j=1}^{k}\mathbf{g}_{j}^{(i-1)}\times\frac{1}{d_{j}}}{\sum_{l=1}^{k}\frac{1}{d_{l}}} \tag{2}\]
Then, the aggregated feature for point **p**, \(\mathbf{p}_{fp}^{(i)}\in\mathbb{R}^{D}\), is concatenated with the incoming point feature \(\mathbf{p}^{(i-1)}\) to create \(\mathbf{p}_{concat}^{(i)}\in\mathbb{R}^{2D}\). Finally, an MLP module \(\mathcal{F}_{2}:\mathbb{R}^{2D}\rightarrow\mathbb{R}^{D_{next}}\) projects the concatenated point feature to the input dimension of the next layer as \(\mathbf{p}^{(i)}\in\mathbb{R}^{D_{next}}\).
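The graph-to-point direction thus amounts to inverse-distance-weighted \(k\)-NN interpolation (Eq. 2) followed by concatenation and an MLP. A minimal PyTorch sketch is shown below (again unbatched and illustrative; \(\mathcal{F}_{2}\) is reduced to a single linear layer, and a small epsilon is added to the reciprocal distances as a numerical safeguard that is not part of Eq. 2).

```python
import torch
import torch.nn as nn

def propagate_graph_to_points(p_xyz, g_xyz, g_feat, k=3, eps=1e-8):
    """Eq. (2): each point gathers the features of its k nearest graph nodes,
    weighted by normalized reciprocal distances."""
    dist = torch.cdist(p_xyz, g_xyz)                 # (M, N)
    d, idx = dist.topk(k, dim=-1, largest=False)     # k nearest nodes per point
    w = 1.0 / (d + eps)
    w = w / w.sum(dim=-1, keepdim=True)              # normalize the weights
    neigh = g_feat[idx]                              # (M, k, D)
    return (w.unsqueeze(-1) * neigh).sum(dim=1)      # (M, D) propagated feature

# Concatenate with the incoming point feature and project (F2 in the text).
p_xyz, p_feat = torch.rand(6000, 3), torch.rand(6000, 128)
g_xyz, g_feat = torch.rand(300, 3), torch.rand(300, 128)
p_fp = propagate_graph_to_points(p_xyz, g_xyz, g_feat)
f2 = nn.Linear(2 * 128, 128)
p_new = f2(torch.cat([p_feat, p_fp], dim=-1))        # (6000, 128)
```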
### Implicit Dense Volume Reconstruction
To acquire dense volume segmentation results, the naive method is to sample all points from the pulmonary tree and group them into multiple non-overlapping point clouds for full inference. For example, for a point cloud containing \(6k\) points, when the total number of foreground points in the dense volume is around \(180k\), the number of forward passes required to reconstruct the dense volume would be around \(30\). However, repeated inference is computationally inefficient because the graph input remains identical and the point cloud is globally invariant during the \(30\) inferences.
To avoid repetitive computation, we propose the _Implicit Point Module_ in Fig. 6 for efficient and arbitrary point inference, enabling fast dense volume reconstruction in 3-steps. First, for arbitrary point coordinates \(\mathbf{p}_{q}=(x_{q},y_{q},z_{q})\in\mathbb{R}^{3}\) as input, its \(k\) nearest-neighbor points \(\{\mathbf{p}_{1},\mathbf{p}_{2},...,\mathbf{p}_{k}\}\) in point cloud (Fig. 6 (a)) are queried. Second, for the \(i\)-th nearest neighbor point \(p_{i}\), its corresponding features in different stages of the network are extracted and concatenated to form a multi-stage feature vector \(\mathbf{z}_{i}=\{\mathbf{p}_{i}^{(0)}\frown\mathbf{p}_{i}^{(1)}\frown... \frown\mathbf{p}_{i}^{(l)}\}\), where \(l\) denotes the number of Point-Graph Fusion layers. A feature propagation (Fig. 6 (b)), similar to Eq. 2, is performed to aggregate \(\{\mathbf{z}_{1},\mathbf{z}_{2},...,\mathbf{z}_{k}\}\) into the feature representation \(\mathbf{z}_{q}\) for the query point. Finally, an MLP network \(\mathcal{H}\) projects the feature \(\mathbf{z}_{q}\) into a 19-dimensional vector for final classification (Fig. 6 (c)). For dense volume segmentation results, simply sample all foreground points and query through the module for prediction.
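Putting the pieces together, the implicit query at an arbitrary coordinate reduces to one \(k\)-NN feature propagation over the cached multi-stage point features followed by the MLP head \(\mathcal{H}\). The PyTorch sketch below is only illustrative: the feature dimension, chunk size and random inputs are placeholders rather than the actual configuration.

```python
import torch
import torch.nn as nn

class ImplicitPointModule(nn.Module):
    """Classify arbitrary query coordinates from cached multi-stage point features:
    k-NN feature propagation (as in Eq. 2) followed by an MLP head."""

    def __init__(self, feat_dim, num_classes=19, k=3):
        super().__init__()
        self.k = k
        self.head = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                  nn.Linear(128, num_classes))

    def forward(self, q_xyz, p_xyz, p_multistage, eps=1e-8):
        # q_xyz: (Q, 3) queries; p_xyz: (M, 3) point cloud; p_multistage: (M, F)
        d, idx = torch.cdist(q_xyz, p_xyz).topk(self.k, dim=-1, largest=False)
        w = 1.0 / (d + eps)
        w = w / w.sum(dim=-1, keepdim=True)
        z_q = (w.unsqueeze(-1) * p_multistage[idx]).sum(dim=1)    # (Q, F)
        return self.head(z_q)                                      # (Q, num_classes)


# Dense reconstruction: query every foreground voxel centre in chunks.
module = ImplicitPointModule(feat_dim=3 * 128)        # e.g. three cached 128-d stages
p_xyz, p_feat = torch.rand(6000, 3), torch.rand(6000, 3 * 128)
queries = torch.rand(180_000, 3)                      # stand-in for all foreground voxels
labels = torch.cat([module(q, p_xyz, p_feat).argmax(-1)
                    for q in queries.split(5_000)])   # (180000,) predicted classes
```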
### Model Details
The IPGN is a customizable pipeline. For ball query in point-to-graph fusion, we set ball radius \(r\) = 0.1, and the maximum queried point is 24. For feature propagation in both graph-to-point fusion and the Implicit Point Module, we set \(k\)=3 for the \(k\)-nearest neighbor search. During the entire Point-Graph Network backbone, we set the intermediate fusion features to be 128-dimensional.
#### 4.1.1 Training
At the point branch, we experimented with two point models, PointNet++ [26] or Point Transformer [31] as initial feature encoders. For the graph encoder, we apply an 11-layer GAT [47] as the initial feature encoder. Prior to the training of the network, the feature encoders at point and graph branches are independently trained on the corresponding point and graph segmentation tasks and kept frozen.
The Point-Graph Network and the Implicit Point Module are trained simultaneously. First, we perform a forward pass on the Point-Graph Network, generating multi-stage point and graph features as well as predictions for the \(M\) input points and \(N\) graph elements. Subsequently, another set of \(M^{\prime}\) foreground points are randomly sampled, and perform forward inference on the Implicit Point Module for \(M^{\prime}\) predictions. After acquiring \(M+M^{\prime}\) point and \(N\) graph predictions, we apply the Cross-Entropy loss for training.
The two point encoder candidates: PointNet++ and Point Transformer, are both trained for 120 epochs with a learning rate of 0.002 while the GAT graph encoder is trained for 240 epochs with a learning rate of 0.02. The IPGN pipeline is trained for 100 epochs with a learning rate of 0.01 and for every 45 epochs, the learning rates are divided by 2. To improve model robustness, we employ random rotation, shift, and scaling as data augmentation during training.
#### 4.1.2 Inference
Testing on the dense volume input involves a 2-step procedure. First, the input point cloud and graph perform inference and generate multi-stage features through the backbone Point-Graph Network. In the second step, we sample all foreground points on the pulmonary tree structure and feed them to the Implicit Point Module in an iterative manner for predictions. With predictions for all dense volume elements, we simply reconstruct the volume by placing the predicted point labels at the 3D coordinates.
## 5 Experiments
### Experiment Setting
By default, all experiments use dense volume as initial input. While CNN is natural for manipulating dense volume, we pre-process the dense volume to point clouds and skeleton graphs for point and graph experiments. We present the experiment metrics at the point-level and graph-level. The performance at graph-level represents how structural components and connections within a graph are recognized. Moreover, point-level dense volume performance evaluation can be achieved using CNN methods, point-based methods with repeated inferences and graph-based models after post-processing (section 5.3). Therefore, the evaluations in point-level across convolution, point, and graph methods are consistent and fair.
#### 5.1.1 CNNs
In CNN experiments, given a 3D input, the task involves providing semantic segmentation prediction for the pulmonary tree in the image and all CNN methods apply a combination of dice loss and the cross-entropy loss for training. During testing, the image background is excluded from the metric computation.
Figure 6: **Implicit Point Module. For any query point, the Implicit Point Module consumes multi-stage features from a Point-Graph Network with feature propagation and a neural network to provide a label. \(\mathcal{H}\) represents an MLP.**
Among the CNN experiments, we employ 3D-Unet [48] as the basic setup. Based on the 3D-Unet, we also implement a multi-task key-point regression method [17], abbreviated as "3D-Unet + KP". More specifically, an additional regression prediction head predicts a heatmap representing how likely each location is to be a graph node, i.e., a key-point. In these two CNN-based experiments, data are down-sampled to \(96\times 96\times 96\) due to limited GPU memory during training and validation. During testing, the input is down-sampled for inference and re-scaled to the original dimension \(N\times 512\times 512\) for evaluation.
To address the issue of high memory usage and information loss by compromised resolution, we apply a sliding window approach in another CNN experiment, in which local 3D patches with dimension \(96\times 96\times 96\) from the original image are used for training and validation purposes. During testing, the predictions obtained from the sliding-window technique were assembled back onto the original image for evaluation.
#### Iv-A2 Point Clouds
Point-based experiments involve treating a set of tubular voxels of a pulmonary tree as a point cloud for sparse representation for modeling. At the output, point-based model provides per-point classification prediction.
During training and validation of the experiments, we randomly sampled 6000 foreground elements as point cloud input. During testing, all foreground points are sampled, randomly permuted, and grouped into multiple point clouds. Then each point cloud containing 6000 points is iteratively fed into the model for evaluation. Consequently, the inference processes provide a prediction for all foreground elements as dense volume results. In terms of baselines, PointNet [25], PointNet++ [26] and Point Transformer [31] are tested.
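This test-time procedure can be summarised by the following sketch (our illustration; `predict_fn` stands for a forward pass of the point model and is a hypothetical placeholder, not a function of the actual codebase).

```python
import numpy as np

def dense_prediction_by_grouping(coords, predict_fn, group_size=6000, seed=0):
    """Randomly permute all foreground points, split them into groups of
    `group_size`, run the point model on each group, and scatter the per-point
    predictions back to the original order. `predict_fn` maps an (n, 3) array
    of coordinates to an (n,) array of class labels."""
    coords = np.asarray(coords)
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(coords))
    labels = np.empty(len(coords), dtype=np.int64)
    for start in range(0, len(coords), group_size):
        idx = order[start:start + group_size]
        labels[idx] = predict_fn(coords[idx])
    return labels
```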
#### Iv-A3 Graphs
Graph experiments utilize the skeleton graph generated from dense volume by software [45] as graph structure, and evaluate networks' ability to recognize key-point and structural connections within the skeleton tree, respectively represented by per-node and per-edge performance.
In graph experiments, we leverage multiple graph neural networks (GNN) such as GAT [47], GraphSage [50], and a graph network with pre-trained point-based features as input. We ensure fair comparisons across all GNNs by using 14 layers in each. After acquiring node features, the features of the source node and destination node are averaged to form edge features, before an MLP network projects all features to 19 dimensions for final predictions.
Furthermore, we implement a post-processing technique to dilate graph-based prediction to dense volume prediction. Specifically, given any voxel, the algorithm searches for the label of the nearest graph element as its own label. Therefore, all graph baselines also provide point-level prediction metrics.
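This post-processing is a nearest-neighbour label transfer from graph elements to voxels; a short sketch using a k-d tree is given below (SciPy's `cKDTree` is our own choice of implementation, the procedure itself does not depend on it).

```python
import numpy as np
from scipy.spatial import cKDTree

def dilate_graph_labels(voxel_coords, graph_coords, graph_labels):
    """Assign every foreground voxel the label of its nearest graph element.
    voxel_coords: (V, 3); graph_coords: (E, 3) coordinates of graph nodes or
    edge sample points; graph_labels: (E,) their predicted classes."""
    tree = cKDTree(graph_coords)
    _, nearest = tree.query(voxel_coords, k=1)
    return np.asarray(graph_labels)[nearest]
```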
#### Iv-A4 Ours
As discussed in depth in section IV, we perform experiments combining point learning with graph learning. For the proposed Point-Graph network, the input and output setups for the point branch and graph branch are identical to those of point experiments and graph experiments. To speed up dense reconstruction, we incorporate the Implicit Point Module for point-based prediction, evaluated at point-level.
### Model Performance Comparison
In this section, we perform a comparative analysis based on Table I. The statistics presented in this section are the average performance over the 3 pulmonary structures.
At graph-level evaluation, among GNN methods with graph-only context, the GAT [47] model, with minimal tuning, outperforms most baselines by a considerable margin, displaying the advantage of the attention mechanism in the pulmonary tree setting. In addition to graph modeling with 3D coordinate features, the performance of the GAT [47] model, and a hypergraph [16]
\begin{table}
\begin{tabular}{l|c c|c c c|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Feature} & \multicolumn{4}{c|}{Airusy} & \multicolumn{4}{c|}{Atery} & \multicolumn{4}{c}{Vain} \\ & & \multicolumn{4}{c|}{Point-level} & \multicolumn{4}{c|}{Graph-level} & \multicolumn{4}{c|}{Point-level} & \multicolumn{4}{c|}{Graph-level} & \multicolumn{4}{c|}{Point-level} & \multicolumn{4}{c|}{Graph-level} \\ & & CNN & Point & Graph & Acc & Dice & Acc & Dice & Acc & Dice & Acc & Dice & Acc & Dice & Acc & Dice & Acc & Dice \\ \hline _User / Polar_ & & & & & & & & & & & & & & & & & & & & \\ 3D-Uet (down-sampled) [48] & ✓ & & & 62.9 & 58.5 & - & - & - & - & 67.1 & 61.4 & - & - & - & - & 63.0 & 54.0 & - & - & - & - \\
3D-Uet (down-sampled) + KP [17] & ✓ & & & 63.6 & 56.9 & - & - & - & - & 67.2 & 61.3 & - & - & - & - & 63.7 & 55.0 & - & - & - & - \\
3D-Uet (diding-window) [48] & ✓ & & & 61.0 & 39.8 & - & - & - & - & 51.0 & 22.5 & - & - & - & - & 52.6 & 28.7 & - & - & - & - \\ PointNet [25] & ✓ & & 87.4 & 79.1 & - & - & - & - & 86.6 & 80.0 & - & - & - & - & 79.6 & 70.8 & - & - & - & - \\ PointNet++ [26] & ✓ & & 90.1 & 82.9 & - & - & - & - & 89.0 & 82.5 & - & - & - & - & 82.5 & 75.7 & - & - & - & - \\ Point Transformer [31] & ✓ & & 91.1 & 87.8 & - & - & - & - & 90.1 & 86.5 & - & - & - & - & 83.4 & 77.7 & - & - & - & - \\ \hline _Graph_ & & & & & & & & & & & & & & & & & & & & & & \\ GCN [32] & & & & 84.0 & 82.8 & 87.8 & 85.5 & 85.3 & 83.0 & 80.2 & 79.6 & 83.2 & 82.1 & 81.8 & 80.4 & 74.6 & 73.7 & 74.7 & 73.5 & 71.7 & 70.4 \\ GNN [49] & & & ✓ & 85.3 & 84.6 & 90.5 & 88.8 & 87.6 & 85.6 & 82.1 & 81.4 & 86.0 & 84.9 & 84.3 & 82.8 & 75.0 & 74.0 & 75.8 & 74.3 & 72.5 & 70.9 \\ GraphSage [50] & & ✓ & 86.6 & 86.0 & 92.8 & 91.7 & 89.5 & 85.0 & 83.0 & 82.4 & 87.8 & 86.7 & 85.8 & 84.5 & 76.7 & 75.9 & 79.9 & 79.1 & 76.1 & 75.0 \\ HyperGraph [16] & ✓ & & 86.7 & 86.0 & 93.4 & 90.2 & 89.2 & 88.9 & 82.2 & 89.2 & 89.3 & 89.0 & 86.3 & 85.7 & 76.4 & 75.7 & 79.6 & 79.4 & 76.1 & 75.8 \\ HyperGraph + Handcrafted Features [16] & ✓ & & 86.4 & 85.3 & 93.0 & 92.8 & 89.8 & 87.2 & 85.8 & 81.8 & 89.0 & 80.6 & 86.1 & 85.2 & 76.3 & 75.7 & 79.6 & 79.5 & 78.5 & 75.4 \\ GAT [47] + Handcrafted Features [16] & ✓ & & 86.9 & 86.3 & 94.1 & 92.7 & 90.8 & 89.1 & 84.0 & 83.4 & 89.6 & 88.7 & 87.6 & 86.3 & 76.7 & 76.0 & 80.3 & 79.0 & 76.7 & 75.3 \\ GAT [47] + Handcrafted Features [16] & ✓ & & 86.9 & 86.3 & 94.0 & 92.5 & 90.7 & 88.8 & 84.1 & 83.7 & 89.4 & 88.4 & 87.4 & 86.2 & 77.1 & 76.1 & 79.5 & 78.1 & 75.6 & 74.1 \\ \hline _Point + Graph_ & & & & & & & & & & & & & & & & & & & & & & \\ GAT [47] (PointNet++ Features) [26] & ✓ & & 87.2 & 86.7 & 94.4 & 94.6 & 91.7 & 91.8 & 84.5 & 83.8 & 91.8 & 90.5 & 89.1 & 88.6 & 77.6 & 76.3 & 81.8 & 80.4 & 78.3 & 77.0 \\ PGN (PointNet++ [26]) & ✓ & & 91.0 & 87.7 & 95.2 & **98.1** & 92.9 & 92.3 & 93.9 & 89.6 & 86.8 & 92.9 & **92.9** & **90.0** & **89.3** & **83.8** & 77.4 & **82.9** & **82.8** & **76.** **74** \\ PGN (PointNet++ [26]) & ✓ & & 91.0 & 87.7 & 95.2 & **98.1** & 92.3 & 92.3 & 89.9 & 86.6 & 92.9 & 92.9 & 90.0 & **89.3** & 83.6 & 77.5 & **82.9** & **82.8** & **79.6** & **79.4** \\ PGN (Point Transformer [31]) & ✓ & & **91.6** & 88.4 & **95.3** & 95.0 & **92.6** & **92.7** & **90.7** & **87.2** & **93.0** & 92.6 & **90
model with handcrafted features [16] are presented. Compared to applying coordinate features as input, GAT with handcrafted features suffers from a slight drop of 0.4% in accuracy. On the other hand, applying handcrafted features to the hypergraph [51] model also translates to a performance drop. These results suggest that in graph settings, tailor-designed features don't provide more valuable information than 3D coordinates. Further insights are provided in section V-E1.
For graph performance in settings with both graph and point context, GAT (PointNet++ feature) has the lowest performance among all. Nevertheless, it still outperforms any other graph-context-only baseline by a margin of around 1.4% in accuracy at minimum. Such a gap in performance indicates that the integration of point-context into graph learning is beneficial. For the proposed Point-Graph Network and IPGN, their performances beat all baselines in both metrics, displaying superiority in point-graph fusion learning.
At point-level, CNN methods achieve the worst performance. Two 3D-Unet [17], [48] methods based on down-sampled input yield unsatisfying metrics, indicating that training CNN methods on reduced resolution leads to inferior modeling. As an attempt to train on the original resolution without memory restriction, the 3D-Unet [48] experiment with sliding-window strategy reports the poorest performances at both accuracy and dice, likely due to lack of global context.
Among point-based methods, the Point Transformer [31] achieves the overall best performances. In terms of graph-based methods, their point-level results generally report lower accuracy, while offering relatively high dice scores compared to point baselines despite the lack of local shape context: The highest dice score from graph methods is only 1.7% behind that of Point Transformer. In our view, such dice quality based on graph can be attributed to the accurate prediction of graph nodes, which is equivalent to border prediction as nodes represent bifurcation points in a tree. Additionally, in CNN, point-based, and the point-graph fusion methods, accuracy is generally higher than dice score. We believe that because the extra-pulmonary structure (Fig. 1, colored in light blue) possesses a large volume and has a distinct location, it could be easily recognized, boosting the accuracy over the dice score.
For settings with point and graph context, the proposed Point-Graph Network and IPGN based on pre-trained PointNet++ [26] feature reports similar performance as Point Transformer [31], showing that tree-topology context provides a similar benefit as the long-range dependency information from Point Transformer. Point-Graph Network and IPGN based on Point Transformer achieve SOTA results among all settings.
### Dense Volume Reconstruction
In this subsection, we focus on the Implicit Point Module (Fig. 6), which aims to enhance the efficiency of labeled dense volume reconstruction (Fig. 4 (f)). As discussed previously in Sec. I, the integration of the implicit module is necessary because high-performing point models like Point Transformer [31] can still be computationally expensive at test time: with repeated inferences for dense prediction, their computation cost grows cubically, similar to CNNs. Experiments are conducted to compare the efficiency of the Implicit Point Module against the Point-Graph Network as well as various baseline methods. The reconstruction efficiency is reported as the average run-time, in seconds, for dense volume prediction across the Pulmonary Tree Labeling test set.
Regarding volume reconstruction efficiency, the test time of convolution methods, graph-based method, point-based method, and our methods were measured2. For all setups, we evaluate and present the inference time and quality for dense volume segmentation in Table 2. Apart from model inference costs, test time is composed of various operations in different setups. For CNN with down-sampled input, test time includes down-sampling and up-sampling operations. For graph baselines, time measurement takes post-processing into account. Finally, test time for IPGN includes a forward pass on the Point-Graph Network and the subsequent Implicit Point Module inference on all remaining points.
Footnote 2: To make it reproducible, the measurement was conducted on a free Google Colab T4 instance—CPU: Intel(R) Xeon(R) CPU @ 2.00GHz, GPU: NVIDIA Tesla T4 16Gb, memory: 12Gb.
The results demonstrated the superiority of the IPGN in dense volume reconstruction. Qualitatively, the Point-Graph Network and IPGN achieve nearly identical performance. In terms of efficiency, IPGN only requires \(1.5\) seconds on average with PointNet++ [26] as point encoder and \(2.3\) seconds with Point Transformer [31]. Although the graph-based method records a similar time cost, it fails to generate quality results. Further, all other methods are relatively inefficient. Therefore, the proposed IPGN pipeline is the all-around optimal solution for dense volume segmentation for pulmonary structures.
These findings highlight the practical utility of the proposed module, as it allows for fast and efficient point inference without compromising quality. The Implicit Point Module serves as a valuable contribution in overcoming the computational challenges associated with analyzing the pulmonary tree, enabling rapid decision-making in clinical settings.
### Qualitative Analysis with Visualization
In the qualitative analysis section, we employ 2 concrete cases each from the airway, artery, and vein tree to demonstrate the efficacy of our method. Fig. 7 (a-f) showcases the results of full pulmonary tree segmentation using graph-only method, point-only method, and the Implicit Graph-Point Network, against the ground truth image.
The graph-based predictions from Fig. 7 (b-d) reveal instances of incorrect graph predictions leading to anomalous outcomes, where multiple class labels appear in the same branch, thereby inducing inconsistent branch predictions. For point-based method, due to its sparse representation in individual forward passes, point predictions often lose details and disrupt the borders between branches (Fig. 7 (a,c,e)), resulting in volumes that lack smoothness and quality.
Conversely, the proposed IPGN offers various advantages. Firstly, our method accurately classifies distal branches, illustrated by dense volume prediction in Fig. 7 (d,f). Additionally, our method effectively defines clear and uninterrupted borders between the child branches at bifurcation points. This is
especially valuable when a parent node branches into multiple child branches belonging to different classes, as demonstrated in Fig. 7 (b,e). By integrating graph and point context into the modeling process, our method enhances the segmentation capabilities at bifurcation points and distal branches, ultimately producing smoother and more uniform branch predictions.
### Ablation Experiments
#### 5.5.1 Feature Input Selection
In this section, we examine the effect of different input features for the skeleton graph nodes. The candidates are the 3D coordinate feature and the handcrafted feature. While the coordinate feature is simply the \(\{x,y,z\}\) coordinates, the handcrafted feature follows the feature design from TNN [16], containing structural, positional, and morphological features.
We perform experiments on a GAT [47] and a 5-layer MLP network to evaluate the impact of coordinate features and handcrafted features, reported in Table 3. Based on the MLP experiment, the handcrafted feature achieves 81.4% node accuracy on average while the raw coordinate feature only reaches 74.4%. Conversely, in the experiment with GAT [47], applying the
\begin{table}
\begin{tabular}{c c|c c c|c c c|c c} \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Model} & \multicolumn{2}{c|}{Airway} & \multicolumn{2}{c|}{Artery} & \multicolumn{2}{c}{Vain} \\ & & Time (s) & Acc (\%) & Dice (\%) & Time (s) & Acc (\%) & Dice (\%) & Time (s) & Acc (\%) & Dice (\%) \\ \hline _Voxel_ & 3D-Unet [48] (downsampled) & 8.16 & 62.9 & 85.5 & 8.02 & 67.1 & 61.4 & 7.46 & 63.0 & 54.0 \\ _Voxel_ & 3D-Unet [48] (sliding-window) & 9.43 & 61.0 & 39.8 & 11.88 & 51.0 & 22.5 & 12.44 & 52.6 & 28.7 \\ _Point_ & PointNet++ [26] & 4.51 & 90.1 & 82.9 & 9.178 & 89.0 & 82.5 & 9.65 & 82.5 & 75.7 \\ _Point_ & Point Transformer [31] & 10.97 & 91.1 & 87.8 & 12.94 & 90.1 & 86.5 & 23.25 & 83.4 & 77.7 \\ _Graph_ & GAT [47] & 1.15 & 86.9 & 86.3 & 2.05 & 84.0 & 83.4 & 2.24 & 76.7 & 76.0 \\ \hline _Point + Graph_ & RGN (PointNet++ [26]) & 5.65 & 91.0 & 87.7 & 12.40 & 89.9 & 86.8 & 13.16 & **83.8** & 77.6 \\ _Point + Graph_ & IPGN (PointNet++ [26]) & **1.30** & 91.0 & 87.5 & **1.49** & 89.9 & 86.6 & **1.57** & 83.6 & 77.5 \\ _Point + Graph_ & PON (Point Transformer [31]) & 12.10 & **91.6** & 88.4 & 24.28 & **90.7** & **87.2** & 25.81 & **83.8** & **78.6** \\ _Point + Graph_ & IPGN (Point Transformer [31]) & 2.32 & 91.5 & **88.5** & 2.29 & **90.7** & **87.2** & 2.39 & 83.7 & 78.3 \\ \hline \end{tabular}
\end{table}
Table 2: **Inference Speed and Segmentation Metrics.** This table compares dense volume segmentation test time and quality across graph-based, point-based, and point-graph fusion methods. Test times are measured in seconds, while accuracy and Dice score measure segmentation quality.
Figure 7: **Visualization of Segmentation Results.** This figure displays segmentation results using GAT [47] (graph-only context), PointNet++ [26] (point-only context) and IPGN (with GAT and PointNet++ backbones) methods along with the ground truth. For each method class, the corresponding predictions in their initial form prior to dense volume prediction are also presented.
coordinate feature achieves the overall best performance, reaching 88.17%, while the handcrafted feature trails behind.
The results indicate that the handcrafted feature provides more information and produces better performance in an MLP setting, where the neural network simply learns a non-linear projection without any global or tree-topology context. Once the connections between graph nodes are established through edges in graph modeling, the handcrafted feature completely loses its advantage over the raw coordinate feature, which implies that the handcrafted feature can be learned during graph learning and that the learned graph-based feature is more beneficial to graph segmentation quality.
#### 5.5.2 Input Selection For Implicit Point Module
In this section, we present the results of an ablation study where we explore the impact of multi-stage features using different layers of the intermediate feature vector as input to the implicit module.
In the experiment, we initially utilized the feature output of the final Point-Graph Fusion layer as the sole input to the implicit module. Motivated by the design of DenseNet [52], we experimented with adding feature outputs from shallower intermediate blocks, forming multi-stage features, and report the performance with each feature addition until the initial point feature. Table 4 demonstrates that combining multi-stage features, along with the initial point (PointNet++ [26]) feature, enhances the predictive capabilities of our model, contributing to better performance. Finally, the best-performing configuration we observed involved using all available features, yielding results on par with the full modeling approach.
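As an illustration of this input construction, a minimal sketch might concatenate hypothetical multi-stage fusion features with the initial point feature before a small classification head; the dimensions, tensor names, and the head below are illustrative assumptions rather than the exact IPGN configuration.

```python
import torch

# Illustrative sizes (assumed, not the paper's exact configuration)
B, N = 2, 1024                 # batch size, number of query points
stage_dims = [64, 128, 256]    # per-stage Point-Graph Fusion feature widths
init_dim = 32                  # initial point-backbone (e.g. PointNet++-style) feature width

stage_feats = [torch.randn(B, N, d) for d in stage_dims]   # intermediate block outputs
init_feat = torch.randn(B, N, init_dim)                    # initial point feature

# Multi-stage input: concatenate all intermediate features with the initial point feature
query_feat = torch.cat(stage_feats + [init_feat], dim=-1)  # (B, N, sum(stage_dims) + init_dim)

# A small MLP standing in for the Implicit Point Module's classification head
head = torch.nn.Sequential(
    torch.nn.Linear(query_feat.shape[-1], 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 19),  # e.g. 18 branch classes + background (assumed)
)
logits = head(query_feat)      # per-point class logits, shape (B, N, 19)
```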
## 6 Extended Application: Reconstruction of Pulmonary Segments
The Implicit Point Module plays a vital role in defining implicit surfaces between different classes within the pulmonary tree, allowing for efficient dense reconstruction of the pulmonary tree. As the module utilizes the point-graph feature field for labeling, the input coordinates are not constrained to the tubular voxel/point. Consequently, any point within the pulmonary lobe can be assigned a class label.
Pulmonary segments are subdivisions within the pulmonary lobes and their boundaries are determined based on the 18 branches of the pulmonary trees of our interest [43]. By utilizing the extracted point-graph feature field from the airway, artery, or vein tree, our module can accurately infer information for all points within the lobe. This enables us to achieve a natural semantic reconstruction of the pulmonary segments, depicted in Fig. 8.
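A minimal sketch of this dense querying step is given below; `feature_field` and `classifier` are hypothetical stand-ins for the trained point-graph feature interpolation and the implicit classification head, and the chunked loop is merely one way to keep memory bounded.

```python
import torch

def reconstruct_segments(lobe_mask, feature_field, classifier, chunk=65536):
    """Assign a segment label to every voxel inside a lobe mask by querying the feature field."""
    coords = torch.nonzero(lobe_mask).float()                # (M, 3) voxel coordinates inside the lobe
    labels = torch.empty(len(coords), dtype=torch.long)
    for start in range(0, len(coords), chunk):               # query in chunks to bound memory
        pts = coords[start:start + chunk]
        feats = feature_field(pts)                           # interpolated point-graph features at pts
        labels[start:start + chunk] = classifier(feats).argmax(dim=-1)
    out = torch.zeros_like(lobe_mask, dtype=torch.long)
    out[tuple(coords.long().T)] = labels                     # scatter labels back into the volume
    return out
```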
Compared to the ground truth, the tree-segmentation-induced pulmonary segments reconstruction achieves 80.13%, 82.15% and 76.20% prediction accuracy and 79.89%, 81.85% and 76.16% micro-average dice scores, respectively for pulmonary airway, artery, and vein. Future works can potentially produce better pulmonary segment reconstruction by integrating features from the three pulmonary trees or leveraging the explicit pulmonary lobe boundaries as additional guidelines.
## 7 Conclusion
In conclusion, we take an experimentally comprehensive deep-dive into pulmonary tree segmentation based on the compiled PTL dataset. A novel architecture Implicit Point-Graph Network (IPGN) is presented for accurate and efficient pulmonary tree segmentation. Our method leverages a dual-branch point-graph fusion model to effectively capture the complex branching structure of the respiratory system. Extensive experiment results demonstrate that by implicit modeling on point-graph features, the proposed model achieves state-of-the-art segmentation quality with minimum computation cost for practical dense volume reconstruction. The advancements made in this study could potentially enhance the diagnosis, management, and treatment of pulmonary diseases, ultimately improving patient outcomes in this critical area of healthcare.
|
2309.09117 | Contrastive Decoding Improves Reasoning in Large Language Models | We demonstrate that Contrastive Decoding -- a simple, computationally light,
and training-free text generation method proposed by Li et al 2022 -- achieves
large out-of-the-box improvements over greedy decoding on a variety of
reasoning tasks. Originally shown to improve the perceived quality of long-form
text generation, Contrastive Decoding searches for strings that maximize a
weighted difference in likelihood between strong and weak models. We show that
Contrastive Decoding leads LLaMA-65B to outperform LLaMA 2, GPT-3.5 and PaLM
2-L on the HellaSwag commonsense reasoning benchmark, and to outperform LLaMA
2, GPT-3.5 and PaLM-540B on the GSM8K math word reasoning benchmark, in
addition to improvements on a collection of other tasks. Analysis suggests that
Contrastive Decoding improves over existing methods by preventing some abstract
reasoning errors, as well as by avoiding simpler modes such as copying sections
of the input during chain-of-thought. Overall, Contrastive Decoding outperforms
nucleus sampling for long-form generation and greedy decoding for reasoning
tasks, making it a powerful general purpose method for generating text from
language models. | Sean O'Brien, Mike Lewis | 2023-09-17T00:29:32Z | http://arxiv.org/abs/2309.09117v2 | # Contrastive Decoding Improves Reasoning in Large Language Models
###### Abstract
We demonstrate that Contrastive Decoding - a simple, computationally light, and training-free text generation method proposed by Li et al 2022 - achieves large out-of-the-box improvements over greedy decoding on a variety of reasoning tasks. Originally shown to improve the perceived quality of long-form text generation, Contrastive Decoding searches for strings that maximize a weighted difference in likelihood between strong and weak models. We show that Contrastive Decoding leads LLAMA-65B to outperform LLAMA 2, GPT-3.5 and PaLM 2-L on the HellaSwag commonsense reasoning benchmark, and to outperform LLAMA 2, GPT-3.5 and PaLM-540B on the GSM8K math word reasoning benchmark, in addition to improvements on a collection of other tasks. Analysis suggests that Contrastive Decoding improves over existing methods by preventing some abstract reasoning errors, as well as by avoiding simpler modes such as copying sections of the input during chain-of-thought. Overall, Contrastive Decoding outperforms nucleus sampling for long-form generation and greedy decoding for reasoning tasks, making it a powerful general purpose method for generating text from language models.
## 1 Introduction
Text is generated from large language models (LLMs) in different ways for different tasks. For open-ended text generation tasks, truncated sampling is normally used, as the most likely strings under a model tend to be short and uninteresting (Holtzman et al., 2020). For reasoning problems, greedy decoding is normally preferred, to avoid risking sampling errors. This bifurcation is undesirable; for example it increases the likelihood of reasoning errors during open-ended generation.
We explore the use of Contrastive Decoding (Li et al., 2022) for solving reasoning problems with LLMs. Contrastive Decoding (CD) searches for strings that maximize a weighted difference in likelihood between a stronger _expert_ and a weaker _amateur_ model, and was shown to outperform existing methods for open-ended text generation. It achieves this by avoiding undesirable modes of the expert model's distribution, such as short or generic strings, which tend to be the most likely under any model, including the amateur.
We show that Contrastive Decoding outperforms greedy decoding on reasoning problems. On GSM8K, a widely used benchmark consisting of grade-school word math problems, contrastive decoding improves the performance of various LLaMA models by up to 8 absolute percentage points. This result outperforms LLaMA 2, which has 5 billion more parameters and is trained on 40% more data. On HellaSwag, using the CD objective to rank answers leads LLaMA to outperform all existing models except GPT-4. We find general improvement on arithmetic reasoning and multiple-choice ranking tasks, including on models as large as LLaMA-65B, suggesting that Contrastive Decoding could bring such widespread improvements to much larger models.
We also analyze the cause of the improvement from Contrastive Decoding. Empirically, we find that Contrastive Decoding performs less surface-level copying from the prompt than greedy decoding and misses fewer reasoning steps. This result suggests that, similarly to findings in Li et al. (2022), Contrastive Decoding works by reducing repetitive or other undesirable modes of the model distribution. Our current method yields mixed results for commonsense reasoning tasks and slightly degrades factual retrieval, both trends that encourage further refinement of the method.
Overall, we show that Contrastive Decoding not only substantially improves LLM accuracies on a range of benchmarks, but is also the first generation algorithm to achieve state-of-the-art results in both reasoning and text generation problems. These results allow a more unified method for improving generation from language models across tasks.
## 2 Contrastive Decoding
### Simplified Formulation
The original Contrastive Decoding formulation from Li et al. (2022) explicitly chooses two parameters: \(\alpha\) and the intermediate temperature of the amateur distribution \(\tau_{a}\), with the intermediate temperature of the expert fixed at \(\tau_{e}=1\). We slightly refactor the hyperparameter choice to be more interpretable and simplify the algorithm by working directly in logit space.
Figure 3: CD accentuates what the expert model has learned that the amateur model has not. Results are taken from greedy decoding with a 65B parameter expert, using \(\alpha=0.1\), \(\beta=0.5\) for CD.
Let \(s_{a}^{(i)}\) and \(s_{e}^{(i)}\) be the unnormalized scores (logits) assigned to token \(i\) by the amateur and expert models, respectively. \(\alpha\) is the same hyperparameter as in the original paper: a proportion of the maximum probability assigned by the expert model, with any tokens assigned a lower probability masked out. \(\beta\) is a hyperparameter corresponding to the strength of the amateur penalty. We include a leading \((1+\beta)\) coefficient on the expert logits to decouple the strength of the contrastive penalty from the expected scale of the output logits, cleanly delineating between the contrastive tradeoff and the final sampling temperature. This matches the formulation of DExperts (Liu et al., 2021), with the expert model serving both as the base prior and steering expert.
**1.** Determine \(\alpha\)-mask.
\(V_{valid}=\{j\in V,s_{e}^{(j)}\geq\log\alpha+\max_{k\in V}s_{e}^{(k)}\}\)
**2.** Subtract amateur logits.
\(s_{CD}^{(i)}=\begin{cases}(1+\beta)s_{e}^{(i)}-\beta s_{a}^{(i)}&i\in V_{valid }\\ -\infty&i\not\in V_{valid}\end{cases}\)
A PyTorch implementation for this formulation, as well as the original, can be found in subsection A.1 of the appendix. Our implementation takes three lines of readable code.
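As an illustration, a minimal PyTorch sketch of the two-step formulation above might read as follows (the function name, argument defaults, and batching conventions are illustrative, not the appendix code):

```python
import torch

def contrastive_logits(expert_logits, amateur_logits, alpha=0.1, beta=0.5):
    # 1. alpha-mask: keep tokens with expert logit >= log(alpha) + max expert logit
    cutoff = torch.log(torch.tensor(alpha)) + expert_logits.max(dim=-1, keepdim=True).values
    # 2. contrastive score: (1 + beta) * expert - beta * amateur, masked elsewhere
    scores = (1 + beta) * expert_logits - beta * amateur_logits
    return scores.masked_fill(expert_logits < cutoff, float("-inf"))
```

Greedy CD then takes the argmax of these scores, while CD-sampling applies a softmax and temperature-samples from the result.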
### Probabilistic Interpretation
Our implementation of \(\alpha\)-masking has the same interpretation as in Li et al. (2022), given that the expert temperature is fixed to \(\tau_{e}=1\). We show the equivalence in A.2.
Further, we can consider the post-softmax probabilities produced by CD as a perturbation of the probabilities predicted by the expert model. Not including \(\alpha\)-masking, the probability assigned to token \(i\) by CD is a normalized adjustment of the probability assigned by the expert model:
\[p_{CD}^{(i)}\propto p_{e}^{(i)}\left(\frac{p_{e}^{(i)}}{p_{a}^{(i)}}\right)^{\beta} \tag{1}\]
It is therefore clear that as \(\beta\to 0\) the contrastive penalty disappears, and as \(\beta\to\infty\) the distribution collapses to the argmax of \(p_{e}^{(i)}/p_{a}^{(i)}\), which is the original formulation from Li et al. (2022).
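This equivalence can be checked numerically; the following snippet is only a sanity check on random toy logits:

```python
import torch

torch.manual_seed(0)
beta = 0.5
s_e, s_a = torch.randn(10), torch.randn(10)   # toy expert / amateur logits

# CD distribution computed from logits ...
p_from_logits = ((1 + beta) * s_e - beta * s_a).softmax(dim=-1)

# ... and from Equation (1), p_CD proportional to p_e * (p_e / p_a)^beta
p_e, p_a = s_e.softmax(dim=-1), s_a.softmax(dim=-1)
p_from_probs = p_e * (p_e / p_a) ** beta
p_from_probs = p_from_probs / p_from_probs.sum()

print(torch.allclose(p_from_logits, p_from_probs, atol=1e-5))  # True
```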
## 3 Experiments
### Experimental Setup
**Models.** We use untuned models from the LLaMA 1 family (Touvron et al., 2023) at all scales. Unless otherwise stated, we use an untuned LLaMA-65B as the expert and an untuned, LLaMA-architecture model with 1.5B parameters trained on the same data as the other LLaMA 1 models as an amateur. For one ablation study, we use models from the FLAN-T5 family (Chung et al., 2022).
**Decoding Parameters.** We set \(\beta=0.5\) and \(\alpha=0.1\) for all experiments unless otherwise stated. We use greedy decoding, except for self-consistency experiments for which we sample at \(\tau=0.7\) following Touvron et al. (2023).
**Prompting.** For generation tasks, we use 8-shot chain-of-thought prompting, in line with Touvron et al. (2023). The examples are the same as in LLaMA for tasks contained in that paper, and taken from Wei et al. (2023) for other mathematical tasks.
**Datasets.** Following prior works, we evaluate on a number of datasets. The following tasks measure performance on algebraic word problems: **AQuA** (Ling et al., 2017), **ASDiv**(Miao et al.,
2021), **GSM8K**(Cobbe et al., 2021), and **SVAMP**(Patel et al., 2021). We also evaluate on **MATH**(Hendrycks et al., 2021), a larger and more challenging benchmark.
For commonsense reasoning, we measure open-ended performance on **CommonsenseQA**(Talmor et al., 2019) and **StrategyQA**(Geva et al., 2021). We also evaluate on a battery of multiple-choice reasoning benchmarks: both the easy and challenge splits of the **AI2 Reasoning Challenge** dataset (Clark et al., 2018), **BoolQ**(Clark et al., 2019), **HellaSwag**(Zellers et al., 2019), **MMLU**(Hendrycks et al., 2021), **PIQA**(Bisk et al., 2019), **SIQA**(Sap et al., 2019), and **WinoGrande**(Sakaguchi et al., 2019).
### Hyperparameter Selection
Contrastive decoding has three major hyperparameters: the masking ratio \(\alpha\), the contrastive strength \(\beta\) and the size of the amateur model. We find that results are fairly insensitive to \(\alpha\) as long as \(\beta\) is reasonably small (below 1); unless otherwise stated we use \(\alpha=0.1\) across experiments.
Next we consider the size of the amateur model. In agreement with Li et al. (2022), we find that performance benefits from smaller amateur models ( Figure 4); while a 1B-parameter amateur helps reasoning performance, a 7B-parameter amateur harms it. We also examine different types of amateurs; ablation studies show that a partially-trained amateur performs better than a fully-trained one, and that a poorly-prompted expert can be successfully used as an amateur as well (see subsection 4.2).
Finally, we examine the effect of \(\beta\). The optimal value depends on the task, but for both generation tasks like GSM8K and multiple-choice ranking tasks like PIQA we find that \(\beta=0.5\) performs well. Setting \(\beta\) too high can place too much weight in the contrastive penalty and harm performance, especially with a larger gap between amateur and expert models. \(\beta=0\) corresponds to standard greedy decoding with no contrastive penalty. Results of \(\beta\) hyperparameter sweeps can be found in Table 1, Figure 4, Figure 5 and B.
The best result on GSM8K, with LLaMA-65B and \(\beta=0.25\), is 57.7 (Table 1), outperforming PaLM-540B (56.5), LLaMA-2 (56.8) and GPT-3.5 (57.1).* (Anil et al., 2023; OpenAI, 2023)
Footnote *: OpenAI (2023) evaluates GPT-3.5 5-shot; all others are 8-shot.
### Arithmetic Reasoning
We find that contrastive decoding tends to help on arithmetic reasoning tasks with chain-of-thought prompting; see Table 2 for all results. One exception to this is the MATH dataset, which proves to be challenging for both standard and contrastive decoding. We conjecture that because contrastive decoding amplifies skills that the expert has learned better than the amateur, it cannot help on tasks that are well beyond the expert's ability.
We also experiment with normalizing the \(\alpha\)-masked CD scores via softmax, then temperature sampling from the resulting distribution. This permits CD to generate multiple candidate reasoning chains to be used for self-consistency (taking the majority answer) (Wang et al., 2023). We show across both mathematical and commonsense reasoning, CD improves self-consistency performance.
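Schematically, the self-consistency step reduces to a majority vote over the answers extracted from the sampled chains (answer extraction is task-specific and omitted here):

```python
from collections import Counter

def self_consistency(answers):
    # Majority vote over the final answers extracted from sampled reasoning chains
    return Counter(answers).most_common(1)[0][0]

# e.g. answers parsed from 20 chains sampled at temperature 0.7 (maj@20 in the tables)
print(self_consistency(["72", "72", "68", "72", "71"]))  # "72"
```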
\begin{table}
\begin{tabular}{|c|c c c c|} \hline Expert & \(\beta=0\) & \(\beta=0.25\) & \(\beta=0.5\) & \(\beta=1\) \\ \hline
7B & 10.7 & 11.5 & **13.6** & 11.0 \\ \hline
13B & 17.0 & 21.0 & **22.9** & 20.4 \\ \hline
30B & 35.2 & 40.0 & **43.4** & 42.0 \\ \hline
65B & 51.0 & **57.7** & 56.8 & 44.6 \\ \hline \end{tabular}
\end{table}
Table 1: Results on GSM8K. \(\beta=0.5\) tends to give good results across expert sizes.
Figure 4: Results on GSM8K with LLaMA-65B as the expert. While a 7B amateur harms performance, a 1.5B amateur helps.
### Commonsense Reasoning
Results are more mixed for CommonsenseQA and StrategyQA. For both of these tasks, we 8-shot prompt our model and compute the exact match score against the ground-truth answers. We find that contrastive decoding harms performance for smaller models, but that this harm equalizes somewhat for the 65B model and evens out when using self-consistency. See Table 3 for full results.
\begin{table}
\begin{tabular}{c c|c c c c c|c} \hline Model & CD & AQuA & ASDiv & GSM8K & MATH & SVAMP & Average \\ \hline \hline 7B & ✗ & 21.0\({}^{*}\) & 40.2 & 10.7 & 3.0 & 27.3 & 20.4 \\
13B & ✗ & 18.1\({}^{*}\) & 49.0 & 17.4 & 4.2 & 39.4 & 25.6 \\
30B & ✗ & 23.8 & 60.1 & 35.3 & 6.9 & 55.9 & 36.4 \\
65B & ✗ & 33.3 & 67.2 & 51.0 & 10.6 & 69.1 & 46.2 \\
65B maj@20 & ✗ & 38.2 & 73.6 & 68.0 & \(-\)† & 77.3 & 64.3 \\ \hline 7B & ✓ & 19.0\({}^{*}\) (-2.0) & 39.7 (-0.5) & 14.3 (+3.6) & 2.9 (-0.1) & 31.5 (+4.2) & 21.5 (+1.1) \\
13B & ✓ & 16.0\({}^{*}\) (-2.1) & 52.0 (+3.0) & 22.7 (+5.5) & 3.8 (-0.4) & 43.1 (+3.7) & 27.5 (+1.9) \\
30B & ✓ & 29.8 (+6.0) & 62.5 (+2.4) & 43.1 (+8.1) & 8.1 (+1.2) & 59.3 (+3.4) & 40.6 (+4.2) \\
65B & ✓ & 36.9 (+3.6) & 71.9 (+4.7) & 56.8 (+5.8) & 10.3 (-0.3) & 67.8 (-1.3) & 48.7 (+2.5) \\
65B maj@20 & ✓ & **39.4** (+1.2) & **77.4** (+3.8) & **74.0** (+6.0) & \(-\)† & **79.0** (+1.7) & **67.5** (+3.2) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on math generation tasks. Contrastive decoding generally improves performance.
Figure 5: Two examples of sweeping through \(\beta\) values on multiple-choice reasoning tasks across model scales. Dashed horizontal lines mark performance without contrastive decoding.
\begin{table}
\begin{tabular}{c c|c c|c} \hline Model & CD & CSQA & StrategyQA & Average \\ \hline
7B & ✗ & 40.0 & 59.2 & 49.6 \\
13B & ✗ & 60.4 & 64.5 & 62.5 \\
30B & ✗ & 66.4 & 68.7 & 67.6 \\
65B & ✗ & 77.5 & 69.5 & 73.5 \\
65B maj@20 & ✗ & 77.0 & **79.3** & 78.2 \\ \hline 7B & ✓ & 37.3 (-2.7) & 58.3 (-0.9) & 47.8 (-1.8) \\
13B & ✓ & 58.5 (-1.9) & 65.5 (+1.0) & 62.0 (-0.5) \\
30B & ✓ & 62.8 (-3.6) & 67.6 (-1.1) & 65.2 (-2.4) \\
65B & ✓ & 77.1 (-0.4) & 71.5 (+2.0) & 74.3 (+0.8) \\
65B maj@20 & ✓ & **77.9** (+0.9) & **79.3** (+0.0) & **78.6** (+0.4) \\ \hline \hline \end{tabular}
\end{table}
Table 3: CD harms commonsense reasoning with a smaller expert, but performance evens out with a larger expert-amateur gap.
### Contrastive Ranking
We further evaluate a contrastive objective as a scoring function to rank answers to multiple-choice questions. These tasks are zero-shot, multiple-choice cloze tasks; instead of open-ended generation the model scores each potential completion, length-normalizing following Touvron et al. (2023). We find comparable performance across most tasks, with more substantive gains on HellaSwag and ARC-Challenge. Notably, on HellaSwag CD leads LLaMA-65B to score 88.0, which outperforms LLaMA-2 (85.3), GPT-3.5 (85.5) (OpenAI, 2023) and PALM 2-Large (86.8) (Anil et al., 2023).
## 4 Additional Studies
### Effects of Contrastive Decoding
**CD is worse at arithmetic but better at logical reasoning.** We conduct a manual error analysis of 100 randomly selected examples from the GSM8K set, comparing continuations from greedy decoding and CD (\(\beta=0.5,\alpha=0.1\)). We follow Wang et al. (2023) and categorize wrong answers as primarily being due to an arithmetic error, a missing step or a semantic misunderstanding. We add one category of "degeneration," chosen when the model lapses into excessive repetition. Our small-scale analysis finds that CD makes more arithmetic errors, but that this is offset by better semantic reasoning and fewer missing steps (see Table 5).
To further explore the claim that the benefit of CD does not stem from arithmetic evaluation, we generate a toy dataset of 10,000 multiplication and subtraction equations with operands up to four digits and then 8-shot prompt models to complete the expression, measuring exact match accuracy. We find that CD does not improve performance on this task, and in fact may degrade it slightly. Results are shown in Table 8.
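A generator along these lines (illustrative only; the exact sampling scheme and prompt format are not reproduced here) might look like:

```python
import random

def make_arithmetic_examples(n=10000, max_digits=4, seed=0):
    # Toy probes of the form "a * b =" / "a - b =" with operands of up to four digits
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        a = rng.randint(0, 10 ** max_digits - 1)
        b = rng.randint(0, 10 ** max_digits - 1)
        op = rng.choice(["*", "-"])
        answer = a * b if op == "*" else a - b
        examples.append((f"{a} {op} {b} =", str(answer)))  # (prompt suffix, exact-match target)
    return examples
```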
**CD reduces copying from the prompt.** We analyze 26,000 sampled generations from CD-sampling on GSM8K against the corresponding set from temperature sampling; both of these sets of generations are used in our self-consistency study. We find that responses are roughly the same length and follow the few-shot template roughly the same proportion of the time. This rules out the
\begin{table}
\begin{tabular}{|c|c c|} \hline & Standard & CD \\ \hline Correct \% & 44.6 & **51.1** \\ Parseable \% & 95.2 & **95.6** \\ Average \# chars & 215.2 & 217.2 \\ \hline \end{tabular}
\end{table}
Table 6: High-level generation statistics from sampled generations on GSM8K. Responses are similar lengths, despite the performance improvement from CD.
\begin{table}
\begin{tabular}{c|c c c c c c c|c|c} \hline \(\beta\) & ARC-E & ARC-C & BoolQ & HSWag & PIQA & SIQA & WGrande & MMLU & Avg \\ \hline
0.0 & **79.1** & 56.1 & 84.2 & 84.2 & 82.6 & 52.3 & 77.3 & **63.5** & 72.4 \\ \hline
0.5 & 79.0 & 59.5 & **84.3** & 87.4 & **83.1** & **53.3** & **77.8** & 63.4 & **74.9** \\
1.0 & 76.9 & **59.7** & 84.1 & **88.0** & 82.9 & **53.3** & 76.5 & 63.2 & 74.5 \\ \hline \end{tabular}
\end{table}
Table 4: Results on multiple-choice reasoning tasks. CD generally provides a modest boost.
\begin{table}
\begin{tabular}{c|c c c|c|c} CD & Arithmetic & Missing Step & Semantic & Degeneration & Total Errors \\ \hline ✗ & **4\%** & 22\% & 24\% & 4\% & 54\% \\ \hline ✓ & 8\% & **20\%** & **21\%** & **3\%** & **52\%** \\ \hline \end{tabular}
\end{table}
Table 5: Proportion of errors in a set of 100 GSM8K questions. CD makes more arithmetic errors, but omits fewer steps and avoids semantic misunderstandings.
Figure 6: CD reduces copying from the question in the generated Chain of Thought, as measured by n-gram overlap on GSM8K generations.
hypothesis that contrastive decoding simply leads the model to follow the template better, prevents degeneration or induces longer answers with more reasoning steps. Further, we run an automatic evaluation of greedy generations using ROSCOE (Golovneva et al., 2022) but do not find significant differences in any of these metrics. However, we measure the precision and recall of the tokens in the prompt by the sampled generations and find that CD systematically reduces token-level copying from the prompt. This may be related to increased reasoning ability, as surface-level copying from the prompt does not provide new information to the problem.
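A simplified version of this copying measurement, using token-set precision and recall rather than the full n-gram analysis (tokenization details are assumed), is sketched below:

```python
def prompt_overlap(prompt_tokens, generation_tokens):
    # Precision: fraction of distinct generated tokens that already occur in the prompt.
    # Recall: fraction of distinct prompt tokens that reappear in the generation.
    prompt, gen = set(prompt_tokens), set(generation_tokens)
    overlap = prompt & gen
    precision = len(overlap) / max(len(gen), 1)
    recall = len(overlap) / max(len(prompt), 1)
    return precision, recall
```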
**CD can harm factual recall.** Our primary claim is that contrastive decoding improves chain-of-thought reasoning. However, we also test CD on two pure factual-recall tests that do not utilize chain-of-thought: OpenBookQA (Mihaylov et al., 2018) and TriviaQA (Joshi et al., 2017). OpenBookQA ("OBQA") is a multiple-choice completion task, while TriviaQA is a 5-shot generation task. Reusing the same setup from reasoning leads to a slight degradation of performance, as seen in Table 7.
**CD outperforms other reasoning enhancements in FLOP efficiency.** We note that contrastive decoding introduces relatively little overhead in comparison to other reasoning-enhancing methods. We estimate that with a 1.5B amateur and 65.2B expert, contrastive decoding increases the total number of FLOPs by \(3.25\%\) (see section C of the appendix). This compares favorably to self-consistency, which requires several extra full generation loops. We show in Figure 9 that CD is significantly more efficient than self-consistency.
### Ablation Studies
\(\alpha\)**-masking alone is not enough.** When sampling and performing self-consistency, \(\alpha\)-masking prevents the sampling of tokens the expert finds to be unlikely. It is natural to ask what portion of the benefit comes purely from \(\alpha\)-masking and not the contrastive objective itself.
To answer this, we set \(\beta=0\) but \(\alpha=0.1\); that is, we mask out candidates based on the expert but do not apply the contrastive objective. When sampling one path, we expect \(\alpha\)-masking to improve over temperature sampling alone as it eliminates unlikely results and thus provides a closer approximation to greedy sampling. This holds, but as we increase the number of paths we find no benefit from \(\alpha\)-masking alone. This suggests that the contrastive objective, and not \(\alpha\)-masking, is the primary source of improved self-consistency results. See Figure 7 for results of this ablation.
**CD requires chain-of-thought prompting to improve results.** We next study whether contrastive decoding provides an advantage in the absence of chain-of-thought prompting. We remove the chains of thought from the GSM8K fewshot prompt, and find that as expected performance drops for both standard and contrastive decoding (Figure 8); further, without chains of thought contrastive decoding provides no consistent improvement. As with the MATH dataset, solving problems without explicit reasoning steps may be too challenging of a task for the expert model, and thus leave too small a gap between the expert and amateur to contrastively exploit.
**CD can benefit non-LLaMA models.** We conduct a short study to show that CD can benefit models outside of the LLaMA family. For this study, we choose the FLAN-T5 family as it is open-source, has a wide range of model sizes that share a single tokenizer, and obtains good performance on chain-of-thought reasoning tasks. We use FLAN-T5-XXL (11B) as the expert model and FLAN-T5-Small (80M) as amateur. We evaluate on GSM8K using the 8-shot random prompts from Fu
\begin{table}
\begin{tabular}{|c|c c|} \hline CD & OBQA & TriviaQA* \\ \hline ✗ & **60.0** & **72.2** \\ ✓ & 57.8 (-2.4) & 69.9 (-2.1) \\ \hline \end{tabular}
\end{table}
Table 7: Reusing the reasoning setup slightly degrades performance on the factual recall tasks OBQA and TriviaQA.
\begin{table}
\begin{tabular}{|c|c c c c|} \hline CD & 7B & 13B & 30B & 65B \\ \hline ✗ & **31.0** & **36.3** & **52.3** & **58.4** \\ ✓ & 30.9 & 35.6 & 52.2 & 57.6 \\ \hline \end{tabular}
\end{table}
Table 8: CD slightly harms performance on a synthetic task of evaluating arithmetic expressions.
et al. (2023); note that GSM8K is within the set of tasks that FLAN-T5 is finetuned on. CD provides a slight boost in performance, as seen in Table 9. We leave more extensive experiments on other families of models to future work.
**Small-scale amateurs beat "negative prompting."** We experiment to determine if there is a more effective weak amateur model to use for contrastive decoding. We define a set of "negative prompts" by sampling 7B model outputs on the fewshot prompts and collecting the incorrect responses. We use these responses as fewshot prompts to mimic the failure modes of the family of models. These negative prompts should harm the performance of models they are prompted with, and specifically bias results towards the error distribution of the 65B model.
We find that contrasting with a negative prompt does not harm performance, but does not improve it as much as contrasting with a small amateur (see Table 10). In an ablation study, we find that negative prompting does not harm performance that much; prompting a 65B model with incorrect fewshot examples on GSM8K gives a score of 41.3, which underperforms prompting with correct examples (51.2) but significantly beats non-chain-of-thought prompting (13.5). This supports Wang et al. (2023), who find that even incorrect chain-of-thought rationales improve reasoning. A prompting strategy which better incapacitates the expert model might yield better results.
**Mid-training checkpoints make for good amateurs.** We experiment with checkpoints of a mid-training 7B-parameter LLaMA model taken 10% and 23% of the way through the full training run. Even while a fully-trained 7B amateur harms performance on GSM8K, we find that a partially-trained amateur improves performance. We do not perform extensive hyperparameter sweeps here, instead reusing \(\alpha=0.1\), \(\beta=0.5\) as before. We do not pursue partially-trained amateurs for our main results as results may vary based on the order of training data, but this result allows us to interpret contrastive decoding as a first-order optimization step over the output of a model, highlighting the high-level behaviors that it learns later on in the course of training. See Table 11 for full results.
## 5 Related Work
**Steering methods for reasoning.** Other works more explicitly model the error distribution of reasoning steps and use this to steer decoding. For example GRACE (Khalifa et al., 2023) uses a contrastive loss to train an external step-level discriminator, which it then uses to select between candidate steps sampled from a base model. Using the interpretation of contrastive decoding as mutual distinguishability between amateur and expert, we see that our method is close to FUDGE (Yang and Klein, 2021) where the binary predictor is an estimate of the probability that the generated token has come from the expert rather than the amateur.
**Prompting Methods for Reasoning.** There are many recent prompting methods to improve language model reasoning; see Qiao et al. (2023) for a survey. We perform our experiments with chain-of-thought prompting (Wei et al., 2023).
**Sampling methods.** Several decoding methods exist to improve the quality of generations from large language models. For open-ended generation, truncated sampling schemes like top-\(k\) sampling (Fan et al., 2018), nucleus sampling (Holtzman et al., 2020) and typical sampling (Meister et al., 2023) have been shown to reduce repetition in comparison to greedy decoding and beam search while producing more coherent generations than standard temperature sampling. However, sampling can still introduce errors into logical chains, and so greedy decoding is used to more effectively solve reasoning tasks (Wei et al., 2023; Anil et al., 2023).
**Contrastive Generation Methods.** Our formulation's objective can be interpreted as a special case of DExperts (Liu et al., 2021), using the larger model as both the expert and the base LM prior. Yona et al. (2023) identify model biases with Contrastive Input Decoding, a contrastive-decoding-style technique similar to negative prompting that operates on perturbed text inputs.
Concurrently to our work, Chuang et al. (2023) propose DoLA, which improves factuality and reasoning through contrastive decoding between the predictions of later layers and earlier layers in a language model. We study a wider array of reasoning tasks and demonstrate that a 7B amateur is too large, finding greater gains in reasoning just by scaling down the amateur to 1.5B parameters.
Our paper differentiates itself from Li et al. (2022), which initially proposed Contrastive Decoding, in several ways: by testing on standard reasoning benchmarks, by our exploration of \(\beta\) as a hyper-parameter, by ablations with various types of amateurs, and by a careful analysis of the combination of Contrastive Decoding with chain-of-thought prompting and self-consistency.
## 6 Limitations
Our investigation is limited mainly to the LLaMA family of models. While the method continues to provide benefit to larger LLaMA models, further work is required to definitively establish the effect of contrastive decoding on larger, tuned models.
## 7 Conclusion
Our study shows that contrastive decoding can improve chain-of-thought reasoning in large language models. While challenges like factual recall remain, this strengthens the case for contrastive decoding as a simple, general-purpose method to elicit more desirable behavior from large language models.
\begin{table}
\begin{tabular}{c|c c c c} Expert & Greedy & NP & CD & CD + NP \\ \hline
7B & 10.7 & 11.4 & **14.3** & 12.7 \\ \hline
13B & 17.4 & 17.5 & **22.7** & 20.7 \\ \hline
30B & 35.3 & 36.9 & **43.1** & 42.9 \\ \hline
65B & 51.0 & 52.0 & **56.8** & 54.7 \\ \end{tabular}
\end{table}
Table 10: On GSM8K, negative prompting outperforms greedy decoding but weakens CD.
\begin{table}
\begin{tabular}{c|c c} Amateur & Amateur Tokens & GSM8K \\ \hline
7B & 130B & **57.0** \\ \hline
7B & 300B & 56.8 \\ \hline
7B & 1.3T & 49.9 \\ \end{tabular}
\end{table}
Table 11: Early-training checkpoints can be good amateurs, even when late-stage checkpoints harm performance.
### Reproducibility Statement
The training process and model architecture for the 1.5B-parameter LLaMA model used as the amateur in several results is publicly available, but the weights are not, which limits public reproducibility of results relying on that model. The results on FLAN-T5, as well as the negative-prompting study and examination of 7B-LLaMA as an amateur, are all built on entirely open-source models and data.
|
2303.18041 | Isometries of wall-connected twin buildings | We introduce the notion of a wall-connected twin building and show that the
local-to-global principle holds for these twin buildings. As each twin building
satisfying Condition (co) (introduced in [7]) is wall-connected, we obtain a
strengthening of the main result of [7] that covers also the thick irreducible
affine twin buildings of rank at least 3. | Sebastian Bischof, Bernhard Mühlherr | 2023-03-31T13:20:36Z | http://arxiv.org/abs/2303.18041v1 | # Isometries of wall-connected twin buildings
###### Abstract
We introduce the notion of a wall-connected twin building and show that the local-to-global principle holds for these twin buildings. As each twin building satisfying Condition (co) (introduced in [7]) is wall-connected, we obtain a strengthening of the main result of [7] that covers also the thick irreducible affine twin buildings of rank at least \(3\).
**Keywords** Twin buildings, Local-to-global principle, affine RGD-systems
**Mathematics Subject Classification** 20E42, 51E24
## 1 Introduction
In [10] Tits gave a complete classification of all thick irreducible spherical buildings of rank at least \(3\). The decisive tool in this classification is the extension theorem for local isometries between two spherical buildings (Theorem 4.1.2 in loc. cit.). Inspired by the paper [11] on Kac-Moody groups over fields, Ronan and Tits introduced twin buildings (see [13, 88/89 and 89/90] and [12]). Twin buildings are natural generalizations of spherical buildings because there is an opposition relation on the set of its chambers. It was conjectured in [12] that the extension theorem for local isometries can be generalized to \(2\)-spherical twin buildings. This conjecture was confirmed in [7] for twin buildings that satisfy a technical condition (called Condition (co) in [7]) and it was shown that Condition (co) is "almost always" satisfied. More precisely, if the twin building in question has no rank two residues isomorphic to \(B_{2}(2),G_{2}(2),G_{2}(3)\) or \({}^{2}F_{4}(2)\), then the extension theorem holds (see Corollary 1.7 in [7]).
At first it seemed that Condition (co) was just a convenient assumption for making the ideas in [7] work, and that the extension theorem should hold without any additional hypothesis. After a while, however, there were serious doubts about this (see Paragraph 2.3 in [13, 97/98]) and the question about the validity of the local-to-global principle for all \(2\)-spherical twin buildings remained an open problem. It is particularly unsatisfying that it is not even known whether the extension theorem holds for twin buildings of affine type. It was observed by Abramenko and Mühlherr that the arguments in [7] can be modified to treat some cases in which Condition (co) does not hold. But these modifications were not good enough to prove the extension theorem for all affine twin buildings.
In this paper we introduce a condition for twin buildings that we call wall-connectedness. It is inspired by the content of [4] and its definition (given in Definition (4.8)) is somewhat technical. It turns out that each twin building satisfying Condition (co) is wall-connected but that the converse is not true. The main goal of this paper is to provide the following improvement of the main result in [7].
**Main result:** The extension theorem holds for wall-connected twin buildings.
For a precise statement of our main result we refer to Corollary (5.18). It turns out that all \(3\)-spherical twin buildings and all irreducible affine twin buildings of rank at least \(3\) are wall-connected (see Section 6). Thus, our main result yields the following:
**Corollary:** The extension theorem holds for all \(3\)-spherical twin buildings and all irreducible affine twin buildings of rank at least \(3\).
### Content of the paper
In Section 2 we fix notation and state some results about parallel residues in a building. In Section 3 we give the definition of a twin building and prove some elementary facts which we will need later. This section is also to be understood as a preliminary section. The first part of the next section is concerned with compatible paths as defined in [4]. In the second part of this section we define \(P\)-compatible paths, which generalize compatible paths to the situation of twin buildings. Later on we prove some results about them. In particular, our proof of the extension theorem relies heavily on Proposition (4.4). At the end of this section we give the definition of a wall-connected twin building. Section 5 is divided into three parts. In the first part we state the definition of an isometry and some basic results about isometries. A crucial lemma is Lemma (5.5). We will use this lemma in combination with Proposition (4.4) to prove the extension theorem for wall-connected twin buildings. The main step is Proposition (5.13).
The rest of the paper is concerned essentially with the fact that affine twin buildings of rank at least \(3\) are wall-connected.
### Acknowledgement
We thank Richard Weiss for communicating us the proof of Proposition (6.11).
## 2 Preliminaries
### Coxeter system
A _Coxeter system_ is a pair \((W,S)\) consisting of a group \(W\) and a set \(S\subseteq W\) of generators of \(W\) of order \(2\) such that the set \(S\) and the relations \((st)^{m_{st}}\) for all \(s,t\in S\) constitute a presentation of \(W\), where \(m_{st}\) denotes the order of \(st\) in \(W\) for all \(s,t\in S\).
Let \((W,S)\) be a Coxeter system and let \(\ell:W\to\mathbb{N}_{0},w\mapsto\min\{k\in\mathbb{N}_{0}\ |\ \exists s_{1},\ldots,s_{k}\in S:w=s_{1} \cdots s_{k}\}\) denote the corresponding length function. The _Coxeter diagram_ corresponding to \((W,S)\) is the labeled graph \((S,E(S))\), where \(E(S)=\{\{s,t\}\ |\ m_{st}>2\}\) and where each edge \(\{s,t\}\) is labeled by \(m_{st}\) for all \(s,t\in S\). We call the Coxeter diagram _irreducible_, if the underlying graph is connected, and we call it \(2\)_-spherical_, if \(m_{st}<\infty\) for all \(s,t\in S\). The _rank_ of a Coxeter diagram is the cardinality of the set of its vertices. It is well-known that for each \(J\subseteq S\) the pair \((\langle J\rangle,J)\) is a Coxeter system (cf. [3, Ch. IV, §1, Théorème 2]). We call \(J\subseteq S\)_spherical_ if \(\langle J\rangle\) is finite. Given a spherical subset \(J\) of \(S\), there exists a unique element of maximal length in \(\langle J\rangle\), which we denote by \(r_{J}\) (cf. [1, Corollary 2.19]); moreover, \(r_{J}\) is an involution.
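To illustrate these notions with the smallest irreducible \(2\)-spherical example: for the Coxeter system of type \(A_{2}\) we have \(S=\{s,t\}\) with \(m_{st}=3\), so that \(W=\{1_{W},s,t,st,ts,sts=tst\}\); every subset \(J\subseteq S\) is spherical, and for \(J=S\) the longest element is \(r_{S}=sts=tst\) with \(\ell(r_{S})=3\).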
**(2.1) Convention**.: From now on we let \((W,S)\) be a Coxeter system of finite rank.
**(2.2) Lemma**.: _Let \((W,S)\) be a Coxeter system and let \(w\in W,s,t\in S\) be such that \(\ell(sw)=\ell(w)-1=\ell(wt)\). Then either \(\ell(swt)=\ell(w)-2\) or \(swt=w\)._
Proof.: We put \(w^{\prime}:=sw\). Then \(\ell(sw^{\prime})=\ell(w)=\ell(w^{\prime})+1\). We assume that \(\ell(swt)\neq\ell(w)-2\). Then \(\ell(swt)=\ell(w)\) and hence \(\ell(w^{\prime}t)=\ell(swt)=\ell(w)=\ell(w^{\prime})+1\). Using Condition (**F**) of [1] on page 79 we obtain either \(\ell(sw^{\prime}t)=\ell(w^{\prime})+2\) or \(sw^{\prime}t=w^{\prime}\). Since \(\ell(sw^{\prime}t)=\ell(wt)=\ell(w)-1=\ell(w^{\prime})\) we have \(wt=sw^{\prime}t=w^{\prime}=sw\) and the claim follows.
### Buildings
A _building of type_\((W,S)\) is a pair \(\Delta=(\mathcal{C},\delta)\) where \(\mathcal{C}\) is a non-empty set and where \(\delta:\mathcal{C}\times\mathcal{C}\to W\) is a _distance function_ satisfying the following axioms, where \(x,y\in\mathcal{C}\) and \(w=\delta(x,y)\):
1. \(w=1_{W}\) if and only if \(x=y\);
2. if \(z\in\mathcal{C}\) satisfies \(s:=\delta(y,z)\in S\), then \(\delta(x,z)\in\{w,ws\}\), and if, furthermore, \(\ell(ws)=\ell(w)+1\), then \(\delta(x,z)=ws\);
3. if \(s\in S\), there exists \(z\in\mathcal{C}\) such that \(\delta(y,z)=s\) and \(\delta(x,z)=ws\).
The _rank_ of \(\Delta\) is the rank of the underlying Coxeter system. The elements of \(\mathcal{C}\) are called _chambers_. Given \(s\in S\) and \(x,y\in\mathcal{C}\), then \(x\) is called _\(s\)-adjacent_ to \(y\), if \(\delta(x,y)\in\langle s\rangle\). The chambers \(x,y\) are called _adjacent_, if they are \(s\)-adjacent for some \(s\in S\). A _gallery_ joining \(x\) and \(y\) is a sequence \((x=x_{0},\ldots,x_{k}=y)\) such that \(x_{l-1}\) and \(x_{l}\) are adjacent for any \(1\leq l\leq k\); the number \(k\) is called the _length_ of the gallery. For any two chambers \(x\) and \(y\) we put \(\ell_{\Delta}(x,y):=\ell(\delta(x,y))\).
Given a subset \(J\subseteq S\) and \(x\in\mathcal{C}\), the _\(J\)-residue_ of \(x\) is the set \(R_{J}(x):=\{y\in\mathcal{C}\mid\delta(x,y)\in\langle J\rangle\}\). Each \(J\)-residue is a building of type \((\langle J\rangle,J)\) with the distance function induced by \(\delta\) (cf. [1, Corollary 5.30]). A _residue_ is a subset \(R\) of \(\mathcal{C}\) such that there exist \(J\subseteq S\) and \(x\in\mathcal{C}\) with \(R=R_{J}(x)\). It is a basic fact that the subset \(J\) is uniquely determined by the set \(R\); it is called the _type_ of \(R\) and the _rank_ of \(R\) is defined to be the cardinality of \(J\). A residue is called _spherical_ if its type is a spherical subset of \(S\). Let \(R\) be a spherical residue of type \(J\) and let \(x,y\in R\). Then \(x,y\) are called _opposite in \(R\)_ if \(\delta(x,y)=r_{J}\). If \(R=\mathcal{C}\), we say for short that \(x,y\) are _opposite_. If \((W,S)\) is spherical we call two residues \(R_{1}\) of type \(J_{1}\) and \(R_{2}\) of type \(J_{2}\)_opposite_ if \(R_{1}\) contains a chamber opposite to a chamber of \(R_{2}\) and if \(J_{1}=r_{S}J_{2}r_{S}\).
A _panel_ is a residue of rank \(1\). An _\(s\)-panel_ is a panel of type \(\{s\}\) for some \(s\in S\). The building \(\Delta\) is called _thick_, if each panel of \(\Delta\) contains at least three chambers.
Given \(x\in\mathcal{C}\) and \(k\in\mathbb{N}_{0}\) then \(E_{k}(x)\) denotes the union of all residues of rank at most \(k\) containing \(x\). It is a fact, that the set \(E_{k}(x)\) determines the chamber \(x\) uniquely, if \(k<|S|\).
For \(x\in\mathcal{C}\) and any \(J\)-residue \(R\subseteq\mathcal{C}\) there exists a unique chamber \(z\in R\) such that \(\ell_{\Delta}(x,y)=\ell_{\Delta}(x,z)+\ell_{\Delta}(z,y)\) holds for any \(y\in R\) (cf. [1, Proposition 5.34]). The chamber \(z\) is called the _projection of \(x\) onto \(R\)_ and is denoted by \(\operatorname{proj}_{R}x\). Moreover, if \(z=\operatorname{proj}_{R}x\) we have \(\delta(x,y)=\delta(x,z)\delta(z,y)\) for each \(y\in R\).
Let \(R,Q\) be two residues. Then we define the mapping \(\operatorname{proj}_{Q}^{R}:R\to Q,x\mapsto\operatorname{proj}_{Q}x\) and we put \(\operatorname{proj}_{Q}R:=\{\operatorname{proj}_{Q}r\mid r\in R\}\). The residues \(R,Q\) are called _parallel_ if \(\operatorname{proj}_{R}Q=R\) and \(\operatorname{proj}_{Q}R=Q\).
**(2.3) Lemma**.: _Two residues \(R,Q\) are parallel if and only if \(\operatorname{proj}_{Q}^{R}\) and \(\operatorname{proj}_{R}^{Q}\) are bijections inverse to each other._
Proof.: One implication is obvious; the other is [6, Proposition 21.10].
**(2.4) Lemma**.: _Let \(P_{1}\) and \(P_{2}\) be two parallel panels of type \(s_{1}\) and \(s_{2}\), respectively. Then \(s_{2}=w^{-1}s_{1}w\), where \(w:=\delta(x,\operatorname{proj}_{P_{2}}x)\) does not depend on the choice of \(x\) in \(P_{1}\)._
_Conversely, if \(x\) and \(y\) are chambers with \(\delta(x,y)=w\), where \(w\) satisfies \(s_{2}=w^{-1}s_{1}w\) and \(\ell(s_{1}w)=\ell(w)+1\), then the \(s_{1}\)-panel on \(x\) is parallel to the \(s_{2}\)-panel on \(y\)._
Proof.: This is [4, Lemma 14].
Let \(P_{1}\) and \(P_{2}\) be two parallel panels. Then, by the previous lemma, \(\delta(x,\operatorname{proj}_{P_{2}}x)\) does not depend on the choice of \(x\in P_{1}\). Thus we define \(\delta(P_{1},P_{2}):=\delta(x,\operatorname{proj}_{P_{2}}x)\), where \(x\) is a chamber in \(P_{1}\).
**(2.5) Lemma**.: _Let \(R\) be a spherical \(J\)-residue and let \(R_{1},R_{2}\) be two residues in \(R\), which are opposite in \(R\). Then \(R_{1}\) and \(R_{2}\) are parallel._
Proof.: This is a consequence of [6, Proposition 21.24].
**(2.6) Lemma**.: _Let \(R\) be a rank \(2\) residue and let \(P,Q\) be two parallel panels contained in \(R\). Then either \(P=Q\) or \(R\) is spherical and \(P\) and \(Q\) are opposite in \(R\). In particular, if \(P\neq Q\) and \(J\) is the type of \(R\), we have \(\ell(\delta(P,Q))=\ell(r_{J})-1\)._
Proof.: This is [4, Lemma 18].
A subset \(\Sigma\subseteq\mathcal{C}\) is called _convex_ if \(\operatorname{proj}_{P}c\in\Sigma\) for every \(c\in\Sigma\) and every panel \(P\subseteq\mathcal{C}\) which meets \(\Sigma\). A subset \(\Sigma\subseteq\mathcal{C}\) is called _thin_ if \(P\cap\Sigma\) contains exactly two chambers for every panel \(P\subseteq\mathcal{C}\) which meets \(\Sigma\). An _apartment_ is a non-empty subset \(\Sigma\subseteq\mathcal{C}\), which is convex and thin.
## 3 Twin buildings
Let \(\Delta_{+}=(\mathcal{C}_{+},\delta_{+}),\Delta_{-}=(\mathcal{C}_{-},\delta_{-})\) be two buildings of the same type \((W,S)\). A _codistance_ (or a _twinning_) between \(\Delta_{+}\) and \(\Delta_{-}\) is a mapping \(\delta_{*}:(\mathcal{C}_{+}\times\mathcal{C}_{-})\cup(\mathcal{C}_{-}\times \mathcal{C}_{+})\to W\) satisfying the following axioms, where \(\varepsilon\in\{+,-\},x\in\mathcal{C}_{\varepsilon},y\in\mathcal{C}_{-\varepsilon}\) and \(w=\delta_{*}(x,y)\):
1. \(\delta_{*}(y,x)=w^{-1}\);
2. if \(z\in\mathcal{C}_{-\varepsilon}\) is such that \(s:=\delta_{-\varepsilon}(y,z)\in S\) and \(\ell(ws)=\ell(w)-1\), then \(\delta_{*}(x,z)=ws\);
3. if \(s\in S\), there exists \(z\in\mathcal{C}_{-\varepsilon}\) such that \(\delta_{-\varepsilon}(y,z)=s\) and \(\delta_{*}(x,z)=ws\).
A _twin building of type_\((W,S)\) is a triple \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) where \(\Delta_{+}=(\mathcal{C}_{+},\delta_{+}),\Delta_{-}=(\mathcal{C}_{-},\delta_{ -})\) are buildings of type \((W,S)\) and where \(\delta_{*}\) is a twinning between \(\Delta_{+}\) and \(\Delta_{-}\).
**(3.1) Lemma**.: _Given \(\varepsilon\in\{+,-\},x\in\mathcal{C}_{\varepsilon},y\in\mathcal{C}_{-\varepsilon}\) and let \(w=\delta_{*}(x,y)\). Then for any \(y^{\prime}\in\mathcal{C}_{-\varepsilon}\) with \(\delta_{-\varepsilon}(y,y^{\prime})=s\in S\) we have \(\delta_{*}(x,y^{\prime})\in\{w,ws\}\)._
Proof.: This follows similar to [1, Lemma 5.139].
We put \(\mathcal{C}:=\mathcal{C}_{+}\cup\mathcal{C}_{-}\) and define the distance function \(\delta:\mathcal{C}\times\mathcal{C}\to W\) by setting \(\delta(x,y):=\delta_{+}(x,y)\) (resp. \(\delta_{-}(x,y),\delta_{*}(x,y)\)) if \(x,y\in\mathcal{C}_{+}\) (resp. \(x,y\in\mathcal{C}_{-},(x,y)\in\mathcal{C}_{\varepsilon}\times\mathcal{C}_{-\varepsilon}\) for some \(\varepsilon\in\{+,-\}\)).
Given \(x,y\in\mathcal{C}\) then we put \(\ell(x,y):=\ell(\delta(x,y))\). If \(\varepsilon\in\{+,-\}\) and \(x,y\in\mathcal{C}_{\varepsilon}\), then we put \(\ell_{\varepsilon}(x,y):=\ell(\delta_{\varepsilon}(x,y))\) and for \((x,y)\in\mathcal{C}_{\varepsilon}\times\mathcal{C}_{-\varepsilon}\) we put \(\ell_{*}(x,y):=\ell(\delta_{*}(x,y))\).
Let \(\varepsilon\in\{+,-\}\). For \(x\in\mathcal{C}_{\varepsilon}\) we put \(x^{\operatorname{op}}:=\{y\in\mathcal{C}_{-\varepsilon}\mid\delta_{*}(x,y)=1_ {W}\}\). It is a direct consequence of (Tw1) that \(y\in x^{\operatorname{op}}\) if and only if \(x\in y^{\operatorname{op}}\) for any pair \((x,y)\in\mathcal{C}_{\varepsilon}\times\mathcal{C}_{-\varepsilon}\). If \(y\in x^{\operatorname{op}}\) then we say that \(y\) is _opposite_ to \(x\) or that \((x,y)\) _is a pair of opposite chambers._
A _residue_ (resp. _panel_) of \(\Delta\) is a residue (resp. panel) of \(\Delta_{+}\) or \(\Delta_{-}\); given a residue \(R\subseteq\mathcal{C}\) then we define its type and rank as before. Two residues \(R,T\subseteq\mathcal{C}\) in different halves are called _opposite_ if they have the same type and if there exists a pair of opposite chambers \((x,y)\) such that \(x\in R,y\in T\). The twin building \(\Delta\) is called _thick_ if \(\Delta_{+}\) and \(\Delta_{-}\) are thick.
Let \(\varepsilon\in\{+,-\}\), let \(J\) be a spherical subset of \(S\) and let \(R\) be a \(J\)-residue of \(\Delta_{\varepsilon}\). Given a chamber \(x\in\mathcal{C}_{-\varepsilon}\) then there exists a unique chamber \(z\in R\) such that \(\ell_{*}(x,y)=\ell_{*}(x,z)-\ell_{\varepsilon}(z,y)\) holds for any chamber \(y\in R\) (cf. [1, Lemma 5.149]). The chamber \(z\) is called the _projection of \(x\) onto \(R\)_ and is denoted by \(\operatorname{proj}_{R}x\). Moreover, if \(z=\operatorname{proj}_{R}x\) we have \(\delta_{*}(x,y)=\delta_{*}(x,z)\delta_{\varepsilon}(z,y)\) for each \(y\in R\).
**(3.2) Lemma**.: _Let \(R_{1}\subseteq R_{2}\) be two spherical residues of \(\Delta\) and let \(x\in\mathcal{C}\). Then \(\operatorname{proj}_{R_{1}}x=\operatorname{proj}_{R_{1}}\operatorname{proj}_ {R_{2}}x\)._
Proof.: Let \(r\in R_{1}\). Following [5, Proposition 2] we compute the following, where we have \({}^{\prime}+^{\prime}\) if \(x,R_{1},R_{2}\) are in the same half, and \({}^{\prime}-^{\prime}\) if \(x\) and \(R_{1},R_{2}\) are in different halves:
\[\ell(x,r) =\ell(x,\operatorname{proj}_{R_{2}}x)\pm\ell(\operatorname{proj}_ {R_{2}}x,r)\] \[=\ell(x,\operatorname{proj}_{R_{2}}x)\pm\left(\ell(\operatorname{ proj}_{R_{2}}x,\operatorname{proj}_{R_{1}}\operatorname{proj}_{R_{2}}x)+\ell( \operatorname{proj}_{R_{1}}\operatorname{proj}_{R_{2}}x,r)\right)\] \[=\ell(x,\operatorname{proj}_{R_{1}}\operatorname{proj}_{R_{2}}x) \pm\ell(\operatorname{proj}_{R_{1}}\operatorname{proj}_{R_{2}}x,r)\]
Since this holds for any \(r\in R_{1}\), the uniqueness of \(\operatorname{proj}_{R_{1}}x\) yields the claim.
**(3.3) Lemma**.: _Let \(\varepsilon\in\{+,-\}\) and let \(R\subseteq\mathcal{C}_{\varepsilon}\) and \(T\subseteq\mathcal{C}_{-\varepsilon}\) be two opposite residues of spherical type \(J\subseteq S\). Then for any pair \((x,y)\in R\times T\) the following are equivalent:_
1. \(\operatorname{proj}_{T}x=y\)_;_
2. \(\delta_{*}(x,y)=r_{J}\)_;_
3. \(\operatorname{proj}_{R}y=x\)_._
Proof.: This is [2, Lemma 3.4].
Let \(\varepsilon\in\{+,-\}\) and let \(R\subseteq\mathcal{C}_{\varepsilon},T\subseteq\mathcal{C}_{-\varepsilon}\) be spherical residues. Then we define the mapping \(\operatorname{proj}_{T}^{R}:R\to T,x\mapsto\operatorname{proj}_{T}x\) and we put \(\operatorname{proj}_{T}R:=\{\operatorname{proj}_{T}r\mid r\in R\}\). The residues \(R\) and \(T\) are called _parallel_ if \(\operatorname{proj}_{R}T=R\) and \(\operatorname{proj}_{T}R=T\).
**(3.4) Lemma**.: _Let \(\varepsilon\in\{+,-\},R\subseteq\mathcal{C}_{\varepsilon},T\subseteq\mathcal{ C}_{-\varepsilon}\) be two spherical residues._
1. \(\operatorname{proj}_{T}R\) _is a spherical residue in_ \(T\)_._
2. \(R\) _and_ \(T\) _are parallel if and only if_ \(\operatorname{proj}_{T}^{R}\) _and_ \(\operatorname{proj}_{R}^{T}\) _are bijections inverse to each other._
3. \(\operatorname{proj}_{R}T\) _and_ \(\operatorname{proj}_{T}R\) _are parallel._
4. _If_ \(R\) _and_ \(T\) _are opposite then they are parallel._
Proof.: Assertion \((a)\) is [1, Exercise 5.168]. Let \(x\in\operatorname{proj}_{R}T\). Then there exists \(y\in T\) such that \(x=\operatorname{proj}_{R}y\). Note that \(\ell_{*}(y,x)=\ell_{*}(\operatorname{proj}_{T}x,x)-\ell_{-\varepsilon}(y, \operatorname{proj}_{T}x)\). Since \(\ell_{*}(c,d)\geq\ell_{*}(c,e)-\ell_{-\varepsilon}(e,d)\) for any \(c\in\mathcal{C}_{\varepsilon}\) and \(d,e\in\mathcal{C}_{-\varepsilon}\) the following hold:
\[\ell_{*}(y,x)-\ell_{\varepsilon}(x,\operatorname{proj}_{R} \operatorname{proj}_{T}x) =\ell_{*}(y,\operatorname{proj}_{R}\operatorname{proj}_{T}x)\] \[\geq\ell_{*}(\operatorname{proj}_{T}x,\operatorname{proj}_{R} \operatorname{proj}_{T}x)-\ell_{-\varepsilon}(y,\operatorname{proj}_{T}x)\] \[\geq\ell_{*}(\operatorname{proj}_{T}x,x)-\ell_{-\varepsilon}(y, \operatorname{proj}_{T}x)\] \[=\ell_{*}(y,x).\]
This implies \(\operatorname{proj}_{R}\operatorname{proj}_{T}x=x\) and the restriction of the projection mappings are bijections inverse to each other.
One implication in Assertion \((b)\) is obvious. For the other we note that \(\operatorname{proj}_{R}T=R\) and \(\operatorname{proj}_{T}R=T\) and we proved that the restriction of the projection mappings are bijections inverse to each other. Assertion \((c)\) is now a consequence of Assertion \((b)\) and Assertion \((d)\) is [9, Proposition (4.3)].
**(3.5) Lemma**.: _Let \(\varepsilon\in\{+,-\},P\subseteq\mathcal{C}_{\varepsilon},Q\subseteq\mathcal{ C}_{-\varepsilon}\) be two panels. Then \(P,Q\) are parallel if and only if \(|\operatorname{proj}_{P}Q|\geq 2\)._
Proof.: We follow the ideas of [4, Lemma 13]. If \(P,Q\) are parallel, the claim follows. Therefore let \(|\operatorname{proj}_{P}Q|\geq 2\). Since \(\operatorname{proj}_{P}Q\) is a residue contained in \(P\) by Lemma (3.4)\((a)\), we have \(\operatorname{proj}_{P}Q=P\). By Lemma (3.4)\((c)\) the residues \(\operatorname{proj}_{Q}P\) and \(\operatorname{proj}_{P}Q=P\) are parallel. Thus we have \(|\operatorname{proj}_{Q}P|=|P|\geq 2\). Using the same argument we obtain \(\operatorname{proj}_{Q}P=Q\) and the panels \(P\) and \(Q\) are parallel.
**(3.6) Lemma**.: _Let \(\varepsilon\in\{+,-\}\), let \(P\subseteq\mathcal{C}_{\varepsilon},Q\subseteq\mathcal{C}_{-\varepsilon}\) be two parallel panels and let \(R\) be a spherical residue containing \(Q\). Then \(\operatorname{proj}_{R}P\) is a panel parallel to both \(P\) and \(Q\)._
Proof.: For a proof see [4, Lemma 17]. We note that the facts which are used in [4] for buildings are proved in this paper for twin buildings.
Let \(\Sigma_{+}\subseteq\mathcal{C}_{+}\) and \(\Sigma_{-}\subseteq\mathcal{C}_{-}\) be apartments of \(\Delta_{+}\) and \(\Delta_{-}\), respectively. Then the set \(\Sigma:=\Sigma_{+}\cup\Sigma_{-}\) is called _twin apartment_ if \(|x^{\operatorname{op}}\cap\Sigma|=1\) for each \(x\in\Sigma\). If \((x,y)\) is a pair of opposite chambers, then there exists a unique twin apartment \(A(x,y)=\{z\in\mathcal{C}\ |\ \delta(x,z)=\delta(y,z)\}=\{z\in\mathcal{C}\ |\ \ell(z,x)=\ell(z,y)\}\) containing \(x\) and \(y\) (cf. [1, Exercise 5.187, Proposition 5.179(1)]). For \(\varepsilon\in\{+,-\}\) we put \(A_{\varepsilon}(x,y):=A(x,y)\cap\mathcal{C}_{\varepsilon}\). Furthermore, for any two chambers there exists a twin apartment containing them (cf. [1, Proposition 5.179(3)]).
**(3.7) Lemma**.: _Let \((x,y)\) be a pair of opposite chambers and let \(P\subseteq\mathcal{C}\) be a panel which meets \(A(x,y)\). Then \(A(x,y)\cap P=\{\operatorname{proj}_{P}x,\operatorname{proj}_{P}y\}\) and \(\operatorname{proj}_{P}x\neq\operatorname{proj}_{P}y\)._
Proof.: We have \(\operatorname{proj}_{P}x,\operatorname{proj}_{P}y\in A(x,y)\cap P\) (cf. [1, Lemma 5.173 (6)]). Since \(A_{\varepsilon}(x,y)\) is thin for each \(\varepsilon\in\{+,-\}\), we obtain \(|A(x,y)\cap P|=2\). We assume that \(\operatorname{proj}_{P}x=\operatorname{proj}_{P}y\). Then there exists a chamber \(\operatorname{proj}_{P}x\neq z\in A(x,y)\cap P\). We can assume that \(y\) is in the same half of the twin building as the panel \(P\). This implies
\[\ell(y,z)=\ell(x,z)=\ell(x,\operatorname{proj}_{P}x)-\ell(\operatorname{proj} _{P}x,z)=\ell(y,\operatorname{proj}_{P}y)-1=\ell(y,z)-2.\]
This yields a contradiction and the claim follows.
**(3.8) Lemma**.: _Let \(\varepsilon\in\{+,-\},s\in S\) and let \(\Delta\) be a thick twin building. Then for all \(x,y\in\mathcal{C}_{-\varepsilon}\) the following properties are equivalent:_
1. \(\delta_{-\varepsilon}(x,y)\in\langle s\rangle\)_;_
2. \(\forall z\in x^{\operatorname{op}}\exists z^{\prime}\in y^{\operatorname{op}}: \delta_{\varepsilon}(z,z^{\prime})=s\)_._
Proof.: If \(x=y\) the claim follows because of the thickness of the twin building. Let \(\delta_{-\varepsilon}(x,y)=s\) and \(z\in x^{\operatorname{op}}\). Then \(\delta_{*}(z,y)\in\{1_{W},s\}\) by Lemma (3.1). If \(\delta_{*}(z,y)=1_{W}\), the claim follows because of the thickness again. Let \(\delta_{*}(z,y)=s\). Then \(\delta_{*}(y,z)=s\) and we obtain \(z^{\prime}\in\mathcal{C}_{\varepsilon}\) such that \(\delta_{\varepsilon}(z,z^{\prime})=s\) and \(\delta_{*}(y,z^{\prime})=1_{W}\) by (Tw3). Therefore we have \((ii)\).
Now we assume \((ii)\). There exists a twin apartment \(\Sigma\) containing \(x\) and \(y\). Let \(w=\delta_{-\varepsilon}(y,x)\) and let \(z\in\Sigma\) be the unique chamber which is opposite to \(x\). This implies \(\Sigma=A(x,z)\) and we obtain \(\delta_{*}(y,z)=\delta_{-\varepsilon}(y,x)=w\). By condition \((ii)\) there exists a chamber \(z^{\prime}\in y^{\operatorname{op}}\) such that \(\delta_{\varepsilon}(z,z^{\prime})=s\). Then \(1_{W}=\delta_{*}(y,z^{\prime})\in\{w,ws\}\) by Lemma (3.1). The claim follows.
## 4 Wall-connected twin buildings
### Compatible paths
Let \(\Delta=(\mathcal{C},\delta)\) be a thick building of type \((W,S)\) and let \(\Gamma\) be the graph whose vertices are the panels of \(\Delta\) and in which two panels form an edge if and only if there exists a rank \(2\) residue in which the two panels are opposite. For two adjacent panels \(P,Q\), there exists a unique rank \(2\) residue containing \(P\) and \(Q\), that will be denoted by \(R(P,Q)\). A path \(\gamma=(P_{0},\ldots,P_{k})\) in \(\Gamma\) is called _compatible_ if \(\operatorname{proj}_{R(P_{i-1},P_{i})}P_{0}=P_{i-1}\) for all \(1\leq i\leq k\). The number \(k\) is the _length_ of that path \(\gamma\). The sequence \((J_{1},\ldots,J_{k})\) where \(J_{i}\) is the type of \(R(P_{i-1},P_{i})\) will be called the _type_ of \(\gamma\).
We obtain \(\operatorname{proj}_{R(P_{i-1},P_{i})}x=\operatorname{proj}_{P_{i-1}}x\) for all \(x\in P_{0}\) and \(1\leq i\leq k\), since \(\operatorname{proj}_{R(P_{i-1},P_{i})}x\in P_{i-1}\). Furthermore, we obtain \(\operatorname{proj}_{P_{i}}x=\operatorname{proj}_{P_{i}}\operatorname{proj}_ {P_{i-1}}x\) for any \(1\leq i\leq k\) by Lemma (3.2).
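We record a small illustrative observation (it is not needed in the sequel): for \(i=1\) the defining condition is automatic, since \(P_{0}\subseteq R(P_{0},P_{1})\) implies \(\operatorname{proj}_{R(P_{0},P_{1})}P_{0}=P_{0}\). Hence every edge \((P_{0},P_{1})\) of \(\Gamma\) is a compatible path of length \(1\), and the defining condition becomes a genuine restriction only for paths of length at least \(2\).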
**(4.1) Lemma**.: _Two panels are parallel if and only if there exists a compatible path from one to the other._
Proof.: This is [4, Lemma 19].
**(4.2) Proposition**.: _Let \((P_{0},\ldots,P_{k})\) be a compatible path. Then the following hold:_
1. \(\operatorname{proj}_{P_{k}}^{P_{0}}=\operatorname{proj}_{P_{k}}^{P_{i}} \circ\operatorname{proj}_{P_{i}}^{P_{0}}\) _for any_ \(0\leq i\leq k\)_._
2. \(\delta(P_{0},P_{k})=\delta(P_{0},P_{i})\delta(P_{i},P_{k})\) _for any_ \(0\leq i\leq k\)_;_
3. \(\ell(\delta(P_{0},P_{k}))=\ell(\delta(P_{0},P_{i}))+\ell(\delta(P_{i},P_{k}))\) _for any_ \(0\leq i\leq k\)_._
4. \((P_{k},\ldots,P_{0})\) _is a compatible path._
Proof.: At first we prove Assertion \((c)\) by induction on \(k\). For \(k=0\) there is nothing to show. Thus we let \(k>0\) and \(x\in P_{0}\). Then \(\ell_{\Delta}(x,\operatorname{proj}_{P_{k}}x)=\ell_{\Delta}(x,\operatorname{proj}_{R(P_{k-1},P_{k})}x)+\ell_{\Delta}(\operatorname{proj}_{R(P_{k-1},P_{k})}x,\operatorname{proj}_{P_{k}}x)\). Moreover, we have \(\operatorname{proj}_{R(P_{k-1},P_{k})}x=\operatorname{proj}_{P_{k-1}}x\) and \(\operatorname{proj}_{P_{k}}x=\operatorname{proj}_{P_{k}}\operatorname{proj}_{P_{k-1}}x\). This implies
\[\ell_{\Delta}(x,\operatorname{proj}_{P_{k}}x)=\ell_{\Delta}(x,\operatorname{ proj}_{P_{k-1}}x)+\ell(\delta(P_{k-1},P_{k}))=\ell(\delta(P_{0},P_{k-1}))+\ell( \delta(P_{k-1},P_{k}))\]
Using induction, we have \(\ell(\delta(P_{0},P_{k-1}))=\ell(\delta(P_{0},P_{i}))+\ell(\delta(P_{i},P_{k-1}))\) for any \(1\leq i\leq k-1\). We deduce that \(\ell(\delta(P_{0},P_{k}))\geq\ell(\delta(P_{0},P_{i}))+\ell(\delta(P_{i},P_{k}))\). In particular, we obtain
\[\ell_{\Delta}(x,\operatorname{proj}_{P_{k}}x)\geq\ell_{\Delta}(x,\operatorname {proj}_{P_{i}}x)+\ell_{\Delta}(\operatorname{proj}_{P_{i}}x,\operatorname{ proj}_{P_{k}}\operatorname{proj}_{P_{i}}x)\geq\ell_{\Delta}(x,\operatorname{ proj}_{P_{k}}x)\]
This finishes the proof of Assertion \((c)\). Now Assertion \((b)\) is a direct consequence of Assertion \((c)\) and Assertion \((a)\) follows from Assertion \((c)\) and the uniqueness of the projection chamber. Note that \((a)\) also implies that \(P_{i}\) and \(P_{k}\) are parallel. For Assertion \((d)\) we use Lemma (2.3) and the equation of the projection mappings of Assertion \((a)\) and compute the following for each \(0\leq i\leq j\leq k\):
\[\operatorname{proj}_{P_{i}}^{P_{k}}=\operatorname{proj}_{P_{i}}^{P_{0}}\circ \operatorname{proj}_{P_{0}}^{P_{k}}=\left(\operatorname{proj}_{P_{i}}^{P_{j}} \circ\operatorname{proj}_{P_{j}}^{P_{0}}\right)\circ\operatorname{proj}_{P_{0} }^{P_{k}}=\operatorname{proj}_{P_{i}}^{P_{j}}\circ\operatorname{proj}_{P_{j}}^ {P_{k}}\,.\]
We have to show that \(\operatorname{proj}_{R(P_{i-1},P_{i})}P_{k}=P_{i}\). For that we show \(\operatorname{proj}_{R(P_{i-1},P_{i})}x=\operatorname{proj}_{P_{i}}x\) for each \(x\in P_{k}\). Let \(x\in P_{k}\) and let \(r_{i}:=\operatorname{proj}_{R(P_{i-1},P_{i})}x,p_{i}:=\operatorname{proj}_{P_{ i}}x\) and \(p_{i-1}:=\operatorname{proj}_{P_{i-1}}x\).
Using Assertion \((c)\) and the fact that \(\operatorname{proj}_{P_{i-1}}p_{i}=p_{i-1}\) we have
\[\ell_{\Delta}(x,p_{i-1}) =\ell(\delta(P_{i-1},P_{k}))\] \[=\ell(\delta(P_{0},P_{k}))-\ell(\delta(P_{0},P_{i-1}))\] \[=\ell(\delta(P_{0},P_{i}))+\ell(\delta(P_{i},P_{k}))-\ell(\delta (P_{0},P_{i-1}))\] \[=\ell(\delta(P_{k},P_{i}))+\ell(\delta(P_{i},P_{i-1}))\] \[=\ell_{\Delta}(x,p_{i})+\ell_{\Delta}(p_{i},\operatorname{proj}_ {P_{i-1}}p_{i})\] \[=\ell_{\Delta}(x,r_{i})+\ell_{\Delta}(r_{i},p_{i})+\ell_{\Delta} (p_{i},p_{i-1}).\]
Since \(\ell_{\Delta}(r_{i},p_{i-1})\leq\ell(r_{J_{i}})-1=\ell(\delta(P_{i-1},P_{i}))= \ell(\delta(P_{i},P_{i-1}))=\ell_{\Delta}(p_{i},p_{i-1})\), where \(J_{i}\) is the type of the residue \(R(P_{i-1},P_{i})\), we obtain
\[\ell_{\Delta}(x,r_{i})+\ell_{\Delta}(r_{i},p_{i})+\ell_{\Delta}( p_{i},p_{i-1}) =\ell_{\Delta}(x,p_{i-1})\] \[=\ell_{\Delta}(x,r_{i})+\ell_{\Delta}(r_{i},p_{i-1})\] \[\leq\ell_{\Delta}(x,r_{i})+\ell_{\Delta}(p_{i},p_{i-1}).\]
This yields \(r_{i}=p_{i}\). Since \(P_{i},P_{k}\) are parallel, we obtain \(\operatorname{proj}_{R(P_{i-1},P_{i})}P_{k}=\{\operatorname{proj}_{R(P_{i-1}, P_{i})}x\mid x\in P_{k}\}=P_{i}\) and the claim follows.
**(4.3) Lemma**.: _Let \(s\in S\) and let \(w\in W\) be such that \(w^{-1}sw\in S\) and \(\ell(sw)=\ell(w)+1\). Let \(P,P^{\prime}\) be \(s\)-panels and \(Q,Q^{\prime}\) be \(w^{-1}sw\)-panels such that \(\delta(P,Q)=w=\delta(P^{\prime},Q^{\prime})\). If \((J_{1},\ldots,J_{k})\) is the type of a compatible path from \(P\) to \(Q\), then there exists a compatible path from \(P^{\prime}\) to \(Q^{\prime}\) of the same length and type._
Proof.: This is [4, Lemma 26].
### \(P\)-compatible paths
Let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) be a thick twin building of type \((W,S)\), let \(\varepsilon\in\{+,-\}\) and let \(P\subseteq\mathcal{C}_{-\varepsilon},P_{0},\ldots,P_{k}\subseteq\mathcal{C}_{\varepsilon}\) be panels such that \((P_{0},\ldots,P_{k})\) is a compatible path. Then we call this path \(P\)_-compatible_ if \(P_{0}\) is opposite to \(P\), and if \(\operatorname{proj}_{R(P_{i-1},P_{i})}P=P_{i}\) for all \(1\leq i\leq k\). We obtain \(\operatorname{proj}_{R(P_{i-1},P_{i})}x=\operatorname{proj}_{P_{i}}x\) for all \(x\in P\) and \(1\leq i\leq k\), since \(\operatorname{proj}_{R(P_{i-1},P_{i})}x\in P_{i}\). Furthermore, we obtain \(\operatorname{proj}_{P_{i-1}}x=\operatorname{proj}_{P_{i-1}}\operatorname{proj} _{P_{i}}x\) for any \(1\leq i\leq k\) by Lemma (3.2).
**(4.4) Proposition**.: _Let \(\varepsilon\in\{+,-\}\) and \(P\subseteq\mathcal{C}_{-\varepsilon},P_{0},\ldots,P_{k}\subseteq\mathcal{C}_{\varepsilon}\) be panels such that \((P_{0},\ldots,P_{k})\) is a \(P\)-compatible path. Then the following hold for all \(0\leq i\leq k\):_
1. \(\operatorname{proj}_{P_{0}}^{P}=\operatorname{proj}_{P_{0}}^{P_{i}}\circ\operatorname{proj}_{P_{i}}^{P}\)_._
2. \(\operatorname{proj}_{P}^{P_{0}}=\operatorname{proj}_{P}^{P_{i}}\circ\operatorname{proj}_{P_{i}}^{P_{0}}\)_._
3. \(\ell_{*}(x,\operatorname{proj}_{P_{i}}x)=\ell(\delta(P_{0},P_{i}))+1\) _for each_ \(x\in P\)_._
Proof.: It suffices to show the claim only for \(i=k\), since \((P_{0},\ldots,P_{i})\) is a \(P\)-compatible path. For \(k=0\) there is nothing to show. For \(x\in P\) we have \(\operatorname{proj}_{P_{k-1}}\operatorname{proj}_{P_{k}}x=\operatorname{proj}_{ P_{k-1}}x\). Using induction and Proposition (4.2)\((a)\) and \((d)\) we obtain
\[\operatorname{proj}_{P_{0}}x=\operatorname{proj}_{P_{0}}\operatorname{proj}_{P_{ k-1}}x=\operatorname{proj}_{P_{0}}\operatorname{proj}_{P_{k-1}}\operatorname{proj}_{P_{k}}x= \operatorname{proj}_{P_{0}}\operatorname{proj}_{P_{k}}x.\]
This proves Assertion \((a)\). Using Lemma (2.3) and Lemma (3.4)\((b)\) and \((d)\), the panels \(P\) and \(P_{i}\) are parallel. Using again Lemma (2.3) and Lemma (3.4)\((b)\) and \((d)\), Assertion \((b)\) is a consequence of Assertion \((a)\). For Assertion \((c)\) it also suffices to show the claim for \(i=k\)
Let \(x\in P\). Since \(\operatorname{proj}_{P_{k-1}}\operatorname{proj}_{P_{k}}x=\operatorname{proj}_{P_{k-1}}x\), we infer \(\ell_{*}(x,\operatorname{proj}_{P_{k-1}}x)=\ell_{*}(x,\operatorname{proj}_{P_{k}}x)-\ell_{\varepsilon}(\operatorname{proj}_{P_{k}}x,\operatorname{proj}_{P_{k-1}}x)=\ell_{*}(x,\operatorname{proj}_{P_{k}}x)-\ell(\delta(P_{k-1},P_{k}))\). Using Proposition (4.2)(\(c\)) and induction we obtain \(\ell_{*}(x,\operatorname{proj}_{P_{k}}x)=\ell_{*}(x,\operatorname{proj}_{P_{k-1}}x)+\ell(\delta(P_{k-1},P_{k}))=\ell(\delta(P_{0},P_{k}))+1\) and the claim follows.
**(4.5) Lemma**.: _Let \(\varepsilon\in\{+,-\},P_{1}\subseteq\mathcal{C}_{\varepsilon},P_{2}\subseteq \mathcal{C}_{-\varepsilon}\) be two parallel panels. Let \(s_{i}\) be the type of \(P_{i}\). Then \(s_{2}=w^{-1}s_{1}w\), where \(w:=\delta_{*}(x,\operatorname{proj}_{P_{2}}x)\) does not depend on the choice of \(x\) in \(P_{1}\)._
_Conversely, if \(x\) and \(y\) are chambers with \(\delta_{*}(x,y)=w\), where \(w\) satisfies \(s_{2}=w^{-1}s_{1}w\) and \(\ell(s_{1}w)=\ell(w)-1\), then the \(s_{1}\)-panel of \(x\) is parallel to the \(s_{2}\)-panel of \(y\)._
Proof.: We follow the ideas of [4, Lemma 14]. Let \(x_{1}\neq y_{1}\in P_{1}\), let \(x_{2}:=\operatorname{proj}_{P_{2}}x_{1},y_{2}:=\operatorname{proj}_{P_{2}}y_{1}\) and let \(w=\delta_{*}(x_{1},x_{2}),w^{\prime}=\delta_{*}(y_{1},y_{2})\). Then we have \(\delta_{*}(x_{2},y_{1})=w^{-1}s_{1}\) and \(\delta_{*}(y_{1},x_{2})=w^{\prime}s_{2}\). In particular, \(s_{1}w=w^{\prime}s_{2}\) and hence \(w^{\prime}=s_{1}ws_{2}\). Since \(\ell(w^{\prime}s_{2})=\ell(w^{\prime})-1\), we have \(\ell(s_{1}ws_{2})=\ell(w^{\prime})=\ell(w^{\prime}s_{2})+1=\ell(s_{1}w)+1=\ell(w)\). Since \(\ell(ws_{2})=\ell(w)-1\), the claim follows from Lemma (2.2). For the converse let \(P_{1}\) be the \(s_{1}\)-panel of \(x\) and let \(P_{2}\) be the \(s_{2}\)-panel of \(y\). Then \(\operatorname{proj}_{P_{2}}x=y\), since \(\ell(ws_{2})=\ell(s_{1}w)=\ell(w)-1\). Choose \(\operatorname{proj}_{P_{2}}x\neq p\in P_{2}\). Then \(\delta_{*}(x,p)=ws_{2}\). By (Tw3) there exists \(z\in P_{1}\) such that \(\delta_{*}(z,p)=s_{1}ws_{2}=w\). Since \(\ell(s_{1}ws_{2})=\ell(w)=\ell(ws_{2})+1\) we have \(z=\operatorname{proj}_{P_{1}}p\). It follows from \(\ell(s_{1}w)=\ell(w)-1\) that \(x=\operatorname{proj}_{P_{1}}\operatorname{proj}_{P_{2}}x\). Since \(\delta_{*}(x,p)\neq\delta_{*}(z,p)\), we deduce that \(x\) and \(\operatorname{proj}_{P_{1}}p\) are different. The claim follows now from Lemma (3.5).
Let \(P_{1}\) and \(P_{2}\) be two parallel panels in different halves. By the previous lemma \(\delta_{*}(x,\operatorname{proj}_{P_{2}}x)\) does not depend on the choice of \(x\in P_{1}\). Therefore we define \(\delta(P_{1},P_{2}):=\delta_{*}(x,\operatorname{proj}_{P_{2}}x)\), where \(x\) is a chamber in \(P_{1}\).
**(4.6) Theorem**.: _Let \(\varepsilon\in\{+,-\},P\subseteq\mathcal{C}_{\varepsilon},Q\subseteq\mathcal{ C}_{-\varepsilon}\) be two panels. Then \(P,Q\) are parallel if and only if there exists a \(P\)-compatible path \((Q_{0},\dots,Q_{k}=Q)\)._
Proof.: Let \((Q_{0},\dots,Q_{k}=Q)\) be a \(P\)-compatible path. Using Proposition (4.4)(\(a\)) we have \(\operatorname{proj}_{Q_{0}}^{P}=\operatorname{proj}_{Q_{0}}^{Q}\circ \operatorname{proj}_{Q}^{P}\). By Lemma (3.4)(\(d\)) we have \(\operatorname{proj}_{Q_{0}}P=Q_{0}\). Thus \(|\operatorname{proj}_{Q}P|\geq 2\) and Lemma (3.5) finishes the claim.
Now we assume that \(P\) and \(Q\) are parallel. We show the claim via induction on the distance \(\ell:=\ell(\delta(P,Q))\). If \(\ell=1\) then \((Q)\) is a \(P\)-compatible path and we are done. Now we assume that \(\ell>1\). Let \(x\in P\). Then there exists a chamber \(e\in\mathcal{C}_{-\varepsilon}\) which is adjacent to a chamber in \(Q\) and satisfies \(\ell_{*}(x,\operatorname{proj}_{Q}x)-2=\ell_{*}(x,e)\). Let \(R\) be the unique rank \(2\) residue containing \(Q\) and \(e\). By Lemma (3.6) the panels \(Q\) and \(\operatorname{proj}_{R}P\) are parallel. Since \(Q\) and \(\operatorname{proj}_{R}P\) are not opposite in \(R\) we have \(\operatorname{proj}_{R}P=Q\) by Lemma (2.6). Note also that \(\operatorname{proj}_{R}x=\operatorname{proj}_{Q}x\). Let \(w=\delta_{*}(x,\operatorname{proj}_{Q}x)\). Let \(q\in\mathcal{C}_{-\varepsilon}\) such that \(\delta_{-\varepsilon}(\operatorname{proj}_{Q}x,q)=w^{-1}\). Then \(\delta_{*}(x,q)=1_{W}\) by [1, Lemma 5.140(2)]. Let \(s\) be the type of \(P\) and let \(t\) be the type of \(Q\). Then \(\ell(wt)=\ell(w)-1\). In particular, \(\delta_{-\varepsilon}(q,\operatorname{proj}_{Q}q)=wt\). Note that for \(v:=tw^{-1}\) we have \(\ell(tv)=\ell(w^{-1})=\ell(w)=\ell(wt)+1=\ell(v)+1\). Since \(P,Q\) are parallel, we have \(s=wtw^{-1}\) by Lemma (4.5). This implies \(v^{-1}tv=wtttw^{-1}=s\). Using Lemma (2.4) we obtain that the \(s\)-panel of \(q\) and the \(t\)-panel of \(\operatorname{proj}_{Q}q\) (i.e. \(Q\)) are parallel. Let \(Q_{0}\) be the \(s\)-panel of \(q\). By [4, Lemma 17]\(\operatorname{proj}_{R}Q_{0}\) is parallel to both, \(Q_{0}\) and \(Q\). Since \(\ell(wr_{J})=\ell(w)-\ell(r_{J})\), where \(J\) is the type of \(R\), we have \(\operatorname{proj}_{R}Q_{0}\neq Q\). Using induction, there exists a \(P\)-compatible path \((Q_{0},\dots,Q_{k}=\operatorname{proj}_{R}Q_{0})\). Clearly, \((Q_{0},\dots,Q_{k},Q)\) is a compatible path. The fact that \(\operatorname{proj}_{R(Q_{k},Q)}P=\operatorname{proj}_{R}P=Q\) finishes the claim.
**(4.7) Theorem**.: _Let \(\varepsilon\in\{+,-\},s\in S\) and let \(P,Q\subseteq\mathcal{C}_{\varepsilon}\) be two parallel panels and let \(c_{-}\in\mathcal{C}_{-\varepsilon}\) such that \(\mathcal{P}_{s}(c_{-})\) and \(P\) are opposite and such that \(\ell(\delta(P,Q))+1=\ell_{*}(c_{-},\operatorname{proj}_{Q}c_{-})\). Then every compatible path \((P_{0}=P,\dots,P_{k}=Q)\) is a \(\mathcal{P}_{s}(c_{-})\)-compatible path._
Proof.: At first we show \(\ell_{*}(c_{-},\operatorname{proj}_{P_{i}}c_{-})=\ell(P,P_{i})+1\). We have \(\ell_{*}(c_{-},\operatorname{proj}_{P}\operatorname{proj}_{Q}c_{-})\geq\ell_{* }(c_{-},\operatorname{proj}_{Q}c_{-})-\ell_{\varepsilon}(\operatorname{proj}_{Q }c_{-},\operatorname{proj}_{P}\operatorname{proj}_{Q}c_{-})=1\) by hypothesis. Since the panels \(\mathcal{P}_{s}(c_{-}),P\) are opposite, we obtain \(\operatorname{proj}_{P}\operatorname{proj}_{Q}c_{-}=\operatorname{proj}_{P}c_{-}\) by Lemma (3.3). Let \(c_{+}\in P\backslash\{\operatorname{proj}_{P}c_{-}\}\). Then \(c_{+}\in c_{-}^{\operatorname{op}}\). Since \(c_{+}\neq\operatorname{proj}_{P}c_{-}=\operatorname{proj}_{P}\operatorname{ proj}_{Q}c_{-}\), we have \(\ell_{\varepsilon}(\operatorname{proj}_{Q}c_{-},c_{+})=\ell(\delta(P,Q))+1\). This yields \(\operatorname{proj}_{Q}c_{-}\in A(c_{+},c_{-})\), since \(\ell_{\varepsilon}(c_{+},\operatorname{proj}_{Q}c_{-})=\ell_{*}(c_{-}, \operatorname{proj}_{Q}c_{-})\). Now we will show that \(A(c_{+},c_{-})\cap P_{i}\neq\emptyset\). Let \(z_{i}:=\operatorname{proj}_{P_{i}}\operatorname{proj}_{Q}c_{-}\). Then by Proposition (4.2)\((a),(c),(d)\) the following hold:
\[\ell_{\varepsilon}(\operatorname{proj}_{Q}c_{-},c_{+}) =\ell(\delta(P,Q))+1\] \[=\ell(\delta(P,P_{i}))+\ell(\delta(P_{i},Q))+1\] \[=\ell_{\varepsilon}(\operatorname{proj}_{Q}c_{-},z_{i})+\ell_{ \varepsilon}(z_{i},\operatorname{proj}_{P}z_{i})+\ell_{\varepsilon}( \operatorname{proj}_{P}\operatorname{proj}_{Q}c_{-},c_{+})\] \[=\ell_{\varepsilon}(\operatorname{proj}_{Q}c_{-},z_{i})+\ell_{ \varepsilon}(z_{i},c_{+}).\]
This yields that \(z_{i}\) lies on a minimal gallery between \(\operatorname{proj}_{Q}c_{-}\) and \(c_{+}\). The definition of convexity implies that any element on a minimal gallery between two chambers of a convex set is contained in the convex set. Since \(A_{\varepsilon}(c_{+},c_{-})\) is convex, we infer \(z_{i}\in A(c_{+},c_{-})\cap P_{i}\neq\emptyset\). By Lemma (3.7) we obtain \(A(c_{+},c_{-})\cap P_{i}=\{\operatorname{proj}_{P_{i}}c_{+},\operatorname{proj} _{P_{i}}c_{-}\}\). We put \(c_{i}:=\operatorname{proj}_{P_{i}}c_{-}\). Then the following hold for every \(0\leq i\leq k\):
\[\ell_{*}(c_{-},c_{i})=\ell_{\varepsilon}(c_{+},c_{i})=\ell_{\varepsilon}(c_{+}, \operatorname{proj}_{P_{i}}c_{+})+\ell_{\varepsilon}(\operatorname{proj}_{P_{ i}}c_{+},c_{i})=\ell(\delta(P,P_{i}))+1.\]
Now we want to show that the panels \(P_{i}\) and \(\mathcal{P}_{s}(c_{-})\) are parallel. Let \(P_{i}\) be a \(t\)-panel and let \(w^{\prime}=\delta(P,P_{i})\). Then we have \(t=w^{\prime-1}sw^{\prime}\) by Lemma (2.4). Let \(w=\delta_{*}(c_{-},\operatorname{proj}_{P_{i}}c_{-})\) and let \(\operatorname{proj}_{P}c_{-}\neq c_{+}\in P\). Note that \(\operatorname{proj}_{P}c_{-}=\operatorname{proj}_{P}\operatorname{proj}_{P_{i} }c_{-}\) for any \(0\leq i\leq k\) as above. Then \(w=\delta_{*}(c_{-},\operatorname{proj}_{P_{i}}c_{-})=\delta_{\varepsilon}(c_{+ },\operatorname{proj}_{P_{i}}c_{-})=w^{\prime}t=sw^{\prime}\) by the definition of \(A(c_{+},c_{-})\). We have \(w^{-1}sw=(sw^{\prime})^{-1}s(sw^{\prime})=t\), \(\ell(sw)=\ell(w^{\prime})=\ell(sw^{\prime})-1=\ell(w)-1\) and hence Lemma (4.5) implies that the panels \(\mathcal{P}_{s}(c_{-})\) and \(P_{i}\) are parallel. In particular, we have \(\ell(x,\operatorname{proj}_{P_{i}}x)=\ell(c_{-},\operatorname{proj}_{P_{i}}c_ {-})\) by Lemma (4.5).
We now show the claim. Since \(\mathcal{P}_{s}(c_{-})\) and \(P_{i}\) are parallel, it suffices to show that \(\operatorname{proj}_{R(P_{i-1},P_{i})}x=\operatorname{proj}_{P_{i}}x\) for all \(1\leq i\leq k\) and \(x\in\mathcal{P}_{s}(c_{-})\). Let \(1\leq i\leq k\), let \(\operatorname{proj}_{P_{i-1}}x\neq y\in P_{i-1}\) and let \(J_{i}\subseteq S\) be the type of \(R(P_{i-1},P_{i})\). Let \(r_{i}:=\operatorname{proj}_{R(P_{i-1},P_{i})}x,p_{i-1}:=\operatorname{proj}_{P_{ i-1}}x\) and \(p_{i}:=\operatorname{proj}_{P_{i}}x\). Then we obtain:
\[\ell_{*}(x,r_{i})-\ell(r_{J_{i}})\leq\ell_{*}(x,y)=\ell_{*}(x,p_{i-1})-\ell_{ \varepsilon}(p_{i-1},y)=\ell_{*}(x,r_{i})-\ell_{\varepsilon}(r_{i},p_{i-1})-1\]
Therefore, we have \(\ell_{\varepsilon}(r_{i},p_{i-1})\leq\ell(r_{J_{i}})-1=\ell(\delta(P_{i-1},P_{ i}))\). Using Proposition (4.2)\((c)\), we deduce:
\[\ell_{*}(x,r_{i})-\ell_{\varepsilon}(r_{i},p_{i}) =\ell_{*}(x,p_{i})\] \[=\ell(\delta(P,P_{i}))+1\] \[=\ell(\delta(P,P_{i-1}))+\ell(\delta(P_{i-1},P_{i}))+1\] \[=\ell_{*}(x,p_{i-1})+\ell(\delta(P_{i-1},P_{i}))\] \[=\ell_{*}(x,r_{i})-\ell_{\varepsilon}(r_{i},p_{i-1})+\ell(\delta( P_{i-1},P_{i}))\] \[\geq\ell_{*}(x,r_{i}).\]
This implies \(\ell_{\varepsilon}(r_{i},p_{i})=0\) and the claim follows.
**(4.8) Definition**.: Let \(\varepsilon\in\{+,-\},s\in S,c\in\mathcal{C}_{-\varepsilon}\). Two panels \(Q_{1},Q_{2}\subseteq\mathcal{C}_{\varepsilon}\) are called _wall-adjacent of type \((c,s)\)_ if both panels \(Q_{1},Q_{2}\) are opposite to \(\mathcal{P}_{s}(c)\) and if there exist a panel
\(T\subseteq\mathcal{C}_{\varepsilon}\) and \(\mathcal{P}_{s}(c)\)-compatible paths (\(P_{0}=Q_{1},\ldots,P_{k}=T\)) and (\(P_{0}^{\prime}=Q_{2},\ldots,P_{k^{\prime}}^{\prime}=T\)) of the same length and type.
For any \(\varepsilon\in\{+,-\}\) and any pair \((c,s)\in\mathcal{C}_{\varepsilon}\times S\) we define a graph \(\Gamma_{s}(c)\) with vertex set \(\{Q\subseteq\mathcal{C}_{-\varepsilon}\mid\mathcal{P}_{s}(c),Q\text{ are opposite panels}\}\) and two vertices \(Q_{1},Q_{2}\) are joined by an edge if \(Q_{1},Q_{2}\) are wall-adjacent of type \((c,s)\). The twin building \(\Delta\) is called _wall-connected_ if the graph \(\Gamma_{s}(c)\) is connected for each pair \((c,s)\in\mathcal{C}\times S\). We refer to Lemma (6.7) for the motivation of the terminology _wall-connected_.
## 5 Isometries
### Definition and basic facts
**(5.1) Convention**.: From now on all buildings (and hence all twin buildings) are assumed to be thick.
Let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*}),\Delta^{\prime}=(\Delta^{\prime}_{+},\Delta^{\prime}_{-},\delta^{\prime}_{*})\) be two twin buildings of type \((W,S)\). We define \(\mathcal{C}^{\prime},\Delta^{\prime}_{+},\Delta^{\prime}_{-},\delta^{\prime},\ell^{\prime}\) as in the case of \(\Delta\). Let \(\mathcal{X}\subseteq\mathcal{C},\mathcal{X}^{\prime}\subseteq\mathcal{C}^{\prime}\). A mapping \(\varphi:\mathcal{X}\rightarrow\mathcal{X}^{\prime}\) is called _isometry_ if the following conditions are satisfied:
1. The mapping \(\varphi\) is bijective.
2. For \(\varepsilon\in\{+,-\}\) we have \(\varphi(\mathcal{X}\cap\mathcal{C}_{\varepsilon})\subseteq\mathcal{C}^{\prime}_ {\varepsilon}\).
3. If \(x,y\in\mathcal{X}\) then \(\delta^{\prime}(\varphi(x),\varphi(y))=\delta(x,y)\).
It is easy to see that \(\varphi^{-1}\) is also an isometry. Given \(\mathcal{X}\subseteq\mathcal{C},\mathcal{X}^{\prime}\subseteq\mathcal{C}^{\prime}\), an isometry \(\varphi:\mathcal{X}\rightarrow\mathcal{X}^{\prime}\) and \((y,y^{\prime})\in\mathcal{C}\times\mathcal{C}^{\prime}\), then the pair \((y,y^{\prime})\) will be called \(\varphi\)_-admissible_ if the mapping \(y\mapsto y^{\prime}\) extends \(\varphi\) to an isometry from \(\mathcal{X}\cup\{y\}\) onto \(\mathcal{X}^{\prime}\cup\{y^{\prime}\}\). Let \((x,x^{\prime})\) be a \(\varphi\)-admissible pair. Then \((x^{\prime},x)\) is \(\varphi^{-1}\)-admissible. In particular, \((x,\varphi(x))\) is \(\varphi\)-admissible for any \(x\in\mathcal{X}\). Let \(\mathcal{Y}\subseteq\mathcal{C},\mathcal{Y}^{\prime}\subseteq\mathcal{C}^{\prime}\) and \(\psi:\mathcal{Y}\rightarrow\mathcal{Y}^{\prime}\) be another isometry. Then the pair \((\varphi,\psi)\) will be called _admissible_, if there exists an isometry from \(\mathcal{X}\cup\mathcal{Y}\) onto \(\mathcal{X}^{\prime}\cup\mathcal{Y}^{\prime}\) such that \(\varphi\) and \(\psi\) are restrictions of that isometry.
**(5.2) Lemma**.: _Let \(\varepsilon\in\{+,-\}\) and \(\varphi:\mathcal{C}_{\varepsilon}\rightarrow\mathcal{C}^{\prime}_{\varepsilon}\) be a bijection. If \(\delta_{\varepsilon}(x,y)=\delta^{\prime}_{\varepsilon}(\varphi(x),\varphi(y))\) for any \(x,y\in\mathcal{C}_{\varepsilon}\) with \(\delta_{\varepsilon}(x,y)\in S\), then \(\varphi\) is an isometry._
Proof.: This is [1, Lemma 5.61].
**(5.3) Lemma**.: _Let \(\mathcal{X},\mathcal{Y}\subseteq\mathcal{C},\mathcal{X}^{\prime},\mathcal{Y} ^{\prime}\subseteq\mathcal{C}^{\prime}\) be such that \(\mathcal{X}\cap\mathcal{Y}=\emptyset\) and \(\mathcal{X}^{\prime}\cap\mathcal{Y}^{\prime}=\emptyset\). Let \(\varphi:\mathcal{X}\rightarrow\mathcal{X}^{\prime}\) and \(\psi:\mathcal{Y}\rightarrow\mathcal{Y}^{\prime}\) be two isometries such that \((z,\psi(z))\) is \(\varphi\)-admissible for any \(z\in\mathcal{Y}\). Then \((\varphi,\psi)\) is admissible._
Proof.: This is a consequence of [2, Lemma 4.1].
**(5.4) Lemma**.: _Let \(J\) be a spherical subset of \(S\), let \(R\subseteq\mathcal{C},R^{\prime}\subseteq\mathcal{C}^{\prime}\) be \(J\)-residues, let \(\varphi:R\to R^{\prime}\) be an isometry, and let \((x,x^{\prime})\) be a \(\varphi\)-admissible pair. Then \(\varphi(\operatorname{proj}_{R}x)=\operatorname{proj}_{R^{\prime}}x^{\prime}\)._
Proof.: This is [9, Lemma (4.4)].
**(5.5) Lemma**.: _Let \(R_{+},R_{-}\subseteq\mathcal{C}\) be spherical and parallel residues in \(\Delta\) and \(R^{\prime}_{+},R^{\prime}_{-}\subseteq\mathcal{C}^{\prime}\) be spherical and parallel residues in \(\Delta^{\prime}\), let \(\varphi:R_{+}\cup R_{-}\to R^{\prime}_{+}\cup R^{\prime}_{-}\) be an isometry such that \(\varphi(R_{+})=R^{\prime}_{+}\) and \(\varphi(R_{-})=R^{\prime}_{-}\). Then \(\varphi(x)=\operatorname{proj}_{R^{\prime}_{\varepsilon}}\varphi(\operatorname{ proj}_{R_{-\varepsilon}}x)\) for each \(x\in R_{\varepsilon}\) for each \(\varepsilon\in\{+,-\}\)._
Proof.: This is a consequence of the previous lemma, Lemma (2.3) and Lemma (3.4)\((b)\)
**(5.6) Lemma**.: _Let \(\varphi_{+}:\mathcal{C}_{+}\to\mathcal{C}^{\prime}_{+}\) be an isometry, let \((x,x^{\prime})\in\mathcal{C}_{-}\times\mathcal{C}^{\prime}_{-}\) such that \(\varphi_{+}(x^{\mathrm{op}})\subseteq(x^{\prime})^{\mathrm{op}}\). Then \((x,x^{\prime})\) is a \(\varphi_{+}\)-admissible pair._
Proof.: This is [9, Lemma (7.4)].
**(5.7) Lemma**.: _Let \(x_{-}\in\mathcal{C}_{-}\), \(x^{\prime}_{-},y^{\prime}_{-}\in\mathcal{C}^{\prime}_{-}\) and let \(\varphi:\mathcal{C}_{+}\cup\{x_{-}\}\to\mathcal{C}^{\prime}_{+}\cup\{x^{ \prime}_{-}\},\psi:\mathcal{C}_{+}\cup\{x_{-}\}\to\mathcal{C}^{\prime}_{+}\cup \{y^{\prime}_{-}\}\) be two isometries such that \(\varphi(z)=\psi(z)\) for any \(z\in x_{-}^{\mathrm{op}}\). Then \(x^{\prime}_{-}=y^{\prime}_{-}\) and \(\varphi=\psi\)._
Proof.: This is [7, Lemma 4.10].
**(5.8) Theorem**.: _Let \((c_{+},c_{-})\) be a pair of opposite chambers in \(\Delta\). The only isometry of \(\Delta\) fixing \(E_{1}(c_{+})\cup\{c_{-}\}\) is the identity._
Proof.: This is [9, Theorem (3.2)].
**(5.9) Theorem**.: _Let \(\Delta,\Delta^{\prime}\) be \(2\)-spherical and of rank at least three. Let \((c_{+},c_{-}),(c^{\prime}_{+},c^{\prime}_{-})\) be two pairs of opposite chambers in \(\Delta\) and \(\Delta^{\prime}\), respectively, and let \(\varphi:E_{2}(c_{+})\cup\{c_{-}\}\to E_{2}(c^{\prime}_{+})\cup\{c^{\prime}_{-}\}\) be an isometry. Then \(\varphi\) extends to an isometry from \(\mathcal{C}_{+}\cup\{c_{-}\}\) onto \(\mathcal{C}^{\prime}_{+}\cup\{c^{\prime}_{-}\}\)._
Proof.: This is a consequence of [2, Theorem 6.5].
### Isometries and wall-connected twin buildings
Let \(\Delta,\Delta^{\prime}\) be two twin buildings of type \((W,S)\). Let \((c_{+},c_{-})\in\mathcal{C}_{+}\times\mathcal{C}_{-},(c^{\prime}_{+},c^{ \prime}_{-})\in\mathcal{C}^{\prime}_{+}\times\mathcal{C}^{\prime}_{-}\) be pairs of opposite chambers and let \(\varphi_{+}:\mathcal{C}_{+}\to\mathcal{C}^{\prime}_{+}\) be an isometry such that \(\varphi_{+}(c_{+})=c^{\prime}_{+}\). Furthermore let \((c_{-},c^{\prime}_{-})\) be \(\varphi_{+}\)-admissible.
**(5.10) Lemma**.: _Let \((P_{0},\ldots,P_{k})\) be a \(\mathcal{P}_{s}(c_{-})\)-compatible path. Then \((\varphi_{+}(P_{0}),\ldots,\varphi_{+}(P_{k}))\) is a \(\mathcal{P}_{s}(c^{\prime}_{-})\)-compatible path._
Proof.: Clearly, \((\varphi_{+}(P_{0}),\ldots,\varphi_{+}(P_{k}))\) is a compatible path by Lemma (5.4). Now \(\varphi_{+}(P_{0})\) and \(\mathcal{P}_{s}(\varphi_{+}(c_{-}))=\mathcal{P}_{s}(c^{\prime}_{-})\) are opposite and by Proposition (4.4)\((c)\) and Lemma (5.4) we have \(\ell(c^{\prime}_{-},\mathrm{proj}_{\varphi_{+}(P_{k})}\,c^{\prime}_{-})=\ell( \varphi_{+}(c_{-}),\varphi_{+}(\mathrm{proj}_{P_{k}}\,c_{-}))=\ell(c_{-}, \mathrm{proj}_{P_{k}}\,c_{-})=\ell(P_{0},P_{k})+1=\ell(\varphi_{+}(P_{0}), \varphi_{+}(P_{k}))+1\). Now the claim follows from Theorem (4.7).
**(5.11) Lemma**.: _Let \(\Delta,\Delta^{\prime}\) be \(2\)-spherical. Then \(\Gamma_{s}(c_{-})\) is connected if and only if \(\Gamma_{s}(c^{\prime}_{-})\) is connected._
Proof.: It suffices to show that if \(Q_{1},Q_{2}\) are wall-adjacent of type \((c_{-},s)\), then \(\varphi_{+}(Q_{1}),\varphi_{+}(Q_{2})\) are wall-adjacent of type \((c^{\prime}_{-},s)\), too. Let \(Q_{1},Q_{2}\) be wall-adjacent of type \((c_{-},s)\). By definition \(Q_{1},Q_{2}\in\mathcal{P}_{s}(c_{-})^{\mathrm{op}}\) and there exist a panel \(T\subseteq\mathcal{C}_{+}\) and \(\mathcal{P}_{s}(c_{-})\)-compatible paths (\(P_{0}=Q_{1},\ldots,P_{k}=T\)) and (\(P_{0}^{\prime}=Q_{2},\ldots,P_{k}^{\prime}=T\)) of the same length and type. Lemma (5.10) implies that \(\varphi_{+}\) maps a \(\mathcal{P}_{s}(c_{-})\)-compatible to a \(\mathcal{P}_{s}(c^{\prime}_{-})\)-compatible path. Thus \(\varphi_{+}(Q_{1}),\varphi_{+}(Q_{2})\) are wall-adjacent of type \((c^{\prime}_{-},s)\) and the claim follows.
**(5.12) Definition**.: For \(x\in c_{-}^{\mathrm{op}}\) and \(s\in S\) we define the mapping
\[\varphi_{s}^{x}:\mathcal{P}_{s}(c_{-})\to\mathcal{P}_{s}(c^{\prime}_{-}),d\mapsto \left(\mathrm{proj}_{\mathcal{P}_{s}(c^{\prime}_{-})}^{\mathcal{P}_{s}(\varphi_ {+}(x))}\circ\varphi_{+}\circ\mathrm{proj}_{\mathcal{P}_{s}(x)}^{\mathcal{P}_ {s}(c_{-})}\right)(d).\]
Since \(\mathcal{P}_{s}(c_{-}),\mathcal{P}_{s}(x)\) are parallel by Lemma (3.4)\((d)\) for each \(x\in c_{-}^{\mathrm{op}}\) and \(\varphi_{+}\) is an isometry it follows again by Lemma (3.4)\((d)\) that \(\varphi_{s}^{x}\) is a bijection and hence an isometry. In particular, \(\varphi_{s}^{x}(c_{-})=c^{\prime}_{-}\).
**(5.13) Proposition**.: _Let \(s\in S,x,z\in c_{-}^{\mathrm{op}}\) and let \(P=\mathcal{P}_{s}(x),Q=\mathcal{P}_{s}(z)\) be wall-adjacent of type \((c_{-},s)\). Then we have \(\varphi_{s}^{x}=\varphi_{s}^{z}\)._
Proof.: By definition there exist a panel \(T\subseteq\mathcal{C}_{+}\) and \(\mathcal{P}_{s}(c_{-})\)-compatible paths \((P_{0}=P,\ldots,P_{k}=T)\) and \((Q_{0}=Q,\ldots,Q_{k}=T)\) of the same length and type. By the previous lemma \((\varphi_{+}(P_{0}),\ldots,\varphi_{+}(P_{k}))\) and \((\varphi_{+}(Q_{0}),\ldots,\varphi_{+}(Q_{k}))\) are \(\mathcal{P}_{s}(c^{\prime}_{-})\)-compatible paths. Let \(Z\in\{P,Q\}\). By Proposition (4.4)\((a),(b)\) we obtain
\[\operatorname{proj}_{Z}^{\mathcal{P}_{s}(c_{-})} =\operatorname{proj}_{Z}^{T}\circ\operatorname{proj}_{T}^{ \mathcal{P}_{s}(c_{-})}\] \[\operatorname{proj}_{\mathcal{P}_{s}(c^{\prime}_{-})}^{\varphi_ {+}(Z)} =\operatorname{proj}_{\mathcal{P}_{s}(c^{\prime}_{-})}^{\varphi_ {+}(T)}\circ\operatorname{proj}_{\varphi_{+}(T)}^{\varphi_{+}(Z)}\]
By Lemma (5.5) we obtain \(\operatorname{proj}_{\varphi_{+}(T)}^{\varphi_{+}(Z)}\circ\varphi_{+}\circ \operatorname{proj}_{Z}^{T}=\varphi_{+}|_{T}\), since the panels \(Z\) and \(T\) are parallel by Lemma (4.1). We have
\[\operatorname{proj}_{\mathcal{P}_{s}(c^{\prime}_{-})}^{\varphi_ {+}(Z)}\circ\varphi_{+}\circ\operatorname{proj}_{Z}^{\mathcal{P}_{s}(c_{-})} =\operatorname{proj}_{\mathcal{P}_{s}(c^{\prime}_{-})}^{\varphi_ {+}(T)}\circ\operatorname{proj}_{\varphi_{+}(T)}^{\varphi_{+}(Z)}\circ \varphi_{+}\circ\operatorname{proj}_{Z}^{T}\circ\operatorname{proj}_{T}^{ \mathcal{P}_{s}(c_{-})}\] \[=\operatorname{proj}_{\mathcal{P}_{s}(c^{\prime}_{-})}^{\varphi_ {+}(T)}\circ\varphi_{+}\circ\operatorname{proj}_{T}^{\mathcal{P}_{s}(c_{-})}\]
This finishes the claim.
**(5.14) Corollary**.: _Let \(x,z\in c_{-}^{\operatorname{op}}\) and \(s\in S\) such that \(\Gamma_{s}(c_{-})\) is connected. Then \(\varphi_{s}^{x}=\varphi_{s}^{z}\)._
Proof.: This follows by induction on the length of a path in \(\Gamma_{s}(c_{-})\) and Proposition (5.13).
### Extending isometries of wall-connected twin buildings
Let \(\Delta,\Delta^{\prime}\) be two twin buildings of type \((W,S)\). Let \((c_{+},c_{-})\in\mathcal{C}_{+}\times\mathcal{C}_{-},(c^{\prime}_{+},c^{\prime }_{-})\in\mathcal{C}^{\prime}_{+}\times\mathcal{C}^{\prime}_{-}\) be pairs of opposite chambers and let \(\varphi_{+}:\mathcal{C}_{+}\to\mathcal{C}^{\prime}_{+}\) be an isometry such that \(\varphi_{+}(c_{+})=c^{\prime}_{+}\). Furthermore let \((c_{-},c^{\prime}_{-})\) be \(\varphi_{+}\)-admissible. Assume that \(\Delta\) is wall-connected. By Corollary (5.14) we have \(\varphi_{s}^{x}=\varphi_{s}^{z}\) for any \(x,z\in c_{-}^{\operatorname{op}}\) and \(s\in S\). We denote this mapping by \(\varphi_{s}\).
**(5.15) Lemma**.: _For each \(s\in S\) the pair \((\varphi_{+},\varphi_{s})\) is admissible._
Proof.: Let \(d_{-}\in\mathcal{P}_{s}(c_{-})\). We will show that \(\varphi_{+}(d_{-}^{\operatorname{op}})\subseteq\varphi_{s}(d_{-})^{\operatorname {op}}\). Let \(y\in d_{-}^{\operatorname{op}}\). By Lemma (3.8) there exists \(x\in c_{-}^{\operatorname{op}}\) such that \(\delta_{+}(y,x)=s\). Since \(y\in d_{-}^{\operatorname{op}}\) we have \(\operatorname{proj}_{\mathcal{P}_{s}(x)}d_{-}\neq y\). This implies \(s=\delta_{+}(\operatorname{proj}_{\mathcal{P}_{s}(x)}d_{-},y)=\delta^{\prime}_ {+}(\varphi_{+}(\operatorname{proj}_{\mathcal{P}_{s}(x)}d_{-}),\varphi_{+}(y))\). Since \(\mathcal{P}_{s}(c^{\prime}_{-})\) and \(\mathcal{P}_{s}(\varphi_{+}(x))\) are opposite, Lemma (3.3) and the definition of \(\varphi_{s}\) yield \(\delta^{\prime}_{*}(\varphi_{s}(d_{-}),\varphi_{+}(\operatorname{proj}_{ \mathcal{P}_{s}(x)}d_{-}))=s\) by Lemma (3.3). By (Tw2) we have \(\delta^{\prime}_{*}(\varphi_{s}(d_{-}),\varphi_{+}(y))=1_{W}\). By Lemma (5.6) the pair \((d_{-},\varphi_{s}(d_{-}))\) is \(\varphi_{+}\)-admissible for all \(d_{-}\in\mathcal{P}_{s}(c_{-})\). The claim follows now by Lemma (5.3).
**(5.16) Lemma**.: _The isometry \(\varphi_{+}\) extends uniquely to an isometry \(\varphi:\mathcal{C}_{+}\cup E_{1}(c_{-})\to\mathcal{C}^{\prime}_{+}\cup E_{1}( c^{\prime}_{-})\). In particular, for every chamber \(x\in\mathcal{C}_{-}\) there exists a unique chamber \(x^{\prime}\in\mathcal{C}^{\prime}_{-}\) such that \((x,x^{\prime})\) is \(\varphi_{+}\)-admissible._
Proof.: Let \(s,t\in S\). Then \(\varphi_{s}(c_{-})=c^{\prime}_{-}=\varphi_{t}(c_{-})\). Therefore the mapping \(\varphi_{-}:E_{1}(c_{-})\to E_{1}(c^{\prime}_{-}),x\mapsto\varphi_{s}(x)\), if \(x\in\mathcal{P}_{s}(c_{-})\), is well-defined. Moreover, \(\varphi_{-}\) is bijective and for all \(x,y\in E_{1}(c_{-})\) with \(\delta_{-}(x,y)\in S\) we have \(\delta_{-}(x,y)=\delta^{\prime}_{-}(\varphi_{-}(x),\varphi_{-}(y))\). Let \(x,y\in E_{1}(c_{-})\) with \(\delta_{-}(x,y)\notin S\). Then there exist \(s\neq t\in S\) with \(x\in\mathcal{P}_{s}(c_{-})\) and \(y\in\mathcal{P}_{t}(c_{-})\) and we have \(\delta_{-}(x,y)=\delta_{-}(x,c_{-})\delta_{-}(c_{-},y)=st\). We deduce \(\delta^{\prime}_{-}(\varphi_{-}(x),\varphi_{-}(c_{-}))=s\) and \(\delta^{\prime}_{-}(\varphi_{-}(c_{-}),\varphi_{-}(y))=t\) and hence \(\delta^{\prime}_{-}(\varphi_{-}(x),\varphi_{-}(y))=st\). Thus \(\varphi_{-}\) is an isometry and we obtain that \((x,\varphi_{-}(x))\) is \(\varphi_{+}\)-admissible for all \(x\in E_{1}(c_{-})\) by Lemma (5.15). By Lemma (5.3) the pair \((\varphi_{+},\varphi_{-})\) is admissible and hence the mapping \(\varphi_{-}\) extends \(\varphi_{+}\) to an isometry from \(\mathcal{C}_{+}\cup E_{1}(c_{-})\) to \(\mathcal{C}^{\prime}_{+}\cup E_{1}(c^{\prime}_{-})\).
The uniqueness of \(x^{\prime}\) follows from Lemma (5.7). The existence follows by the first assertion of the lemma and induction on \(\ell_{-}(c_{-},x)\)
**(5.17) Theorem**.: _Let \(x\in\mathcal{C}_{-}\) and let \(x^{\prime}\in\mathcal{C}^{\prime}_{-}\) be the unique chamber such that \((x,x^{\prime})\) is \(\varphi_{+}\)-admissible. Then \(\varphi_{-}:\mathcal{C}_{-}\to\mathcal{C}^{\prime}_{-},x\mapsto x^{\prime}\) is an isometry._
Proof.: Let \((y,x^{\prime})\) be \(\varphi_{+}\)-admissible. Then \((x^{\prime},y)\) and \((x^{\prime},x)\) are \(\varphi_{+}^{-1}\)-admissible. The uniqueness part of the previous lemma implies \(x=y\) and hence \(\varphi_{-}\) is injective. Let \(y^{\prime}\in\mathcal{C}^{\prime}_{-}\) and let \(y\in\mathcal{C}_{-}\) be the unique chamber such that \((y^{\prime},y)\) is \(\varphi_{+}^{-1}\)-admissible. Then \((y,y^{\prime})\) is \(\varphi_{+}\)-admissible and hence \(\varphi_{-}\) is surjective. Thus \(\varphi_{-}\) is a bijection. Now we will show that \(\varphi_{-}\) preserves \(s\)-adjacency. Let \(x,y\in\mathcal{C}_{-},s\in S\) such that \(\delta_{-}(x,y)=s\). Note that \(\varphi_{+}\) is an isometry and that \((x,\varphi_{-}(x)),(y,\varphi_{-}(y))\) are \(\varphi_{+}\)-admissible. Let \(z\in\varphi_{+}(x)^{\mathrm{op}}\). Then \(\varphi_{+}^{-1}(z)\in x^{\mathrm{op}}\) and Lemma (3.8) yields \(z^{\prime}\in y^{\mathrm{op}}\) such that \(\delta_{+}(z,\varphi_{+}(z^{\prime}))=\delta_{+}(\varphi_{+}^{-1}(z),z^{\prime })=s\). Again, Lemma (3.8) implies \(\delta^{\prime}_{-}(\varphi_{-}(x),\varphi_{-}(y))\in\langle s\rangle\). Since \(\varphi_{-}\) is injective, we have \(\delta^{\prime}_{-}(\varphi_{-}(x),\varphi_{-}(y))=s\). Now Lemma (5.2) finishes the claim.
**(5.18) Corollary**.: _Let \(\Delta\) be \(2\)-spherical, thick, wall-connected and of rank at least three. Then any isometry \(\varphi:E_{2}(c_{+})\cup\{c_{-}\}\to E_{2}(c^{\prime}_{+})\cup\{c^{\prime}_{-}\}\) extends uniquely to an isometry from \(\mathcal{C}_{+}\cup\mathcal{C}_{-}\) onto \(\mathcal{C}^{\prime}_{+}\cup\mathcal{C}^{\prime}_{-}\)._
Proof.: By Theorem (5.9), Theorem (5.17) and Lemma (5.3) we obtain an isometry \(\Phi:\mathcal{C}\to\mathcal{C}^{\prime}\) such that \(\Phi|_{E_{2}(c_{+})\cup\{c_{-}\}}=\varphi\). The uniqueness follows from Theorem (5.8).
## 6 Wall-connected twin buildings
### Chamber systems
Let \(I\) be a set. A _chamber system_ over \(I\) is a pair \(\mathbf{C}=(\mathcal{C},(\sim_{i})_{i\in I})\) where \(\mathcal{C}\) is a non-empty set whose elements are called _chambers_ and where \(\sim_{i}\) is an equivalence relation on the set of chambers for each \(i\in I\). Given \(i\in I\) and \(c,d\in\mathcal{C}\), then \(c\) is called _\(i\)-adjacent_ to \(d\) if \(c\sim_{i}d\). The chambers \(c,d\) are called _adjacent_ if they are \(i\)-adjacent for some \(i\in I\).
A _gallery_ in \(\mathbf{C}\) is a sequence \((c_{0},\ldots,c_{k})\) such that \(c_{\mu}\in\mathcal{C}\) for all \(0\leq\mu\leq k\) and such that \(c_{\mu-1}\) is adjacent to \(c_{\mu}\) for all \(1\leq\mu\leq k\). The chamber system \(\mathbf{C}\) is said to be _connected_, if for any two chambers \(c,d\) there exists a gallery \((c_{0}=c,\ldots,c_{k}=d)\). For a subset \(\mathcal{E}\subseteq\mathcal{C}\) the restriction \((\mathcal{E},(\sim_{i}|_{\mathcal{E}\times\mathcal{E}})_{i\in I})\) is again a chamber system over \(I\).
Let \(\Delta=(\mathcal{C},\delta)\) be a building of type \((W,S)\). Then we define the chamber system \(\mathbf{C}(\Delta)\) as follows: The set of chambers is given by the set of chambers \(\mathcal{C}\) of \(\Delta\) and two chambers \(x,y\) are defined to be \(s\)-adjacent if \(\delta(x,y)\in\langle s\rangle\).
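As an illustration we mention a standard example, included here only for orientation: the Coxeter system \((W,S)\) itself yields a thin building \(\Sigma(W,S)=(W,\delta_{W})\) with \(\delta_{W}(v,w):=v^{-1}w\). The associated chamber system \(\mathbf{C}(\Sigma(W,S))\) has chamber set \(W\), and \(v\sim_{s}w\) if and only if \(v^{-1}w\in\langle s\rangle\), i.e. if and only if \(w\in\{v,vs\}\).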
### \(3\)-spherical twin buildings
Let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) be a twin building of type \((W,S)\) and let \(\varepsilon\in\{+,-\}\). For each pair \((c,k)\in\mathcal{C}_{\varepsilon}\times\mathbb{N}_{0}\) we put \(c^{\mathrm{op}(k)}:=\{d\in\mathcal{C}_{-\varepsilon}\mid\ell_{*}(c,d)\leq k\}\). We remark that \(c^{\mathrm{op}}=c^{\mathrm{op}(0)}\) for any \(c\in\mathcal{C}\). We say that the twin building \(\Delta\) satisfies Condition \(\left(\mathrm{co}\right)_{k}\), if for any \(\varepsilon\in\{+,-\}\) and any chamber \(c\in\mathcal{C}_{\varepsilon}\) the chamber system given by the restriction of \(\mathbf{C}(\Delta_{-\varepsilon})\) to \(c^{\mathrm{op}(k)}\) is connected. We say for short that \(\Delta\) satisfies Condition \(\left(\mathrm{co}\right)\) if it satisfies Condition \(\left(\mathrm{co}\right)_{0}\).
For buildings \(\Delta=(\mathcal{C},\delta)\) of spherical type \((W,S)\) we have also a notion of \(c^{\mathrm{op}(k)}\), i.e. \(c^{\mathrm{op}(k)}=\{d\in\mathcal{C}\mid\ell(c,d)\geq\ell(r_{S})-k\}\). We say that a spherical building satisfies Condition \(\left(\mathrm{co}\right)_{k}\) if \(c^{\mathrm{op}(k)}\) is connected for any chamber \(c\in\mathcal{C}\).
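Let us record an elementary observation, which in particular explains why Condition (co) is referred to below as being stronger than Condition \(\left(\mathrm{co}\right)_{1}\): if \(d\in\mathcal{C}_{-\varepsilon}\) satisfies \(\delta_{*}(c,d)=s\in S\), then \(d=\operatorname{proj}_{\mathcal{P}_{s}(d)}c\) and every other chamber of \(\mathcal{P}_{s}(d)\) is opposite \(c\). Hence every chamber of \(c^{\mathrm{op}(1)}\) lies in \(c^{\mathrm{op}}\) or is adjacent to a chamber of \(c^{\mathrm{op}}\), so Condition (co) implies Condition \(\left(\mathrm{co}\right)_{1}\); the analogous statement holds for spherical buildings.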
**(6.1) Proposition**.: _Let \((W,S)\) be a spherical Coxeter system of rank \(3\) such that \(m_{st}\leq 4\) for all \(s,t\in S\) and let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) be a thick twin building of type \((W,S)\). Then \(\Delta_{+},\Delta_{-}\) and \(\Delta\) satisfy Condition \(\left(\mathrm{co}\right)_{1}\)._
Proof.: If \(m_{st}\leq 3\) for all \(s,t\in S\) the assertion follows from [7, Lemma 6.1 and Theorem 1.5]. Suppose now that \(m_{st}=4\) for some \(s,t\in S\). Again by [7] the assertion holds if the \(\{s,t\}\)-residues are not isomorphic to the building associated with the group \(C_{2}(2)\). In all the cases we mentioned so far, \(\Delta_{+}\), \(\Delta_{-}\) and \(\Delta\) even satisfy the stronger condition (co). If the \(\{s,t\}\)-residue is isomorphic to the building associated to the group \(C_{2}(2)\), then the verification of Condition \(\mbox{(co)}_{1}\) boils down to an elementary calculation.
A _twin residue_ is a pair \((R,T)\) of opposite residues in a twin building. It is a basic fact that a twin residue is again a twin building. The _type_ (resp. _rank_) of a twin residue is defined to be the type (resp. rank) of the residues. Note that if \(P\) is a panel contained in \(R\) and if \((P_{0},\ldots,P_{k})\) is a \(P\)-compatible path in the twin residue \((R,T)\), then it is also a \(P\)-compatible path in the twin building \(\Delta\). In particular, if \(c\in R\) and if \(s\in S\) is contained in the type of \((R,T)\) and if \(Q_{1},Q_{2}\subseteq T\) are wall-adjacent of type \((c,s)\) in \((R,T)\), then \(Q_{1},Q_{2}\) are wall-adjacent of type \((c,s)\) in \(\Delta\).
**(6.2) Corollary**.: _Any \(3\)-spherical thick twin building \(\Delta\) satisfies Condition \(\mbox{(co)}_{1}\)._
Proof.: At first we convince ourselves that any rank \(3\) residue (which is spherical by definition) satisfies \(\mbox{(co)}_{1}\). Let \(R\) be a \(J\)-residue of rank \(3\) and \(x\in R\). For \(y\in x^{\rm op}\) the residues \(R_{J}(y),R\) are opposite in \(\Delta\), i.e. \((R_{J}(y),R)\) is a thick spherical twin building. Hence the previous proposition implies that \(R\) satisfies Condition \(\mbox{(co)}_{1}\).
The proof is similar to the proof of [7, Theorem 1.5]. Let \(c\) be a chamber of \(\Delta\) and let \(x\neq y\in c^{\rm op(1)}\). Let \(G=(x=c_{0},\ldots,c_{k}=y)\) be a gallery. We can assume that \(c_{i}\neq c_{i+1}\). Let \(i\) be minimal such that \(\ell(c,c_{i})=\max\{\ell(c,c_{j})\mid 0\leq j\leq k\}\). If \(\ell(c,c_{i})\leq 1\), we are done. Thus we can assume \(\ell(c,c_{i})>1\). Then \(\ell(c,c_{i-1})<\ell(c,c_{i})\geq\ell(c,c_{i+1})\). Let \(\delta(c_{i-1},c_{i})=s\) and \(\delta(c_{i},c_{i+1})=t\) (\(s=t\) is possible). As \(\ell(c,c_{i-1})\geq 1\), there exists \(r\in S\) such that \(\ell(\delta(c,c_{i-1})r)=\ell(c,c_{i-1})-1\). Let \(R\) be a \(J\)-residue containing \(c_{i}\), where \(|J|=3\) and \(\{r,s,t\}\subseteq J\). Using similar arguments as in [7, Lemma 6.1 and Theorem 1.5] we obtain a gallery \((c_{0},\ldots,c_{i-1}=d_{0},\ldots,d_{m}=c_{i+1},\ldots,c_{k})\) with \(\ell(c,d_{j})<\ell(c,c_{i})\) for any \(0\leq j\leq m-1\). Iterating this procedure we get a gallery from \(x\) to \(y\) which is contained in \(c^{\rm op(1)}\).
**(6.3) Theorem**.: _Let \(\Delta\) be a \(2\)-spherical, thick twin building of type \((W,S)\) satisfying Condition \(\mbox{(co)}_{1}\). If any rank \(3\) twin residue is wall-connected, then \(\Delta\) is wall-connected._
Proof.: Let \(\varepsilon\in\{+,-\},c\in{\cal C}_{\varepsilon}\) and \(s\in S\). We have to show that \(\Gamma_{s}(c)\) is connected. Let \(x,y\in c^{\rm op}\). By assumption there exists a gallery \((c_{0}=x,\ldots,c_{k}=y)\) such that \(\ell_{*}(c,c_{i})\leq 1\) for all \(0\leq i\leq k\). Let \(J=\{\delta_{-\varepsilon}(c_{0},c_{1}),\delta_{-\varepsilon}(c_{1},c_{2})\}\). Let \(x^{\prime}\in R_{J}(c_{0})\cap E_{1}(c_{2})\) be opposite to \(c\) and let \(J^{\prime}=J\cup\{s\}\). Then \(|J^{\prime}|\leq 3\). Let \(K\subseteq S\) with \(|K|=3\) and \(J^{\prime}\subseteq K\). By assumption the twin residue \((R_{K}(c),R_{K}(c_{0}))\) is wall-connected. Thus there exist \(P_{0}={\cal P}_{s}(x),\ldots,P_{m}={\cal P}_{s}(x^{\prime})\) such that \(P_{i-1},P_{i}\) are wall-adjacent of type \((c,s)\) for all \(1\leq i\leq m\). Applying induction to the shorter gallery \((x^{\prime},c_{2},\ldots,c_{k})\) the claim follows.
**(6.4) Corollary**.: _Every \(3\)-spherical, thick twin building is wall-connected._
Proof.: Let \(\Delta\) be a \(3\)-spherical thick twin building. By Corollary (6.2) \(\Delta\) satisfies Condition \(\mbox{(co)}_{1}\). Let \((R,Z)\) be a twin residue of \(\Delta\) of type \(J\) and rank \(3\). Let \(c\in R\) and \(s\in J\). If \((R,Z)\) is wall-connected, the claim follows from the previous theorem. Thus let \(Q_{1},Q_{2}\subseteq Z\) such that \(Q_{1},Q_{2}\in{\cal P}_{s}(c)^{\rm op}\). Then \(T:={\rm proj}_{Z}\,{\cal P}_{s}(c)\) is a panel which is parallel to \({\cal P}_{s}(c),Q_{1},Q_{2}\) by Lemma (3.4) and Lemma (3.6). Let \(w:=\delta(Q_{1},T)=\delta(Q_{2},T)\). Then \(\ell(sw)=\ell(w)+1\), since \(Q_{1},Q_{2}\) are \(s\)-panels. Let \(t\) be the type of \(T\). By Lemma (2.4) we have \(t=w^{-1}sw\). By Lemma (4.1) there exists a compatible path \((P_{0}=Q_{1},\ldots,P_{k}=T)\). Using Lemma (4.3) there exists a compatible path \((P_{0}^{\prime}=Q_{2},\ldots,P_{k}^{\prime}=T)\) of the same length and type. Since
\(\ell(\delta(Q_{1},T))+1=\ell(\delta(Q_{2},T))+1=\ell_{*}(c,\operatorname{proj}_{T}c)\), both compatible paths are \(\mathcal{P}_{s}(c)\)-compatible by Theorem (4.7) and \((R,Z)\) is wall-connected.
**(6.5) Corollary**.: _Every \(2\)-spherical, thick twin building which satisfies Condition_ (co) _is wall-connected._
Proof.: Using similar arguments as in Theorem (6.3) and Corollary (6.4) the claim follows.
### Wall-connected RGD-systems
A _reflection_ is an element of \(W\) that is conjugate to an element of \(S\). For \(s\in S\) we let \(\alpha_{s}:=\{w\in W\mid\ell(sw)>\ell(w)\}\) be the _simple root_ corresponding to \(s\). A _root_ is a subset \(\alpha\subseteq W\) such that \(\alpha=v\alpha_{s}\) for some \(v\in W\) and \(s\in S\). We denote the set of all roots by \(\Phi\). A root \(\alpha\in\Phi\) is called _positive_ (resp. _negative_), if \(1_{W}\in\alpha\) (resp. \(1_{W}\notin\alpha\)). We let \(\Phi_{+}\) (resp. \(\Phi_{-}\)) be the set of all positive (resp. negative) roots. For each root \(\alpha\in\Phi\) we denote the _opposite root_ by \(-\alpha\) and we denote the unique reflection which interchanges these two roots by \(r_{\alpha}\). Two roots \(\alpha\neq\beta\in\Phi\) are called _prenilpotent_ (or \(\{\alpha,\beta\}\) is called a _prenilpotent pair_) if \(\alpha\cap\beta\neq\emptyset\neq(-\alpha)\cap(-\beta)\). For a prenilpotent pair \(\{\alpha,\beta\}\) we define \([\alpha,\beta]:=\{\gamma\in\Phi\mid\alpha\cap\beta\subseteq\gamma\text{ and }(-\alpha)\cap(-\beta)\subseteq-\gamma\}\) and \((\alpha,\beta):=[\alpha,\beta]\backslash\{\alpha,\beta\}\).
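To fix ideas we include a standard example; it is only meant as an illustration of the notions just introduced. Let \((W,S)\) be of type \(A_{2}\), i.e. \(S=\{s,t\}\) and \(W=\{1_{W},s,t,st,ts,sts\}\) with \(s^{2}=t^{2}=(st)^{3}=1_{W}\). Then

\[\alpha_{s}=\{1_{W},t,ts\},\qquad\alpha_{t}=\{1_{W},s,st\},\qquad s\alpha_{t}=t\alpha_{s}=\{1_{W},s,t\},\]

so \(\Phi_{+}=\{\alpha_{s},\alpha_{t},s\alpha_{t}\}\), \(r_{\alpha_{s}}=s\) and \(r_{s\alpha_{t}}=sts\). The pair \(\{\alpha_{s},\alpha_{t}\}\) is prenilpotent, since \(1_{W}\in\alpha_{s}\cap\alpha_{t}\) and \(sts\in(-\alpha_{s})\cap(-\alpha_{t})\), and one checks that \([\alpha_{s},\alpha_{t}]=\Phi_{+}\) and \((\alpha_{s},\alpha_{t})=\{s\alpha_{t}\}\).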
An _RGD-system of type \((W,S)\)_ is a pair \(\mathcal{D}=\big{(}G,\left(U_{\alpha}\right)_{\alpha\in\Phi}\big{)}\) consisting of a group \(G\) together with a family of subgroups \(U_{\alpha}\) (called _root groups_) indexed by the set of roots \(\Phi\), which satisfies the following axioms, where \(H:=\bigcap_{\alpha\in\Phi}N_{G}(U_{\alpha}),U_{\pm}:=\langle U_{\alpha}\mid \alpha\in\Phi_{\pm}\rangle\):
1. For each \(\alpha\in\Phi\), we have \(U_{\alpha}\neq\{1\}\).
2. For each prenilpotent pair \(\{\alpha,\beta\}\subseteq\Phi\), the commutator group \([U_{\alpha},U_{\beta}]\) is contained in the group \(U_{(\alpha,\beta)}:=\langle U_{\gamma}\mid\gamma\in(\alpha,\beta)\rangle\).
3. For every \(s\in S\) and each \(u\in U_{\alpha_{s}}\backslash\{1\}\), there exist \(u^{\prime},u^{\prime\prime}\in U_{-\alpha_{s}}\) such that the product \(m(u):=u^{\prime}uu^{\prime\prime}\) conjugates \(U_{\beta}\) onto \(U_{s\beta}\) for each \(\beta\in\Phi\).
4. For each \(s\in S\), the group \(U_{-\alpha_{s}}\) is not contained in \(U_{+}\).
5. \(G=H\langle U_{\alpha}\mid\alpha\in\Phi\rangle\).
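To illustrate these axioms we recall the prototypical rank-one example; it is standard and only included here for orientation. Let \(k\) be a field, \(G=\mathrm{SL}_{2}(k)\), \(S=\{s\}\) and \(\Phi=\{\alpha_{s},-\alpha_{s}\}\), with

\[U_{\alpha_{s}}=\left\{\begin{pmatrix}1&a\\ 0&1\end{pmatrix}\;\middle|\;a\in k\right\},\qquad U_{-\alpha_{s}}=\left\{\begin{pmatrix}1&0\\ a&1\end{pmatrix}\;\middle|\;a\in k\right\}.\]

Here \(H\) is the subgroup of diagonal matrices, there is no prenilpotent pair of distinct roots (so the commutator axiom is vacuous), and for \(u=\begin{pmatrix}1&a\\ 0&1\end{pmatrix}\) with \(a\neq 0\) one may take \(u^{\prime}=u^{\prime\prime}=\begin{pmatrix}1&0\\ -a^{-1}&1\end{pmatrix}\), so that \(m(u)=\begin{pmatrix}0&a\\ -a^{-1}&0\end{pmatrix}\) interchanges \(U_{\alpha_{s}}\) and \(U_{-\alpha_{s}}\) by conjugation. Since \(\mathrm{SL}_{2}(k)\) is generated by its elementary matrices, the last axiom is satisfied as well.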
It is well-known that any RGD-system \(\mathcal{D}\) acts on a twin building, which is denoted by \(\Delta(\mathcal{D})\) (cf. [1, Section 8.9]). This twin building is a so-called _Moufang twin building_ (cf. [1, Section 8.3]). There is a distinguished pair of opposite chambers in \(\Delta(\mathcal{D})\), which we will denote by \((c_{+},c_{-})\).
**(6.6) Lemma**.: _For \(\varepsilon\in\{+,-\}\) the group \(U_{\varepsilon}\) acts simply transitively on the set of chambers opposite \(c_{\varepsilon}\)._
Proof.: This is [1, Corollary 8.32].
We say that an RGD-system \(\mathcal{D}=\big{(}G,\left(U_{\alpha}\right)_{\alpha\in\Phi}\big{)}\) is _wall-connected_, if the following condition is satisfied
\[\forall(\varepsilon,s)\in\{+,-\}\times S:U_{\varepsilon}=\langle U_{\beta} \mid\beta\in\Phi_{\varepsilon},o(r_{\beta}s)<\infty\rangle\] (wc)
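For instance, if \((W,S)\) is spherical, then \(o(r_{\beta}s)<\infty\) holds for every root \(\beta\) and every \(s\in S\), so the right hand side of (wc) equals \(\langle U_{\beta}\mid\beta\in\Phi_{\varepsilon}\rangle=U_{\varepsilon}\); hence every RGD-system of spherical type is wall-connected.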
For the notion of _twin roots_ we refer to [1, Section 5.8.5]. Let \(\alpha\) be a twin root. Then we define the _wall_ associated to \(\alpha\) as the set of all panels \(P\) such that \(P\) is stabilized by \(r_{\alpha}\).
**(6.7) Lemma**.: _Let \(\varepsilon\in\{+,-\},P\subseteq\mathcal{C}_{\varepsilon},Q\subseteq\mathcal{ C}_{-\varepsilon}\) be two parallel panels and let \(s\in S\) be the type of \(P\). Then the reflection \(s\) stabilizes \(P\) and \(Q\)._
Proof.: By Theorem (4.6) there exists a \(P\)-compatible path \((Q_{0},\ldots,Q_{k}=Q)\). Since \(P\) and \(Q_{0}\) are opposite, both panels are stabilized by the reflection \(s\). The claim follows by induction and the fact, that opposite panels in a rank \(2\) residue are stabilized by the same reflection.
### A criterion for wall-connectedness
**(6.8) Lemma**.: _Let \(s\in S,\varepsilon\in\{+,-\}\), let \(P:=\mathcal{P}_{s}(c_{\varepsilon})\subseteq\mathcal{C}_{\varepsilon}\) and let \(P_{0},\ldots,P_{k}\subseteq\mathcal{C}_{-\varepsilon}\) be panels such that \((P_{0},\ldots,P_{k})\) is a \(P\)-compatible path. Then the group \(\langle U_{\beta}\mid\beta\in\Phi_{\varepsilon},o(r_{\beta}s)<\infty\rangle\) acts transitively on the set of panels opposite \(P_{k}\) in \(R(P_{k-1},P_{k})\)._
Proof.: For \(s\in S\) we abbreviate \(W_{s}:=\langle U_{\beta}\mid\beta\in\Phi_{\varepsilon},o(r_{\beta}s)<\infty\rangle\). Since \(P_{k-1},P_{k}\) are opposite in \(R(P_{k-1},P_{k})\), it suffices to show that for any panel \(Q\subseteq R(P_{k-1},P_{k})\) which is opposite to \(P_{k}\) in \(R(P_{k-1},P_{k})\) there exists \(g\in W_{s}\) such that \(g.Q=P_{k-1}\). Let \(Q\) be such a panel. Then there exists \(y\in Q\) such that \(\operatorname{proj}_{P_{k}}c_{\varepsilon},y\) are opposite in \(R(P_{k-1},P_{k})\). Let \(x\in P_{k-1}\) be opposite to \(\operatorname{proj}_{P_{k}}c_{\varepsilon}\) in \(R(P_{k-1},P_{k})\). We will show that there exists \(g\in W_{s}\) such that \(g.y=x\). Let \((c_{0}=\operatorname{proj}_{P_{k}}c_{\varepsilon},\ldots,c_{k}=x)\) and \((d_{0}=\operatorname{proj}_{P_{k}}c_{\varepsilon},\ldots,d_{k}=y)\) be minimal galleries and let \(i=\max\{0\leq j\leq k\mid\forall 0\leq k\leq j:c_{k}=d_{k}\}\). We will show the hypothesis by induction on \(k-i\). If \(k-i=0\) there is nothing to show. Now let \(k-i>0\). Let \(\beta\) be the twin root such that \(c_{i}\in\beta,c_{i+1}\notin\beta\). Then \(c_{\varepsilon}\in\beta\). Since the twin building is a Moufang twin building, there exists \(g\in U_{\beta}\) such that \(g.d_{i+1}=c_{i+1}\) (cf. [1, Example 8.47]). Since \(o(r_{\beta}s)<\infty\) by Lemma (6.7) we have \(g\in W_{s}\). By induction we obtain \(h\in W_{s}\) such that \(hg.y=h.(g.y)=x\). This finishes the claim.
**(6.9) Theorem**.: _Let \(\mathcal{D}=\big{(}G,(U_{\alpha})_{\alpha\in\Phi}\big{)}\) be an RGD-system of type \((W,S)\) and let \((\varepsilon,s)\in\{+,-\}\times S\). Then the following are equivalent:_
1. \(U_{\varepsilon}=\langle U_{\beta}\mid\beta\in\Phi_{\varepsilon},o(r_{\beta}s )<\infty\rangle\)_;_
2. \(\Gamma_{s}(c_{\varepsilon})\) _is connected._
Proof.: Again, we abbreviate \(W_{s}:=\langle U_{\beta}\mid\beta\in\Phi_{\varepsilon},o(r_{\beta}s)<\infty\rangle\). At first we assume that \(\Gamma_{s}(c_{\varepsilon})\) is connected. Let \(x,y\in c_{\varepsilon}^{\text{op}}\) such that \(\mathcal{P}_{s}(x),\mathcal{P}_{s}(y)\) are wall-adjacent of type \((c_{\varepsilon},s)\). It suffices to show that there exists \(g\in W_{s}\) such that \(g.x=y\) (the general case follows by induction and Lemma (6.6) applied to \(y=h.x\) for \(h\in U_{\varepsilon}\)). Let \((P_{0}=\mathcal{P}_{s}(x),\ldots,P_{k}=T)\) and \((Q_{0}=\mathcal{P}_{s}(y),\ldots Q_{k}=T)\) be \(\mathcal{P}_{s}(c_{\varepsilon})\)-compatible paths of the same length and type. We show the hypothesis via induction on \(k\). If \(k=0\) we have \(\mathcal{P}_{s}(x)=\mathcal{P}_{s}(y)\) and therefore \(\delta_{-\varepsilon}(x,y)\in\langle s\rangle\). Then there exists \(g\in U_{\alpha_{s}}\) (\(\alpha_{s}\) is the twin root containing \(c_{\varepsilon}\) but not any \(s\)-adjacent chamber) with \(g.x=y\). Now let \(k>0\). By Lemma (6.8) there exists \(g\in W_{s}\) such that \(g.P_{k-1}=Q_{k-1}\). We obtain the \(\mathcal{P}_{s}(c_{\varepsilon})\)-compatible paths \((g.P_{0},\ldots,g.P_{k-1}=Q_{k-1})\) and \((Q_{0},\ldots,Q_{k-1})\). Using induction we obtain \(h\in W_{s}\) such that \(hg.x=h.(g.x)=y\).
Now we assume that \(U_{\varepsilon}=W_{s}\). Let \(\beta\in\Phi_{\varepsilon}\) such that \(o(r_{\beta}s)<\infty\) and let \(g\in U_{\beta}\). Since \(o(r_{\beta}s)<\infty\), there exists \(1\leq k\in\mathbb{N}\) such that \((r_{\beta}s)^{k}=1\). Then \((r_{\beta}s)^{k-1}r_{\beta}=s\) and hence we have \(v^{-1}tv=s\) for some \(v\in W\) and \(t\in\{r_{\beta},s\}\) Since \(r_{\beta}\) is a reflection, we have \(r_{\beta}=w^{-1}uw\) for some \(w\in W\) and \(u\in S\). In particular, we have \(v^{-1}sv=r\) for some \(v\in W,r\in S\). Note that \((sv)^{-1}s(sv)=r\). Thus let \(v^{\prime}\in\{v,sv\}\) be such that \(\ell(sv^{\prime})=\ell(v^{\prime})+1\). Let \(z\in A_{-\varepsilon}(c_{+},c_{-})\) be such that \(\delta_{-\varepsilon}(c_{-\varepsilon},z)=v^{\prime}\). Then \(\mathcal{P}_{s}(c_{-\varepsilon})\) and \(\mathcal{P}_{r}(z)\) are parallel by Lemma (2.4). Since \(\ell(v^{\prime}r)=\ell(sv^{\prime})=\ell(v^{\prime})+1\), we deduce \(z=\operatorname{proj}_{\mathcal{P}_{r}(z)}c_{-\varepsilon}\). By Lemma (4.1) there exists a compatible path \((P_{0}=\mathcal{P}_{s}(c_{-\varepsilon}),\ldots,P_{n}=\mathcal{P}_{r}(z))\). This compatible path is \(\mathcal{P}_{s}(c_{\varepsilon})\)-compatible by Theorem (4.7). Since \(\delta(\mathcal{P}_{s}(c_{-\varepsilon}),\mathcal{P}_{r}(z))=\delta(\mathcal{P} _{s}(g.c_{-\varepsilon}),\mathcal{P}_{r}(z))\) we have also a \(\mathcal{P}_{s}(c_{\varepsilon})\)-compatible path \((Q_{0}=\mathcal{P}_{s}(g.x),\ldots,Q_{n}=\mathcal{P}_{r}(z))\) by Lemma (4.3) of the same length and type. Hence \(\mathcal{P}_{s}(c_{-\varepsilon})\) and \(\mathcal{P}_{s}(g.c_{-\varepsilon})\) are wall-adjacent of type \((c_{\varepsilon},s)\). Using induction the claim follows from Lemma (6.6).
**(6.10) Corollary**.: _Let \(\mathcal{D}\) be an RGD-system of type \((W,S)\). Then the following are equivalent:_
1. \(\mathcal{D}\) _is wall-connected._
2. \(\Delta(\mathcal{D})\) _is wall-connected._
Proof.: Using the fact that \(G\) acts transitive on the set of chambers in one half, the claim follows from Lemma (5.11) and the previous theorem.
### A result about Moufang polygons
We need a result about Moufang polygons. The proof communicated to us by Richard Weiss relies on the basic theory of Moufang polygons as developed in the first chapters of [14]. In this subsection we use the notation of [14].
For \(i<k<j\), \(a_{i}\in U_{i}\), \(a_{j}\in U_{j}\) there exist \(a_{l}\in U_{l}\), \(i<l<j\), such that \([a_{i},a_{j}]=a_{i+1}\cdots a_{j-1}\). We define \([a_{i},a_{j}]_{k}:=a_{k}\) as well as \([U_{i},U_{j}]_{k}:=\{[a_{i},a_{j}]_{k}\mid a_{i}\in U_{i},a_{j}\in U_{j}\}\).
**(6.11) Proposition**.: _For each \(i+1\leq k\leq i+n-2\) we have \([U_{i},U_{i+n-1}]_{k}=U_{k}\)._
Proof.: By definition it suffices to show that \(U_{k}\subseteq[U_{i},U_{i+n-1}]_{k}\). We notice that [14, (6.4)] is also correct if we shift the indices. We prove the claim by induction on \(k-i\). Let \(k-i=1\), let \(a_{i+1}\in U_{i+1}\) and fix a nontrivial \(a_{i}\in U_{i}\). By [14, (6.1)] we have \(U_{i+n-1}^{\mu(a_{i})}=U_{i+1}\), where \(\mu\) is the mapping defined in [14, (6.1)]. Thus there exists \(a_{i+n-1}\in U_{i+n-1}\) such that \(a_{i+1}=a_{i+n-1}^{\mu(a_{i})}\). For each \(i<j<i+n-1\) let \(b_{j}\in U_{j}\) such that \([a_{i},a_{i+n-1}^{-1}]=b_{i+1}\cdots b_{i+n-2}\). By [14, (6.4)(\(i\))] we have \(b_{i+1}=a_{i+n-1}^{\mu(a_{i})}=a_{i+1}\) and therefore \([a_{i},a_{i+n-1}]_{i+1}=a_{i+1}\). Now let \(k-i>1\). Using [14, (6.4)(\(iii\))] we obtain \([U_{i},U_{i+n-1}]_{k}=[U_{i+1},U_{i+n}]_{k}\) for each \(i+2\leq k\leq i+n-2\). Using induction the claim follows.
**(6.12) Corollary**.: _Let \(i+1\leq k\leq i+n-2\). Then \(U_{k}\leq\langle U_{i},U_{i+1},\ldots,U_{k-1},U_{k+1},\ldots,U_{i+n-1}\rangle\)._
Proof.: This is a direct consequence of the previous proposition.
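For orientation, consider the smallest case \(n=3\) (a Moufang triangle); this instance is only meant to unpack the notation and is not taken from [14]. Here the only admissible index is \(k=i+1\), so the proposition and the corollary read
\[[U_{i},U_{i+2}]_{i+1}=U_{i+1},\qquad\text{and hence}\qquad U_{i+1}\leq\langle U_{i},U_{i+2}\rangle,\]
i.e. every element of \(U_{i+1}\) arises as the middle factor of a commutator \([a_{i},a_{i+2}]\) with \(a_{i}\in U_{i}\) and \(a_{i+2}\in U_{i+2}\).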
### Affine twin buildings of rank \(3\)
**(6.13) Proposition**.: _Let \(\mathcal{D}=(G,(U_{\alpha})_{\alpha\in\Phi})\) be an RGD-system of irreducible affine type and of rank \(3\). Then \(\mathcal{D}\) is wall-connected._
Proof.: We argue in the geometric realization of the Coxeter complex associated with \((W,S)\) in the Euclidean plane (as visualized in [15, Figures \(2.1-2.3\)]). Thus we think of the Coxeter complex \(\Sigma\) as a tessellation of the Euclidean plane by chambers (i.e. the triangles). The walls of \(\Sigma\) correspond to the lines; each wall determines two half-planes, and these correspond to the roots of \(\Sigma\). We choose a chamber \(c\) and identify the set of fundamental reflections with the reflections of the Euclidean plane whose walls are walls of \(c\). Moreover, the set of positive roots \(\Phi_{+}\) is identified with the set of half-planes that contain \(c\). Let \(s\in S\). By definition it suffices to show that \(U_{\gamma}\subseteq U^{\prime}\) for each \(\gamma\in\Phi_{+}\), where \(U^{\prime}:=\langle U_{\beta}\mid\beta\in\Phi_{+},o(sr_{\beta})<\infty\rangle\). Let \(\gamma\) be a root in \(\Phi_{+}\). If \(o(sr_{\gamma})<\infty\), then \(U_{\gamma}\subseteq U^{\prime}\) by the definition of \(U^{\prime}\). Thus, it remains to consider the case where \(o(sr_{\gamma})=\infty\). We consider a gallery \((c=c_{0},...,c_{k-1},c_{k})\) in \(\Sigma\) such that \(r_{\gamma}\) switches \(c_{k-1}\) and \(c_{k}\) and such that \(k\) is minimal for this property. As \(o(sr_{\gamma})=\infty\), we have \(k\geq 2\) and therefore a unique rank \(2\) residue \(R\) containing the chambers \(c_{k-2},c_{k-1}\) and \(c_{k}\). We put \(d:=\operatorname{proj}_{R}c\) and remark that the wall of \(\gamma\) is not a wall of \(d\) by the minimality of \(k\). In particular, the gonality \(m\) of the residue \(R\) is at least \(3\). The residue \(R\) corresponds to a vertex \(v\) in the geometric realization of \(\Sigma\) and we let \(\Phi_{+}^{v}\) denote the set of all positive roots having \(v\) on their boundary. Let \(\alpha\) and \(\beta\) be the two roots in \(\Phi_{+}^{v}\) such that \(\{d\}=\alpha\cap\beta\cap R\). Then \(\alpha\neq\gamma\neq\beta\) and we have a natural numbering \((\alpha=\alpha_{1},\alpha_{2},\ldots,\alpha_{m}=\beta)\) of \(\Phi_{+}^{v}\) and \(1<\ell<m\) such that \(\gamma=\alpha_{\ell}\). Furthermore, we have \(o(sr_{\alpha_{i}})<\infty\) for all \(1\leq i\leq m\) with \(i\neq\ell\). Therefore we have \(U_{\alpha_{i}}\subseteq U^{\prime}\) for all \(1\leq i\leq m\) with \(i\neq\ell\) by the previous case. Thus, it follows
from the previous corollary that \(U_{\gamma}\subseteq U^{\prime}\). That \(U_{-}=\langle U_{\beta}\mid\beta\in\Phi_{-},o(sr_{\beta})<\infty\rangle\) follows in a similar fashion.
**(6.14) Lemma**.: _Let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*}),\Delta^{\prime}=(\Delta^{\prime}_{+},\Delta^{\prime}_{-},\delta^{\prime}_{*})\) be two thick, \(2\)-spherical twin buildings of rank \(\geq 3\), let \(c\in\mathcal{C}_{+},c^{\prime}\in\mathcal{C}^{\prime}_{+}\) and let \(\varphi:E_{2}(c)\to E_{2}(c^{\prime})\) be an isometry. Then \(\Delta\) is wall-connected if and only if \(\Delta^{\prime}\) is wall-connected._
Proof.: By [16, Proposition 7.1.6] there exist chambers \(d\in c^{\mathrm{op}}\) and \(d^{\prime}\in c^{\prime\mathrm{op}}\) such that the mapping \(d\to d^{\prime}\) extends the isometry \(\varphi\) to an isometry \(\psi:E_{2}(c)\cup\{d\}\to E_{2}(c^{\prime})\cup\{d^{\prime}\}\). If \(\Delta\) is wall-connected, then this isometry extends to an isometry of the whole twin buildings by Corollary (5.18). Now the claim follows from Lemma (5.11). If \(\Delta^{\prime}\) is wall-connected, then the isometry \(\psi^{-1}\) extends to an isometry of the whole twin buildings. Again, Lemma (5.11) implies that \(\Delta\) is wall-connected, too.
**(6.15) Convention**.: We label the diagrams \(\tilde{C}_{2}\) and \(\tilde{G}_{2}\) in a linear order by \(1,2,3\) such that \(o(s_{1}s_{2})=3\) in the case of \(\tilde{G}_{2}\).
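For concreteness, with this labelling the standard Coxeter data of the two affine types read (recalled here only as a reminder):
\[\tilde{C}_{2}:\;o(s_{1}s_{2})=4,\;o(s_{2}s_{3})=4,\;o(s_{1}s_{3})=2,\qquad\tilde{G}_{2}:\;o(s_{1}s_{2})=3,\;o(s_{2}s_{3})=6,\;o(s_{1}s_{3})=2.\]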
**(6.16) Lemma**.: _Let \(\Delta,\Delta^{\prime}\) be two twin buildings of the same type \(\tilde{C}_{2}\) (resp. \(\tilde{G}_{2}\)). Suppose that the \(\{s_{1},s_{2}\}\)-residues of \(\Delta\) and \(\Delta^{\prime}\) are isomorphic to the building associated with \(C_{2}(2)\) (resp. \(A_{2}(2)\) or \(A_{2}(3)\)). Let \(c\in\Delta,c^{\prime}\in\Delta^{\prime}\) be chambers and let \(R\) and \(R^{\prime}\) denote the \(\{s_{2},s_{3}\}\)-residues containing \(c\) and \(c^{\prime}\) respectively. Then each isometry from \(R\) to \(R^{\prime}\) extends to an isometry from \(E_{2}(c)\) to \(E_{2}(c^{\prime})\)._
Proof.: We shall need the following elementary observation:
**Observation:** Let \(\Gamma\) be one of the buildings associated with \(A_{2}(2),A_{2}(3)\) or \(C_{2}(2)\) and let \(P\) be a panel of \(\Gamma\). Then the stabilizer of \(P\) in the full automorphism group of \(\Gamma\) induces all permutations on the set of chambers in \(P\).
The isometry \(\varphi:R\to R^{\prime}\) induces an isometry \(\mathcal{P}_{s_{2}}(c)\to\mathcal{P}_{s_{2}}(c^{\prime})\). By the observation there exists an isometry \(\psi:R_{\{s_{1},s_{2}\}}(c)\to R_{\{s_{1},s_{2}\}}(c^{\prime})\) as both residues are isomorphic to the building associated to one of the groups \(A_{2}(2),A_{2}(3),C_{2}(2)\). The claim follows.
**(6.17) Lemma**.: _Let \(\Delta\) be a twin building of type \(\tilde{C}_{2}\) such that the \(\{s_{1},s_{2}\}\)-residues are isomorphic to the buildings associated with \(C_{2}(2)\). Then \(\Delta\) is wall-connected._
Proof.: The \(\{s_{2},s_{3}\}\)-residues are Moufang quadrangles by [9, (8.3) Theorem 4] and since the \(s_{2}\)-panels have to contain precisely \(3\) chambers, the \(\{s_{2},s_{3}\}\)-residues are all isomorphic to \(C_{2}(2)\) or they are all isomorphic to the unique Moufang quadrangle of order \((2,4)\). Let \(c\) be a chamber of \(\Delta\). By (the proof of) [8, Proposition 4] and Lemma (6.16), there exists in both cases an RGD-system \(\mathcal{D}\) of type \(\tilde{C}_{2}\), a chamber \(c^{\prime}\) of \(\Delta(\mathcal{D})\) and an isometry \(\varphi:E_{2}(c)\to E_{2}(c^{\prime})\). Now the claim follows from Proposition (6.13), Corollary (6.10) and Lemma (6.14).
**(6.18) Lemma**.: _Let \(\Delta\) be a twin building of type \(\tilde{G}_{2}\) such that the \(\{s_{2},s_{3}\}\)-residues are isomorphic to the building associated with \(G_{2}(2)\) or \(G_{2}(3)\). Then \(\Delta\) is wall-connected._
Proof.: The \(\{s_{1},s_{2}\}\)-residues are Moufang planes by [9, (8.3) Theorem 4] and since the panels contain precisely \(3\) (resp. \(4\)) chambers, the \(\{s_{1},s_{2}\}\)-residues are all isomorphic to the building associated with \(A_{2}(2)\) (resp. \(A_{2}(3)\)). Let \(c\) be a chamber in \(\Delta\). By (the proof of) [8, Proposition 4] and Lemma (6.16) there exists an RGD-system \(\mathcal{D}\) of type \(\tilde{G}_{2}\), a chamber \(c^{\prime}\) in \(\Delta(\mathcal{D})\) and an isometry \(\varphi:E_{2}(c)\to E_{2}(c^{\prime})\). Now the claim follows from Proposition (6.13), Corollary (6.10) and Lemma (6.14).
**(6.19) Theorem**.: _Let \(\Delta\) be a thick irreducible twin building of affine type \((W,S)\) and of rank \(3\). Then \(\Delta\) is wall-connected._
Proof.: If there is no rank 2 residue of \(\Delta\) which is isomorphic to \(C_{2}(2),G_{2}(2)\) or \(G_{2}(3)\), then \(\Delta\) satisfies Condition (co) by [7, Section 1] and is therefore wall-connected by Corollary (6.5). If there is a residue isomorphic to \(C_{2}(2)\), then \(\Delta\) is wall-connected by [1, Corollary 5.157] and Lemma (6.17) and if there is a residue isomorphic to \(G_{2}(2)\) or \(G_{2}(3)\), then \(\Delta\) is wall-connected by [1, Corollary 5.157] and Lemma (6.18).
|
2301.00077 | A Study on a User-Controlled Radial Tour for Variable Importance in
High-Dimensional Data | Principal component analysis is a long-standing go-to method for exploring
multivariate data. The principal components are linear combinations of the
original variables, ordered by descending variance. The first few components
typically provide a good visual summary of the data. Tours also make linear
projections of the original variables but offer many different views, like
examining the data from different directions. The grand tour shows a smooth
sequence of projections as an animation following interpolations between random
target bases. The manual radial tour rotates the selected variable's
contribution into and out of a projection. This allows the importance of the
variable to structure in the projection to be assessed. This work describes a
mixed-design user study evaluating the radial tour's efficacy compared with
principal component analysis and the grand tour. A supervised classification
task is assigned to participants who evaluate variable attribution of the
separation between two classes. Their accuracy in assigning the variable
importance is measured across various factors. Data were collected from 108
crowdsourced participants, who performed two trials with each visual for 648
trials in total. Mixed model regression finds strong evidence that the radial
tour results in a large increase in accuracy over the alternatives.
Participants also reported a preference for the radial tour in comparison to
the other two methods. | Nicholas Spyrison, Dianne Cook, Kim Marriott | 2022-12-31T00:07:40Z | http://arxiv.org/abs/2301.00077v1 | # A Study on a User-Controlled Radial Tour for Variable Importance in High-Dimensional Data
###### Abstract
Principal component analysis is a long-standing go-to method for exploring multivariate data. The principal components are linear combinations of the original variables, ordered by descending variance. The first few components typically provide a good visual summary of the data. _Tours_ also make linear projections of the original variables but offer many different views, like examining the data from different directions. The grand tour shows a smooth sequence of projections as an animation following interpolations between random target bases. The manual radial tour rotates the selected variable's contribution into and out of a projection. This allows the importance of the variable to structure in the projection to be assessed. This work describes a mixed-design user study evaluating the radial tour's efficacy compared with principal component analysis and the grand tour. A supervised classification task is assigned to participants who evaluate variable attribution of the separation between two classes. Their accuracy in assigning the variable importance is measured across various factors. Data were collected from 108 crowdsourced participants, who performed two trials with each visual for 648 trials in total. Mixed model regression finds strong evidence that the radial tour results in a large increase in accuracy over the alternatives. Participants also reported a preference for the radial tour in comparison to the other two methods.
Multivariate data visualization, variable importance, radial tour, linear dimension reduction,
## 1 Introduction
Despite decades of research, multivariate data continues to provide fascinating challenges for visualization. Data visualization is important because it is a key element of exploratory data analysis [43] for assessing model assumptions and as a cross-check on numerical summarization [50, 2, 26]. One challenge is measuring if a new technique yields a more informed perception of information than current practices.
Dimension reduction is commonly used with visualization to provide informative low-dimensional summaries of quantitative multivariate data. Principal component analysis (PCA) [34] is one of the first methods ever developed, and it remains very popular. Visualization of PCA is typically in the form of static scatterplots of a few leading components. When the scatterplot is accompanied by a visual representation of the basis, it is called a biplot [17]. A basis is a \(p\times d\) matrix defining the linear combinations of the \(p\) variables mapped to a smaller \(d\)-dimensional space. That is, it is an orthonormal matrix encoding the magnitude and the angle with which each variable contributes.
Dynamic visualizations called _tours_[4] animate through a sequence of linear projections (orthonormal bases). Instead of a static view, tours provide a smoothly changing view by interpolating between bases. There are various types of tours distinguished by how the paths are generated. Asimov originally animated between randomly selected bases in the _grand_ tour. The _manual_ tour [11] allows for user control over the basis changes. A selected variable (or component) can be rotated into or out of view or to a particular value. The _radial tour_[42] is a variant of the manual tour that fixes the contribution angle and changes the magnitude along the radius. The permanence of the data points from basis to basis holds information between intermediate interpolated projections, and the user control of the basis could plausibly lead to more information being perceived than a static display. This is a hypothesis that a user study could assess.
Empirical studies have rarely assessed tours. An exception is [31], who compares scatterplots of grand tours on 2D monitors with 3D (stereoscopic, not head-mounted) over \(n=15\) participants. Participants perform cluster detection, dimensionality estimation, and radial sparseness tasks on six-dimensional data. They find that stereoscopic 3D leads to more accuracy in cluster identification, though the time to interact with the display was much higher in the 3D environment. In this work, we extend the evaluation of tours by comparing the radial tour benchmarked against the grand tour and discrete pairs of principal components.
The contribution of this paper is an empirical user study comparing the radial tour against PCA and the grand tour for assessing variable attribution on clustered data. This is the first empirical evaluation of the radial or manual tour. We discuss how this fits with other multivariate data visualization techniques and coordinated views of linear projections.
We are particularly interested in assessing the effectiveness of the new radial tour relative to common practice with PCA and the grand tour. The user influence over a basis, uniquely available in the radial tour, is crucial to testing variable sensitivity to the structure visible in projection. If the contribution of a variable is reduced and the feature disappears, then we say that the variable was sensitive to that structure. For example, Fig. 1 shows two projections of simulated data. Panel (a) shows the separation between the two clusters. The contributions in panel (b) show no such cluster separation. The former has a large contribution of V2 in the direction of separation, while it is negligible in panel (b). Because of this, we say that V2 is sensitive to the separation of the clusters.
Variable sensitivity is important for the interpretation of machine learning models; it describes the magnitude and direction of a variable's contribution to the model. It is important that developers maintain the interpretability of models. Explainable Artificial Intelligence (XAI) [1, 3] is an emerging field that extends the interpretability of such black-box models. Multivariate data visualization is essential for exploring feature spaces and communicating interpretations of models [6, 47, 5].
The paper is structured as follows. Sect. 2 provides background on standard visualization methods and linear dimension reduction techniques. Sect. 3 describes the experimental factors, task, and accuracy measure used. The results of the study are discussed in Sect. 4. Conclusions and potential future directions are discussed in Sect. 6. More results, participant demographics, and analysis of the response time are available in the Supplemental Materials.
## 2 Related work
Consider the data to be a matrix of \(n\) observations (rows) and \(p\) variables (columns), denoted as \(X_{n\times p}\).
### Orthogonal multivariate visualization
Grinstein [19] illustrates many multivariate visualization methods. In particular, this work shows examples of actual visuals. Liu [25] give a good classification and taxonomy of such methods. The content below focuses on the most common visuals that use the full data space before discussing linear combinations of those variables in projections.
#### 2.1.1 Scatterplot matrix
One could consider looking at \(p\) histograms or univariate densities. Doing so will miss features in two or more dimensions. Fig. 2 shows a scatterplot matrix [9] of the four principal components of simulated data. Such displays do not scale well with dimensions because each plot would get less and less space. Scatterplot matrices can only display information in two orthogonal dimensions, so features in three dimensions may not be fully resolved.
#### 2.1.2 Parallel coordinates plot
Another common way to display multivariate data is with a parallel coordinates plot [32]. Parallel coordinates plots scale well with dimensions but poorly with observations, as the lines overcrowd the display. Parallel coordinate plots are asymmetric across variable orderings, in that shuffling the order of the variables can lead to different conclusions. Another shortcoming is the graphical channel used to convey information. [29] suggests that position is the visual channel that is most perceptible to humans. In the case of parallel coordinates plots, the horizontal axes span variables rather than the values of one variable, causing the loss of a display dimension to be used by our most perceptible visual channel.
### Multivariate projections
At some point, visualization will be forced to turn to dimension reduction to scale better with the dimensionality of the data. Below we introduce linear projections and the common principal component analysis. Then we touch on nonlinear projections and exclude them from consideration.
#### 2.2.1 Linear
Let data, \(X\), contain \(n\) observations of \(p\) variables. A linear projection maps a higher \(p\)-dimensional space onto a smaller \(d\)-dimensional space with an affine mapping (where parallel lines stay parallel). A projection, \(Y\), is the resulting space of the data multiplied by a _basis_, \(A\), such that \(Y_{n\times d}=X_{n\times p}\times A_{p\times d}\). This is essentially a reorientation of the original variables. This intuition is conveyed by thinking of a shadow as a 2D projection of a 3D object. Rotating the object changes the shadow it casts and, correspondingly, the basis that maps the reorientation of the object.
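To make the notation concrete, here is a minimal sketch in Python (not part of the original study; the data and basis below are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, d = 100, 4, 2          # observations, original variables, projected dims
X = rng.normal(size=(n, p))  # placeholder data matrix X_{n x p}

# An orthonormal p x d basis A; any matrix with A^T A = I_d would do.
A, _ = np.linalg.qr(rng.normal(size=(p, d)))

Y = X @ A                                # projection Y_{n x d} = X_{n x p} A_{p x d}
print(np.allclose(A.T @ A, np.eye(d)))   # True: columns of A are orthonormal
print(Y.shape)                           # (100, 2)
```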
#### 2.2.2 Principal component analysis
PCA is a good baseline of comparison for linear projections because of its frequent and broad use across disciplines. PCA [34] defines new components, linear combinations of the original variables, ordered by decreasing variance, obtained from an eigendecomposition of the covariance matrix. While the resulting dimensionality is the same size, the benefit comes from the ordered nature of the components. The data can be said to be approximated by the first several components. The exact number is subjectively selected given the variance contained in each component, typically guided by a scree plot [8]. Features with sizable signal regularly appear in the leading components that commonly approximate data. However, this is not always the case, and component spaces should be fully explored to look for signal in components with less variation. This is especially true for cluster structure [14].
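A bare-bones sketch of this decomposition is shown below (assuming column-centred data; in practice a library routine such as scikit-learn's PCA would normally be used):

```python
import numpy as np

def pca_basis(X, d=2):
    # Centre each variable, then eigendecompose the covariance matrix.
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)             # p x p covariance matrix
    evals, evecs = np.linalg.eigh(C)         # eigenvalues in ascending order
    order = np.argsort(evals)[::-1]          # reorder by decreasing variance
    return evecs[:, order[:d]], evals[order]

X = np.random.default_rng(0).normal(size=(100, 4))
A_pca, variances = pca_basis(X, d=2)
scores = X @ A_pca                           # component scores shown in a biplot
# variances / variances.sum() gives the proportions plotted in a scree plot.
```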
#### 2.2.3 Nonlinear
Nonlinear transformations bend and distort the space in ways that are not entirely faithful to the original variable space. Popular modern methods include t-SNE and UMAP [28, 44]. Various quality metrics, such as Trustworthiness, Continuity, Normalized stress, and Average local error, have been introduced to describe the distortion of the space [16, 18]. Unfortunately, these distortions are hard to visualize and comprehend, effectively breaking the variable interpretability of the resulting space. The intuition of this can be demonstrated with map projections. Snyder [41] lists over 200 different projections that
Figure 1: Illustration of cluster separation affected by variable importance. Panel (a) is a projection mostly of V2 and V3, and the separation between clusters is in the direction of V2, not V3. This suggests V2 is important for clustering, but V3 is not. Panel (b) shows a projection of mostly V3 and V4, with no contribution from V2 and little from V3. That there is no separation between the clusters indicates that V3 and V4 are not important.
distort the surface of the earth to display as a 2D map, each with unique properties and use cases.
Because of the difficulty of interpreting the distortions of nonlinear spaces and the added subjectivity of hyperparameter selection, we exclude nonlinear techniques and instead compare three linear techniques.
### _Tours, animated linear projections_
A _tour_ animates through many linear projections. One of the insightful features of the tour is the permanence of the data points; one can track the relative changes of observations as the basis changes, as opposed to discretely jumping to an orthogonal view angle with no intermediate information. Types of tours are distinguished by the generation of their basis paths [13, 22]. In contrast with the discrete orientations of PCA, we compare continuous linear projection changes with grand and radial tours.
#### 2.3.1 Grand tours
Target bases are selected randomly in a grand tour [4]. These target bases are then geodesically interpolated for a smooth, continuous path. The grand tour is the first and most widely known tour. The random selection of target bases makes it a general unguided exploratory tool. The grand tour will make a good comparison that has a continuity of data points similar to the radial tour but lacks the user control enjoyed by PCA and radial tours.
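A sketch of how the random target bases can be drawn is given below (the geodesic interpolation between successive targets, which tour implementations handle, is omitted):

```python
import numpy as np

def random_basis(p, d, rng):
    # Orthonormalise a random Gaussian matrix to get a p x d target basis.
    M = rng.normal(size=(p, d))
    Q, _ = np.linalg.qr(M)
    return Q

rng = np.random.default_rng(1)
targets = [random_basis(6, 2, rng) for _ in range(5)]
# A grand tour geodesically interpolates between successive targets and shows
# the projected data X @ A for every intermediate basis A.
```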
#### 2.3.2 Manual and radial tours
Whether an analyst uses PCA or the grand tour, they cannot influence the basis. They cannot explore the structure identified or change the contribution of the variables. User-controlled steering is a key aspect of _manual_ _tours_ that helps to test variable attribution.
The manual tour [11] defines its basis path by manipulating the basis contribution of a selected variable. A manipulation dimension is appended onto the projection plane, giving a full contribution to the selected variable. The target bases are then chosen to rotate this newly created manipulation space. This manipulation space is similarly orthogonally restrained. The data is projected through its interpolated basis and rendered into an animation. When the contribution of one variable changes, the contributions of the other variables must also change, to maintain the orthonormality of the basis. A key feature of the manual tour is that it allows users to control the variable contributions to the basis. Such manipulations can be queued in advance or selected in real time for human-in-the-loop analysis [21]. Manual navigation is relatively time-consuming due to the vast volume of resulting view space and the abstract method of steering the projection basis. First, it is advisable to identify a basis of particular interest and then use the manual tour as a more directed, local exploration tool to explore the sensitivity of a variable's contribution to the feature of interest.
To simplify the task and keep its duration realistic, we consider a variant of the manual tour called a _radial_ tour. In a radial tour, the magnitude of the selected variable's contribution changes along the radius while its angle of contribution stays fixed, as seen in Fig. 3. The radial tour thus benefits both from the continuity of the data, as in the grand tour, and from user steering via the choice of which variable to rotate.
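The following is only a schematic sketch of this idea in Python, not the spinifex implementation: the contribution (row norm) of one chosen variable is rescaled and the basis is then re-orthonormalised.

```python
import numpy as np

def radial_step(A, manip_var, scale):
    # Rescale the selected variable's contribution, then restore orthonormal
    # columns.  Schematic only: the actual manual/radial tour rotates an
    # appended manipulation dimension rather than re-running QR.
    B = A.copy()
    B[manip_var, :] *= scale     # grow or shrink the selected variable's radius
    Q, _ = np.linalg.qr(B)       # re-orthonormalise the columns
    return Q

# Animating scale over np.linspace(1, 0, 20) removes variable 2 from the view;
# each frame shows the projection X @ radial_step(A, 2, s).
```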
Manual tours have recently been made available in the **R** package **spinifex**[42], which facilitates the manual tour (and its radial variant). It also provides an interface for a layered composition of tours and for exporting to gif and mp4 with **gganimate**[35] or to an html widget with **plotly**[40]. It is also compatible with tours made by **tourr**[48]. Now that we have a readily available means to produce various tours, we want to see how they fare against traditional discrete displays commonly used with PCA.
### _Other animated linear projections_
The work of [15] allows users to interactively change the face of a local display by navigating to adjacent faces on a global overview scatterplot matrix. This offers analysts a way to geometrically explore the transition between adjacent faces of a scatterplot matrix as though rotating from one face to an adjacent face at right angles. The interpolated bases between the orthogonal faces display linear combinations of three variables at varying degrees. This is what [27] called a _little tour_ with the addition of user control. It is a particular type of manual tour where only horizontal or vertical rotation is allowed.
Star Coordinates [20] also arrive at the biplot scatterplot displays starting from the perspective of radial parallel coordinates. [23] extend this idea, mapping it back to orthogonal projections. They provide a means to interpolate through PCA components, the orthogonal contributions of the scatterplot matrix, and the grand tour. This work also defines user-controlled interaction, similar to small steps in a manual or radial tour.
TripAdvisor [30] is an interactive application that plans sequential interpolation between distant target bases. It also provides an additional global context of a subset of possible frames with glyph representation and an overview of variable attribution by summarizing the top ten principal components. It allows for user-steering by using a "touchpad polygon". This touchpad allows for contribution magnitudes to be changed. This is similar to an incremental change with the manual tour.
The number of orthogonal axes in static plots as well as the number of bases to view in a tour increase quadratically with the dimensions, \(p\). This is why it is particularly important to properly select variables or otherwise reduce the dimensions before viewing. PCA, Linear discriminant analysis and entropy are common approaches to variable selection [37, 38, 46]. Such methods often yield a sort of screeplot [8] where the analyst selects a subjective, but informed, number of components to approximate the data while discarding the least information. The variable sensitivity we test for, in contrast, is the act of visual analysis of one variable's contribution to the structure. In practice, this is a tool for the analyst to fine-tune their variable selection or otherwise evaluate the resulting approximated space.
In order to further mitigate the view time, objective functions can be used to inform static or animated biplots. A dissimilarity statistic can be used to solve a basis path for showing a particularly interesting tour [24]. More generally projection pursuit can be used to conduct a guided tour of any objective function applied to an embedding space [12, 13]. However, the function optimized is likely to show some feature of interest if it is ultimately selected by the analyst. The ability to stop
Fig. 2: Scatterplot matrix of the first four principal components of 6D simulated data containing four classes. The separation between classes is primarily in PC1 and PC4. This is not uncommon because PCA is summarizing variance, not cluster structure.
and control the exploration at any point only stands to improve one's understanding of the data.
### Empirical evaluation
Some studies compare visualizations across complete contributions of variables. Chang [10] conducted an \(n=51\) participant study comparing parallel coordinate plots and scatterplot matrix either in isolation, sequentially, or as a coordinated view. Accuracy, completion time, and eye focus were measured for six tasks. Three tasks were more accurate with scatterplot matrix and three with parallel coordinates, while the coordinated view was usually marginally more accurate than the max of the separate visuals. Cao [7] compare nonstandardized line-glyph and star-glyphs with standardized variants (with and without fill under the curve). Each of the \(n=18\) participants performed 72 trials across the six visuals, two levels of dimensions, and two levels of observations. Visuals with variable standardization outperformed the nonstandardized variants, and the radial star-glyph reportedly outperformed the line variant.
Other studies have investigated the relative benefits of projecting to 2- or 3D scatterplots in PCA-reduced spaces. Gracia [18] conducted an \(n=40\) user study comparing 2- and 3D scatterplots on traditional 2D monitors. Participants perform point classification, distance perception, and outlier identification tasks. The results are mixed and primarily have small differences. There is some evidence to suggest a lower error in distance perception from a 3D scatterplot. Wagner Filho [45] performed an \(n=30\) mixed-design study on PCA reduced space using scatterplot displays between 2D on monitors, 3D on monitors, and 3D display with a head-mounted display. None of the tasks on any dataset lead to a significant difference in accuracy. However, the immersive display reduced effort and navigation, resulting in higher perceived accuracy and engagement. Sedlmair [39] instead used two expert coders to evaluate 75 datasets and four dimension reduction techniques across the displays of 2D scatterplots, interactive 3D scatterplots, and 2D scatterplot matrices. They suggested a tiered guidance approach finding that 2D scatterplots are often sufficient to resolve a feature. If not, try 2D scatterplots on a different dimension reduction technique before going to scatterplot matrix display or concluding a true negative. They find that interactive 3D scatterplots help in very few cases.
### Conclusion
Orthogonal axes visualizations either scale poorly with dimensionality or introduce an asymmetry of the variable ordering. Projections visualize the full \(p\)-dimensional data as fewer dimensions, traditionally 1-3 at a time. In linear, orthogonal projections, the resulting space is composed of linear combinations of the original variables, which maintains variable interpretability. Nonlinear techniques, in contrast, distort and bend the space in ways that are hard to visualize and communicate.
Tours are linear projections that are animated over changes in the basis. Several more-recent, orthographic-star coordinate methods independently reach animated linear projections similar to tours. Some quality metrics and empirical studies compare techniques but scarcely with animated methods. Below we conduct a user study to compare the radial tour with PCA and the grand tour on a variable attribution task on clustered data.
## 3 User study
The experiment was designed to assess the performance of the radial tour relative to the grand tour and PCA for interpreting the variable attribution to the separation between two clusters. Data were simulated across three experimental factors: location of the cluster separation, cluster shape, and data dimensionality. Participant responses were collected using a web application and crowdsourced through prolific.co [33], an alternative to MTurk.
### Objective
PCA will be used as a baseline for comparison as it is the most commonly used linear embedding. It will use static, discrete jumps between orthogonal components. The grand tour will act as a secondary control that will help evaluate the benefit of observation trackability between nearby animation bases but without user-control of its path. Lastly, the radial tour will be compared, which benefits from the continuity of animation and user control of the basis.
We therefore expect the radial tour to perform most accurately for at least some subset of tasks. Conversely, there is less certainty about the accuracy of such limited grand tours, as there is no objective function in selecting the bases; it is possible that the random selection of the target bases altogether avoids bases showing cluster separation. However, given that the data dimensionality is modest, it is probable that the grand tour coincidentally crosses bases with the correct information for the task.
Experimental factors and the definition of an accuracy measure are given below. The null hypothesis can be stated as:
\[H_{0}:\text{accuracy does not change across the visual methods}\] \[H_{\alpha}:\text{accuracy does change across the visual methods}\]
### Visual factors
The visual methods are tested in a mixed design, with each visual being evaluated twice by each participant. Scatterplot matrices or parallel coordinates could alternatively be used to visualize these spaces. However, we opt for single biplot displays in order to focus on the differences between the radial tour and its most comparable visuals, rather than a comprehensive comparison of visual methods. The rest of this section
Figure 3: A radial tour changing the contribution of V2. The contribution is in the direction of cluster separation. When its contribution is removed, the clusters overlap (right). Because of this, we say that V2 is sensitive to the separation of these two species.
discusses the design standardization and unique input associated with each visual.
The visualization methods were standardized wherever possible. Data were displayed as 2D scatterplots with biplots. All aesthetic values (color-blind safe colors, shapes, sizes, absence of legend, and axis titles) were constant. The variable contribution biplot was always shown left of the scatterplot embeddings with their aesthetic values consistent. What did vary between visuals were their inputs.
PCA allowed users to select between the top four principal components for each axis regardless of the data dimensionality (four or six). Upon changing an axis, the visual would change to the new view of orthogonal components without displaying intermediate bases. There was no user input for the grand tour; users were instead shown a 15-second animation of the same randomly selected path (variables containing cluster separation were shuffled after simulation). Participants could view the same clip up to four times within the time limit. Radial tours allowed participants to select the manipulation variable. The starting basis was initialized to a half-clock design, where the variables were evenly distributed in half of the circle. This design was created to be variable agnostic while maximizing the independence of the variables. Selecting a new variable resets the animation where the new variable is manipulated to a complete contribution, zeroed contribution, and then back to its initial contribution. Animation and interpolation parameters were held constant across grand and radial tour (five bases per second with a step size of 0.1 radians between interpolated bases). Fig. 4 displays screen captures of the visuals in the application.
### Experimental factors
In addition to the visual method, data are simulated across three experimental factors. First, the _location_ of the separation between clusters is controlled by mixing a signal and a noise variable at different ratios. Secondly, the _shape_ of the clusters reflects varying distributions of the data. And third, the _dimension_-ality of the data is also tested. The levels within each factor are described below, and Fig. 5 gives a visual representation.
The _location_ of the separation between the clusters is at the heart of the measure. It would be good to test a few varying levels. To test the sensitivity, a noise and signal variable are mixed at different ratios. The separation between clusters is mixed at the following percentages: 0/100% (not mixed), 33/66%, 50/50% (evenly mixed).
In selecting the _shape_ of the clusters, the convention given by Scrucca et al. (2016) is followed. They describe 14 variants of model families containing three clusters. The model family name is the abbreviation of the clusters' respective volume, shape, and orientation. The levels are either _Equal_ or _V_ary. The models EEE, EEV, and EVV are used. For instance, in the EEV model, the volume and shape of clusters are constant, while the shape's orientation varies. The EVV model is modified by moving four-fifths of the data out in a "\(>\)" or banana-like shape.
_Dimension_-ality is tested at two modest levels: four dimensions containing three clusters and six with four clusters. Such modest dimensionality is required to limit the difficulty and search space to make the task realistic for crowdsourcing.
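One way such data could be generated is sketched below; this is illustrative only, as the study's simulations use mclust-style covariance models (EEE, EEV, EVV-banana) with specific parameters that are not reproduced here.

```python
import numpy as np

def simulate_two_clusters(n=140, p=4, mix=(1.0, 0.0), sep=4.0, seed=0):
    # Two spherical clusters whose mean difference mixes the first two
    # variables at the given ratio, e.g. (1, 0), (0.66, 0.33) or (0.5, 0.5).
    rng = np.random.default_rng(seed)
    direction = np.zeros(p)
    direction[:2] = mix
    direction /= np.linalg.norm(direction)
    X1 = rng.normal(size=(n // 2, p))
    X2 = rng.normal(size=(n // 2, p)) + sep * direction
    X = np.vstack([X1, X2])
    labels = np.repeat([0, 1], n // 2)
    return X, labels
```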
Fig. 4: Examples of the application displays for PCA, grand tour, and radial tour.
Fig. 5: Levels of the visuals and three experimental factors: location of cluster separation, the shape of clusters, and dimensionality of the sampled data.
### Task and evaluation
With our hypothesis formulated, let us turn our attention to the task and how it is evaluated. Participants were asked to "check any/all variables that contribute more than average to the cluster separation of the green circles and the orange triangles". This was further explained in the explanatory video as "mark any and all variable that carries more than their fair share of the weight, or one quarter in the case of four variables". The participant instruction video can be viewed at [https://vimeo.com/712674984](https://vimeo.com/712674984).
The instructions iterated several times in the video were: 1) use the input controls to find a basis that contains separation between the clusters of green circles and orange triangles, 2) look at the orientation of the variable contributions in the grey circle (biplot axes orientation), and 3) select all variables that contribute more than uniformly distributed cluster separation in the scatterplot. Independent of the experimental level, participants were limited to 60 seconds for each evaluation of this task. This restriction did not impact many participants as the 25th, 50th, and 75th quantiles of the response time were about 7, 21, and 30 seconds, respectively.
The accuracy measure of this task was designed with two features in mind: 1) it is symmetric about the expected value, without preference for under- or over-guessing; and 2) it weights differences from the expected value more heavily than linearly. The following measure is defined for evaluating the task.
Let the data \(\mathbf{X}_{ijk},\ i=1,...,n;\ j=1,...,p;\ k=1,...,K\) be simulated observations containing clusters of observations of different distributions, where \(n\) is the number of observations, \(p\) is the number of variables, and \(K\) indicates the number of clusters. Cluster membership is exclusive; an observation cannot belong to more than one cluster.
The weights, \(w\), are a vector of the variable-wise differences between the means of the two clusters, less \(1/p\), the expected cluster separation if it were uniformly distributed. Accuracy, \(A\), is defined as the signed square of these weights summed over the variables selected by the participant. Participant responses are a logical value for each variable -- whether or not the participant thinks each variable separates the two clusters more than uniformly distributed separation. Weights comparing clusters 1 and 2 are calculated as follows:
\[A=\sum_{j=1}^{p}I(j)\cdot\operatorname{sign}(w_{j})\cdot w_{j}^{2},\qquad w_{j}=\frac{\overline{X}_{j1}-\overline{X}_{j2}}{\sum_{j^{\prime}=1}^{p}|\overline{X}_{j^{\prime}1}-\overline{X}_{j^{\prime}2}|}-\frac{1}{p},\]
where \(I(j)\) is the indicator function returning the participant's binary response for variable \(j\), and \(\overline{X}_{jk}\) is the mean of the \(j\)-th variable in the \(k\)-th cluster. Fig. 6 shows one projection of a simulation with its observed variable separation (wide bars), expected uniform separation (dashed line), and accuracy if selected (thin vertical lines).
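A direct transcription of this measure (a sketch; the function and variable names are ours, not from the study's code):

```python
import numpy as np

def accuracy(X, labels, response, cluster_a=0, cluster_b=1):
    # X: n x p data, labels: cluster membership, response: 0/1 array marking
    # the variables the participant checked.
    m1 = X[labels == cluster_a].mean(axis=0)
    m2 = X[labels == cluster_b].mean(axis=0)
    diff = m1 - m2
    w = diff / np.abs(diff).sum() - 1.0 / X.shape[1]      # weights w_j
    return float(np.sum(response * np.sign(w) * w ** 2))  # A, over selected vars
```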
### Randomized factor assignment
Now, with the simulated data in hand, this section covers how the experimental factors are assigned and demonstrates how this is experienced from the participant's perspective.
The study is sectioned into three periods. Each period is linked to a randomized level of visual and location. The order of dimension and shape are of secondary interest and are held constant in increasing order of difficulty; four then six dimensions and EEE, EEV, then EVV-banana, respectively.
Each period starts with an untimed training task at the simplest remaining experimental levels; location = 0/100%, shape = EEE, and four dimensions with three clusters. This serves to introduce and familiarize participants with input and visual differences. After the training, the participant performs two trials with the same visual and location level across the increasing difficulty of dimension and shape. The plot was removed after 60 seconds, though participants rarely reached this limit.
We assigned these factors based on the following order: visual methods, location, shape, and dimensionality. We first assigned three visual methods to three different sessions. The session order and the order of location follow a nested Latin square. The order of dimension and shape are assigned based on increasing order of difficulty.
Through pilot studies sampled by convenience (information technology and statistics Ph.D. students attending Monash University), it was estimated that three complete evaluations are needed to power the study properly, a total of \(N=3\times 3!^{2}=108\) participants.
### Participants
\(N=108\) participants were recruited via prolific.co (Palan and Schitter, 2018). Participants were restricted based on their claimed education, requiring that they have completed at least an undergraduate degree (some 58,700 of the 150,400 users at the time). This restriction is used on the premise that linear projections and biplot displays will not be regularly used for consumption by general audiences. There is also the implicit filter that Prolific participants must be at least 18 years of age and have implicit biases of timezone, internet availability, language compatibility, and socioeconomic status. Participants were compensated for their time at £7.50 per hour, and the mean duration of the survey was about 16 minutes. Previous knowledge or familiarity was minimal, as validated in the follow-up survey. The Supplemental Materials include a heatmap distribution of age and education paneled across preferred pronouns of the participants that completed the survey, who are relatively young, well-educated, and slightly more likely to identify as males.
## 4 Results
To recap, the primary response variable is accuracy, as defined in Sect. 3.4. Two primary data sets were collected; the user study evaluations and the post-study survey. The former is the 108 participants with the experimental factors: visual, location of the cluster separation signal, the shape of the variance-covariance matrix, and the dimensionality of the data. Experimental factors and randomization were discussed in Sect. 3.3. A follow-up survey was completed by 84 of these 108 people. It collected demographic information (preferred pronoun, age, and education) and subjective measures for each visual (preference, familiarity, ease of use, and confidence).
Below a battery of mixed regression models is built to examine the degree of the evidence and the size of the effects of the experimental factors. Then, Likert plots and rank-sum tests are used to compare the subjective measures between the visuals.
### Accuracy
To quantify the contribution of the experimental factors to the accuracy, mixed-effects models were fit. All models have a random effect term on the participant and the simulation. These terms explain the amount of error attributed to the individual participant's effect and variation due to the random sampling of the data.
In building a set of models to test, a base model with only the visual term is compared with the full additive model and with models that progressively add interactions between the experimental factors. The models with three and four interacting variables are rank deficient; there is not enough varying information in the data to explain all interacting terms.
\[\begin{array}{ll}\text{Fixed effects}&\text{Full model}\\ \alpha&\bar{Y}=\mu+\alpha+\mathbf{Z}+\mathbf{W}+\varepsilon\\ \alpha+\beta+\gamma+\delta&\bar{Y}=\mu+\alpha+\beta+\gamma+\delta+\mathbf{Z}+\mathbf{W}+\varepsilon\\ \alpha\times\beta+\gamma+\delta&\bar{Y}=\mu+\alpha\times\beta+\gamma+\delta+\mathbf{Z}+\mathbf{W}+\varepsilon\\ \alpha\times\beta\times\gamma+\delta&\bar{Y}=\mu+\alpha\times\beta\times\gamma+\delta+\mathbf{Z}+\mathbf{W}+\varepsilon\\ \alpha\times\beta\times\gamma\times\delta&\bar{Y}=\mu+\alpha\times\beta\times\gamma\times\delta+\mathbf{Z}+\mathbf{W}+\varepsilon\\ \end{array}\]
where
\(\mu\) is the intercept of the model,
\(\alpha_{i}\) is the visual, \(i\in\) (pca, grand, radial),
\(\beta_{j}\) is the location, \(j\in\) (0/100, 33/66, 50/50% mix),
\(\gamma_{k}\) is the shape, \(k\in\) (EEE, EEV, EVV banana),
\(\delta_{l}\) is the dimension, \(l\in\) (4 & 3, 6 & 4) variables & clusters,
\(\mathbf{Z}\sim\mathcal{N}(0,\,\sigma_{Z})\) is the random effect of participant,
\(\mathbf{W}\sim\mathcal{N}(0,\,\sigma_{W})\) is the random effect of simulation, and
\(\varepsilon\sim\mathcal{N}(0,\,\sigma_{\varepsilon})\) is the remaining error of the model.
Table 1 compares the model summaries across increasing complexity. The \(\alpha\times\beta+\gamma+\delta\) model is selected to examine in more detail as it has relatively high conditional \(R^{2}\) and not overly complex interacting terms. Table 2 looks at the coefficients for this model. There is strong evidence suggesting a relatively large increase in accuracy from the radial tour, though there is evidence that almost all of the increase is lost under 33/66% mixing.
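For readers who wish to fit a model of this form, a simplified sketch using Python's statsmodels is given below. It is not the study's own code (the paper's analysis code is in R and available in its repository), the column names are hypothetical, and only the participant random intercept is included, whereas the study also fits a random effect for the simulated data set.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per trial with columns
# accuracy, visual, location, shape, dim, participant.
df = pd.read_csv("responses.csv")

# Fixed effects alpha*beta + gamma + delta, random intercept per participant.
model = smf.mixedlm(
    "accuracy ~ C(visual) * C(location) + C(shape) + C(dim)",
    data=df,
    groups=df["participant"],
)
print(model.fit().summary())
```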
We also want to visually examine the terms in the model. Fig. 7 illustrates the accuracy for each model term, shown as a mean and 95% confidence interval.
### Subjective measures
Modeling has proven that the use of the radial tour leads to a sizable improvement in the accuracy measure for this task. This is not the whole story. It is desirable to know what the users think of using the visuals. We follow the direction set by [45]. They observe four
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & AIC & BIC & R2 cond. & R2 marg. & RMSE \\ \hline a & **-71** & **-71** & -44.219 & 0.303 & 0.289 \\ a+b+c+d & -45 & -45 & 4.063 & 0.334 & 0.294 \\ a*b+c+d & -26 & -25 & 41.445 & 0.338 & 0.293 \\ a*b*c+d & 28 & 32 & 167.092 & **0.383** & 0.309 \\ a*b*c*d & 105 & 116 & **360.052** & 0.37 & **0.19** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Model performance of random effect models regressing accuracy. Complex models perform better in terms of \(R^{2}\) and RMSE, yet AIC and BIC penalize their large number of fixed effects in favor of the much simpler model containing only the visuals. Conditional \(R^{2}\) includes error explained by the random effects, while marginal does not.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & Est & SE & df & t val & Prob \\ \hline (Intercept) & 0.10 & 0.06 & 16.1 & 1.54 & 0.143 \\
VisGrand & 0.06 & 0.04 & 622.1 & 1.63 & 0.104 \\ VisRadial & 0.14 & 0.04 & 617.0 & 3.77 & 0.000 *** \\
**Fixed effects** & & & & & \\ Loc33/66\% & -0.02 & 0.07 & 19.9 & -0.29 & 0.777 \\ Loc50/50\% & -0.04 & 0.07 & 20.0 & -0.66 & 0.514 \\ ShapeEEV & -0.05 & 0.06 & 11.8 & -0.82 & 0.427 \\ ShapeBanana & -0.09 & 0.06 & 11.8 & -1.54 & 0.150 \\ Dim6 & -0.01 & 0.05 & 11.8 & -0.23 & 0.824 \\
**Interactions** & & & & & \\ VisGrand:Loc33/66 & -0.02 & 0.06 & 588.9 & -0.29 & 0.774 \\ VisRadial:Loc33/66 & -0.12 & 0.06 & 586.5 & -2.13 & 0.033 * \\ VisGrand:Loc50/50 & -0.03 & 0.06 & 591.6 & -0.47 & 0.641 \\ VisRadial:Loc50/50 & -0.06 & 0.06 & 576.3 & -1.16 & 0.248 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The task accuracy model coefficients for \(\tilde{Y}=\alpha\times\beta+\gamma+\delta\), with visual = pca, location = 0/100%, shape = EEE, and dim = 4 held as baselines. Visual being radial is the fixed term with the strongest evidence supporting the hypothesis. Interacting with the location term, there is evidence suggesting radial performs with minimal improvement for 33/66% location mixing.
Figure 6: Illustration of how accuracy is measured. (L), Scatterplot and biplot of PC1 by PC4 of a simulated data set (R) illustrate cluster separation between the green circles and orange triangles. Bars indicate observed cluster separation, and (red/green) lines show the accuracy of the variable if selected. The horizontal dashed line has a height \(1/p\), the expected value of cluster separation. The accuracy weights equal the signed square of the difference between each variable value and the dashed line.
subjective measures. The following were used in this study: confidence, ease of use, prior familiarity, and preference. Each of these questions was asked for each visual as a 5-point Likert item.
The 84 evaluations of the post-study survey are shown in Fig. 8. The figure uses Likert plots or stacked percentage bar plots and associated mean and 95% confidence intervals.
There was strong evidence to support that participants preferred the radial tour to either alternative. There is less evidence that the radial tour led to more confidence and was found easier to use than the grand tour. In confirmation of expectations, crowdsourced participants had low familiarity with all visuals, with no difference in mean supported.
## 5 Discussion
Data visualization is an integral part of understanding relationships in data and how models are fitted. When it comes to multivariate data, giving a comprehensive view quickly becomes difficult as the dimensions become sizable. Analysts have the task of choosing which visualization technique to use. Because the viewing volume/time of multivariate spaces typically increases quadratically with the dimensions, dimension reduction must be properly conducted. While there are optimization methods for static and animated visuals, the particular function used is a guided choice of the analyst.
Sect. 2 discussed various types of visualization which may be preferred for differing tasks and ends. The visualization and perception of multivariate spaces is a broad and heterogeneous task. This work focuses on a subset of linear projections and especially sheds light on the potential benefit of providing user control in conjunction with animated projections over many bases, as in the radial tour.
The radial tour is a method for the analyst to choose a variable to alter its contribution to the basis. The animation over small changes to the basis allows the sensitivity of the structure to be assessed from the variable contribution. The hypothesis is that user control over the basis and the permanence of observations between intermediate frames may lead to a better perception of the variable attribution causing the separation of clusters.
A mixed modeling analysis of the study provides strong support for this conclusion. That is, there is significant evidence to suggest the use of the radial tour leads to a sizable increase in accuracy. One unexpected caveat is that mixing the location of the signal at 33/66% almost completely negates this gain. Perhaps this is because the "half-clock" basis used did not give enough weight to the variable containing the small fraction. It was also interesting to note that no level of the experimental factors alone had a significant effect on this setup. Lastly, the follow-up survey asked participants to evaluate measures of the visuals. Most notably, participants preferred the radial tour to the other visuals. Knowing that the radial tour outperforms alternatives and is the preferred choice can help inform the selection of visual methods for developers and analysts.
There are several implicit limitations to this study: the task, type of data, and levels of the factors to name a few. The expansion of any of these areas is conceptually simple, but exponentially increases the number of participants needed to properly power the study. Additionally the sample of crowd-sourced, educated, but unfamiliar users may not extrapolate well to more experienced users. There are several ways that future work could be extended. Aside from expanding the support of the experimental factors, more exciting directions include: introducing a new task, including more visualizations, and changing the experience level of the target population. It is difficult to achieve good coverage given the number of possible factors to vary.
## 6 Conclusion
This paper discussed a crowdsourced mixed design user study (\(n=108\)) comparing the efficacy of three linear projection techniques: PCA, grand tour, and radial tour. The participants performed a supervised cluster task, explicitly identifying which variables contribute to the separation of two target clusters. This was evaluated evenly over four experimental factors. In summary, mixed model regression finds strong evidence that using the radial tour sizably increases accuracy, especially when cluster separation location is not mixed at 33/66%. The effect sizes on accuracy are large relative to the change from the other
Figure 8: The subjective measures of the 84 responses of the post-study survey with five-point Likert items levels of agreement. (L) Likert plots (stacked percent bar plots) with (R) mean and 95% CI of the same measures. Participants are more confident using the radial tour and find it easier to use than the grand tour. The radial tour is the most preferred visual.
Figure 7: Accuracy of terms of the model \(\bar{Y}=\alpha\times\beta+\gamma+\delta\). Viewing the marginal accuracy of the terms corroborates the primary findings that the use of the radial tour leads to a significant increase in accuracy, at least over PCA, and this effect is particularly well supported when no location mixing is applied.
experimental factors and the random effect of data simulation, though smaller than the random effect of the participant. The radial tour was the most preferred of the three visuals.
There is no panacea for the comprehensive visualization of multivariate spaces. We have demonstrated that there is a definite value of user-control in linear projections. The agency of the analyst remains an important tool for the exploratory analysis of multivariate data.
## Acknowledgments
This research was supported by an Australian Government Research Training Program (RTP) scholarship. This article was created in **R**[36] and **markdown**[49]. Visuals were prepared with **spinifex**[42]. We thank Jieyang Chong for his help in proofreading this article. The code, response files, their analyses, and the study application are publicly available at [https://github.com/nspyrison/spinifex.study](https://github.com/nspyrison/spinifex.study). The participant instruction video can be viewed at [https://vimeo.com/712674984](https://vimeo.com/712674984).
|
2309.04433 | Variations and Relaxations of Normalizing Flows | Normalizing Flows (NFs) describe a class of models that express a complex
target distribution as the composition of a series of bijective transformations
over a simpler base distribution. By limiting the space of candidate
transformations to diffeomorphisms, NFs enjoy efficient, exact sampling and
density evaluation, enabling NFs to flexibly behave as both discriminative and
generative models. Their restriction to diffeomorphisms, however, enforces that
input, output and all intermediary spaces share the same dimension, limiting
their ability to effectively represent target distributions with complex
topologies. Additionally, in cases where the prior and target distributions are
not homeomorphic, Normalizing Flows can leak mass outside of the support of the
target. This survey covers a selection of recent works that combine aspects of
other generative model classes, such as VAEs and score-based diffusion, and in
doing so loosen the strict bijectivity constraints of NFs to achieve a balance
of expressivity, training speed, sample efficiency and likelihood tractability. | Keegan Kelly, Lorena Piedras, Sukrit Rao, David Roth | 2023-09-08T16:55:23Z | http://arxiv.org/abs/2309.04433v1 | # Variations and Relaxations of Normalizing Flows
###### Abstract
Normalizing Flows (NFs) describe a class of models that express a complex target distribution as the composition of a series of bijective transformations over a simpler base distribution. By limiting the space of candidate transformations to diffeomorphisms, NFs enjoy efficient, exact sampling and density evaluation, enabling NFs to flexibly behave as both discriminative and generative models. Their restriction to diffeomorphisms, however, enforces that input, output and all intermediary spaces share the same dimension, limiting their ability to effectively represent target distributions with complex topologies (Zhang and Chen 2021). Additionally, in cases where the prior and target distributions are not homeomorphic, Normalizing Flows can leak mass outside of the support of the target (Cornish et al. 2019; Wu et al. 2020). This survey covers a selection of recent works that combine aspects of other generative model classes, such as VAEs and diffusion, and in doing so loosen the strict bijectivity constraints of NFs to achieve a balance of expressivity, training speed, sample efficiency and likelihood tractability.
**Keywords:** Generative Modeling, Normalizing Flows, Diffusion
## 1 Introduction
Research in generative modeling with deep learning has in large part focused on four classes of models: flows, VAEs, diffusion models and GANs. Until recently, GANs had proven the model family capable of producing the highest fidelity generated samples, but a recent string of high-profile successes using diffusion models for natural image (Ho et al. 2020), audio (Kong et al. 2020) and video synthesis (Ho et al. (2022)), trajectory planning (Janner et al. 2022), protein and material design (Luo et al.; Anand and Achim 2022) has called into question their dominance in generative tasks. VAEs on the other hand, are a slightly older class of models that are easier to train but have been less successful at producing realistic data distributions. Some work has gone into improving the expressivity of VAEs (Aneja et al. 2021) but has encountered a tension between VAE expressivity and a tendency
towards posterior collapse, where the generated model ignores the latent codes \(z\) entirely in favor of learning a capable generator.
This paper presents the fundamentals for each of these basic model classes and a selection of recent works that combine aspects from each to achieve a balance of model expressivity, training speed, sample efficiency and likelihood tractability. In particular, we focus on a selection of papers that loosen the strict bijectivity constraints of Normalizing Flows (NF) and attempt to improve the expressivity and sample efficiency of NFs while retaining as much as possible the likelihood evaluation properties the strict construction affords.
## 2 Normalizing Flows
Normalizing Flows are notable among the broader family of generative models in that they are not only capable of expressing rich, complex distributions- they are able to do so while also retaining the ability to perform exact density evaluation. They achieve this capacity by expressing a complex target distribution of interest as a bijective, differentiable transformation of a simpler, known base distribution. This formulation provides a learning mechanism using maximum likelihood over i.i.d samples from the target distribution, a sampling mechanism via transformations over points drawn from the base distribution and exact density evaluation using the inverse of the learned transformation and a change of variables with the learned transform's Jacobian.
Normalizing Flows were popularized in the context of Variational Inference by Rezende and Mohamed (2015) as a choice of tractable posterior for continuous variables that is more capable of representing complex distributions than traditional choices for approximate posteriors, such as Mean Field Approximations. However, the use of flows for density estimation was first formulated by Tabak and Vanden-Eijnden (2010) and was used in subsequent works for clustering and classification tasks in addition to density estimation (Agnelli et al., 2010; Laurence et al., 2014).
The formal structure of a Normalizing Flow is as follows: Let \(Z\in\mathbb{R}^{D}\) be a random variable with known probability density function \(p_{Z}\): \(\mathbb{R}^{D}\mapsto\mathbb{R}\), referred to as the base distribution and let \(X\in\mathbb{R}^{D}\) be a random variable of interest over which we would like to define a density \(p_{X}\): \(\mathbb{R}^{D}\mapsto\mathbb{R}\), referred to as the target distribution. We then seek a parameterized transformation \(F_{\theta}\): \(\mathbb{R}^{D}\mapsto\mathbb{R}^{D}\) under which \(F_{\theta}(Z)=X.\) We restrict our choices for \(F_{\theta}\) to bijective, differentiable mappings, known as _diffeomorphisms_. Under these constraints, the density of a point \(x\sim X\), can be calculated under a change of variables using the determinant of the transformation's Jacobian, \(J_{F}\), as follows:
\[p_{X}(x)=p_{Z}(z)|detJ_{F}(z)|^{-1}\]
or, framed in terms of the reverse direction,
\[p_{X}(x)=p_{Z}(F_{\theta}^{-1}(x))|detJ_{F}^{-1}(x)|.\]
This product represents the probability density of the inverse-transformed point in the base distribution multiplied by the change in volume incurred by the transformation in an infinitesimal neighborhood around \(z\). In practice, \(F_{\theta}\) is often constructed as the composition of a sequence of \(M\) diffeomorphisms \(f_{1,\theta_{1}},\ldots,f_{M,\theta_{M}}\) such that
\[F_{\theta}=f_{1,\theta_{1}}\circ\cdots\circ f_{M,\theta_{M}}\]
Since each of these sub-transformations is itself invertible, their composition is also invertible and bijective. The determinant of \(J_{F}\) can be computed exactly as:
\[detJ_{F}(z)=\prod_{i=1}^{M}detJ_{f_{i,\theta_{i}}}\]
and the function's inverse as
\[F_{\theta}^{-1}=f_{M,\theta_{M}}^{-1}\circ\cdots\circ f_{1,\theta_{1}}^{-1}.\]
### Training of Normalizing Flows
Normalizing Flows can be trained in one of two ways, depending on the nature of access to the target distribution during training. In the setting where samples from \(p_{x}\) are available, but not their densities, the model parameters \(\theta\) can be estimated using the forward KL-Divergence:
\[\mathcal{L}_{\theta} =D_{KL}\left[p_{x}^{*}(x)\;||\;p_{X}(x;\theta)\right]\] \[=-\mathbb{E}_{p_{x}^{*}(x)}[\log p_{x}(x;\theta)]+const.\] \[=-\mathbb{E}_{p_{x}^{*}(x)}[\log p_{Z}(F_{\theta}^{-1}(x))+\log |detJ_{F}^{-1}(x)|]+const.\]
With a set of N samples \(\{x_{i}\}_{i=1}^{N}\), we can estimate the above loss as
\[\mathcal{L}_{\theta}\approx-\frac{1}{N}\sum_{i=1}^{N}\log p_{Z}(F_{\theta}^{- 1}(x))+\log|detJ_{F}^{-1}(x)|+const.\]
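As a concrete illustration of the change-of-variables formula and the empirical loss above, the sketch below builds a one-dimensional flow from two hand-picked affine diffeomorphisms (a toy construction of our own, not taken from any of the surveyed works) and evaluates the log-density and negative log-likelihood of a batch of samples; in a real model each layer would be a learned coupling or autoregressive transform.

```python
# Minimal 1-D sketch: a flow built from two affine diffeomorphisms
# f_i(z) = a_i * z + b_i, so F = f_2(f_1(z)) and |det J_F| = |a_1 * a_2|.
# The parameter values below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([2.0, 0.5])      # scales of the two layers
b = np.array([1.0, -0.3])     # shifts of the two layers

def forward(z):               # F_theta(z) = f_2(f_1(z))
    for ai, bi in zip(a, b):
        z = ai * z + bi
    return z

def inverse(x):               # F_theta^{-1}(x) = f_1^{-1}(f_2^{-1}(x))
    for ai, bi in zip(a[::-1], b[::-1]):
        x = (x - bi) / ai
    return x

def log_px(x):                # log p_X(x) = log p_Z(F^{-1}(x)) + log|det J_{F^{-1}}(x)|
    z = inverse(x)
    log_pz = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)   # standard normal base
    log_det_inv = -np.sum(np.log(np.abs(a)))         # dz/dx = 1 / (a_1 * a_2)
    return log_pz + log_det_inv

# Forward-KL / maximum-likelihood loss estimated from samples of the target.
samples = rng.normal(loc=0.7, scale=1.0, size=1000)  # stand-in "data"
nll = -np.mean(log_px(samples))
print("negative log-likelihood estimate:", nll)
```

Training would proceed by differentiating this loss with respect to the layer parameters \(a_i, b_i\), typically with an automatic differentiation framework.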
In the setting where it is possible to evaluate the target density \(p_{x}^{*}\) cheaply, but it is not straightforward to draw samples from said distribution, model parameters \(\theta\) can be estimated using the reverse KL-Divergence:
\[\mathcal{L}_{\theta} =D_{KL}\left[p_{X}(x;\theta)\;||\;p_{x}^{*}(x)\right]\] \[=\mathbb{E}_{p_{x}(x;\theta)}\left[\log p_{x}(x;\theta)-\log p_{x }^{*}(x)\right]\]
### Limitations of Normalizing Flows
Though Normalizing Flows are in principle capable of representing arbitrarily complex target distributions (Papamakarios et al., 2021), for choices of simple base distributions and reasonably smooth transformations they suffer from topological limitations (Stimper et al., 2021). Strict bijectivity enforces that the input, output and all intermediary spaces share identical dimensionality and topology. Cornish et al. (2019) demonstrate that for base and target distributions with distinct support topologies (e.g., differing in the number of connected components or holes), and for any choice of candidate transformation where \(F_{\theta}\) and \(F_{\theta}^{-1}\) are continuous, it is impossible to represent the target distribution exactly as a transformation of the base distribution; an arbitrarily accurate approximation requires the bi-Lipschitz constant of \(F_{\theta}\), a measure of a function's "invertibility" (Behrmann et al., 2020), to approach \(\infty\).
Evidence of this limitation can be seen in a "smearing" effect when attempting to represent a bi-modal or multi-modal target distribution using a standard unimodal Gaussian as a base distribution, where sharp boundaries cannot be expressed and density is leaked outside the support of the true target distribution (Figure 1). Further, under the _manifold hypothesis_ (Bengio et al., 2012), if real-world distributions reside on a low-dimensional (\(r\ll d\)) manifold of the spaces they inhabit, it is a near certainty that the base and target distributions will have mismatched support topologies and that probability density will leak outside of the target support.
_Figure taken from Stimper et al. (2021)_
## 3 Variational Autoencoders
Variational Autoencoders (VAEs) are a likelihood-based class of models that provide a principled framework for optimizing latent variable models (Kingma and Welling, 2013). A VAE consists of two coupled models: a recognition model, or encoder, and a generative model, or decoder. The recognition model approximates the posterior over the latent random variables, which is passed as input to the generative model to generate samples. The generative model, on the other hand, provides a scaffolding or structure for the recognition model to learn meaningful representations of the data. The recognition model is the approximate inverse of the generative model according to Bayes' rule (Kingma and Welling, 2019).
In the typical setting for a latent variable model, we have some observed variables and some unobserved variables. To estimate the unconditional density of the observed variables, also called the model evidence, we marginalize over the joint distribution of the observed and unobserved variables, parameterized by \(\theta\). This is given by
\[p_{\theta}(x)=\int_{Z}p_{\theta}(x,z)dz\]
Figure 1: An example of ”smearing” in (\(c\)), where the target distribution (\(a\)) and the base distribution (\(b\)) differ in their number of connected components.
Framing the problem through an implicit distribution over \(x\) provides a great deal of flexibility. When we marginalize over the latents we end up with a compound probability distribution, or mixture model. For example, if \(z\) is discrete and \(p_{\theta}(x|z)\) is a Gaussian distribution, then \(p_{\theta}(x)\) will be a mixture of Gaussians. For continuous \(z\), \(p_{\theta}(x)\) can be seen as an infinite mixture. Thus, depending on the choice of the latent distribution, we can control the expressivity of the unconditional density \(p_{\theta}(x)\) as desired.
This compound distribution, however, is obtained by integrating over the support of the latent distribution. Most of the time, this integral is intractable and thus, we cannot differentiate with respect to its parameters and optimize it using gradient descent. While the joint density \(p_{\theta}(x,z)\) is efficient to compute, the intractability of \(p_{\theta}(x)\), is related to the intractability of the posterior over the latent variable, \(p_{\theta}(z|x)\)(Kingma and Welling, 2019). From the chain rule, we have the following relationship between the densities
\[p_{\theta}(z|x)=\frac{p_{\theta}(x,z)}{p_{\theta}(x)}\]
The intractability of \(p_{\theta}(z|x)\) leads to the intractability of \(p_{\theta}(x)\). To overcome this hurdle, we employ approximate inference techniques. The framework of VAEs provides a computationally efficient way for optimizing latent variable models jointly with a corresponding inference model using gradient descent (Kingma and Welling, 2019). This is achieved by introducing the encoder or recognition model- a parametric inference model \(q_{\phi}(z|x)\), where \(\phi\) is the set of variational parameters.
Consequently, the optimization objective of VAEs is the variational lower bound or evidence lower bound (ELBO), where we optimize the variational parameters \(\phi\) such that
\[q_{\phi}(z|x)\approx p_{\theta}(z|x)\]
Figure 2: Computational flow in a VAE
_Figure taken from Kingma and Welling (2019)_
It follows from the derivation shown below
\[\log p_{\theta}(x) =\mathbb{E}_{q_{\phi}(z|x)}\log p_{\theta}(x)\] \[=\mathbb{E}_{q_{\phi}(z|x)}\log\left[\frac{p_{\theta}(x,z)}{p_{ \theta}(z|x)}\right]\] (chain rule) \[=\mathbb{E}_{q_{\phi}(z|x)}\log\left[\frac{p_{\theta}(x,z)q_{ \phi}(z|x)}{q_{\phi}(z|x)p_{\theta}(z|x)}\right]\] \[=\underbrace{\mathbb{E}_{q_{\phi}(z|x)}\log\left[\frac{p_{\theta} (x,z)}{q_{\phi}(z|x)}\right]}_{=\mathcal{L}_{\phi,\theta}(x)}+\underbrace{ \mathbb{E}_{q_{\phi}(z|x)}\log\left[\frac{q_{\phi}(z|x)}{p_{\theta}(z|x)} \right]}_{=D_{KL}(q_{\phi}(z|x)||p_{\theta}(z|x))}\]
The second term is the Kullback-Leibler (KL) divergence between \(q_{\phi}(z|x)\) and \(p_{\theta}(z|x)\), while the first term is the variational lower bound or evidence lower bound (ELBO).
Since the KL divergence is non-negative, the ELBO is the lower bound on the log-likelihood of the data
\[\mathcal{L}_{\phi,\theta}(x) =\log p_{\theta}(x)-D_{KL}(q_{\phi}(z|x)||p_{\theta}(z|x))\] \[\mathcal{L}_{\phi,\theta}(x) \leq\log p_{\theta}(x)\]
Thus, we can observe that maximizing the ELBO \(\mathcal{L}_{\phi,\theta}(x)\) with respect to \(\theta\) and \(\phi\), will have the following consequences
* It will approximately maximize the marginal likelihood \(p_{\theta}(x)\), implying that our generative model will get better
* It will minimize the KL divergence between \(q_{\phi}(z|x)\) and \(p_{\theta}(z|x)\), implying our approximation of the posterior, \(q_{\phi}(z|x)\), will get better
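To make the objective concrete, the sketch below estimates the ELBO by Monte Carlo for a toy one-dimensional Gaussian model with fixed (untrained) linear encoder and decoder parameters; the specific parameterization is an illustrative assumption rather than a construction from Kingma and Welling.

```python
# Monte Carlo estimate of the ELBO for a toy 1-D Gaussian VAE.
# Encoder q(z|x) = N(w_e * x, s_e^2), decoder p(x|z) = N(w_d * z, s_d^2),
# prior p(z) = N(0, 1).  All parameter values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
w_e, s_e = 0.8, 0.5          # encoder mean weight and std
w_d, s_d = 1.2, 0.4          # decoder mean weight and std

def log_normal(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std) - 0.5 * np.log(2 * np.pi)

def elbo(x, n_samples=64):
    mu = w_e * x
    eps = rng.standard_normal(n_samples)
    z = mu + s_e * eps                         # reparameterization trick
    log_px_given_z = log_normal(x, w_d * z, s_d)
    log_pz = log_normal(z, 0.0, 1.0)
    log_qz_given_x = log_normal(z, mu, s_e)
    return np.mean(log_px_given_z + log_pz - log_qz_given_x)

x_obs = 1.5
print("ELBO estimate for x =", x_obs, ":", elbo(x_obs))
```

In practice the encoder and decoder are neural networks, and the ELBO, averaged over a dataset, is maximized by stochastic gradient descent with gradients flowing through the reparameterized samples.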
## 4 Denoising Diffusion
Diffusion-based generative models are parameterized Markov chains mainly used to create high quality images and videos, and also utilized in data compression and representation learning on unlabeled data. Diffusion models are both tractable and flexible, making them easier to train, evaluate and sample from. The forward transitions of the Markov chain gradually add noise to the data, and the model then learns to reverse the diffusion process, producing desired samples after a finite time. Unlike VAEs, diffusion models are latent variable models in which the latent space has the same dimensionality as the original data. The idea of using diffusion for a generative process was initially introduced by Sohl-Dickstein et al. (2015). Song and Ermon (2019) and Ho et al. (2020) improved the initial approach several years later. The latter showed that diffusion models were capable of generating high quality images and unveiled an equivalence with denoising score matching in training and Langevin dynamics at the sampling stage.
The forward diffusion process gradually adds a small amount of noise in \(T\) steps to the original data until it is indistinguishable. A variance schedule \(\beta_{1},\ldots,\beta_{T}\), where \(w_{i}\in(0,1)\), is used to regulate the size of each step. If the noise is small enough, the transitions of the
reverse diffusion process will be conditional Gaussians as well. Given a point sampled from the original data distribution \(x_{0}\sim q(x)\) we have the following transition probabilities Weng (2021):
\[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}I)\]
The forward process in a diffusion probabilistic model is fixed; other diffusion models, such as diffusion normalizing flows, have a trainable forward process (Zhang and Chen, 2021). A desirable property of the forward process, shown by Sohl-Dickstein et al. (2015), is that we can sample \(x_{t}\) given \(x_{0}\) at any time step without having to apply \(q\) repeatedly
\[q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_ {t})I)\]
with \(\alpha_{t}:=1-\beta_{t}\) and \(\bar{\alpha}_{t}:=\prod_{s=1}^{t}\alpha_{s}\).
We could use \(q(x_{t-1}|x_{t})\) to revert the forward diffusion process and generate a sample from the real distribution using random noise as input. Unfortunately, \(q(x_{t-1}|x_{t})\) is intractable; therefore, we learn a model \(p_{\theta}(x_{t-1}|x_{t})\) to approximate it. Notably, the reverse conditional probability is tractable if we condition on \(x_{0}\) (Weng, 2021). Similar to a VAE, we can use a variational lower bound to optimize \(-\log p_{\theta}(x_{0})\). After rewriting the lower bound into several KL-divergence terms and one entropy term and ignoring all terms that don't have any learnable parameters, we get two components \(L_{0}=-\log p_{\theta}(x_{0}|x_{1})\) and \(L_{t}=D_{KL}(q(x_{t}|x_{t+1},x_{0})||p_{\theta}(x_{t}|x_{t+1}))\). \(L_{t}\) is the KL divergence of two Gaussian distributions, where \(q(x_{t}|x_{t+1},x_{0})\) is the tractable reverse conditional distribution mentioned earlier. In Ho et al. (2020), \(L_{0}\) is modeled using a separate discrete decoder and a fixed variance term.
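The closed-form marginal \(q(x_{t}|x_{0})\) makes the forward process cheap to simulate, as in the sketch below; the linear \(\beta\)-schedule is an illustrative choice of the kind used by Ho et al. (2020), not a prescription.

```python
# Sampling x_t directly from q(x_t | x_0) using the closed-form marginal
# q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).
# The linear beta schedule below is an illustrative choice.
import numpy as np

rng = np.random.default_rng(2)
T = 1000
betas = np.linspace(1e-4, 0.02, T)          # variance schedule beta_1..beta_T
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)             # alpha_bar_t = prod_{s<=t} alpha_s

def q_sample(x0, t):
    """Draw x_t ~ q(x_t | x_0) for a 0-indexed step t."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

x0 = rng.standard_normal(5)                 # stand-in data point
for t in (0, 99, 999):
    print(f"t={t+1:4d}  alpha_bar={alpha_bars[t]:.4f}  x_t={q_sample(x0, t)}")
```

A denoising model would be trained on pairs \((x_{0},x_{t})\) produced this way, one random step \(t\) per training example.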
## 5 Score-Based Methods
Transport models employ maximum likelihood estimation to learn probability distributions. This reliance can pose a major challenge given complex partition functions, which may prove intractable. Some models add constraints to ensure MLE is feasible; the bijectivity of Normalizing Flows and the approximation of the partition function in VAEs are two such methods to overcome intractability. Another framework to address this scenario is known as the score-based method. For this setup, we model the score function rather than the density function directly:
\[s_{\theta}(x)=\nabla_{x}\log p_{\theta}(x)\]
The partition function \(Z_{\theta}\) does not depend on \(x\), so \(\nabla_{x}\log Z_{\theta}=0\) and the score is independent of it. We are therefore able to sidestep any challenging computation posed by the partition function while training. This setup introduces flexibility, as we can now work with many families of models that may have otherwise been intractable.
Score-based diffusion is an extension upon this method. As in the previous section on diffusion, this model class involves both a forward and backward stochastic differential equation. Again, the forward pass returns a noisy distribution:
\[x_{t}=e^{-t}x+\sqrt{1-e^{-2t}}z\]
Where \(x\sim\pi_{d}\) and \(z\sim N(0,I)\).
In score-based diffusion, the reversed pass can now be written as a flow composed of diffusion drift plus an exact score:
\[dY_{t}=[Y_{t}-\nabla log\pi_{t}(Y_{t})]dt+\sqrt{2}dB_{t}\]
where \(Y_{t}=X_{T-t}\) (Bruna, 2022).
The challenge now falls on estimating the scores from the data. This is particularly impactful in low density regions, where there is less data available to compute scores. In such cases, the model may produce poor quality sampling. To work around this obstacle, noise can be added to the data in increasingly larger magnitudes so that the model can at first train on less corrupted data, then learn low data density regions as the magnitude of noise grows. In this way, adding noise adds stability to score-based methods and aids in producing higher quality samples (Song and Ermon, 2019).
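As a sketch of how a score function drives sampling, the example below runs annealed Langevin dynamics on a one-dimensional two-mode Gaussian mixture whose noise-perturbed scores are available in closed form; in practice the analytic score would be replaced by a learned network \(s_{\theta}(x,\sigma)\), and the noise levels and step sizes here are illustrative choices.

```python
# Annealed Langevin dynamics on a 1-D two-mode Gaussian mixture.
# The perturbed densities p_sigma and their scores are analytic here;
# a score-based model would replace `score` with a network s_theta(x, sigma).
import numpy as np

rng = np.random.default_rng(3)
means = np.array([-4.0, 4.0])               # mixture component means (toy choice)

def score(x, sigma):
    """d/dx log p_sigma(x) for a 0.5/0.5 mixture of N(m, 1 + sigma^2)."""
    var = 1.0 + sigma ** 2
    log_w = -0.5 * (x[:, None] - means) ** 2 / var        # unnormalized log resp.
    log_w -= log_w.max(axis=1, keepdims=True)
    r = np.exp(log_w)
    r /= r.sum(axis=1, keepdims=True)
    return (r * (means - x[:, None]) / var).sum(axis=1)

x = rng.standard_normal(2000) * 8.0          # broad initialization
for sigma in (5.0, 2.0, 1.0, 0.5, 0.1):      # decreasing noise levels
    eps = 0.05 * sigma ** 2                  # step size shrinks with the noise level
    for _ in range(100):
        z = rng.standard_normal(x.shape)
        x = x + 0.5 * eps * score(x, sigma) + np.sqrt(eps) * z
print("empirical mode means:", x[x < 0].mean(), x[x > 0].mean())
```

Starting at large \(\sigma\) lets the chains mix across the low-density region between the modes before the noise is annealed away.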
Score-based models can also be used to compute exact likelihoods. This requires converting the reverse stochastic differential equation into an ordinary differential equation, as seen below:
\[dx=[f(x,t)-\frac{1}{2}g^{2}(t)\nabla_{x}\log p_{t}(x)]dt\]
The above equation, known as the probability flow ODE, becomes a representation of a neural ODE when the score function \(\nabla_{x}\log p_{t}(x)\) is approximated by the score-based model \(s_{\theta}(x,t)\). Because of this relationship, the probability flow ODE takes on characteristics of a neural ODE, such as invertibility, and can compute exact likelihoods using the instantaneous change-of-variables formula (Song et al., 2021).
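The following sketch integrates the probability flow ODE with Euler steps for a toy case in which the data distribution is a one-dimensional Gaussian and the forward SDE is an Ornstein-Uhlenbeck process, so that the score of \(p_{t}\) is known exactly; with a learned \(s_{\theta}(x,t)\) in place of the analytic score, the same loop would serve as the deterministic sampler.

```python
# Euler integration of the probability flow ODE for a toy case where the
# data distribution is N(m0, s0^2) and the forward SDE is the OU process
# dx = -x dt + sqrt(2) dB, i.e. f(x,t) = -x and g(t) = sqrt(2).  Here the
# score of p_t is known in closed form; normally it is the learned s_theta.
import numpy as np

rng = np.random.default_rng(4)
m0, s0 = 2.0, 0.5                      # toy data distribution parameters
T, n_steps = 5.0, 2000
dt = T / n_steps

def mean_var(t):                       # Gaussian marginals of the OU process
    return m0 * np.exp(-t), s0**2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)

def ode_drift(x, t):                   # dx/dt = f(x,t) - 0.5 * g(t)^2 * score
    m, v = mean_var(t)
    score = -(x - m) / v
    return -x - score                  # f = -x and 0.5 * g^2 = 1

x = rng.standard_normal(10000)         # p_T is close to N(0, 1) for large T
t = T
for _ in range(n_steps):               # integrate backward in time
    x = x - dt * ode_drift(x, t)
    t -= dt
print("recovered mean/std:", x.mean(), x.std())   # should be near (2.0, 0.5)
```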
## 6 Relaxing Constraints
In this section, we explore several works that formulate new model classes by relaxing the strict bijectivity constraints of Normalizing Flows. These works expand the family of admissible functions to include surjective/stochastic transformations and take inspiration from score-based models and diffusion by introducing noise into the training process.
### SurVAE Flows
In an attempt to place VAEs and Normalizing Flows in a common context, Nielsen et al. (2020) introduce SurVAE Flows- a class of models composed of _surjective transformations_, allowing for models that mix bijective, surjective and stochastic components in a single end-to-end trainable framework. They identify three mechanisms needed for probabilistic generative models in this family:
1. A forward transformation: \(p(x|z)\)
2. An inverse transformation: \(p(z|x)\)
3. A likelihood contribution: \(p(x)\)
In a normalizing flow, the forward and reverse transformations are deterministic and can be represented as \(p(x|z)=\delta(x-F(z))\) and \(p(z|x)=\delta(z-F^{-1}(x))\). In a VAE, both directions are stochastic and a variational approximation \(q(z|x)\) is used in place of the intractable posterior.
They use this decomposition to draw formal connections between stochastic transformations (VAEs) and bijections (normalizing flows) using Dirac delta functions. In particular, they show that the marginal density \(p(x)\) can be expressed under both paradigms as:
\[\log p(x)\simeq\log p(z)+\mathcal{V}(x,z)+\mathcal{E}(x,z)\]
where \(\mathcal{V}(x,z)\) represents the likelihood contribution and \(\mathcal{E}(x,z)\) represents the 'looseness' of the provided bound. Under VAEs and other stochastic transformations, the likelihood contribution term is calculated as \(\log\frac{p(x|z)}{q(z|x)}\) and the 'bound looseness' term is calculated as \(\log\frac{q(z|x)}{p(z|x)}\), while under normalizing flows and other bijections, the likelihood contribution term is \(\log|\det J|\) and the 'bound looseness' term is \(0\).
Through the use of surjective, non-injective layers, the authors present constructions that allow for _inference surjections_- models with exact inverses and stochastic forward transformations- and _generative surjections_- models with exact forward transformations and stochastic right inverses. In doing so, they formulate models that bypass the dimensionality constraints enforced by bijectivity without sacrificing the ability to perform exact likelihood evaluation.
The surjective layers they introduce include absolute value, max-value and stochastic permutation, which they use to demonstrate strong experimental results on synthetic modeling tasks. They demonstrate the effectiveness of surjective layers on a handful of synthetic modeling tasks, particularly those with inherent symmetries. Importantly for this survey, these experiments also demonstrate an ability to model sharper boundaries than a fully bijective flow is capable of producing.
The authors argue that a number of recently proposed generative model types can be understood as SurVAE flows, including diffusion models (Ho et al. (2020)), continuously indexed normalizing flows (Cornish et al. (2019)), stochastic normalizing flows (Wu et al. (2020)) and normalizing flows acting on augmented data spaces (Huang et al. (2020)).
### Stochastic Normalizing Flows
Stochastic Normalizing Flows (SNFs), introduced by Wu et al. (2020), are a generalization of the Normalizing Flow framework. They offer certain benefits over classical stochastic sampling methods and Normalizing Flows for sampling from a known energy model specified up to a normalization constant. Sampling methods such as Markov Chain Monte Carlo (MCMC) or Langevin Dynamics (LD) may have trouble converging because of slow mixing times and local energy minima; adding a deterministic transformation can help alleviate this problem. On the other hand, introducing noise to relax Normalizing Flows' bijectivity constraints can help overcome the topological limitations mentioned in Section 2.2. Figure 3 shows the double-well example: by adding stochasticity, we are able to successfully separate the modes of the distribution, avoiding the "smearing" effect.
Similar to NFs, SNFs are a sequence of deterministic transformations. Their contribution comes from adding stochastic blocks, such as Langevin, Metropolis-Hastings, VAE,
and diffusion normalizing flow layers. Both the deterministic and stochastic transformations help modify a prior into a complicated target distribution. We can use KL divergence to train NFs and SNFs. In the former, we can calculate the probability density of a generated sample \(p_{x}(x)\) using the change of variables formula; with the introduction of stochasticity, however, SNFs are no longer invertible and this is no longer possible. As described in Section 2, we can train a Normalizing Flow by energy-based training, used when we have a model for the target energy, or by maximum likelihood training, used when we have samples. We need to generalize both notions of training in order to train an SNF. We start by defining \(\mu_{z}(z)\propto e^{-u_{z}(z)}\) as our latent (prior) distribution, \(\mu_{x}(x)\propto e^{-u_{x}(x)}\) as our target distribution, and \(p_{x}(x)\) as the distribution actually produced by the SNF, and by maximizing the importance weights

\[\log w(z\to x)=\log\frac{\mu_{x}(x)}{p_{x}(x)}=-u_{x}(x)+u_{z}(z)+\sum_{t}\Delta S_{t}(y_{t})\]

where \(y_{t+1}|y_{t}\sim q_{t}(y_{t}\to y_{t+1})\) and \(y_{t}|y_{t+1}\sim\tilde{q}_{t}(y_{t+1}\to y_{t})\) are the forward and backward transition distributions at step \(t\) (the flow layers are no longer deterministic), and \(\Delta S_{t}=\log\frac{\tilde{q}_{t}(y_{t+1}\to y_{t})}{q_{t}(y_{t}\to y_{t+1})}\) is the log ratio of the backward and forward probabilities of step \(t\). By maximizing the importance weights we get the following expression for energy-based training
\[\min\mathbb{E}_{\mu_{z}(x)\mathbb{P}_{f}(z\to x)}[-\log(w(z\to x))]= \text{KL}(\mu_{z}(x)\mathbb{P}_{f}(z\to x)||\mu_{x}(x)\mathbb{P}_{b}(x\to z))\]
and for maximum likelihood training
\[\min\mathbb{E}_{\mu_{x}(x)\mathbb{P}_{b}(x\to z)}[-\log(w(z\to x))]= \text{KL}(\mu_{x}(x)\mathbb{P}_{b}(x\to z)||\mu_{z}(x)\mathbb{P}_{f}(z\to x)).\]
where \(\mu_{z}(x)\mathbb{P}_{f}(z\to x),\ \mu_{x}(x)\mathbb{P}_{b}(x\to z)\) are our forward and backward pass probabilities. Notably, the KL divergence of the paths is an upper bound to the KL divergence of the marginal distributions.
\[\text{KL}(p_{x}||\mu_{x})\leq\text{KL}(\mu_{z}(x)\mathbb{P}_{f}(z\to x)||\mu_ {x}(x)\mathbb{P}_{b}(x\to z))\]
Finally, we can draw asymptotically unbiased samples from our target distribution \(x\sim\mu_{x}(x)\) by employing the Metropolis-Hastings algorithm and using the importance weights shown above.
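A minimal sketch of an SNF-style step is given below: a deterministic affine layer performs the coarse probability transport and a Metropolis-Hastings block then equilibrates the samples with respect to a double-well energy. The energy, scales, and step sizes are all illustrative, and the log-ratios \(\Delta S_{t}\) needed for the importance weights are omitted for brevity.

```python
# Toy stochastic-normalizing-flow step: one deterministic affine layer
# followed by a Metropolis-Hastings block targeting an intermediate
# energy u(x).  Energies, scales and step sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(5)

def u_target(x):                       # double-well energy with modes near +/- 2
    return 0.25 * (x**2 - 4.0) ** 2

def affine_layer(z, scale=2.0, shift=0.0):     # deterministic transport
    return scale * z + shift

def metropolis_block(x, energy, n_steps=50, step=0.5):
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        log_accept = energy(x) - energy(prop)          # symmetric proposal
        accept = np.log(rng.random(x.shape)) < log_accept
        x = np.where(accept, prop, x)
    return x

z = rng.standard_normal(5000)          # samples from the latent prior
x = affine_layer(z)                    # spread mass toward both wells
x = metropolis_block(x, u_target)      # stochastic block sharpens the modes
print("fraction in right well:", np.mean(x > 0))
```

A full SNF would additionally accumulate the forward/backward log-ratios \(\Delta S_{t}\) of each block to form the importance weights above.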
Figure 3: Double well problem: a) Normalizing flows, b) NF with stochasticity, c) Sample from true distribution
### Diffusion Normalizing Flow
Diffusion Normalizing Flow (Zhang and Chen (2021)), or DiffFlow, was introduced as a cross between Normalizing Flows and Diffusion models. DiffFlow is composed of two neural stochastic differential equations (SDEs): a forward pass \(F\) that transforms the data X into a simple distribution like Gaussian and a backward pass \(B\) that removes noise from the simple distribution to generate samples that match the target data distribution. Like diffusion models, the SDEs are jointly trained to minimize the KL divergence between the forward pass distribution and the backward pass distribution at final time \(\tau\). The objective equation is as follows:
\[KL(p_{F(\tau)}|p_{B(\tau)})=\mathbb{E}_{\tau\sim p_{F}}[\log p_{F}(x_{0})]+ \mathbb{E}_{\tau\sim p_{F}}[-\log p_{B}(x_{N})]+\sum_{i=1}^{N-1}\mathbb{E}_{ \tau\sim p_{F}}[-\log\frac{p_{F}(x_{i}|x_{i-1})}{p_{B}(x_{i-1}|x_{i})}]\]
Similar to Normalizing Flows, DiffFlow is able to learn while mapping to the latent space. However, DiffFlow relaxes the bijectivity constraint of NFs on this mapping. In doing so, DiffFlow has more expressivity and can learn distributions with sharper boundaries. Further, bijectivity may prevent models from having density support over the whole space. Thus, by lifting the constraint, DiffFlow has been found to perform better on tasks like image generation of complex patterns. The authors also claim that the boosted expressivity of DiffFlow results in better likelihoods than other NF implementations (Zhang and Chen, 2021).
Diffusion Normalizing Flow bypasses the bijectivity constraint by adding noise to the forward stochastic differential equation. Most diffusion models add noise indiscriminately, which can require many iterations to reach Gaussian noise and can lead to generated distributions with corrupted or missing details. On the other hand, due to the trainability of the forward SDE, DiffFlow adds noise only to targeted areas. Thus, DiffFlow can diffuse noise more efficiently and retain topological details that might have been blurred out in other diffusion processes.
Similar to diffusion, DiffFlow SDEs are composed of a drift term f, a vector valued function, and a diffusion term g, a scalar valued function. The equations are as follows:
Forward SDE: \[dx =f(x,t,\theta)dt+g(t)dw\] Backward SDE: \[dx =[f(x,t,\theta)-g^{2}(t)s(x,t,\theta)]dt+g(t)dw,\]
where \(x\) is data at time \(t\) and \(w\) represents the standard Brownian motion. The main distinguishing factor from Diffusion models is that DiffFlow includes the \(\theta\) parameter in the drift term which makes the SDEs learnable. From these equations, it is clear that when the diffusion term \(g\) tends to 0, DiffFlow reduces to Normalizing Flows.
Given the SDEs above, the discretized equations can be written as:
\[x_{i+1} =x_{i}+f_{i}(x_{i})\Delta t_{i}+g_{i}\delta_{i}^{F}\sqrt{\Delta t_{i}}\] \[x_{i} =x_{i+1}-[f_{i+1}(x_{i+1})-g_{i+1}^{2}s_{i+1}(x_{i+1})]\Delta t_{i}+g_{i+1}\delta_{i}^{B}\sqrt{\Delta t_{i}}\]
Returning to KL divergence, given that the first term is a constant and utilizing the discretized SDEs, the objective can be reduced to the form:
\[L=\mathbb{E}_{\delta^{F};x_{0}\sim p_{0}}[-\log p_{B}(x_{N})+\sum_{i=1}\frac{1}{2 }(\delta^{B}_{i}(\tau))^{2}]\]
where noise is represented as:
\[\delta^{B}_{i}(\tau)=\frac{1}{g_{i+1}\sqrt{\Delta t}}[x_{i}-x_{i+1}+[f_{i+1}(x_{i+1})-g^{2}_{i+1}s_{i+1}(x_{i+1})]\Delta t]\]
Loss can now be minimized with Monte Carlo gradient descent (Zhang and Chen (2021)).
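The two discretized updates above are Euler-Maruyama steps of the forward and backward SDEs. The sketch below implements them with simple placeholder functions standing in for the learned drift \(f(\cdot,\theta)\) and score \(s(\cdot,\theta)\); with trained networks, the backward chain would approximately invert the forward noising.

```python
# Euler-Maruyama steps for the discretized DiffFlow SDEs.  The drift f
# and score s below are simple placeholders for the learned networks.
import numpy as np

rng = np.random.default_rng(6)
N, dt = 100, 0.01
g = lambda i: 0.5                          # constant diffusion coefficient (toy)
f = lambda x, i: -x                        # placeholder learned drift
s = lambda x, i: -x                        # placeholder learned score

def forward_chain(x0):
    x = x0
    for i in range(N):
        noise = rng.standard_normal(x.shape)
        x = x + f(x, i) * dt + g(i) * noise * np.sqrt(dt)
    return x

def backward_chain(xN):
    x = xN
    for i in reversed(range(N)):
        noise = rng.standard_normal(x.shape)
        x = (x - (f(x, i + 1) - g(i + 1) ** 2 * s(x, i + 1)) * dt
             + g(i + 1) * noise * np.sqrt(dt))
    return x

x0 = rng.standard_normal(1000) + 3.0       # stand-in data
xN = forward_chain(x0)                     # noising pass
x_rec = backward_chain(xN)                 # generative pass (placeholders only)
print("data mean:", x0.mean(), "diffused mean:", xN.mean(), "generated mean:", x_rec.mean())
```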
### Stochastic Normalizing Flows and Diffusion Normalizing Flows
Zhang and Chen (2021) introduced Diffusion Normalizing Flows (DNF) as a new type of model. Nevertheless, per Hagemann et al. (2021), if we view SNFs as a pair of Markov chains \(((X_{0},\ldots,X_{t}),(Y_{t},\ldots,Y_{0}))\) where \((Y_{t},\ldots,Y_{0})\) is the reverse Markov chain of \((X_{0},\ldots,X_{t})\), we can view DNFs as a type of SNF with specific forward and backward layers
\[\mathcal{K}_{t}(x,\cdot) =P_{X_{t}|X_{t-1}=x}=\mathcal{N}(x+\epsilon g_{t-1}(x),\epsilon h ^{2}_{t-1})\] \[\mathcal{R}_{t}(x,\cdot) =P_{Y_{t-1}|Y_{t}=x}=\mathcal{N}(x+\epsilon(g_{t}(x)-h^{2}_{t}s_{ t}(x)),\epsilon h^{2}_{t})\]
The equations above come from the Euler discretization with step size \(\epsilon\) of the stochastic differential equation with drift \(g_{t}\), diffusion coefficient \(h_{t}\) and Brownian motion \(B_{t}\)
\[dX_{t}=g_{t}(X_{t})dt+h_{t}dB_{t}\]
## 7 Discussion
In this section, we talk about the role of stochasticity in normalizing flows and compare the various techniques introduced above on the basis of the following criteria:
* _Expressivity-_ while expressivity is usually used in a broad sense in the literature, we focus on each technique's ability to capture the various modes of the distribution they are trying to model as well as regions with relatively low density.
* _Training speed-_ we characterize training speed as the time taken by each technique to reach convergence.
* _Ease of likelihood computation-_ for this criterion, we look at the tractability of the likelihood computation for density estimation.
* _Sampling efficiency-_ we differentiate sampling efficiency from data efficiency, with the former referring to the computational cost required to generate samples with the latter referring to the number of samples required for optimization.
We also direct the reader to the comprehensive comparison of the bulk of the techniques covered in this paper performed by Bond-Taylor et al. (2022).
### Expressivity
As described in section 3, VAEs employ latent variables. The choice of the latent distribution provides them with a great deal of flexibility, resulting in highly expressive models. In contrast, the bijectivity constraints imposed by the normalizing flow framework limit representational capacity, which depends on the type of flow used in the model. For example, linear flows are limited in their expressivity, whereas coupling and autoregressive flows, two of the most widely used flow architectures, enable normalizing flows to represent very complex distributions. However, even these remain limited in their expressivity due to the invertibility constraint imposed by the framework (Kobyzev et al., 2021).
Stochastic normalizing flows overcome some of these limitations by incorporating stochastic sampling blocks into the normalizing flow framework, thus improving representational capacity over deterministic flow architectures by overcoming topological constraints (Wu et al., 2020). DiffFlow enjoys better expressivity than vanilla normalizing flows by using added noise to overcome their bijectivity constraints. The aforementioned constraints prevent normalizing flows from expanding density support to the whole space when transforming complex distributions to a base distribution. As a result, DiffFlow can learn distributions with sharper boundaries (Zhang and Chen, 2021).
Score-based methods are notably flexible due to the fact that they are independent of the normalizing constant \(Z_{\theta}\). This allows score-based methods to represent a more diverse set of models. Similar to other types of models, score-based methods are limited by the constraint that their input and output dimensions must match. Otherwise, score-based models may take the form of any vector-valued function and thus are quite expressive (Song and Ermon, 2019).
Surjective flows empirically demonstrate an ability to represent sharper boundaries than vanilla NFs, however, their methods are non-general and require prior knowledge of relevant symmetries in the target distribution (Nielsen et al., 2020).
### Training speed
Normalizing Flows are known to be inefficient and difficult to train due to the invertibility constraints on their transformations and, as a consequence, input and output spaces of the same dimension (Bond-Taylor et al., 2022). By adding noise and bypassing strong bi-Lipschitz limitations, stochastic normalizing flows are easier to optimize. Moreover, adding stochastic layers is not computationally costly since they have linear computational complexity (Wu et al., 2020).
DiffFlow tends to train slower in comparison to other models. While certain aspects, such as a trainable forward function, help improve efficiency, DiffFlow ultimately relies on backpropagation, making it slow to train. On the other hand, VAEs reach convergence quite quickly. Due to the reparameterization trick proposed by Kingma and Welling (2013), VAEs can use SGD during optimization.
Score-based models may struggle with training for low density regions, especially if the target distribution has multiple modes with a degree of separation. The model may then fail to converge in a reasonable time. As mentioned in section 5, adding progressively more noise to the data in training can improve model convergence in such cases.
### Ease of Likelihood Computation
Normalizing flows benefit from applying bijective, differentiable transformations (diffeomorphisms) to base distributions, resulting in the ability to compute exact likelihoods (Kobyzev et al., 2021). Adding noise to stochastic and diffusion normalizing flows increases expressivity over normalizing flows, but at the cost of not being able to compute exact likelihoods. The parameters of a stochastic normalizing flow can be optimized by minimizing the KL divergence between the forward and backward path probabilities. This minimization makes use of a variational approximation, which precludes them from computing exact likelihoods (Wu et al., 2020). Diffusion normalizing flows add noise in the forward stochastic differential equation. Consequently, they use the reparameterization trick proposed by Kingma and Welling (2013), and thus exact likelihoods cannot be computed; to estimate likelihoods, they use an SDE with equivalent marginal distributions (Zhang and Chen, 2021).
VAEs optimize the variational lower bound, an approximation of the log-likelihood we are trying to optimize, and as a result we cannot compute exact likelihoods. Importance sampling or Monte Carlo sampling techniques are used to estimate the likelihood of data after training is completed (Kingma and Welling, 2019). Finally, score-based methods provide an avenue to compute exact likelihoods. This requires some manipulation of the equations and the introduction of invertibility into the model. According to Song et al. (2021), score-based methods are then able to achieve 'state-of-the-art likelihoods' on some image generation tasks.
Among the transformations proposed in Nielsen et al. (2020), only inference surjections, i.e. surjective layers that have full support in the base distribution and partial support in the target distribution, are able to produce exact likelihoods. Generative surjections, on the other hand, can only provide stochastic lower bound estimates.
### Sampling Efficiency
Sampling efficiency is mainly affected by the complexity of the model and the number of iterations needed to generate a sample. For example, VAEs consist of an encoder and decoder that are typically complex neural networks. On the other hand, VAEs can generate samples from a single network pass and are thus more efficient than other energy-based models such as stochastic normalizing flows (Bond-Taylor et al., 2022).
The sampling efficiency of normalizing flows is related to the cost of the generative direction. However, since the transformations applied to the base distribution are deterministic, samples can be generated in a single network pass and thus normalizing flows enjoy high sampling efficiency. In comparison, diffusion normalizing flows have poor sampling efficiency, since they require MCMC during sampling. Nevertheless, they have better sampling efficiency than diffusion probabilistic models since they require fewer discretization steps (Zhang and Chen, 2021). Similar to diffusion normalizing flows, stochastic normalizing flows have lower sampling efficiency than vanilla normalizing flows because they use an MCMC method, the Metropolis-Hastings algorithm, to generate samples.
Score-based methods tend to be slow in generating samples, due to the iterative nature of their sampling process. However, score-based methods are often able to produce high quality samples, comparable to GANs in image generation (Song and Ermon (2019)).
### On the Role of Stochasticity
Both Stochastic Normalizing Flows and Diffusion Normalizing Flows introduce stochasticity into their model formulations, though they provide different explanations for the role that stochasticity plays in improving expressivity. Wu et al. (2020) frame the addition of stochastic layers in SNFs as combining the strengths of deterministic methods at performing large-scale probability transport with the fine-grained expressivity of Metropolis-Hastings MC sampling, effectively removing samples in areas of lower probability density without incurring the sampling time costs of running a fully MC-reliant model. Zhang and Chen (2021), on the other hand, attribute the expressivity improvements of DNFs to an expansion of the training support to larger areas of the ambient space, improving gradient coverage for training.
Both agree that adding stochasticity is central to bypassing topological constraints and representing sharp density boundaries in the target space, but the exact mechanism by which it improves expressivity is not fully elucidated by either work. Though beyond the scope of this paper, Bansal et al. (2022) provide experimental evidence of successful diffusion-like models trained using deterministic, non-Gaussian forward processes, such as blurring and masking, calling into question the need for stochastic noise at all. None of the surjective layers proposed by Nielsen et al. (2020) utilize added noise, yet they are nonetheless able to represent sharp boundaries in the target distribution. The role and necessity of added noise in improving model expressivity are not clear from these works and require further investigation.
## 8 Conclusion
This paper delved into a variety of generative models and compared the relative performance of each on expressivity, training speed, sample efficiency and likelihood tractability. Starting from a basis of likelihood-based models, we explored the ability of Normalizing Flows and Variational Autoencoders to directly learn a distribution's probability density in addition to their capacity to generate samples. VAEs have an encoder-decoder framework trained by optimizing the evidence lower bound, while NFs are structured as a series of bijective differentiable transformations on a data distribution to a simple base distribution.
The strict constraints of the NF architecture narrow the types of distributions that the model can represent. Thus, we explored several models that relaxed the strict bijectivity constraint of NFs. The variations we studied borrowed aspects from different frameworks, including diffusion and score-based models, and introduced stochasticity into the training process. The introduction of noise adds both flexibility and stability to these models. The variations of NFs have performed well in practice, particularly on sampling tasks like image generation. While they cannot be used to compute exact likelihoods, they add much to the field in terms of expressivity and sampling efficiency. |
2309.04355 | Value-Compressed Sparse Column (VCSC): Sparse Matrix Storage for
Redundant Data | Compressed Sparse Column (CSC) and Coordinate (COO) are popular compression
formats for sparse matrices. However, both CSC and COO are general purpose and
cannot take advantage of any of the properties of the data other than sparsity,
such as data redundancy. Highly redundant sparse data is common in many machine
learning applications, such as genomics, and is often too large for in-core
computation using conventional sparse storage formats. In this paper, we
present two extensions to CSC: (1) Value-Compressed Sparse Column (VCSC) and
(2) Index- and Value-Compressed Sparse Column (IVCSC). VCSC takes advantage of
high redundancy within a column to further compress data up to 3-fold over COO
and 2.25-fold over CSC, without significant negative impact to performance
characteristics. IVCSC extends VCSC by compressing index arrays through delta
encoding and byte-packing, achieving a 10-fold decrease in memory usage over
COO and 7.5-fold decrease over CSC. Our benchmarks on simulated and real data
show that VCSC and IVCSC can be read in compressed form with little added
computational cost. These two novel compression formats offer a broadly useful
solution to encoding and reading redundant sparse data. | Skyler Ruiter, Seth Wolfgang, Marc Tunnell, Timothy Triche Jr., Erin Carrier, Zachary DeBruine | 2023-09-08T14:24:40Z | http://arxiv.org/abs/2309.04355v1 | # Value-Compressed Sparse Column (VCSC): Sparse Matrix Storage for Redundant Data
###### Abstract
Compressed Sparse Column (CSC) and Coordinate (COO) are popular compression formats for sparse matrices. However, both CSC and COO are general purpose and cannot take advantage of any of the properties of the data other than sparsity, such as data redundancy. Highly redundant sparse data is common in many machine learning applications, such as genomics, and is often too large for in-core computation using conventional sparse storage formats. In this paper, we present two extensions to CSC: (1) Value-Compressed Sparse Column (VCSC) and (2) Index- and Value-Compressed Sparse Column (IVCSC). VCSC takes advantage of high redundancy within a column to further compress data up to 3-fold over COO and 2.25-fold over CSC, without significant negative impact to performance characteristics. IVCSC extends VCSC by compressing index arrays through delta encoding and byte-packing, achieving a 10-fold decrease in memory usage over COO and 7.5-fold decrease over CSC. Our benchmarks on simulated and real data show that VCSC and IVCSC can be read in compressed form with little added computational cost. These two novel compression formats offer a broadly useful solution to encoding and reading redundant sparse data.
\({}^{1}\)Grand Valley State University
1 Campus Drive
Allendale, Michigan, 49401, USA
\({}^{2}\)Van Andel Institute
333 Bostwick Ave NE
Grand Rapids, Michigan, 49503, USA
## 1 Introduction
Sparse data is mostly zero or missing, and is often encoded in sparse matrices that avoid explicit storage of these values. Sparse matrices are abundant in many domains that involve scientific computing, machine learning, and data engineering. In these domains, software priorities are often a combination of memory usage and fast compute, with these goals usually being at odds with one another.
Historically, most improvements to sparse matrix formats have prioritized compute optimizations over compression ratio. However, as the size of datasets continue to grow, the inability of matrices in conventional sparse storage formats to be processed in-core will be a primary bottleneck for computation.
General purpose sparse matrix formats, such as Coordinate (COO) or Compressed Sparse Column (CSC), offer compression with reasonably fast compute. Specifically, COO stores the matrix in triplet format, storing a row-index, column-index, and value for each nonzero value. As depicted in Figure 1, for each nonzero value, CSC (CSR) stores the value and the row-index (column-index), along with the pointer offsets to the start of each column (row). While popular, COO, CSC, and CSR fail to leverage
specific characteristics of the data, such as significant redundancy in nonzero values, common in count data and discrete distributions.
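For reference, the sketch below shows the relationship between the two formats by converting COO triplets into the three CSC arrays; it is a toy illustration, not the Eigen or IVSparse implementation.

```python
# Convert COO triplets (row, col, value) into CSC arrays: values, row
# indices, and column pointers.  Minimal illustration for a small matrix.
def coo_to_csc(rows, cols, vals, ncols):
    order = sorted(range(len(vals)), key=lambda k: (cols[k], rows[k]))
    csc_vals = [vals[k] for k in order]
    csc_rows = [rows[k] for k in order]
    col_ptrs = [0] * (ncols + 1)
    for c in cols:
        col_ptrs[c + 1] += 1                 # count nonzeros per column
    for j in range(ncols):
        col_ptrs[j + 1] += col_ptrs[j]       # prefix sum -> column offsets
    return csc_vals, csc_rows, col_ptrs

# 3x3 example with 4 nonzeros.
rows = [0, 2, 1, 2]
cols = [0, 0, 2, 2]
vals = [5.0, 3.0, 7.0, 9.0]
print(coo_to_csc(rows, cols, vals, ncols=3))
# -> ([5.0, 3.0, 7.0, 9.0], [0, 2, 1, 2], [0, 2, 2, 4])
```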
Massive sparse datasets, such as those used in genomics, often contain highly redundant values. Take, for example, an \(18082\times 897734\) single-cell transcriptomics dataset from 10x Genomics containing the counts of genes in single cells [1]. This matrix is \(92\%\) sparse and contains approximately \(1.3\) billion nonzero values. Despite the large number of nonzero values, there are only about \(7,000\) unique values. This type of data is critical for biological research, but existing data structures are memory inefficient, failing to leverage the high redundancy of the values. For instance, as shown in Table 1, in COO format this dataset requires approximately \(20\) gigabytes (GB), with CSC reducing the memory requirement by only \(25\%\) relative to COO. As genomics data continues to grow, memory limitations will become increasingly prohibitive to in-core analysis and massive data integration.
In this paper, we introduce two novel sparse matrix formats that aim to address this limitation of conventional sparse matrix formats on highly redundant data: (1) Value-Compressed Sparse Column (VCSC) and (2) Index- and Value-Compressed Sparse Column (IVCSC). Using VCSC, the size of the previously mentioned 10x Genomics dataset is reduced by \(74\%\) compared to COO and \(65\%\) compared to CSC. IVCSC enables further compression, reducing the size over COO and CSC by \(91\%\) and \(88\%\), respectively. These formats are implemented in C++ and available on Github ([https://github.com/Seth-Wolfgang/IVSparse](https://github.com/Seth-Wolfgang/IVSparse)).
This paper is organized as follows: Related work is discussed in section 2, methods underlying VCSC and IVCSC are discussed in section 3, experimental results are presented in section 4, and a conclusion and discussion is given in section 5.
## 2 Related Work
A variety of compressed matrix formats exist, for both sparse and dense matrices, with most formats focusing on enabling fast compute. As is typical with sparse matrix formats, we limit our discussion of related work to only lossless compressed matrix formats.
### Sparse Matrix Compression
A popular approach is to create a data structure that groups data into blocks [2, 3, 4, 5]. Block storage formats often take advantage of natural patterns in the data to speed up Sparse Matrix-Vector Multiplication (SpMV), utilizing this structure for better cache access [3, 4], optimizing for distributed systems [2], or for better memory bandwidth via data compression [5].
The authors of [6] utilize a form of data compression aimed at achieving faster SpMV computations. Specifically, index data is delta-encoded and decompressed to enable faster computation on sparse data with distinct local density patterns [6].
In [7], two modifications to further compress CSR, namely CSR Delta Unit (CSR-DU) and CSR Value Indexed (CSR-VI) are presented. CSR-DU compresses the column indices and row pointers in CSR by breaking up the matrix into units, applying
delta encoding over each unit with flags to store when a unit starts a new row. CSR-VI compresses values by storing unique values in a lookup table, reducing the memory footprint for redundant data but necessitating the storage of references for each nonzero value. While similar in idea to our approach, [7] attempts to optimize SpMV performance through compression, whereas we focus on optimizing compression with limited performance impact on common operations.
### Dense Matrix Compression
While most existing sparse matrix compression formats fail to take advantage of redundancy, many dense data compression algorithms are designed to capitalize on redundancy. For example, Huffman encoding constructs a dictionary of keys where more common symbols have smaller keys [8]. Lempel-Ziv-Welch (LZW) [9] compression is similar to Huffman, but targets redundancy by compressing repetitive patterns. LZW excels on data where repeated patterns are prevalent, albeit at the cost of increased storage for unique patterns. Run-length Encoding (RLE) is another compression technique that encodes consecutive identical symbols as a single count-value pair. RLE's compression ratio directly correlates with the extent of consecutive
Figure 1: Comparison of Sparse Matrix Storage Formats
redundant values in the data.
## 3 Methods
### Value-Compressed Sparse Column (VCSC) Format
VCSC takes advantage of the per-column redundancy of values by storing each unique value only once per column in which it occurs. For each column, VCSC stores three arrays, one for unique values, one for value counts, and one for row indices. The entries of the values array are the unique values in the column. The corresponding entries of the counts array are the number of times each value occurs in the column. The entries of the indices array are the row indices of each occurrence. These row indices are ordered first by value and within each value ordered in ascending order of row index. An example of VCSC format for a single column of a sparse matrix is shown in the middle panel of Figure 1.
By reordering the row indices first by value, we eliminate the need to store references to each element, as is necessary in CSC-VI. While ordering first by value, then by row index significantly improves compression for highly redundant data, traversal through the rows in a column is no longer guaranteed to be performed sequentially.
To evaluate the memory usage of VCSC, we first consider the memory usage of CSC which is given by
\[\text{CSC}_{\text{size}}=\underbrace{val_{\text{size}}*nnz}_{\text{bytes for nonzero vals}}+\underbrace{idx_{\text{size}}*nnz}_{\text{bytes for indices}}+\underbrace{idx_{\text{size}}*(ncol+1)}_{\text{bytes for col pointers}}, \tag{1}\]
where \(nnz\) is the number of nonzeros and \(val_{\text{size}}\) and \(idx_{\text{size}}\) are the byte sizes for values and indices, respectively. In contrast, the memory usage of VCSC is given by
\[\text{VCSC}_{\text{size}}=\sum_{i=1}^{nCols}\underbrace{(val_{\text{size}}*nUniq _{i}}_{\text{bytes for values}}+\underbrace{idx_{\text{size}}*nUniq_{i}}_{\text {bytes for counts}}+\underbrace{idx_{\text{size}}*nnz_{i}}_{\text{bytes for indices}}+ \underbrace{idx_{\text{size}}}_{\text{len}}), \tag{2}\]
where \(nUniq_{i}\) is the number of unique values in column \(i\), and \(nnz_{i}\) is the number of nonzeros in column \(i\). Unlike CSC (and CSC-VI), the only component of Equation 2 which grows at a fixed rate with the number of nonzeros in a column is the memory for the indices.
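As a sanity check of Equations 1 and 2, the helper below evaluates both footprints from per-column statistics, using the 4-byte indices and 8-byte values assumed in our benchmarks; the column statistics in the example are made up for illustration.

```python
# Byte counts for CSC (Eq. 1) and VCSC (Eq. 2) from per-column statistics.
# Uses 4-byte indices and 8-byte values, as in the benchmarking setup.
def csc_bytes(nnz, ncol, val_size=8, idx_size=4):
    return val_size * nnz + idx_size * nnz + idx_size * (ncol + 1)

def vcsc_bytes(nnz_per_col, nuniq_per_col, val_size=8, idx_size=4):
    total = 0
    for nnz_i, nuniq_i in zip(nnz_per_col, nuniq_per_col):
        total += val_size * nuniq_i      # unique values
        total += idx_size * nuniq_i      # counts
        total += idx_size * nnz_i        # row indices
        total += idx_size                # per-column length field
    return total

# Example: 3 columns with highly redundant values.
nnz_per_col = [1000, 2000, 500]
nuniq_per_col = [3, 5, 2]
print("CSC bytes: ", csc_bytes(sum(nnz_per_col), ncol=3))
print("VCSC bytes:", vcsc_bytes(nnz_per_col, nuniq_per_col))
```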
VCSC can be useful for compressing data with high within-column value-redundancy, even if it is not sparse. For instance, VCSC can compress a large dense matrix containing only a few unique values. However, as is common when using sparse matrix formats for relatively dense matrices, random access and computations would be significantly slower than dense storage.
### Index- and Value-Compressed Sparse Column (IVCSC) Format
Whereas VCSC compresses just values, IVCSC further compresses VCSC by also compressing the indices. For each column, IVCSC stores a single array. This array
contains sections, where each section is a unique value, followed by the index width, followed by the row indices where that value occurs, followed by a delimiter (a zero) to indicate the end of the row indices for that value. Within a single unique value, the indices are compressed by positive-delta encoding, as shown in the bottom pane of Figure 1, and then byte-packed.
By positive-delta encoding the row indices, the magnitude of the stored indices is reduced. Byte-packing the encoded indices discards leading bytes that are zero, further reducing the storage required for each index, while still allowing efficient traversal through the indices (which would not be the case with bit-packing). Depending on the redundancy and density of the data, it is often possible to represent indices for frequently occurring values with a single byte.
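The sketch below illustrates this index transformation for the row indices of one unique value: positive-delta encoding followed by selection of the smallest byte width that holds every encoded index. It is a simplified stand-alone illustration, not the C++ implementation.

```python
# Positive-delta encode sorted row indices for one unique value, then
# byte-pack them with the smallest byte width that fits every delta.
# Simplified stand-alone illustration of the IVCSC index compression.
def delta_encode(row_indices):
    deltas, prev = [], 0
    for idx in row_indices:              # indices are sorted ascending
        deltas.append(idx - prev)
        prev = idx
    return deltas

def byte_width(values):
    widest = max(values)
    width = 1
    while widest >= 256 ** width:
        width += 1
    return width

rows = [3, 10, 11, 400, 405]             # example row indices for one value
deltas = delta_encode(rows)              # -> [3, 7, 1, 389, 5]
width = byte_width(deltas)               # -> 2 bytes (389 needs two bytes)
packed = b"".join(d.to_bytes(width, "little") for d in deltas)
print(deltas, "width:", width, "bytes:", len(packed), "vs", 4 * len(rows))
```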
As with VCSC, traversal through the rows in a column is not necessarily sequential. Furthermore, traversal through a column in IVCSC requires decoding the positive-delta-encoded indices and checking for end-of-value delimiters.
The memory usage of IVCSC is given by
\[\text{IVCSC}_{\text{size}}=\sum_{i=1}^{nCols}\left(\underbrace{8}_{\text{len}}+ \underbrace{nUniq_{i}*(val_{\text{size}}+1)}_{\text{bytes for value and idx width}}+\underbrace{\sum_{j=1}^{nUniq_{i}}(nnz_{j}+1)*idxWid_{j}}_{\text{ bytes for encoded indices and delim}}\right), \tag{3}\]
where the 8 bytes (denoted by len) are used to store the size of the data array, \(nnz_{j}\) is the number of times the unique value \(j\) appears in the column, and \(idxWid_{j}\) is the byte width of the unique value's indices after positive-delta encoding and byte-packing. Comparing Equation 2 and Equation 3, one can see the bytes for storing the value data is similar, with the main difference being that the counts of unique values are replaced with delimiters, which are slightly smaller in most use cases. The differences due to index compression can be seen in Equation 3 and the term for bytes for encoded indices and delim.
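Equation 3 can be evaluated in the same spirit; the helper below takes, for each column, a list of (occurrence count, packed index width) pairs describing its unique values, with the example numbers chosen purely for illustration.

```python
# Byte count for IVCSC (Eq. 3).  Each column is described by a list of
# (nnz_j, idx_width_j) pairs, one per unique value in that column.
def ivcsc_bytes(columns, val_size=8):
    total = 0
    for uniq_vals in columns:
        total += 8                               # length field for the column
        total += len(uniq_vals) * (val_size + 1) # value + its index width byte
        for nnz_j, idx_width_j in uniq_vals:
            total += (nnz_j + 1) * idx_width_j   # packed indices + delimiter
    return total

# Example: one column whose two unique values occur 900 and 100 times,
# with 1-byte and 2-byte packed indices respectively.
print("IVCSC bytes:", ivcsc_bytes([[(900, 1), (100, 2)]]))
```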
IVCSC is well-suited for very large, highly redundant matrices. As with VCSC, IVCSC can be useful for compressing dense matrices with highly redundant values. In comparison to VCSC, IVCSC further prioritizes compression at increased computation and traversal time.
## 4 Experimental Results
### Setup and Implementation
All benchmarking is performed on a dedicated workstation computer running on a Ryzen 9 5950x (Zen 3) with a max clock speed of 5.1GHz, 512KB L1 cache (64KB per core), 8MB L2 cache (512KB per core) and 64MB shared L3 cache. This machine has 64-bit Ubuntu 22.04.1 LTS installed and all programs were compiled with GCC version 11.3.0 and the flag -O2. OpenMP was disabled for all benchmarking.
We test the compression ratio on a variety of datasets that represent a range of real-world use-cases, including best and worst case examples. For timing results relating to the performance of our formats, we benchmark against the CSC format of the Eigen C++ library, a widely used matrix and linear algebra package [10].
We present timing results on three common matrix operations: multiplication of a sparse matrix by a scalar (scalar multiplication), sparse matrix-vector multiplication (SpMV), and sparse matrix-matrix multiplication (SpMM). Additionally, we present timing results for iterator traversal, which is the fundamental cost to many sparse-dense BLAS-like operations, as well as construction time from a COO matrix. Each timing benchmark is given a cold start, which is not timed, and then results are reported as the mean of ten timed iterations using high_resolution_clock of the C++ chrono library. To isolate as many variables as possible and because we benchmark the data structure and not our implementation of BLAS-like routines, we implement the same naive algorithm for SpMV and SpMM on the Eigen CSC matrix and our implementations. For all benchmarks, rows and columns are stored as 4 byte integers and values are stored as 8 byte doubles (excluding any positive-delta encoding and byte-packing).
### Memory Usage
In order to quantify the efficiency of our format on redundant data, for all columns with nonzero elements we define the redundancy of the \(i\)-th column as
\[r_{i}=\left\{\begin{array}{cl}1-\frac{nUniq_{i}}{nnz_{i}},&\text{if }nUniq_{i}>1\\ 1,&\text{otherwise}\end{array}\right.. \tag{4}\]
Figure 2: Comparison of memory required for VCSC, IVCSC, COO and CSC (as a ratio over dense storage required) for simulated random \(10000\times 100\) matrices.
This value is averaged over all columns with nonzero elements, giving the mean matrix redundancy (MMR).
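Equation 4 and the resulting MMR can be computed with a few lines of Python, assuming a SciPy CSC matrix as input; this mirrors the definition above rather than any code shipped with VCSC or IVCSC.

```python
import numpy as np
from scipy.sparse import csc_matrix

def mean_matrix_redundancy(mat):
    """Average per-column redundancy (Equation 4) over non-empty columns."""
    mat = csc_matrix(mat)
    scores = []
    for j in range(mat.shape[1]):
        col = mat.data[mat.indptr[j]:mat.indptr[j + 1]]
        if col.size == 0:               # skip columns with no nonzeros
            continue
        n_uniq = np.unique(col).size
        scores.append(1 - n_uniq / col.size if n_uniq > 1 else 1.0)
    return float(np.mean(scores))
```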
For the benchmarks in Figure 2(a), a single matrix is generated at the beginning of a session, the values of which are then modified to change the redundancy in further runs. Figure 2(a) shows the compression ratio of CSC, VCSC, and IVCSC as a function of MMR on a matrix with 90% sparsity. Compared to dense storage, COO and CSC require 20% and 15% of the dense memory, respectively. VCSC and IVCSC are able to compress by more than COO in all cases and by more than CSC at an MMR greater than 0.33 and 0.09, respectively.
Figure 2(b) shows the compression ratio over dense matrix memory usage for COO, CSC, VCSC, and IVCSC as a function of sparsity on a matrix fixed to 90% MMR. Regardless of sparsity, VCSC and IVCSC use less memory than a dense representation of the same matrix on highly redundant data. CSC and COO both require more memory than the dense representation until the sparsity exceeds approximately 33% and 49%, respectively. In the worst case, at 0% sparsity, VCSC and IVCSC use approximately 64% and 13% of the memory required for dense storage, respectively.
We test on four real world datasets which represent a wide range of use cases. Of these four, three are conducive to memory compression using our two methods. Additionally, we test on two simulated datasets, representing the ideal and worst case scenarios. For each of these datasets, the dimensions, number of nonzeros, sparsity, and MMR are given in Table 1.
We use a single-cell transcriptomics dataset from [1] as a representative example of data that follows a zero-inflated negative binomial counts distribution. Table 1 shows that VCSC and IVCSC compress this dataset to 25.96% and 8.94% of the COO memory footprint, respectively.
We use the large Web of Science dataset obtained from [11], and processed using CountVectorizer with default parameters [12], as a representative example of a bag-of-words model. Table 1 shows that VCSC and IVCSC compress this dataset to 32.80% and 15.83% of the COO memory footprint, respectively.
The MovieLens dataset, obtained from [13], is a representative example of discrete, ordinal data, containing 10 possible ratings between 0 and 5. Table 1 shows that VCSC and IVCSC compress this dataset to 32.86% and 17.83% of the COO memory footprint, respectively.
The PR02R dataset was obtained from SuiteSparse [14] and is representative of performance on comparable computational fluid simulation matrices. Table 1 shows
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
**Dataset** & **Dimensions** & **Nonzeros** & **Sparsity** & **MMR** & **COO Size (GB)** & **CSC \%** & **VCSC \%** & **IVCSC \%** \\ \hline Single-cell & \(18082\times 897734\) & \(1.30e9\) & \(91.92\%\) & \(.9875\) & \(20.85\) & \(75.02\) & \(25.96\) & \(8.94\) \\ Web of Science & \(46985\times 124836\) & \(5.41e6\) & \(99.66\%\) & \(.7155\) & \(0.09\) & \(75.58\) & \(32.80\) & \(15.83\) \\ MovieLens & \(237187\times 100000\) & \(2.85e7\) & \(97.04\%\) & \(.6162\) & \(0.46\) & \(75.88\) & \(32.86\) & \(17.83\) \\ PR02R & \(161070\times 161070\) & \(8.19e6\) & \(99.97\%\) & \(.0054\) & \(0.12\) & \(75.49\) & \(103.47\) & \(97.61\) \\ Sim Binary & \(1000000\times 1000\) & \(9.99e7\) & \(90.00\%\) & \(1\) & \(1.60\) & \(75.00\) & \(25.00\) & \(6.25\) \\ Sim Unique & \(1000000\times 1000\) & \(9.99e7\) & \(90.00\%\) & \(0\) & \(1.60\) & \(75.00\) & \(100.00\) & \(104.61\) \\ \hline \end{tabular}
\end{table}
Table 1: Memory usage of methods on real and simulated large datasets. CSC, VCSC, and IVCSC storage costs are given as a percentage of COO memory footprint.
that VCSC increases the memory footprint to 103.47% and IVCSC decreases the memory footprint to 97.61%.
A simulated binary matrix, having an MMR of 1 (100%), represents the best case scenario for compression using our formats. Table 1 shows that VCSC and IVCSC compress this dataset to 25.00% and 6.25% of the COO memory footprint, respectively. Conversely, the simulated matrix in which every nonzero value is unique has an MMR of 0 and represents the worst case scenario for compression using our formats. In this case, Table 1 shows that VCSC offers no change to the memory footprint and IVCSC increases the memory footprint to 104.61%.
### Computational Performance
Computational performance benchmarking results are shown in Figure 3. Because the scalar multiplication calculation only needs to loop over the unique values in VCSC, element-wise operations are performed more quickly than in CSC when data is redundant (Figure 3(a)). For sparse matrix-vector (SpMV) and sparse matrix-matrix (SpMM) operations, VCSC is marginally slower than Eigen, and IVCSC is 2-4 fold slower (Figure 3(b)).
Constructor time is measured as the time to construct the Eigen (CSC), VCSC, and IVCSC matrices from an underlying COO matrix. At low MMR, the construction time for VCSC and IVCSC is significantly slower, and at high MMR, the construction time for both VCSC and IVCSC approaches the Eigen construction time (Figure 4). However, in absolute terms, the construction time is negligible relative to almost any compute task and the benefits of in-core rather than distributed computing.
Iterator traversal time is measured as the time necessary to fully iterate through the sparse matrix. As shown in Figure 5, VCSC iterator traversal time is comparable to Eigen at high redundancy, whereas IVCSC is 2-4 fold slower.
Figure 3: Benchmarking results for common BLAS-like matrix operations on a 90% sparse \(1000000\times 10000\) matrix.
## 5 Conclusion and Future Work
In this paper we presented two novel compression formats, VCSC and IVCSC, for matrices with highly redundant values. Testing showed that both VCSC and IVCSC offer considerable compression over CSC and COO on data with high MMR in exchange for a very small cost to iterator speed.
One disadvantage of our method, mostly with IVCSC, is the slower iterator traversal, which results in increased compute time. However, the up to 3-fold and 10-fold decrease in size for VCSC and IVCSC, respectively, enables in-core processing of large matrices that would otherwise exhaust RAM on most workstations and necessitate much slower distributed or disk operations.
This work lays the foundation for future research on hybrid sparse matrix formats, such as a hybrid CSC-VCSC structure that uses VCSC to store redundant values and CSC to store non-redundant values for any given column. Additionally, distributed memory and SIMD parallelization of both VCSC and IVCSC could be beneficial to large scale machine learning applications.
We are actively testing VCSC and IVCSC on larger matrices, benchmarking additional operations, and adding further optimizations to address known performance bottlenecks.
## 6 Acknowledgements
This work was funded by a grant from the Chan Zuckerberg Initiative Single Cell Biology Data Insights DI-000-0287 (to Z.D., T.T., S.R., S.W.) and a Grand Valley State University Kindschi Fellowship (to S.W.). |
2309.14146 | Examining Temporal Bias in Abusive Language Detection | The use of abusive language online has become an increasingly pervasive
problem that damages both individuals and society, with effects ranging from
psychological harm right through to escalation to real-life violence and even
death. Machine learning models have been developed to automatically detect
abusive language, but these models can suffer from temporal bias, the
phenomenon in which topics, language use or social norms change over time. This
study aims to investigate the nature and impact of temporal bias in abusive
language detection across various languages and explore mitigation methods. We
evaluate the performance of models on abusive data sets from different time
periods. Our results demonstrate that temporal bias is a significant challenge
for abusive language detection, with models trained on historical data showing
a significant drop in performance over time. We also present an extensive
linguistic analysis of these abusive data sets from a diachronic perspective,
aiming to explore the reasons for language evolution and performance decline.
This study sheds light on the pervasive issue of temporal bias in abusive
language detection across languages, offering crucial insights into language
evolution and temporal bias mitigation. | Mali Jin, Yida Mu, Diana Maynard, Kalina Bontcheva | 2023-09-25T13:59:39Z | http://arxiv.org/abs/2309.14146v1 | # Examining Temporal Bias in Abusive Language Detection
###### Abstract
The use of abusive language online has become an increasingly pervasive problem that damages both individuals and society, with effects ranging from psychological harm right through to escalation to real-life violence and even death. Machine learning models have been developed to automatically detect abusive language, but these models can suffer from temporal bias, the phenomenon in which topics, language use or social norms change over time. This study aims to investigate the nature and impact of temporal bias in abusive language detection across various languages and explore mitigation methods. We evaluate the performance of models on abusive data sets from different time periods. Our results demonstrate that temporal bias is a significant challenge for abusive language detection, with models trained on historical data showing a significant drop in performance over time. We also present an extensive linguistic analysis of these abusive data sets from a diachronic perspective, aiming to explore the reasons for language evolution and performance decline. This study sheds light on the pervasive issue of temporal bias in abusive language detection across languages, offering crucial insights into language evolution and temporal bias mitigation.
Department of Computer Science, The University of Sheffield, UK
{m.jin, y.mu, d.maynard, k.bontcheva}@sheffield.ac.uk
## Introduction
The increasing use of social media platforms has given rise to a pervasive problem of online abusive language, which can cause harm to individuals and lead to societal polarization. In recent years, researchers have developed a huge variety of machine learning models that can automatically detect abusive language Mishra et al. (2019); Aurpa, Sadik, and Ahmed (2022); Das and Mukherjee (2023); Alrashidi, Jamal, and Alkathlan (2023). However, these models may be subject to temporal bias, which can lead to a decrease in the accuracy of abusive language detection models, potentially allowing abusive language to be undetected or falsely detected.
Temporal bias arises from differences in populations and behaviors over time Olteanu et al. (2019). In natural language processing (NLP), it can result from various issues. Temporal concept drift refers to the problem of language evolving over time Zhao et al. (2022). Languages change as new meanings develop for existing words and new words and topics come into use over time. Models trained on data from an earlier period can perform worse on chronologically newer data as they are unable to recognize new topics or linguistic features Lukes and Sogaard (2018); Vidgen et al. (2019); Mu et al. (2023). Previous work has examined temporal bias in various tasks such as named entity recognition Derczynski et al. (2016), sentiment analysis Lukes and Sogaard (2018) and rumour detection Mu et al. (2023).
In online abuse detection, words and expressions considered acceptable in the past may have an abusive or offensive connotation now due to the changing language or societal norms Wich et al. (2022); McGillivray et al. (2022). Also, temporal bias occurs when the abusive content fluctuates based on the latest trends, popular topics or breaking news. As the online discussion evolves with new developments, certain topics and forms of abuse might gain prominence while others become less prevalent. For example, in 2020 a fraudulently altered video was circulated on Twitter purporting to show Al Jazeera journalist Ghada Oueiss naked in a jacuzzi, as part of an orchestrated attack designed to discredit her Posetti et al. (2021). The video and other photos were distributed with messages alleging she was an alcoholic, drug-addicted prostitute, which engendered in turn a large number of hateful messages connected with the alleged jacuzzi incident, a topic not typically associated with abuse.
Previous work identified temporal bias in an Italian hate speech data set associated with immigrants Florio et al. (2020). However, they have yet to explore temporal factors affecting predictive performance from a multilingual perspective. In this paper, we explore temporal bias in 5 different abusive data sets that span varying time periods, in 4 languages (English, Spanish, Italian, and Chinese). Specifically, we investigate the following core research questions:
* _RQ1:_ How does the magnitude of temporal bias vary across different data sets such as language, time span and collection methods?
* _RQ2:_ What type of language evolution causes the temporal bias in our data sets and how?
* _RQ3:_ Could domain adaptation models, large language models (LLMs) or a more robust data set help to mitigate the temporal bias in abusive language detection?
To answer these questions, we compare the predictive
performance between random and chronological data splits across data sets in different languages and with different temporal coverage. We also experiment with different transformer-based pre-trained language models (PLMs) using the original data set and a filtered data set. Finally, we present an in-depth analysis to investigate the factors for performance degradation.
## Related Work
### Bias in NLP
Bias refers to the presence of systematic and unfair favouritism or prejudice. In various contexts, bias can manifest as a skewed representation or inaccurate judgments that unfairly advantage or disadvantage certain individuals or groups [1]. Bias can arise from various sources such as data selection, annotation processes, models and research design. These biases can potentially lead to unfair or discriminatory outcomes through NLP applications [13]. For instance, biased language models might generate discriminatory content or fail to accurately understand and respond to underrepresented languages. Consequently, addressing and mitigating bias in NLP has become a critical research endeavour. Researchers are exploring techniques to measure and mitigate bias across diverse domains and languages [22, 23, 24]. Common debiasing methods include data reweighing and resampling, debiasing word embeddings, counterfactual data augmentation and bias fine-tuning [15, 16].
### Bias in Abusive Language Detection
Previous work has focused on identifying and mitigating different forms of social bias in abusive language detection, such as gender bias [20], dialect bias (e.g. African-Americans English) [17, 21, 22] and different forms of identity bias (e.g. transgender, black) [15, 16]. Moreover, Elsafoury et al. (2022) measured systematic offensive stereotyping bias (i.e., associating slurs or profane terms with specific groups of people, especially marginalized people) in different word embeddings.
However, little attention has been paid to temporal bias in abusive language detection. One exception is the work of Florio et al. (2020), who identified temporal bias in an Italian hate speech data set associated with immigrants. They investigated the impact of data size and time spans on temporal robustness by using two strategies, namely a sliding window model and an incremental model. Their results showed that adding training data temporally closer to the testing set greatly improved the performance but simply increasing the size of training data did not lead to performance improvement. Also, they found that offensive language in online contexts experienced rapid changes in topics over different time periods. Moreover, McGillivray et al. (2022) made use of time-dependent lexical features to detect abusive language effectively by training on smaller and older data. To facilitate this, they obtained a list of words for semantic change (i.e. acquired or lost an offensive meaning between 2019 and 2020). Their results showed that semantic change impacts abusive language detection and it is feasible to improve the detection by considering this change instead of depending on large labeled data sets. However, both work restricted themselves only to a single data set or a single language and did not explore other languages.
### Temporal Bias in Classification Tasks
Temporal bias occurs in classification tasks due to the variation and evolution of data patterns over time. This temporal variation can pose difficulties for machine learning models as patterns learned from one time period may not be applicable in another. Temporal bias was assessed in various classification tasks such as rumour detection [18], stance detection [19] and multi-label classification tasks related to legislation and biomedicine [13]. Mu et al. (2023) found that domain-adapted pre-trained language models are less sensitive to time and thus are beneficial to temporal gap mitigation; while Chalkidis and Sogaard (2022) proposed group-robust algorithms to reduce the temporal bias in multi-label classification. Moreover, Alkhalifa, Kochkina, and Zubiaga (2023) investigated the impact of word representations and machine learning model choice on temporal performance of various classification tasks such as stance detection and sentiment analysis.
## Data
We study two widely used English abusive data sets (_WASEEM_ and _FOUNTA_). We also study a Chinese data set (_JIANG_), a Spanish data set (_PEREIRA_), and an Italian data set (_SANGUINETTI_), in order to explore the impact of temporality on different languages. We choose these data sets because the creation time of each post is provided or accessible (via tweet IDs). Details of the data sets are shown in Table 1.
Waseem(Waseem and Hovy, 2016) is an English abusive data set focusing on sexism and racism. They collect the tweets by manually searching common terms related to religious, sexual, gender, and ethnic minorities, and by using the public Twitter search API. They combine these two methods to ensure that non-offensive tweets that contain clearly or potentially offensive words are also obtained. The annotations are created by manual experts and then reviewed by an additional gender study expert. We merge the original _sexism_ and _racism_ labels into a single _abusive_ label, and rename the _neither_ label as _non-abusive_.
Founta(Founta et al., 2018) is an English data set collected from Twitter containing two types of online abuse expressions: abusive and hateful. They randomly collect and sample the data, using text analysis and machine learning techniques to create the boosted set of tweets which are likely to belong to the two abusive classes. The data is then
annotated by crowdsourced workers. Similar to Leonardelli et al. (2021), we map the four labels in the data set into a binary offensive or non-offensive label. We exclude tweets labeled as _spam_, and merge _abusive_ and _hateful_ labels into _abusive_. The _normal_ label is renamed _non-abusive_.
**Jiang**Jiang et al. (2022) is a Chinese sexism data set collected from Sina Weibo (a Chinese microblogging platform). They first collect gender-related Weibos by searching keywords such as 'feminism' and 'gender discrimination'. Then they extract the comments that link to these Weibos and filter out the comments to produce the final data set, which is annotated by three PhD students.
**PEREIRA**Pereira-Kohatsu et al. (2019) is a Spanish hate speech data set annotated by experts. They randomly collect the data using the Twitter Rest API and filter it using seven dictionaries, where six of them represent different types of hate speech (e.g., race, gender) and the last one contains generic insults.
**SANGUINETTI**Sanguinetti et al. (2018) is an Italian hate speech data set targeting immigrants, Roma and Muslims. They obtain the tweets by selecting a set of neutral keywords related to each target. The data is annotated by a team of both expert and crowdsourced annotators.
### Data Filtering
Since there is no time information or tweet content in the FOUNTA and SANGUINETTI datasets, we re-obtain the tweets with their created time using the Twitter Academic API based on the provided tweet IDs. Given the provided tweet IDs and related texts in the PEREIRA corpus, we use them directly without re-collecting the data to avoid data loss as Twitter ids are time ordered1. For all data sets, we remove the duplicates and any tweets with no created time information.
Footnote 1: [https://developer.twitter.com/en/docs/twitter-ids](https://developer.twitter.com/en/docs/twitter-ids)
### Data Splits
We divide the data into training and testing sets using two strategies, namely random splits and chronological splits. The statistics of each data set are shown in Table 2. We can see that two of the data sets cover only a short period (FOUNTA contains many tweets but only covers 10 days, while PEREIRA covers 10 months but is fairly small in size) while all the other datasets span several years.
#### Random Splits
We randomly split the data sets into training and testing sets and keep class distribution the same as the original data sets.
#### Chronological Splits
We adopt a stratified chronological split strategy following the method in Mu, Bontcheva, and Aletras (2023). We first sort the abusive and non-abusive texts separately in chronological order. Then, we extract the first 70% of posts from abusive and non-abusive sets separately and combine them as the training set. Similarly, we combine the last 15% of posts from abusive and non-abusive sets as the testing set. The middle part of the two sets is merged into the validation set. In this way, the distribution of labels in each set is consistent with the original data.
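A minimal sketch of the stratified chronological split is given below. It assumes a pandas DataFrame with 'label' and 'created_at' columns; this is our reading of the procedure, not the original implementation.

```python
import pandas as pd

def chronological_split(df, train=0.70, val=0.15):
    """Sort each class by time, then take the first/middle/last slices."""
    parts = {"train": [], "val": [], "test": []}
    for _, group in df.groupby("label"):
        group = group.sort_values("created_at")
        n = len(group)
        n_tr, n_va = int(n * train), int(n * val)
        parts["train"].append(group.iloc[:n_tr])
        parts["val"].append(group.iloc[n_tr:n_tr + n_va])
        parts["test"].append(group.iloc[n_tr + n_va:])
    return {k: pd.concat(v).reset_index(drop=True) for k, v in parts.items()}
```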
## Predictive Models
**LR** We use Logistic Regression with bag-of-words features and L2 regularization as our baseline.
**BERT**Bidirectional Encoder Representations from Transformers; Devlin et al. (2018) is a transformer-based Vaswani et al. (2017) language model, which is pre-trained on large corpora, such as the English Wikipedia and the Google Books corpus. During pre-training, it uses a technique called masked language modeling (MLM) where it randomly masks some of the words in the input text, aiming to predict the masked word based on the context Devlin et al. (2018). We fine-tune the BERT model on abusive language detection by adding an output layer with a softmax activation function.
#### RoBERTa
is an extension of BERT trained on more data with different hyperparameters and has achieved better performance in multiple classification tasks Liu et al. (2019). We fine-tune RoBERTa in a similar way to BERT.
#### RoBERTa-hate-speech
This domain adaptation model2 is trained on 11 English data sets for hate and toxicity based on the RoBERTa-base model Vidgen et al. (2020).
Footnote 2: [https://rb.gy/k5x9t](https://rb.gy/k5x9t)
#### OA
We use the OpenAssistant (OA) 30B model developed by LAIONAI, which fine-tunes the LLaMA (Large Language Model Meta AI; Touvron et al., 2023) 30B model using the OA dataset. Since the original LLaMA model is
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Dataset** & **Language** & **Source** & **Time** & **Size** & **Labels** \\ \hline Waseem and Hovy (2016) & English & Twitter & 07-04-2013 - 06-01-2016 (33 months) & 16,914 & neither, sexism, racism \\ \hline Founta et al. (2018) & English & Twitter & 30-03-2017 - 08-04-2017 (10 days) & 80,000 & normal, spam, abusive, hateful \\ \hline Jiang et al. (2022) & Chinese & Weibo & 06-04-2012 - 26-06-2020 (8 years) & 8,969 & sexism, not sexism \\ \hline Pereira-Kohatsu et al. (2019) & Spanish & Twitter & 04-02-2017 - 22-12-2017 (10 months) & 6,000 & hate speech, not hate speech \\ \hline Sanguinetti et al. (2018) & Italian & Twitter & 26-02-2015 - 25-04-2017 (26 months) & 6,928 & hate speech, not hate speech \\ \hline \end{tabular}
\end{table}
Table 1: Data sets details.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**Dataset** & **Training** & **Validation** & **Testing** & **All** \\ \hline WASEEM & 12,214 & 2,156 & 2,536 & 16,906 \\ FOUNTA & 27,368 & 5,683 & 4,830 & 37,881 \\ \hline JIANG & 6,335 & 1,118 & 1,316 & 8,769 \\ PEREIRA & 4,335 & 765 & 900 & 6,000 \\ SANGUINETTI & 2,861 & 595 & 506 & 3,962 \\ \hline \end{tabular}
\end{table}
Table 2: Data sets statistics.
not fully open-source, we obtain the xor weights from HuggingFace3 and apply 8-bit quantisation techniques via BitsAndBytes [16] to decrease the inference memory requirements. We use OA for zero-shot classification where we provide the model with a sequence of texts and a prompt that describes what we want our model to do.
Footnote 3: [https://huggingface.co/OpenAssistant](https://huggingface.co/OpenAssistant)
Footnote 4: [https://www.cs.uc.edu/~huggingface.co/bert-base-chinese](https://www.cs.uc.edu/~huggingface.co/bert-base-chinese)
## Experimental Setup
Tweet Pre-ProcessingFor all data sets, we replace username mentions and hyperlinks with placeholder tokens, \(<\)USER\(>\) and \(<\)URL\(>\) respectively. For the Chinese data set, we use Jieba4, a Chinese text segmentation tool, to tokenize the texts.
Footnote 4: [https://huggingface.co/bert-base-chinese](https://huggingface.co/bert-base-chinese)
HyperparametersFor all the English data sets, we use RoBERTa-base5; for data sets in other languages, we use bert-base-chinese6, bert-base-spanish-wwm-cased7 and bert-base-italian-cased8 respectively, which are trained on big corpora of the corresponding language based on the BERT-base model. We fine-tune all models with learning rate \(l\) = 3e-6, selected from \(l\in\) {1e-4, 1e-5, 5e-6, 3e-6, 1e-6, 1e-7}. The batch size is set to 32 and the maximum sequence length is set to 128. All experiments are performed on a NVIDIA Titan RTX GPU with 24GB memory. We follow the official guidelines9 to run the 30B OA model on a local server with two NVIDIA A100 GPUs.
Footnote 6: [https://huggingface.co/bert-base-chinese](https://huggingface.co/bert-base-chinese)
Footnote 7: [https://ftb.gy.br/2ys](https://ftb.gy.br/2ys)
Footnote 8: [https://huggingface.co/dbmdz/bert-base-italian-cased](https://huggingface.co/dbmdz/bert-base-italian-cased)
Training and EvaluationWe split the data sets into training, validation and testing sets with a ratio of 70:15:15. During training, we choose the model with the smallest validation loss value over 12 epochs. We run all models five times with different random seeds for both random and chronological split strategies. We report predictive performance using the average Accuracy, Precision, Recall and macro-F1 scores. For OA, we only input the prompt (i.e. _identify if the following text is abusive or non-abusive_) and the same testing sets using two data split strategies.
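The reporting step can be summarised as follows: each configuration is run with five seeds and the metrics are averaged. The `train_and_predict` callable below is a placeholder for fine-tuning (or fitting) a model and predicting on the fixed testing set; the sketch only fixes the bookkeeping, not the models themselves.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate_over_seeds(train_and_predict, y_test, seeds=(0, 1, 2, 3, 4)):
    """Return mean Accuracy, Precision, Recall and macro-F1 over random seeds."""
    rows = []
    for seed in seeds:
        y_pred = train_and_predict(seed)
        p, r, f1, _ = precision_recall_fscore_support(
            y_test, y_pred, average="macro", zero_division=0)
        rows.append((accuracy_score(y_test, y_pred), p, r, f1))
    return np.mean(rows, axis=0)
```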
## Results
The predictive results are shown in Table 3 (English data sets)10 and Table 4 (data sets in Chinese, Spanish and Italian). Values in the _Performance Drop_ column are calculated by subtracting the results of chronological splits from that of random splits, where \(\downarrow\) indicates a positive value and \(\uparrow\) indicates a negative value. In other words, performance drop refers to the performance decreases using chronological splits compared to random splits with the same model.
\begin{table}
\begin{tabular}{|c|l|l|l|l|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Splits**} & \multicolumn{4}{c|}{**JIANG**} & \multicolumn{4}{c|}{**PEREIRA**} & \multicolumn{4}{c|}{**SANGUINETTI**} \\ \cline{3-13} & & Acc & P & R & F1 & Acc & P & R & F1 & Acc & P & R & F1 \\ \hline \multirow{2}{*}{**LR**} & _Random_ & 76.14 & 73.24 & 72.81 & 73.01 & 77.00 & 70.35 & 70.95 & 70.64 & 86.22 & 73.93 & 77.19 & 75.36 \\ \cline{2-13} & _Chronological_ & 71.50 & 68.28 & 68.76 & 68.49 & 80.67 & 76.08 & 69.86 & 71.83 & 85.21 & 71.71 & 71.71 & 71.71 \\ \cline{2-13} & Performance Drop & 4.641 & 4.96\(\downarrow\) & 4.05\(\downarrow\) & 4.521 & 3.67\(\uparrow\) & 5.73\(\uparrow\) & 1.09\(\downarrow\) & **1.19\(\uparrow\)** & 1.01\(\uparrow\) & 2.221 & 5.48 & **3.65\(\uparrow\)** \\ \hline \multirow{2}{*}{**BERT**} & _Random_ & 80.68 & 78.95 & 76.65 & 77.52 & 80.67 & 75.30 & 72.31 & 73.44 & 88.07 & 78.09 & 72.69 & 74.85 \\ \cline{2-13} & _Chronological_ & 78.66 & 76.28 & 77.80 & 76.81 & 82.78 & 83.15 & 69.72 & 72.67 & 84.87 & 70.22 & 63.08 & 65.13 \\ \cline{2-13} & Performance Drop & 2.02\(\downarrow\) & 2.67\(\uparrow\) & 1.15\(\uparrow\) & **0.71\(\downarrow\)** & 2.11\(\uparrow\) & 7.85\(\uparrow\) & 2.59\(\downarrow\) & 0.77\(\uparrow\) & 3.20\(\uparrow\) & 7.87\(\uparrow\) & 9.61\(\downarrow\) & 9.72\(\downarrow\) \\ \hline \end{tabular}
\end{table}
Table 4: Model predictive performance on a Chinese, Spanish and Italian data set using random and chronological splits. The smallest performance drops (or rise) across models are in bold.
\begin{table}
\begin{tabular}{|c|l|l|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Splits**} & \multicolumn{4}{c|}{**WASEEM**} & \multicolumn{4}{c|}{**FOOTNOTE:**} \\ \cline{3-13} & & Acc & P & R & F1 & Acc & P & R & F1 \\ \hline \multirow{2}{*}{**LR**} & _Random Splits_ & 81.94 & 79.27 & 79.08 & 79.18 & 92.54 & 83.66 & 84.69 & 84.16 \\ \cline{2-13} & _Chronological Splits_ & 74.88 & 76.93 & 62.69 & 63.15 & 93.26 & 85.56 & 85.28 & 85.42 \\ \cline{2-13} & Performance Drop & 7.06\(\downarrow\) & 2.33\(\downarrow\) & 16.39\(\downarrow\) & 16.03\(\downarrow\) & 0.72\(\uparrow\) & 1.90\(\uparrow\) & 0.59\(\uparrow\) & **1.26\(\uparrow\)** \\ \hline \multirow{2}{*}{**RoBERTa**} & _Random Splits_ & 85.73 & 84.10 & 82.65 & 83.26 & 94.95 & 90.98 & 86.43 & 88.49 \\ \cline{2-13} & _Chronological Splits_ & 76.77 & 80.54 & 65.20 & 66.33 & 94.81 & 91.16 & 85.45 & 87.99 \\ \cline{2-13} & Performance Drop & 8.96\(\downarrow\) & 3.56\(\downarrow\) & 17.45\(\downarrow\) & 16.93\(\downarrow\) & 0.14\(\downarrow\) & 0.18\(\uparrow\) & 0.98\(\uparrow\) & 0.50\(\downarrow\) \\ \hline \multirow{2}{*}{**RoBERTa-hate-speech**} & _Random Splits_ & 89.20 & 87.50 & 87.82 & 87.64 & 96.42 & 93.16 & 91.11 & 92.09 \\ \cline{2-13} & _Chronological Splits_ & 81.58 & 85.99 & 72.21 & 74.71 & 96.07 & 92.03 & 90.79 & 91.39 \\ \cline{2-13} & Performance Drop & 7.62\(\downarrow\) & 1.51\(\downarrow\) & 15.61\(\downarrow\) & **12.93\(\downarrow\)** & 0.35\(\downarrow\) & 1.13\(\downarrow\) & 0.32\(\downarrow\) & 0.70\(\downarrow\) \\ \hline \multirow{2}{*}{**OA**} & _Random Splits_ & 64.47 & 68.96 & 70.88 & 64.26 & 80.43 & 68.11 & 81.93 & 70.54 \\ \cline{2-13} & _Chronological Splits_ & 72.36 & 72.53 & 75.89 & 71.48 & 80.75 & 68.24 & 81.83 & 70.77 \\ \cline{1-1} \cline{2-13} & Performance Drop & 7.89\(\uparrow\) & 3.57\(\uparrow\) & 5.01\(\uparrow\) & 7.22\(\uparrow\) & 0.32\(\uparrow\) & 0.13\(\uparrow\) & 0.10\(\uparrow\) & 0.23\(\uparrow\) \\ \hline \end{tabular}
\end{table}
Table 3: Model predictive performance on English data sets using random and chronological splits. The smallest F1 performance drop (or rise) across models is in bold.
Random vs. chronological splitsIn general, we observe performance degradation using chronological splits compared to random splits across all pretrained language models (PLMs). This is in line with previous work on other classification tasks such as document classification [1], stance detection [12] and rumour detection [12]. Furthermore, the longer the time span, the greater the performance degradation. For the data sets with long time spans, we observe large drops (16.93\(\downarrow\) F1 on WASEEM using RoBERTa and 9.72\(\downarrow\) F1 on SANGUINETTI using BERT); while for the data sets with short time spans we observe only small drops (0.5\(\downarrow\) F1 on FOUNTA using RoBERTa and 0.77\(\downarrow\) F1 on PEREIRA using BERT).
However, although the performance of LR is not as good as that of PLMs, it has a smaller performance drop (or even performance rise) on data sets with small time spans (e.g., 1.26\(\uparrow\) F1 on FOUNTA compared with 0.50\(\downarrow\) F1 using RoBERTa).
Interestingly, we observe only a slight performance drop on the data set of JIANG (0.71\(\downarrow\) F1 using BERT) despite the eight-year time span. This may be due to the differences in the expression of abusive language online in Chinese and English (JIANG vs. WASEEM) or different collection methods between these two data sets. Another speculation is that JIANG only focuses on sexist abuse (sexism or not) which is one of the domains of abusive language. In this case, it covers fewer topics than other abusive data sets, which makes the performance less affected by temporalities (we will further investigate it in the following section).
Vanilla vs. domain adaptation modelsWe compare the vanilla RoBERTa model with the domain adaptation model (RoBERTa-hate-speech) on two English data sets. We found that RoBERTa-hate-speech not only outperforms RoBERTa across two data sets using both random and chronological splits as expected but also has a smaller performance drop on WASEEM (12.93\(\downarrow\)), where tweets span three years. This suggests that domain adaptation models can help mitigate temporal bias in abusive language detection, especially over long time spans. However, there are no domain-specific models for other languages, suggesting that further efforts are needed to develop such models.
Zero-Shot ClassificationSince OA is trained after the years in which the WASEEM and FOUNTA data sets were collected, we hypothesize that the difference of predictive results between two data split strategies using OA will be negligible (e.g. smaller than 1). The performance drop of FOUNTA is as expected (0.23\(\uparrow\) F1); while the F1 performance on WASEEM using chronological splits is 7.22 higher than using random splits. We speculate that the large performance difference between these two splitting ways on WASEEM is due to the more explicit abusive content in the testing set using chronological splits as temporalities are less likely to be an influencing factor for OA. To investigate this, we calculate the swearing rates (the percentage of tweets containing at least one swear word among all tweets) of these two testing sets using an English swearword list from Wiktionary (words considered taboo and vulgar or offensive)11. The swearing rate of WASEEM using random and chronological splits is 5.60% and 8.40%; while that of FOUNTA is 4.64% and 5.51% respectively. The performance of OA is more likely to be influenced by the explicitness of abusive expressions instead of temporal factors based on the results of two English data sets. However, more abusive data sets are needed to make a more robust conclusion.
Footnote 11: [https://en.wiktionary.org/wiki/Category:English_swear_words](https://en.wiktionary.org/wiki/Category:English_swear_words)
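The swearing rate is straightforward to compute; in the sketch below the word list is assumed to have been saved locally from the Wiktionary category cited above.

```python
import re

def swearing_rate(tweets, swear_words):
    """Percentage of tweets containing at least one word from the list."""
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(w) for w in swear_words) + r")\b",
        re.IGNORECASE)
    hits = sum(1 for t in tweets if pattern.search(t))
    return 100.0 * hits / len(tweets)

# swear_words = [w.strip() for w in open("english_swear_words.txt") if w.strip()]
# print(swearing_rate(test_tweets, swear_words))
```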
We further explore whether temporal bias has a greater influence on abusive texts or non-abusive texts. Table 5 shows the performance of each class as well as the overall performance on five data sets using their best-performing models (_RoBERTa-hate-speech_ for English data sets and _BERT_ for other language data sets). In general, the performance drop in abusive classes is larger than that in non-abusive classes. Also, the larger the time span of the data sets, the greater the difference in performance degradation between abusive and non-abusive classes (e.g. F1 1.8\(\uparrow\) vs. 27.4\(\downarrow\) for PEREIRA with ten-month time span and F1 1.6\(\downarrow\) vs. 17.8\(\downarrow\) for SANGUINETTI with two-year time span). However, JIANG is an exception where F1 scores of abusive classes increase by 1.2. We also notice that the degradation of precision for non-abusive content is larger than that of recall using chronological splits (e.g. 3.2\(\downarrow\) precision and 0.4\(\downarrow\) recall in SANGUINETTI); while for abusive content, the performance drop in precision and recall is reversed (e.g. 5.4\(\uparrow\) precision and 43.8\(\downarrow\) in recall in WASEEM). This indicates that by using chronological splits, non-abusive texts are more likely to be detected; fewer abusive texts can be detected but the detected ones are more likely to be correct.
## Analysis
### Text Similarities
We hypothesize that the drop in performance is due to a larger difference between training and testing sets using chronological splits. To verify this, we use three methods to
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline \multicolumn{6}{|c|}{Random Split} & \multicolumn{3}{c|}{Chronological Split} \\ \hline & Precision & Recall & F1 & Precision & Recall & F1 \\ \hline \multicolumn{6}{|c|}{WASEEM} \\ \hline Non-abusive & 92.6 & 91.6 & 92.0 & 77.6 (15.0\(\downarrow\)) & 97.4 (5.8\(\downarrow\)) & 86.6 (5.4\(\uparrow\)) \\ \hline Abusive & 82.6 & 84.0 & 83.0 & 88.0 (5.4\(\uparrow\)) & 40.2 (42.8\(\downarrow\)) & 35.6 (27.4\(\downarrow\)) \\ \hline Overall & 87.5 & 87.8 & 87.6 & 86.9 (15.1) & 72.2 (5.6\(\downarrow\)) & 74.7 (12.9\(\downarrow\)) \\ \hline \multicolumn{6}{|c|}{POUNTA} \\ \hline Non-abusive & 97.6 & 98.2 & 98.0 & 96.3 (0.8\(\downarrow\)) & 98.0 (0.2\(\downarrow\)) & 97.0 (1.0\(\downarrow\)) \\ \hline Abusive & 88.6 & 83.8 & 86.4 & 86.2 (2.4\(\uparrow\)) & 77.4 (6.4\(\downarrow\)) & 81.6 (4.8\(\downarrow\)) \\ \hline Overall & 93.2 & 91.1 & 92.1 (1.2\(\downarrow\)) & 90.8 (0.3\(\downarrow\)) & 91.4 (0.7\(\downarrow\)) \\ \hline \multicolumn{6}{|c|}{IMAG} \\ \hline Non-abusive & 83.2 & 88.8 & 85.8 & 86.2 (3.0\(\uparrow\)) & 80.6 (8.2\(\downarrow\)) & 83.2 (2.6\(\downarrow\)) \\ \hline Abusive & 74.6 & 64.2 & 69.0 & 66.2 (8.4\(\uparrow\)) & 74.8 (0.6\(\uparrow\)) & 70.2 (1.2\(\downarrow\)) \\ \hline Overall & 79.0 & 76.7 & 77.5 & 76.3 (27.1) & 77.8 (1.1\(\downarrow\)) & 76.8 (0.7\(\downarrow\)) \\ \hline \multicolumn{6}{|c|}{PERREIRA} \\ \hline Non-abusive & 85.0 & 89.6 & 87.4 & 82.8 (2.2\(\uparrow\)) & 97.0 (7.4\(\uparrow\)) & 89.2 (1.8\(\uparrow\)) \\ \hline Abusive & 65.6 & 55.0 & 59.6 & 83.6 (16.0\(\downarrow\)) & 42.4 (12.6\(\downarrow\)) & 55.8 (0.8\(\downarrow\)) \\ \hline Overall & 75.3 & 72.3 & 73.4 & 83.2 (7.9\(\downarrow\)) & 69.7 (2.6\(\downarrow\)) & 72.7 (0.7\(\downarrow\)) \\ \hline \multicolumn{6}{|c|}{SANOUNETTI} \\ \hline Non-abusive & 91.4 & 95.0 & 95.2 & 88.2 (3.2\(\downarrow\)) & 94.6 (0.4\(\uparrow\)) & 91.6 (1.6\(\downarrow\)) \\ \hline Abusive & 64.8 & 50.4 & 56.8 & 52.2 (12.6\(\downarrow\)) & 31.4 (19.9\(\downarrow\)) & 93.0 (17.8\(\downarrow\)) \\ \hline Overall & 78.1 & 72.7 & 74.9 & 70.2 (7.9\(\downarrow\)) & 63.1 (9.6\(\downarrow\)) & 65.1 (9.8\(\downarrow\)) \\ \hline \end{tabular}
\end{table}
Table 5: Model predictive performance of each class as well as the overall performance using random and chronological splits.
calculate text similarities: (a) Jaccard similarity coefficient; (b) DICE coefficient [14] and (c) overlap coefficient (OC).
**Jaccard similarity coefficient** is defined as the size of the intersection divided by the size of the union of two sets, A and B,
\[J(A,B)=\frac{|A\cap B|}{|A\cup B|} \tag{1}\]
**DICE coefficient** is defined as twice the size of the intersection divided by the sum size of two sets, A and B,
\[DICE(A,B)=\frac{2*|A\cap B|}{|A|+|B|} \tag{2}\]
**Overlap coefficient** is defined as the size of the intersection divided by the smaller size of the two sets, A and B,
\[OC(A,B)=\frac{|A\cap B|}{\min(|A|,|B|)} \tag{3}\]
where A and B denote the sets of distinct words from the training and test sets, respectively. \(|A\cap B|\) and \(|A\cup B|\) indicate the number of distinct words that appear in the intersection and union of the two sets, respectively. When the two sets share no vocabulary, all three coefficients are zero, while if they are identical, all three coefficients are 1.
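All three coefficients reduce to simple set operations on the training and testing vocabularies, as in the sketch below; whitespace tokenization is used here purely for illustration.

```python
def vocabulary(texts):
    return {token for text in texts for token in text.lower().split()}

def vocabulary_similarity(train_texts, test_texts):
    a, b = vocabulary(train_texts), vocabulary(test_texts)
    inter = len(a & b)
    return {
        "jaccard": inter / len(a | b),            # Equation 1
        "dice": 2 * inter / (len(a) + len(b)),    # Equation 2
        "overlap": inter / min(len(a), len(b)),   # Equation 3
    }
```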
Table 6 shows the similarity coefficient between training and testing sets using _random_ and _chronological_ splits. Firstly, we notice that values from three similarity calculation methods drop across all data sets, indicating that using chronological splits leads to a larger difference between training and testing sets. Secondly, the longer the time span of data sets, the larger the similarity drop. For example, OC of WASEEM (three years) drops 0.044 while that of FOUNTA (one week) drops 0.004. Also, there tends to be a positive correlation between the magnitude of similarity reduction and the performance drop. However, considering the minor decline (drop 0.71 F1) in the predictive performance of JIANG (eight years), the text similarity drop is not consistent (e.g. OC drops 0.31). This can be explained by the fact that text similarity calculation is granular down to words; while topics might be limited (number, variety) in a sexist data set (i.e. JIANG).
We further compare WASEEM and JIANG because their time spans are both long (three years vs. eight years) while their predictive performance drops vary widely (16.93 vs. 0.71 F1).
For WASEEM, most abusive tweets in the testing set using chronological splits involve an Australian TV show, My Kitchen Rules (MKR) (e.g. _#mkr_, _#cuntandandre_, _#kateandre_, _kat_, _andre_, _annie_), which is one of the queried terms for data collection. Our speculation is that the discussion about this show began to emerge during the later timeframe of the data set (within the time covered by the testing set when using chronological splits). However, there are hardly any new topics in the testing set when using random splits (e.g. _countless_, _lower_, _forget_).
For JIANG, testing sets using both split strategies mainly contain basic or gender-related terms (e.g. _more_, _not_, _misogyny_, _female manager_) and do not involve terms related to specific events. This is also correlated to how they collect the data: searching gender-related keywords such as 'feminism' and 'gender discrimination' for sexist content instead of using specific events as keywords. This suggests that collecting data using generic terms as keywords instead of terms associated with current hot events is likely to introduce less temporal bias.
### Topic Distribution
We also explore topic distribution over time across two English data sets. We first use a topic modelling technique, BERTopic12, to extract the 10 most important topic groups in a data set. Then we manually remove repeated or commonly used words (e.g. 'this','said') in these topic groups and combine similar groups into one group (e.g. combining 'women','men','she', and 'girls' into _gender-related_ group). The generated topic groups of each data set are shown as follows13:
Footnote 12: [https://github.com/MaartenGr/BERTopic](https://github.com/MaartenGr/BERTopic)
Footnote 13: We also tried to extract topics from the data sets in other languages using BERTopic, but the results were not good.
**WASEEM:** Group 1: {_sexist_, _women_, _men_, _bitch_, _her_, _she_, _girls_, _female_, _woman_, _notsexist_}; Group 2: {_kat_, _mkr_, _face_, _mkr2015_, _karma_}; Group 3: {_drive_, _drivers_, _driving_, _driver_}; Group 4: {_blondes_, _blonde_, _pretty_, _hot_, _dumb_}; Group 5: {_israel_, _hamas_, _palestinians_, _israelis_, _palestinian_, _palestine_, _gays_, _destroy_, _muslims_}; Group 6: {_sports_, _annocupers_, _commutators_, _annoucer_, _football_, _stand_, _commentator_}; Group 7: {_feminism_, _feminists_, _feminist_, _equality_, _movement_, _hypocrisy_, _rights_, _emma_, _modern_}; Group 8: {_funny_, _cometalons_, _comedian_, _jokes_}.
**FOUNTA:** Group 1: {_trump_, _president_, _obama_, _voted_, _republicans_, _idiot_}; Group 2: {_nigga_, _niggas_}; Group 3: {_hate_, _bitch_, _bad_, _fucking_, _bitches_, _she_}; Group 4: {_syria_, _assad_, _syrian_, _chemical_, _trump_, _missiles_, _attack_, _obama_, _war_, _refugees_}; Group 5: {_pizza_, _eat_, _pineapple_, _digusting_, _food_, _home_, _taco_}; Group 6: {_vstrellemania_, _wwe_, _match_, _rawreflemania_, _westlemania_3}.
Figure 1 shows the topic distributions over time of these two data sets. For WASEEM, Groups 2 (MKR TV show related), 5 (race and religion related) and 7 (feminism related) appear only after 2015, which is also the starting time of the testing data set using chronological splits. This results in the models barely seeing these words in the training set and a lack of knowledge of these three topics during training, especially for Group 2. Thus, it is easier for models to fail when predicting text involving these topics using chronological splits. All topic groups are evenly distributed in FOUNTA except for Group 6 (wrestling match related). However, Topic Group 6 rarely appears in the testing set using chronological splits, so it is less likely to influence the performance.
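The topic groups listed above were obtained with BERTopic before the manual pruning step. A minimal sketch of that extraction is shown below; the exact parameters (e.g. reducing the model to roughly ten topics) are our assumptions rather than a record of the original run, and `tweets` stands for the list of raw tweet texts.

```python
from bertopic import BERTopic

topic_model = BERTopic(language="english", nr_topics=10)
topics, probs = topic_model.fit_transform(tweets)

for topic_id in topic_model.get_topic_info()["Topic"]:
    if topic_id == -1:                    # -1 collects outlier documents
        continue
    words = [word for word, _ in topic_model.get_topic(topic_id)]
    print(topic_id, words[:10])           # candidate words for manual cleaning
```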
### Filtered Data Set
We explore whether removing words related to specific topics or events will enhance the robustness of the models when predicting abusive content. We hypothesize that by removing these words the model performance will drop slightly while the difference between random and chronological splits will become smaller. We experiment with WASEEM as its performance drop leaves the most room for improvement. We filter the data set by excluding three types of words: (1) words in all eight groups extracted by BERTopic **(D1)**; (2) words
Figure 1: Topic distribution over time.
selected by attention mechanisms (**D2**) and (3) the union of the words extracted by (1) and (2) (**D3**). For (2), we first use the RoBERTa-hate-speech model to produce attention scores that represent a probability distribution over each text. We then manually remove topic-related tokens among the top five tokens with the highest probability in each abusive tweet. Most of the removed tokens are names or hashtags related to the cooking TV show.
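A sketch of how the candidate tokens for D2 can be surfaced is given below: attention weights from the model's last layer are averaged over heads and query positions, and the five highest-scoring tokens of each abusive tweet are returned for manual review. The pooling choice and the checkpoint name are our assumptions, as the text does not pin them down.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "facebook/roberta-hate-speech-dynabench-r4-target"   # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, output_attentions=True)

def top_tokens(text, k=5):
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**enc)
    # average the last layer's attention over heads and query positions
    att = out.attentions[-1].mean(dim=1).mean(dim=1).squeeze(0)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    ranked = sorted(zip(tokens, att.tolist()), key=lambda pair: -pair[1])
    return [tok for tok, _ in ranked[:k]]
```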
The results of filtered data sets are shown in Table 9. Similar to the previous experiment, we run five times for each method. First, all three strategies for removing topic-related words hurt performance in most cases, especially for chronological splits (e.g. 87.64 vs. 84.75 F1 using random splits, 74.71 vs. 72.11 F1 using chronological splits). However, the performance on D2 using chronological splits improves by 0.32 F1. Second, using more robust data sets leads to more minor performance drops. We achieve the smallest performance drop (9.65\(\downarrow\) F1) using D3. Also, using D2 achieves a comparable performance drop but only slightly hurts the performance. This suggests that filtering out specific topic-related words in a data set (i.e. a more robust data set) helps reduce temporal bias.
### Error Analysis
Additionally, we perform an error analysis on two data sets containing sexist abuse, WASEEM and JIANG, using chronological splits. For WASEEM, we found that most errors happen when content involves the TV show (MKR). Also, when names from the show are mentioned, it is easy for models to misclassify the texts as non-abusive. We guess this is because the model cannot associate names in the testing set with male, female (gender-related) or abusive if it has not seen those names in the training set. However, the annotators of this data set have prior knowledge of this TV show and its characters. Thus, they are able to classify dissatisfaction or hatred toward specific characters as _sexist_. In the following two examples, tweets belonging to _abusive_ are misclassified as _non-abusive_ (names are highlighted in bold)14:
Footnote 14: Note that WASEEM is originally a sexist and racist data set, so other abusive content will be labeled as neither (_non-abusive_ in our paper).
T1: _**Kat** on #mkr is such a horrible person.. I wish **Kat** and **Andre** would just get eliminated already._
T2: _#MKR-I am seriously considering not watching just because I have to see **Kats** face. God. I want to slap it with a spatula!_
However, when gender-related words also appear in the content, models are more likely to classify them correctly. The following tweets are correctly classified as _abusive_:
T3: _#katandandre gaaaaaah I just want to slap **her** back to WA #MKR_
T4: _#MKR **Girls**, thank you for filling the slapper quotient on this years series... we no longer have a need for **bitchy blondes**! Au Revoir!_
For JIANG, it is easy for models to fail to understand the actual meaning of a text without knowing traditional Chinese cultural viewpoints related to gender and marriage (e.g. some people value sons more than daughters). The following text, which belongs to _abusive_ (originally labeled sexism), is wrongly classified as _non-abusive_:
T5: _#1_#2_#3_#4_#5_#6_#7
## Limitations
This work aims to investigate the impact and causes of temporalities across different abusive data sets. In our work, we can only evaluate limited data sets that provide time information (e.g. 2 English ones, 2 data sets spanning more than 3 years), which limits control experiments for more sound comparisons. Also, all debiasing methods can only be applied to English abusive data sets due to the imperfect implementation of techniques in other languages (i.e. domain adaptation models, BERTopic, OA). Moreover, our studies on temporal bias only explore topic changes and lack a comprehensive understanding of language evolution over time.
## Conclusion
In this work, we investigate the impact of temporal bias on abusive language detection. We compare the predictive results using two data split methods (i.e. random and chronological splits) across different data sets (_RQ1_). The results indicate that temporal bias has a larger influence on data sets with a larger time span and collected using keywords, especially specific event-related keywords. Languages (or culture) may also be a factor but due to insufficient data sets, we can not draw concrete conclusions. We also conduct extensive analysis including text similarities, feature analysis and topic distribution to explore the causes of temporalities (_RQ2_). We found that performance degradation is mostly because of topic changes in our data sets. To provide a complete answer to _RQ3_, we filter a data set by removing topic-related words that appear in abusive texts. The predictive results suggest that using domain adaptation models and LLMs and training on a more robust data set can effectively reduce temporal bias in abusive language detection.
In the future, we plan to study temporal bias patterns in abusive data sets across different languages or platforms, aiming to understand the importance of considering the specific nature of the target variable when collecting the data sets and developing models. It can also be expanded to other text classification tasks.
## Ethics Statement
This work has received ethical approval from our Research Ethics Committee. All datasets are acquired either through the URLs provided in the original papers or by requesting them from the respective authors. Note that we did not gather any fresh data from Twitter for this study. Additionally, we can verify that the data has been completely anonymized prior to its utilization in the Language Model Inference process.
## Acknowledgements
This research is supported by a UKRI grant ES/T012714/1 ("Responsible AI for Inclusive, Democratic Societies: A cross-disciplinary approach to detecting and countering abusive language online").
|
2309.15258 | Efficient Quasiparticle Determination beyond the Diagonal Approximation
via Random Compression | Calculations of excited states in Green's function formalism often invoke the
diagonal approximation, in which the quasiparticle states are taken from a
mean-field calculation. Here, we extend the stochastic approaches applied in
the many-body perturbation theory and overcome this limitation for large
systems in which we are interested in a small subset of states. We separate the
problem into a core subspace, whose coupling to the remainder of the system
environment is stochastically sampled. This method is exemplified on computing
hole injection energies into CO$_2$ on an extended gold surface with nearly
3000 electrons. We find that in the extended system, the size of the problem
can be compressed up to $95\%$ using stochastic sampling. This result provides
a way forward for self-consistent stochastic methods and determining Dyson
orbitals in large systems. | Annabelle Canestraight, Xiaohe Lei, Khaled Ibrahim, Vojtech Vlcek | 2023-09-26T20:33:04Z | http://arxiv.org/abs/2309.15258v1 | # Efficient Quasiparticle Determination beyond the Diagonal Approximation via Random Compression
###### Abstract
Calculations of excited states in Green's function formalism often invoke the diagonal approximation, in which the quasiparticle states are taken from a mean-field calculation. Here, we extend the stochastic approaches applied in the many-body perturbation theory and overcome this limitation for large systems in which we are interested in a small subset of states. We separate the problem into a core subspace, whose coupling to the remainder of the system environment is stochastically sampled. This method is exemplified on computing hole injection energies into CO\({}_{2}\) on an extended gold surface with nearly 3000 electrons. We find that in the extended system, the size of the problem can be compressed up to 95% using stochastic sampling. This result provides a way forward for self-consistent stochastic methods and determining Dyson orbitals in large systems.
pacs: 71.10.-m, 71.10.-k, 71.10.-k _Introduction_ Single particle states are frequently used in the study of excitation phenomena such as photoionization, electron injection, and generally optical transitions[1; 2; 3; 4; 5; 6; 7; 8; 9]. The physical interpretation of such single particle states often depends on the specific type of observables[7; 10]. In particular, Dyson orbitals, which correspond to the probability amplitude distribution of a specific electron or hole excitation (i.e., quasiparticle state), are directly accessible via orbital tomography and provide insights into the relation between energies and real-space distribution of single particle excitation[11; 12]. This has fundamental implications for chemistry - e.g., hybridization of quasiparticles on surfaces governs the propensity for direct injection of an electron [9]. These are just a few compelling reasons to account for the physically meaningful orbital distortions, especially for problems concerning (chemical) interfaces.
In practice, however, single-particle states for interfacial systems are typically taken from the Density Functional Theory (DFT) [13; 14; 15], as the cost of higher level theory is too high. While DFT can handle extremely large systems[16], these calculations can not, even in principle, yield quasiparticle (QP) energies or the Dyson orbitals[7; 17]. A natural and widely applied extension, especially in condensed matter problems, is application of the Many Body Perturbation Theory (MBPT) employing Green's function formalism[7; 18; 19; 20]. In particular, the \(GW\) approximation, which truncates the correlation expansion to non-local charge density fluctuations, has emerged as arguably the most popular approach[21; 22] and higher order corrections emerged recently[23; 24; 25; 26]. Its self-consistent solution yields both QP energies and the Dyson orbitals [27; 28; 29]. However, it is common to apply \(GW\) approach as a one-shot correction, \(G_{0}W_{0}\), employing the Kohn-Sham Green's function \(G_{0}\) and the screened coulomb interaction \(W_{0}\) derived from the underlying Kohn Sham DFT solutions. Despite its approximate nature, \(G_{0}W_{0}\) often provides good estimates of band gaps[30; 31; 32; 33; 34]. The use of one-shot corrections has been largely motivated by the computational cost, which scales as \(\mathcal{O}(N^{4})\) with the number of electrons in conventional implementations[35; 36]. The computational cost has been significantly decreased by stochastic sampling approaches in \(GW\) (and post-\(GW\)) to be nearly linear; 1000's of states can thus be studied[37; 38; 39; 40; 41]. However, even in the stochastic \(GW\), "updating" the single-particle basis (i.e., finding the Dyson orbitals) is difficult[42] and, in practice, usually avoided[43]. Routine calculations of QP orbitals in realistic systems with thousands of electrons are still elusive. This is true even if one is, in principle, interested in treating a _small subset_ of states, as exemplified in this work (see below).
Here, we tackle this problem and present a scheme without the diagonal approximation for realistic nanoscale systems. This stochastic framework is exemplified for CO\({}_{2}\) molecule on a large Au slab. For this problem, the surface contributions to the orbitals are sampled, drastically reducing the cost of QP calculations. This
method divides the system into a set of states in a "core" subspace, treated by standard stochastic MBPT, and a rest space, for which additional sampling is introduced. This step is combined with a search over the fixed-point solutions of the frequency-dependent QP Hamiltonian, which is independent of the basis representation and thus enables the use of random vectors.
We apply these methods to a prototypical system of a small molecule on a plasmonic surface (CO\({}_{2}\) on Au illustrated in the inset in Fig. 1). In the practical demonstration for an extended Au (111) surface with 270 atoms (2986 electrons), we found convergence in the hybridized HOMO energy with a 95% rank compression compared to evaluation in the full canonical orbital basis. This success provides a way to use costly high-level theories to study realistic chemical systems.
_Formalism_ The time-ordered Green's function (GF) contains information about the quasiparticle (QP) energy spectrum and lifetimes, and it corresponds to the probability amplitude of a QP propagation between two space-time points \(\mathbf{r},t\) and \(\mathbf{r}^{\prime},t^{\prime}\). In the Lehmann representation, it is expressed as
\[G(\mathbf{r},\mathbf{r}^{\prime},\omega)=\sum_{n}\bigg{[}\frac{\psi_{n}( \mathbf{r})\psi_{n}(\mathbf{r}^{\prime})^{*}}{\omega-\varepsilon_{n}-i\eta} \bigg{]}, \tag{1}\]
where the Dyson orbitals, \(\psi_{n}(\mathbf{r})=\langle N-1,n|\hat{\psi}(\mathbf{r})|N\rangle\), are obtained from the \(N\)-particle ground state and the \(n^{\text{th}}\) excited state of the \(N-1\) particle system, where \(\hat{\psi}(\mathbf{r})\) is the field operator. The poles of the GF are located at the QP energies, \(\varepsilon_{n}\), here corresponding to the charge removal [7]. Charge addition is treated analogously. The GF poles are conveniently expressed as solutions to a non-linear eigenvalue problem for an effective Hamiltonian obtained by downfolding interactions with the system[7]:
\[\hat{H}_{QP}(\omega)\ket{\psi}=\omega\ket{\psi} \tag{2}\]
In practice, the QP Hamiltonian is divided into a static and local term, \(H_{0}\), which typically contains all one-body contributions, while a space-time non-local portion is represented by the self-energy operator \(\hat{\Sigma}\)[22]. The latter is approximated by selected types of interaction diagrams (and their resummation). As \(\hat{\Sigma}\) is conceptually equivalent to the exchange-correlation potential applied in the Kohn-Sham density functional theory (KS DFT), the QP Hamiltonian is practically constructed as a perturbative correction on top of such a mean-field starting point:
\[\hat{H}_{QP}(\omega)=\hat{H}_{0,\text{KS}}-\hat{V}_{xc}+\hat{\Sigma}(\omega), \tag{3}\]
where \(\hat{H}_{0,\text{KS}}\) is the KS DFT Hamiltonian.
Further, the "one-shot" correction corresponds to:
\[\Sigma(\mathbf{r},\mathbf{r}^{\prime},\omega)=i\int\frac{d\omega^{\prime}}{2 \pi}G_{0}(\mathbf{r},\mathbf{r}^{\prime},\omega+\omega^{\prime})W_{0}(\mathbf{ r},\mathbf{r}^{\prime},\omega^{\prime}), \tag{4}\]
where \(G_{0}\) has poles at the DFT Kohn-Sham eigenvalues, \(\varepsilon_{0}\), and \(W_{0}\) is the screened Coulomb interaction. The self-consistency requires repeated construction of \(\Sigma\) and re-evaluation of Eq. 2; multiple flavors of self-consistent approaches have been developed [27; 28]. Typically, the convergence pattern is smooth. If the KS DFT single-particle states are close to the Dyson orbitals, the "one-shot" correction provides good estimates of QP energies, yet the quality of the mean-field eigenstates is not _a priori_ known.
A step beyond this practice is to diagonalize \(H_{QP}\) in Eq. 2 in the orbital basis, yielding Dyson orbitals (in the first iteration) and updated one-shot QP energies in the \(GW\) approximation[7]. Note that, in principle, the nonlinear problem in Eq. 2 holds for multiple values of \(\omega\) associated with satellite features [44; 23; 45]. In this work, we will focus only on the primary QP peaks, i.e., we seek a single solution to the QP Hamiltonian in the vicinity of \(\varepsilon_{0}\) and look for the fixed point solutions to \(\omega_{i}=\bra{\phi_{i}}\hat{H}_{QP}[\omega_{i}]\ket{\phi_{i}}\). Note that \(H_{QP}\) is non-hermitian, and each QP state, in general, corresponds to \(H_{QP}\) computed at a different frequency.
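To make the fixed-point search concrete, a minimal sketch is given below (not taken from the original implementation; the function `fixed_point_qp` and the toy \(2\times 2\) frequency-dependent matrix are purely illustrative). It iterates the eigenvalue of \(\hat{H}_{QP}(\omega)\) closest to the mean-field energy \(\varepsilon_{0}\) until self-consistency in \(\omega\):

```python
import numpy as np

def fixed_point_qp(h_qp, eps0, tol=1e-6, max_iter=100):
    """Iterate omega -> eigenvalue of H_QP(omega) closest to eps0.

    h_qp : callable returning the (possibly non-hermitian) QP matrix at omega.
    eps0 : mean-field (KS) starting energy used to select the branch.
    """
    omega = eps0
    for _ in range(max_iter):
        evals, evecs = np.linalg.eig(h_qp(omega))
        idx = np.argmin(np.abs(evals - eps0))   # follow the branch near eps0
        omega_new = evals[idx].real             # primary QP peak (real part)
        if abs(omega_new - omega) < tol:
            return omega_new, evecs[:, idx]
        omega = omega_new
    return omega, evecs[:, idx]

# toy 2x2 frequency-dependent Hamiltonian (purely illustrative numbers)
h_toy = lambda w: np.array([[-10.0 + 0.1 * w, 0.3], [0.3, -8.0 + 0.05 * w]])
omega_qp, dyson_orbital = fixed_point_qp(h_toy, eps0=-10.0)
```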
In practical schemes[46; 47; 29; 43], it is common to construct a single "static" effective Hamiltonian (yielding orthogonal eigenstates). However, due to the non-linearity of this problem, it is not entirely clear at what frequency the self-energy should be evaluated. For strongly diagonally dominant \(H_{QP}\), i.e., those where KS DFT orbitals are, in fact, close to the Dyson orbitals, one may evaluate \(\omega_{i}\) as the fixed point solution for the diagonal entries. The remaining off-diagonal self-energy is e.g., \(\Sigma_{ij}=\frac{1}{4}\left[\Sigma_{ij}(\omega_{i})+\Sigma_{ji}(\omega_{i})+ \Sigma_{ij}(\omega_{j})+\Sigma_{ji}(\omega_{j})\right]\). In this
Figure 1: Illustration of the stochastic compression technique, which samples the “rest subspace” using a set of (filtered) random vectors, here spanning the single particle states of the gold substrate.
form, it is possible to construct a static and hermitized QP Hamiltonian. By enforcing the hermiticity of \(H_{QP}\), we impose that the resulting QP states are orthonormal. The QP energies are then purely real, corresponding to a QP with infinite lifetime. Alternatively, one can relax the latter step by taking \(\Sigma_{ij}=\frac{1}{2}\left[\Sigma_{ij}(\omega_{i})+\Sigma_{ij}(\omega_{j})\right]\).
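As a minimal numerical sketch of the two constructions above (not part of the original implementation; the function and variable names are ours, and the frequency-dependent self-energy is assumed to be available as a callable returning a matrix), the static effective self-energy could be assembled as:

```python
import numpy as np

def static_self_energy(sigma, omegas, hermitize=True):
    """Build a static Sigma matrix from Sigma(omega) evaluated at the
    diagonal fixed-point energies omegas[i] (illustrative sketch only)."""
    n = len(omegas)
    sig = [sigma(w) for w in omegas]          # Sigma(omega_i) as n x n arrays
    s = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            if hermitize:
                # Sigma_ij = 1/4 [S_ij(w_i) + S_ji(w_i) + S_ij(w_j) + S_ji(w_j)]
                s[i, j] = 0.25 * (sig[i][i, j] + sig[i][j, i]
                                  + sig[j][i, j] + sig[j][j, i])
            else:
                # Sigma_ij = 1/2 [S_ij(w_i) + S_ij(w_j)]
                s[i, j] = 0.5 * (sig[i][i, j] + sig[j][i, j])
    return s
```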
Note that both approaches strongly depend on the basis choice. We illustrate this in detail in the Supporting Information (SI) for the acrolein molecule, for which the magnitudes of the off-diagonal terms are 98.5% smaller than the diagonal ones for the canonical KS DFT basis. The situation changes dramatically when localized (unitary transformed) orbitals are employed. Hence, depending on the construction of a single \(H_{QP}\), the resulting QP energies change by as much as 10% and translate to changes of 0.77 eV on average for acrolein.
Since our goal is to determine Dyson orbitals for a selected subspace of interest (which will be constructed from localized basis states), we avoid any approximation to the fixed point solution. In this method, the whole QP Hamiltonian is evaluated at multiple frequencies, and the QP eigenvalues are found as the fixed point solutions to Eq. 2. No further assumptions are made about the hermiticity of the Hamiltonian matrix; a graphical example of such a fixed point solution for the \(H_{QP}\) is also illustrated in the SI.
_Stochastic Compression of QP states_ When studying a large system with a subspace of particular interest, it is prohibitively expensive to employ all \(M\) electronic states. It is also insufficient to assume that the Hamiltonian matrix takes a block-diagonal form, due to the coupling between the subspace and its orthogonal complement. To handle such a case, we propose a method of stochastic matrix compression where a portion of the \(H_{QP}\) matrix is represented by a set of random vectors. These vectors sample a large portion of the Hilbert space, which overall contributes to the QP shift and affects the Dyson orbitals, but for which each individual single particle state has only a limited contribution.
As illustrated in Fig. 1, we separate the "core subspace" spanned by \(N_{c}\) deterministic states, \(\{\phi^{c}\}\), (e.g., the original KS DFT eigenstates), and the remainder spanned by \(N_{s}\) stochastic states \(\{\zeta\}\), constructed as random linear combinations of the KS states that are orthogonal to the \(\{\phi^{c}\}\) set: \(\left|\zeta\right\rangle=\sum_{i\,\notin\,\{\phi^{c}\}}c_{i}\left|\phi_{i}\right\rangle\). In the final step, the individual random states are orthogonalized via the Gram-Schmidt process. Because this change of basis is guaranteed to be a unitary transformation of the Hamiltonian matrix, when the whole system is diagonalized, the resulting eigenstates will be the same. When the Hamiltonian matrix is truncated in this new stochastic basis, the coupling of each stochastic state to the core subspace will represent the subspace interaction with the full environment. In this way, we have "compressed" the information of the whole system environment into a single state. Given that the fixed point solution is basis independent (as illustrated in Fig. S.3), if the total number of states \(N_{c}+N_{s}\) equals the dimension of \(H_{QP}\), \(M\), we necessarily obtain the same QP energies. For fewer random states, \(N_{c}+N_{s}<M\), the computation is less expensive. Note that the QP energy has a finite statistical error, which decreases as \(1/\sqrt{N_{s}}\) with the number of states sampling the off-diagonal self-energy contributions. As we show below, the convergence of the QP energies is smooth. Further, note that instead of the canonical single particle states in the above equation, we achieve further speedup if an already preselected (filtered) subset of states (orthogonal to the \(\{\phi^{c}\}\)) is used in the construction of \(\left|\zeta\right\rangle\).
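A schematic numpy implementation of this compression step (a sketch under the simplifying assumption that the full \(H_{QP}\) matrix in the canonical KS basis is available as an array; in practice only its action on the compressed basis would be evaluated, and all function names are ours) could read:

```python
import numpy as np

def compress_qp(h_full, core_idx, n_s, seed=0):
    """Represent H_QP in a basis of N_c core states plus N_s random vectors
    spanning the orthogonal complement (illustrative sketch; assumes
    n_s does not exceed the number of rest states)."""
    rng = np.random.default_rng(seed)
    m = h_full.shape[0]
    rest_idx = [i for i in range(m) if i not in core_idx]
    # random phases over the rest space, then Gram-Schmidt (QR) orthogonalization
    zeta = np.exp(2j * np.pi * rng.random((len(rest_idx), n_s)))
    zeta, _ = np.linalg.qr(zeta)                  # orthonormal columns
    basis = np.zeros((m, len(core_idx) + n_s), dtype=complex)
    for a, i in enumerate(core_idx):
        basis[i, a] = 1.0                         # deterministic core states
    basis[rest_idx, len(core_idx):] = zeta
    # truncated (N_c + N_s) x (N_c + N_s) representation of H_QP
    return basis.conj().T @ h_full @ basis
```

Diagonalizing (or finding the fixed points of) this compressed matrix then replaces the full \(M\)-dimensional problem; repeating the construction with different random seeds provides the statistical error estimate discussed below.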
_Results_ We now demonstrate the method practically for the CO\({}_{2}\) molecule on the Au substrate, for which we intend to extract the energies of quasi-hole states on the molecule (i.e., corresponding to the charge removal energies from CO\({}_{2}\) on the surface). We first construct a minimal example that we can solve entirely and illustrate how stochastic sampling smoothes the convergence of the QP energies. Later, we show a realistic example with nearly 3,000 electrons, which cannot be easily solved without the sampling methodology.
We will demonstrate the success of our stochastic sampling method on a minimal system of CO\({}_{2}\) on a bilayer of 8 gold atoms. This system contains only 52 occupied states, which we also treat explicitly. Note that, in principle, the hybridization extends beyond merely the occupied manifold, but to illustrate the methodology, we consider only the rotation within the occupied subspace. To see the surface-induced changes, we calculate the QP states for a CO\({}_{2}\) molecule in a vacuum (\(N=8\)) and for the minimal composite system (\(N=52\)). We find that the seven lowest valence states of the molecule shift in energy when the substrate is included, but the eigenvectors (orbitals) do not change in response to the gold substrate.
In contrast, HOMO state behaves differently: no single state would correspond to the molecular HOMO (either the canonical DFT or Dyson orbitals computed at the \(G_{0}W_{0}\) level). Instead, there are multiple _hybridized_ states sufficiently localized on the molecule, whose eigenvalues lay within a small range of energies. We aim to characterize them and, consequently, to find a characteristic QP energy for this distribution of HOMO QP for the CO\({}_{2}\) molecule on Au.
We thus define a "core subspace" comprising the states with the most molecular character. In practice, they are identified based on projection onto localized (unitary transformed) orbitals centered on CO\({}_{2}\), e.g., using the molecular reconstruction technique which is applied here[48; 41]. The corresponding projection value is:
\[P_{i}=\sum_{j}|\left\langle\xi_{j}|\phi_{i}\right\rangle|^{2} \tag{5}\]
Here, \(\{\left|\xi\right\rangle\}\) and \(\{\left|\phi\right\rangle\}\) are the sets of transformed (localized) and canonical KS DFT states respectively. Each KS state with \(P\) greater than a chosen threshold is included in the core region. This preselection separates the "core" subspace from the rest.
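The selection in Eq. (5) amounts to a simple overlap computation; the following sketch (with illustrative function and variable names of our choosing) assumes the localized orbitals \(\{\xi\}\) and the KS states \(\{\phi\}\) are available as columns of arrays expressed in a common basis:

```python
import numpy as np

def select_core(xi, phi, threshold):
    """Indices of KS states phi[:, i] with P_i = sum_j |<xi_j|phi_i>|^2 > threshold."""
    overlaps = xi.conj().T @ phi                 # matrix of <xi_j | phi_i>
    p = np.sum(np.abs(overlaps) ** 2, axis=0)    # projection P_i of Eq. (5)
    return np.where(p > threshold)[0], p
```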
We now track the fixed point HOMO QP solution with the number of states considered in the \(H_{QP}\), i.e., we gradually add more states outside of the core subspace. The molecular HOMO is hybridized with many of the surface states. We thus define a single energy for this state by taking its mean value, constructed by weighting by the projection onto the HOMO of CO\({}_{2}\) in a vacuum. The results are shown by the green color in Fig. 2. The left-most point represents only the core space, containing 12 orbitals corresponding to 23% of the entire \(H_{QP}\). The size of the problem is increased by adding states depending on their distance from the KS DFT HOMO energy, as one would expect that the hybridization of states will be small for energetically distant states. This does not produce a smooth convergence (green line in Fig. 2) as some surface hybridization is due to Au orbitals that are far from the core subspace.
To demonstrate the stochastic approach, we now instead sample the remaining KS states using random vectors:
\[\ket{\zeta}=\frac{1}{\sqrt{N_{e}}}\sum_{j=1}^{N_{e}}w_{j}e^{i\theta_{j}}\ket{ \phi_{j}} \tag{6}\]
Here \(\theta_{j}\in[0,2\pi]\) is randomly chosen, and \(N_{e}\) is the number of "rest" states used with weights \(w_{j}\). Note that we can either sample all remaining states evenly (\(w_{j}=1\ \forall j\)) or, more generally, consider a random selection from a distribution within the sampled subspace (determined by \(P_{i}\) in Eq. 5), as we show later.
Once we have obtained the set \(\{\zeta\}\), we randomly draw \(N_{s}\) of them, and the fixed point solution is then found for \(H_{QP}\) with the dimension \(N_{c}+N_{s}\). The results for \(N_{c}=12\) and variable \(N_{s}\) are shown in Fig. 2 and exhibit a monotonic and smooth convergence towards the asymptotic value (obtained for the entire 52 occupied states). The stochastic sampling was repeated ten times for each step with a different set of \(N_{s}\) random vectors; the standard deviations are indicated in the plot and they naturally disappear in the complete basis limit. For instance, for \(N_{s}=20\), i.e., 62% of the entire system, we see a difference of \(0.057\pm 0.19\) eV between the mean HOMO QP energies. For an increased core space, \(N_{c}=15\), we see that the HOMO QP value converges similarly, i.e., the size of the core space does not change the convergence profile significantly. For \(N_{s}=20\) (i.e., 32.5% compression of the matrix rank), the resulting spectrum mean falls within 100 meV of the value obtained from the diagonalization of the full matrix.
Without any prior knowledge or arbitrary truncation of the KS states, we can capture molecule-surface hybridization effects by employing stochastic states representing the substrate environment. This description is systematically improvable by increasing both \(N_{c}\) and \(N_{s}\). In general, the cost reduction provided by the stochastic sampling is due to circumventing the summation over many states that contribute either similarly or very little to the expectation values in question[49]. For a small system such as the one used here, the amount of compression is less significant as most of the states contribute to the QP HOMO energy.
We now turn to a realistic large-scale system for which such a calculation would not be possible with standard methods. Here, we study CO\({}_{2}\) molecule on an extended Au-111 surface of 270 atoms, containing 2986 electrons. The system is treated analogously to the minimal example: we selected a core subspace of 15 and 25 states. Due to the molecule-surface hybridization, \(N_{c}=15\) is the minimal size of the core space identified for this particular problem. Next, the stochastic sampling uses a filtered distribution in Eq. 6 in which we consider a linear combination of states that are sufficiently localized on the molecules. In practice, this step determines the sampled subspace, which is practically restricted to states with \(P\) greater than a selected threshold, \(P_{T}\). Here we consider two cases \(P_{T}=10^{-3}\) and \(P_{T}=5\times 10^{-4}\).
From Fig. 3 we can see that the HOMO energy converges with only 5% of the total number of states used[50]. For slightly increased selectivity (i.e., lower projection threshold \(P\)), the stochastic sampling of the hybridization converges similarly. Further, the size of the core subspace does not significantly impact the convergence rate: when \(N_{c}=25\) with the filtering threshold of \(P_{T}=5\times 10^{-4}\), the curve matches that of the \(N_{c}=15\) for the same value of \(P_{T}\). This suggests that the size of the core subspace can be decreased, possibly at the expense of using more stochastic samplings.
Finally, note that when the orbital re-hybridization is used at the \(G_{0}W_{0}\) level, the HOMO QP energy moves down in energy by more than 1 eV. Since approximate semilocal KS DFT is known to suffer from overdelocalization,
Figure 2: **Hybridized HOMO Convergence (Minimal System)**: Core sizes of \(N_{c}=12\) and \(N_{c}=15\) are used, with the remaining states sampled with equal weight. In contrast, adding the states by energy (\(\varepsilon\) ordered) demonstrates the lack of smooth convergence. The gray-shaded region shows where the spectrum converges within 0.1 eV.
it is expected that the physical Dyson orbitals are more localized than the canonical KS DFT eigenstates. In turn, stronger localization of the HOMO is typically associated with a decrease of its energy[9]. These observations are thus in line with what the MBPT should accomplish and underline the need for a more appropriate treatment of surface phenomena.
_Outlook_ The rapid convergence of the QP energies with \(N_{s}\) implies that when we stochastically sample the matrix, aided by preselection and filtering, we can represent the full QP spectrum for a molecule that hybridizes with an extended surface using less than 5% of the system. The \(H_{QP}\) matrix size is thus compressed by 95%. This is largely due to the significant "redundancy" of information encoded in individual single-particle states; the stochastic approach allows all of them (or a large filtered portion) to be sampled simultaneously through random vectors. The approach presented here will enable the treatment of large-scale interfacial problems and opens the door for efficient self-consistent stochastic MBPT.
## Acknowledgements
The development (A.C., X.L., and V.V.) of the stochastic compression technique was supported by the NSF CAREER award (DMR-1945098). The implementation and numerical testing (A.C., K.I., and V.V.) is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and Office of Basic Energy Sciences, Scientific Discovery through Advanced Computing (SciDAC) program under Award Number DE-SC0022198. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award BES-ERCAP0020089.
|
2310.00050 | Scalar boson emission from a magnetized relativistic plasma | We investigate the differential emission rate of neutral scalar bosons from a
highly magnetized relativistic plasma. We show that three processes contribute
at the leading order: particle splitting ($\psi\rightarrow \psi+\phi $),
antiparticle splitting ($\bar{\psi} \rightarrow \bar{\psi}+\phi $), and
particle-antiparticle annihilation ($\psi + \bar{\psi}\rightarrow \phi $). This
is in contrast to the scenario with zero magnetic field, where only the
annihilation processes contribute to boson production. We examine the impact of
Landau-level quantization on the energy dependence of the rate and investigate
the angular distribution of emitted scalar bosons. The differential rate
resulting from both (anti)particle splitting and annihilation processes are
typically suppressed in the direction of the magnetic field and enhanced in
perpendicular directions. Overall, the background magnetic field significantly
amplifies the total emission rate. We speculate that our model calculations
provide valuable theoretical insights with potentially important applications. | Jorge Jaber-Urquiza, Igor A. Shovkovy | 2023-09-29T18:00:03Z | http://arxiv.org/abs/2310.00050v2 | # Scalar boson emission from a magnetized relativistic plasma
###### Abstract
We investigate the differential emission rate of neutral scalar bosons from a highly magnetized relativistic plasma. We show that three processes contribute at the leading order: particle splitting (\(\psi\to\psi+\phi\)), antiparticle splitting (\(\bar{\psi}\to\bar{\psi}+\phi\)), and particle-antiparticle annihilation (\(\psi+\bar{\psi}\to\phi\)). This is in contrast to the scenario with zero magnetic field, where only the annihilation processes contribute to boson production. We examine the impact of Landau-level quantization on the energy dependence of the rate and investigate the angular distribution of emitted scalar bosons. The differential rate resulting from both (anti)particle splitting and annihilation processes are typically suppressed in the direction of the magnetic field and enhanced in perpendicular directions. Overall, the background magnetic field significantly amplifies the total emission rate. We speculate that our model calculations provide valuable theoretical insights with potentially important applications.
## I Introduction
The properties of matter under extreme conditions, where relativistic effects play a profound role, are a source of great fascination. This fascination is not surprising, as such extreme conditions naturally occur in the early Universe and in stars [1; 2; 3; 4; 5]. However, replicating these conditions in a laboratory setting is exceptionally challenging. The most promising efforts in this direction involve heavy-ion experiments conducted at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and the Large Hadron Collider (LHC) at CERN [6; 7]. In these experiments, heavy ions are accelerated to sufficiently high energies to produce tiny volumes of quark-gluon plasma (QGP) [8; 9]. Although this new state of matter has a very brief lifetime and is likely far from equilibrium, some of its properties can still be deduced [10; 11; 12].
Over the last two decades, significant progress has been made in understanding the properties of hot QGP [13; 14]. The emerging picture can be summarized as follows. Heavy-ion collisions generate matter with high energy density, which is initially far from equilibrium. Due to strong interaction, this matter rapidly approaches a quasi-equilibrium QGP state. Furthermore, it behaves almost like a perfect hydrodynamic fluid, undergoing expansion, cooling, and eventual hadronization [15; 16; 17; 18]. The resulting hadrons carry its remnants to the detectors, enabling one to unveil the properties of hot QGP.
The QGP produced in heavy-ion collisions not only possesses an extremely high temperature but also carries a strong magnetic field [19; 20; 21; 22] and exhibits high vorticity [23; 24; 25]. Theoretical investigations indicate that both the magnetic field and vorticity can modify the observed properties of QGP [26; 27; 28; 29; 30]. Of particular significance are the observables linked to electromagnetic probes, as they convey information about the plasma's properties across all stages of its evolution [31].
In this paper, we will study the differential production rate of neutral scalar bosons within a strongly magnetized relativistic plasma. Previously, an attempt to address this problem was undertaken in Ref. [32]. However, only simplified kinematics with momenta of scalar bosons parallel to the magnetic field was considered. Another related study on the scalar boson decay at zero temperature and weak field was reported in Ref. [33]. In both instances, the constraints imposed by kinematics allowed only for the contribution of particle-antiparticle annihilation processes to the absorptive part of the self-energy (or boson decay). Herein, we undertake a comprehensive approach, removing all constraints on kinematics, permitting arbitrary magnetic field strengths, and incorporating the thermal effects of the plasma to address this problem in its entirety.
At first glance, this problem may not have direct phenomenological implications for QGP in heavy-ion collisions. After all, there are no known spin-zero particles to be emitted from a relativistic plasma. Nevertheless, we believe that this problem has theoretical value. By comparing the results with the emission of photons [34; 35; 36; 37; 38] and dileptons [39; 40; 41; 42; 43; 44; 45; 46; 47; 48] (i.e., spin-one channel) studied previously, we can get insights into the impact of particle spin on emission
rates and angular distributions. This hypothetical scenario also extends our understanding of the fundamental laws of physics and their potential applications in other fields.
For example, neutral scalar bosons often appear in dark matter [49; 50; 51] and inflationary models [52]. Moreover, their properties, when modified by a nonzero temperature and primordial magnetic fields, can have cosmological implications [53; 54]. In the end, our goal is to refine our theoretical tools and expand the frontiers of scientific knowledge. Not every thought experiment or hypothetical scenario leads to a discovery, but more often than not, it provides fresh insights and perspectives.
The paper is organized as follows. We introduce the model of magnetized plasma with a single flavor of fermion species, coupled to a neutral scalar field via a Yukawa-type interaction, in Sec. II. There, we also define the differential emission rate in terms of the imaginary part of the scalar-boson self-energy. The general expression for the self-energy at nonzero temperature is obtained in Sec. III. In the derivation, we utilize the Landau-level representation for the fermion propagator, which allows us to extract an analytical expression for the imaginary part of the self-energy in the form of a convergent series. In Sec. IV, the corresponding result is used to calculate the differential emission rate of scalar bosons from a magnetized plasma. We study in detail the angular dependence of the emission rate, as well as analyze the partial contributions due to annihilation (i.e., \(\bar{\psi}\to\bar{\psi}+\phi\) and \(\psi\to\psi+\phi\)) processes. A discussion of the main findings and a summary of the results are given in Sec. V. For comparison, the bosonic self-energy in the zero magnetic field limit is presented in Appendix A.
## II Model
For simplicity, we consider a model of magnetized plasma with a single flavor of fermion species \(\psi\). By assumption, the fermions interact with the neutral scalar field \(\phi\) via a Yukawa interaction. The corresponding Lagrangian density reads
\[\mathcal{L}=\bar{\psi}\left(i\gamma^{\mu}D_{\mu}-m\right)\psi+\frac{1}{2} \partial^{\mu}\phi\partial_{\mu}\phi-\frac{1}{2}M^{2}\phi^{2}-g\phi\bar{\psi }\psi, \tag{1}\]
where \(m\) and \(M\) are the masses of the fermion and scalar particles, and \(q\) is the electric charge of the fermion. The covariant derivative is defined as usual, i.e., \(D^{\mu}\equiv\partial^{\mu}+iqA^{\mu}(x)\), where \(A^{\mu}(x)\) is an Abelian gauge field, capturing the effect of a background magnetic field \(\mathbf{B}\). The corresponding field strength tensor is given by \(F^{\mu\nu}=\partial^{\mu}A^{\nu}(x)-\partial^{\nu}A^{\mu}(x)\). Without loss of generality, we will assume that the magnetic field points along the \(z\) axis and use the following Landau gauge: \(A^{\mu}(x)=-yB\delta_{1}^{\mu}\). The explicit form of the strength tensor reads \(F^{\mu\nu}=-\varepsilon^{0\mu\nu 3}B\). Here we use the conventional definition of the contravariant coordinates, i.e., \(x^{\mu}=(t,x,y,z)\), and the Minkowski metric \(g_{\mu\nu}=\mathrm{diag}(1,-1,-1,-1)\).
The differential thermal emission rate of scalar bosons from the corresponding plasma is given by
\[\frac{d^{3}R}{d^{3}k}=-\frac{n_{B}(\Omega)}{(2\pi)^{3}\Omega}\mathrm{Im}\left[ \Sigma^{R}(\Omega,\mathbf{k})\right], \tag{2}\]
where \(\Omega=\sqrt{M^{2}+|\mathbf{k}|^{2}}\) is the (on shell) scalar particle energy, \(n_{B}(\Omega)=1/\left[e^{\Omega/T}-1\right]\) is the Bose-Einstein distribution function, and \(\Sigma^{R}(\Omega,\mathbf{k})\) is the retarded self-energy of the scalar field. At leading order in coupling, the latter is determined by the one-loop Feynman diagram in Fig. 1, where the solid and dashed lines represent fermions and bosons, respectively. Because of the background magnetic field, the fermion propagators are labeled by the longitudinal momenta and the Landau-level indices.
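For later numerical use, Eq. (2) translates directly into a short routine; the sketch below (our own illustrative code, with the imaginary part of the retarded self-energy supplied as a user-defined callable) evaluates the on-shell differential rate:

```python
import numpy as np

def differential_rate(im_sigma_r, k_vec, M, T):
    """d^3R/d^3k of Eq. (2): -n_B(Omega) Im Sigma^R(Omega, k) / ((2 pi)^3 Omega),
    with Omega fixed by the mass-shell condition Omega^2 = M^2 + |k|^2."""
    omega = np.sqrt(M**2 + np.dot(k_vec, k_vec))
    n_b = 1.0 / np.expm1(omega / T)            # Bose-Einstein occupation number
    return -n_b * im_sigma_r(omega, k_vec) / ((2.0 * np.pi) ** 3 * omega)
```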
Note that, in view of the detailed balance, the expression in Eq. (2) can represent either the emission or the absorption rate per unit volume. However, the total emission (or absorption) rate can be also affected by the system size, if the latter is comparable to or larger than the mean free path \(l_{\phi}\) of the scalar bosons with energy \(\Omega\). For simplicity, we will ignore the corresponding effects below. If needed, however, they could be incorporated approximately by separating the surface layers of depth \(l_{\phi}\) from the rest of the plasma. The rate in Eq. (2) is valid only for the surface layers. The emission from the inner parts is approximately vanishing.
Figure 1: Leading order one-loop Feynman diagram for the scalar self-energy.
In view of the rotational symmetry of a magnetized plasma about the magnetic field direction, the differential rate is independent of the azimuthal angle \(\phi\) (which is measured in \(xy\)-plane from the positive \(x\)-axis). Taking this fact into account, we derive the following expression for the total rate integrated over all directions:
\[\frac{dR}{dk}=-\int_{0}^{\pi}\frac{k^{2}n_{B}(\Omega)}{(2\pi)^{2}\Omega}\text{ Im}\left[\Sigma^{R}(\Omega,\mathbf{k})\right]\sin\theta d\theta, \tag{3}\]
where the polar angle \(\theta\) is measured from the positive \(z\)-axis towards the \(xy\)-plane. In other words, the transverse and the longitudinal components of the boson momentum are \(k_{\perp}=k\sin\theta\) and \(k_{z}=k\cos\theta\), respectively. By rewriting the rate in terms of the boson energy, we have
\[\frac{dR}{d\Omega}=-\int_{0}^{\pi}\frac{k\,n_{B}(\Omega)}{(2\pi)^{2}}\text{Im} \left[\Sigma^{R}(\Omega,\mathbf{k})\right]\sin\theta d\theta. \tag{4}\]
In order to characterize the angular profile of emission, we will utilize the following definition of the ellipticity parameter:
\[v_{2}=-\frac{\int_{0}^{\pi}\left(d^{3}R/d^{3}k\right)\cos(2\theta)d\theta}{ \int_{0}^{\pi}\left(d^{3}R/d^{3}k\right)d\theta}, \tag{5}\]
which is analogous to the one used in heavy-ion physics but expressed in terms of a different angular coordinate. An extra negative sign in the definition ensures that a positive value of ellipticity (\(v_{2}>0\)) describes an oblate emission profile, i.e., stronger average emission in the directions perpendicular to the magnetic field (or, in heavy-ion physics language, in the reaction plane). A negative value of ellipticity (\(v_{2}<0\)) implies a prolate emission profile, i.e., stronger average emission in the directions parallel to the magnetic field (or, in heavy-ion physics language, perpendicularly to the reaction plane).
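Once the angular profile of the differential rate is tabulated numerically, the ellipticity of Eq. (5) follows from a simple quadrature. The sketch below (illustrative code with a toy oblate profile, not data from this work) uses a Riemann sum on a uniform \(\theta\) grid and confirms the sign convention, \(v_{2}>0\) for oblate emission:

```python
import numpy as np

def ellipticity(theta, rate):
    """v_2 = - int rate*cos(2 theta) dtheta / int rate dtheta, cf. Eq. (5).
    On a uniform grid the step size cancels between numerator and denominator."""
    return -np.sum(rate * np.cos(2.0 * theta)) / np.sum(rate)

# toy oblate profile, peaked at theta = pi/2 (perpendicular to the field)
theta = np.linspace(0.0, np.pi, 400)
print(ellipticity(theta, 1.0 + 0.5 * np.sin(theta) ** 2))   # approximately +0.1
```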
## III One-loop self-energy
In the presence of a background magnetic field, translation symmetry is broken in the plane perpendicular to the magnetic field. As a consequence, the transverse momenta are not good quantum numbers for classifying fermionic states. This fact is also reflected in the structure of the fermion propagator, which takes the following form in coordinate space [55]:
\[S(x,y)=\exp\left(-iq\int_{y}^{x}A_{\mu}(x)dx^{\mu}\right)\bar{S}(x-y), \tag{6}\]
where the first factor is the so-called Schwinger's phase. Formally, this phase is the only part that breaks the translation symmetry. The second factor, \(\bar{S}(x-y)\), is a translation invariant part of the propagator. Its explicit form will be given below.
In coordinate space, the one-loop self-energy of the scalar field is given by
\[\Sigma(x-y)=ig^{2}\text{Tr}\left[\bar{S}(x-y)\bar{S}(y-x)\right], \tag{7}\]
see Fig. 1, where the trace runs over the Dirac indices. Note that, at this leading order in coupling, the self-energy is determined only by the translation-invariant part of the fermion propagator, \(\bar{S}(x-y)\).
It should not be surprising that the dependence of \(\bar{S}(x)\) on the transverse and longitudinal spatial coordinates (i.e., \(\mathbf{r}_{\perp}\) and \(z\), respectively) is very different. Unlike translations in the \(xy\)-plane, translations in the \(z\) direction are part of the remaining symmetry in the problem. In other words, the corresponding longitudinal momentum \(k_{z}\) is a good quantum number. Thus, it is convenient to use the following mixed representation for the translation-invariant part of the propagator:
\[\bar{S}(x)=\int\frac{d^{2}p_{\parallel}}{(2\pi)^{2}}\tilde{S}(p_{\parallel}; \mathbf{r}_{\perp})e^{-ip_{\parallel}\cdot x_{\parallel}}, \tag{8}\]
where, by definition, \(x_{\parallel}=(t,z)\), \(\mathbf{r}_{\perp}=(x,y)\), and \(p_{\parallel}=(p_{0},p_{z})\) is the longitudinal momentum. The explicit form of \(\tilde{S}(p_{\parallel};\mathbf{r}_{\perp})\) reads [27]
\[\tilde{S}(p_{\parallel};\mathbf{r}_{\perp})=i\frac{e^{-\zeta/2}}{2\pi\ell^{2}} \sum_{n=0}^{\infty}\frac{\tilde{D}\left(p_{\parallel};\mathbf{r}_{\perp} \right)}{p_{\parallel}^{2}-m^{2}-2n|qB|} \tag{9}\]
with the shorthand notation \(\zeta\equiv|\mathbf{r}_{\perp}|^{2}/(2\ell^{2})\) and
\[\tilde{D}\left(p_{\parallel};\mathbf{r}_{\perp}\right)\equiv\left(p\!\!\!/_{\parallel}+m\right)\left[\mathcal{P}_{+}L_{n}\left(\zeta\right)+\mathcal{P}_{-}L_{n-1}\left(\zeta\right)\right]-i\frac{\mathbf{r}\!\!\!/_{\perp}}{\ell^{2}}L_{n-1}^{1}\left(\zeta\right), \tag{10}\]
where \(\ell\equiv 1/\sqrt{|qB|}\) is the magnetic length, \(L_{n}(z)\) are the Laguerre polynomials, \(L_{n}^{a}(z)\) are the generalized Laguerre polynomials, and \(\mathcal{P}_{\pm}\equiv\frac{1}{2}\left(1\pm i\mathrm{sign}\left(qB\right) \gamma^{1}\gamma^{2}\right)\) are the spin projectors along the magnetic field direction.
After substituting the expression for \(\bar{S}(x)\) into the definition of the self-energy in Eq. (7) and performing the Fourier transform, we derive the following momentum representation:
\[\Sigma(k)=ig^{2}\int\frac{d^{2}p_{\parallel}}{(2\pi)^{2}}\int d^{2}\mathbf{r} _{\perp}e^{-i\mathbf{r}_{\perp}\cdot\mathbf{k}_{\perp}}\mathrm{Tr}\left[ \tilde{S}(p_{\parallel};\mathbf{r}_{\perp})\tilde{S}(p_{\parallel}-k_{ \parallel};-\mathbf{r}_{\perp})\right]. \tag{11}\]
By using the fermion propagator in Eq. (9) and performing the trace over the Dirac indices, we obtain the following expression for the scalar self-energy:
\[\Sigma(k) = -\frac{ig^{2}}{2\pi^{2}\ell^{4}}\int\frac{d^{2}p_{\parallel}}{(2 \pi)^{2}}\int d^{2}\mathbf{r}_{\perp}e^{-i\mathbf{r}_{\perp}\cdot\mathbf{k}_{ \perp}}e^{-\zeta} \tag{12}\] \[\times \sum_{n,n^{\prime}}\frac{\left(m^{2}+p_{\parallel}\cdot(p-k)_{ \parallel}\right)\left[L_{n}(\zeta)L_{n^{\prime}}(\zeta)+L_{n-1}(\zeta)L_{n^{ \prime}-1}(\zeta)\right]-\frac{2\mathbf{r}_{\perp}^{2}}{\ell^{4}}L_{n-1}^{1}( \zeta)L_{n^{\prime}-1}^{1}(\zeta)}{\left(p_{\parallel}^{2}-m^{2}-2n|qB|\right) \left((p_{\parallel}-k_{\parallel})^{2}-m^{2}-2n^{\prime}|qB|\right)}.\]
The integration over the transverse spatial coordinates can be performed exactly using the same approach as in Refs. [37; 38]. The result reads
\[\Sigma(k)=-i\frac{g^{2}}{\pi\ell^{2}}\int\frac{d^{2}p_{\parallel}}{(2\pi)^{2} }\sum_{n,n^{\prime}}\frac{\left(m^{2}+p_{\parallel}\cdot(p-k)_{\parallel} \right)\left(\mathcal{I}_{0}^{n,n^{\prime}}(\xi)+\mathcal{I}_{0}^{n-1,n^{ \prime}-1}(\xi)\right)-\frac{2}{\ell^{2}}\mathcal{I}_{2}^{n-1,n^{\prime}-1}( \xi)}{\left(p_{\parallel}^{2}-m^{2}-2n|qB|\right)\left((p_{\parallel}-k_{ \parallel})^{2}-m^{2}-2n^{\prime}|qB|\right)}. \tag{13}\]
where \(\xi\equiv(k_{\perp}\ell)^{2}/2\) and the two new functions are
\[\mathcal{I}_{0}^{n,n^{\prime}}(\xi) \equiv (-1)^{n+n^{\prime}}e^{-\xi}L_{n}^{n^{\prime}-n}(\xi)L_{n^{\prime }}^{n-n^{\prime}}(\xi), \tag{14}\] \[\mathcal{I}_{2}^{n,n^{\prime}}(\xi) \equiv 2(n^{\prime}+1)(-1)^{n+n^{\prime}}e^{-\xi}L_{n}^{n^{\prime}-n}( \xi)L_{n^{\prime}+1}^{n-n^{\prime}}(\xi). \tag{15}\]
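The form factors in Eqs. (14) and (15) are straightforward to evaluate numerically. The sketch below (our own helper functions; not part of the original derivation) uses scipy's generalized Laguerre polynomials, rewriting negative upper indices through the standard identity \(L_{a}^{-m}(x)=(-x)^{m}\frac{(a-m)!}{a!}L_{a-m}^{m}(x)\) (valid for \(m\leq a\), which covers all index combinations needed here) and adopting the convention that terms with a negative Landau index vanish:

```python
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre

def genlag(a, alpha, x):
    """Generalized Laguerre L_a^{alpha}(x) for integer alpha of either sign."""
    if alpha >= 0:
        return eval_genlaguerre(a, alpha, x)
    m = -alpha
    return (-x) ** m * factorial(a - m) / factorial(a) * eval_genlaguerre(a - m, m, x)

def calI0(n, nprime, xi):
    """I_0^{n,n'}(xi) of Eq. (14); zero if either index is negative."""
    if min(n, nprime) < 0:
        return 0.0
    return ((-1.0) ** (n + nprime) * np.exp(-xi)
            * genlag(n, nprime - n, xi) * genlag(nprime, n - nprime, xi))

def calI2(n, nprime, xi):
    """I_2^{n,n'}(xi) of Eq. (15); zero if either index is negative."""
    if min(n, nprime) < 0:
        return 0.0
    return (2.0 * (nprime + 1) * (-1.0) ** (n + nprime) * np.exp(-xi)
            * genlag(n, nprime - n, xi) * genlag(nprime + 1, n - nprime, xi))

# xi -> 0 limits quoted in Sec. III.A: I_0 -> delta_{n,n'}, I_2 -> 2(n+1) delta_{n,n'}
assert abs(calI0(3, 3, 1e-12) - 1.0) < 1e-9 and abs(calI2(2, 2, 1e-12) - 6.0) < 1e-8
```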
To take thermal effects into account, we introduce the Matsubara frequencies through the imaginary time formalism. Then, replacing the fermion energy with the fermionic Matsubara frequency, \(p_{0}\to i\omega_{k}=i\pi(2k+1)T\), and the boson energy with its bosonic counterpart, \(k_{0}\to i\Omega_{m}=2i\pi mT\), the corresponding finite-temperature scalar self-energy reads
\[\Sigma(i\Omega_{m},\mathbf{k})=\frac{g^{2}T}{\pi\ell^{2}}\sum_{k=-\infty}^{\infty}\int\frac{dp_{z}}{2\pi}\sum_{n,n^{\prime}}\frac{\left(m^{2}+p_{\parallel}\cdot(p-k)_{\parallel}\right)\left(\mathcal{I}_{0}^{n,n^{\prime}}(\xi)+\mathcal{I}_{0}^{n-1,n^{\prime}-1}(\xi)\right)-\frac{2}{\ell^{2}}\mathcal{I}_{2}^{n-1,n^{\prime}-1}(\xi)}{\left((i\omega_{k})^{2}-p_{z}^{2}-m^{2}-2n|qB|\right)\left((i\omega_{k}-i\Omega_{m})^{2}-(p_{z}-k_{z})^{2}-m^{2}-2n^{\prime}|qB|\right)}, \tag{16}\]
where the shorthand notation \(p_{\parallel}\cdot(p-k)_{\parallel}\) stands for \(i\omega_{k}(i\omega_{k}-i\Omega_{m})-p_{z}(p_{z}-k_{z})\). Computing the sum over the Matsubara frequencies, we derive the following expression for the self-energy:
\[\Sigma(i\Omega_{m},\mathbf{k}) = \frac{g^{2}}{\pi\ell^{2}}\int\frac{dp_{z}}{2\pi}\sum_{n,n^{\prime }}\sum_{\eta,\lambda=\pm 1}\frac{n_{F}\left(E_{n,p_{z}}\right)-n_{F}\left(\lambda E_{n^{ \prime},p_{z}-k_{z}}\right)}{4\lambda E_{n,p_{z}}E_{n^{\prime},p_{z}-k_{z}} \left(E_{n,p_{z}}-\lambda E_{n^{\prime},p_{z}-k_{z}}+i\eta\Omega_{m}\right)} \tag{17}\] \[\times \bigg{[}\left(\lambda E_{n,p_{z}}E_{n^{\prime},p_{z}-k_{z}}+m^{2} -p_{z}\left(p_{z}-k_{z}\right)\right)\left(\mathcal{I}_{0}^{n,n^{\prime}}(\xi)+ \mathcal{I}_{0}^{n-1,n^{\prime}-1}(\xi)\right)-\frac{2}{\ell^{2}}\mathcal{I}_{ 2}^{n-1,n^{\prime}-1}(\xi)\bigg{]},\]
where \(E_{n,p_{z}}\equiv\sqrt{p_{z}^{2}+m^{2}+2n|qB|}\) and \(E_{n^{\prime},p_{z}-k_{z}}\equiv\sqrt{(p_{z}-k_{z})^{2}+m^{2}+2n^{\prime}|qB|}\) are the Landau level energies, and \(n_{F}(\Omega)=1/\left(e^{\Omega/T}+1\right)\) is the Fermi-Dirac distribution function. In the derivation we used the following general result:
\[T\sum_{k=-\infty}^{\infty}\frac{i\omega_{k}(i\omega_{k}-i\Omega_{m})\mathcal{Y}+ \mathcal{Z}}{\left[(i\omega_{k})^{2}-a^{2}\right]\left[(i\omega_{k}-i\Omega_{m })^{2}-b^{2}\right]}=\sum_{\eta,\lambda=\pm 1}\frac{n_{F}(a)-n_{F}(\lambda b)}{4\lambda ab\left(a- \lambda b+\eta i\Omega_{m}\right)}\left[\lambda ab\mathcal{Y}+\mathcal{Z} \right]. \tag{18}\]
To obtain the self-energy in Minkoswky space, we need to perform a suitable analytic continuation in Eq. (17). The retarded expression for the self-energy is obtained by replacing \(i\Omega_{m}\rightarrow\Omega+i\epsilon\)
\[\Sigma^{R}(\Omega,\mathbf{k}) = \frac{g^{2}}{\pi\ell^{2}}\int\frac{dp_{z}}{2\pi}\sum_{n,n^{\prime}}\sum_{\eta,\lambda=\pm 1}\frac{n_{F}\left(E_{n,p_{z}}\right)-n_{F}\left(\lambda E_{n^{\prime},p_{z}-k_{z}}\right)}{4\lambda E_{n,p_{z}}E_{n^{\prime},p_{z}-k_{z}}\left(E_{n,p_{z}}-\lambda E_{n^{\prime},p_{z}-k_{z}}+\eta\Omega+i\eta\epsilon\right)} \tag{19}\] \[\times \bigg{[}\left(\lambda E_{n,p_{z}}E_{n^{\prime},p_{z}-k_{z}}+m^{2}-p_{z}\left(p_{z}-k_{z}\right)\right)\left(\mathcal{I}_{0}^{n,n^{\prime}}(\xi)+\mathcal{I}_{0}^{n-1,n^{\prime}-1}(\xi)\right)-\frac{2}{\ell^{2}}\mathcal{I}_{2}^{n-1,n^{\prime}-1}(\xi)\bigg{]},\]
where \(\epsilon\rightarrow+0\).
It should be noted that the expression for the self-energy in Eq. (19) contains both vacuum and thermal contributions. While the latter is finite, the former has an ultraviolet divergence. Therefore, one has to regularize it in order to proceed with the calculation. Fortunately, only the real part of the self-energy is divergent. The imaginary part, which appears in the definition of the emission rate, is finite.
### Absorptive part of the self-energy
From the expression for the retarded self-energy in Eq. (19), one can extract the imaginary part by using the well-known Sokhotski formula, see Eq. (100). The corresponding result reads
\[\mathrm{Im}\left[\Sigma^{R}(\Omega,\mathbf{k})\right] = -\frac{g^{2}}{\ell^{2}}\int\frac{dp_{z}}{2\pi}\sum_{n,n^{\prime} }\sum_{\eta,\lambda=\pm 1}\frac{n_{F}\left(E_{n,p_{z}}\right)-n_{F}\left( \lambda E_{n^{\prime},p_{z}-k_{z}}\right)}{4\eta\lambda E_{n,p_{z}}E_{n^{ \prime},p_{z}-k_{z}}}\delta\left(E_{n,p_{z}}-\lambda E_{n^{\prime},p_{z}-k_{z} }+\eta\Omega\right) \tag{20}\] \[\times \bigg{[}\left(\lambda E_{n,p_{z}}E_{n^{\prime},p_{z}-k_{z}}+m^{2} -p_{z}\left(p_{z}-k_{z}\right)\right)\left(\mathcal{I}_{0}^{n,n^{\prime}}(\xi) +\mathcal{I}_{0}^{n-1,n^{\prime}-1}(\xi)\right)-\frac{2}{\ell^{2}}\mathcal{I} _{2}^{n-1,n^{\prime}-1}(\xi)\bigg{]}.\]
Note that the Dirac \(\delta\)-function inside the integrand enforces the following energy conservation relation:
\[E_{n,p_{z}}-\lambda E_{n^{\prime},p_{z}-k_{z}}+\eta\Omega=0. \tag{21}\]
The imaginary part of the self-energy (20) is an odd function of the scalar energy \(\Omega\). Without loss of generality, therefore, we will assume that \(\Omega>0\) from this point onward.
Depending on the choice of signs of \(\lambda\) and \(\eta\), the energy conservation relation (21) represents one of the three possible processes involving particle and/or antiparticle states with Landau indices \(n\) and \(n^{\prime}\). Two of them correspond to particle-splitting processes involving fermions (\(\lambda=1\) and \(\eta=-1\)) or antifermions (\(\lambda=1\) and \(\eta=1\)). In Fig. 2, they are represented by the diagrams in panels (a) and (b), respectively. The third process (\(\lambda=-1\) and \(\eta=-1\)) corresponds to the fermion-antifermion annihilation, represented by the diagram in panel (c) of Fig. 2. When \(\Omega\) is positive, there are no physical processes associated with the fourth combination of signs, i.e., \(\lambda=-1\) and \(\eta=1\). This is clear since the energy conservation equation (21) has no real solutions in this case.
The necessary and sufficient conditions for having real-valued solutions to the energy conservation equation (21) are given as follows:
\[\psi\rightarrow\psi+\phi: \sqrt{\Omega^{2}-k_{z}^{2}}\leq k_{-}\ \ \mathrm{and}\ \ n>n^{\prime}, \tag{22}\] \[\bar{\psi}\rightarrow\bar{\psi}+\phi: \sqrt{\Omega^{2}-k_{z}^{2}}\leq k_{-}\ \ \mathrm{and}\ \ n<n^{\prime},\] (23) \[\psi+\bar{\psi}\rightarrow\phi: \sqrt{\Omega^{2}-k_{z}^{2}}\geq k_{+}, \tag{24}\]
Figure 2: Feynman diagrams for the three processes involving a scalar boson and fermion states in the Landau levels \(n\) and \(n^{\prime}\): (a) particle splitting \(\psi\rightarrow\psi+\phi\), (b) antiparticle splitting \(\bar{\psi}\rightarrow\bar{\psi}+\phi\), (c) particle-antiparticle annihilation \(\psi+\bar{\psi}\rightarrow\phi\).
for the three types of processes. Here we introduced the following shorthand notation for the transverse momenta thresholds:
\[k_{\pm}\equiv\bigg{|}\sqrt{m^{2}+2n|qB|}\pm\sqrt{m^{2}+2n^{\prime}|qB|}\bigg{|}, \tag{25}\]
which depend on the Landau-level indices \(n\) and \(n^{\prime}\). The constraints for \(\Omega\) are identical for the two particle-splitting processes in Eqs. (22) and (23), except for the restrictions on the Landau-level indices. However, they are very different from the kinematic constraint for the annihilation process in Eq. (24). The requirements \(n>n^{\prime}\) and \(n<n^{\prime}\) in Eqs. (22) and (23), respectively, ensure that the initial Landau level state lies above the final one.
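The selection rules of Eqs. (22)-(24) are easily encoded in a small classification routine; the following sketch (our own illustrative function, with on-shell kinematics so that \(\Omega^{2}-k_{z}^{2}\geq 0\)) returns the processes allowed for a given pair of Landau indices:

```python
import numpy as np

def allowed_processes(Omega, kz, n, nprime, qB, m):
    """Processes allowed by Eqs. (22)-(24) for given Landau indices n, n'."""
    En0 = np.sqrt(m**2 + 2 * n * qB)            # threshold energies entering Eq. (25)
    Enp0 = np.sqrt(m**2 + 2 * nprime * qB)
    k_minus, k_plus = abs(En0 - Enp0), En0 + Enp0
    s = Omega**2 - kz**2
    processes = []
    if s <= k_minus**2 and n > nprime:
        processes.append("psi -> psi + phi")
    if s <= k_minus**2 and n < nprime:
        processes.append("psibar -> psibar + phi")
    if s >= k_plus**2:
        processes.append("psi + psibar -> phi")
    return processes
```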
By solving the energy conservation relation (21), we find the following pair of analytical solutions for the longitudinal momentum:
\[p_{z}^{(\pm)}\equiv\frac{k_{z}}{2}\left[1+\frac{2(n-n^{\prime})|qB|}{\Omega^{ 2}-k_{z}^{2}}\pm\frac{\Omega}{|k_{z}|}\sqrt{\left(1-\frac{k_{-}^{2}}{\Omega^{ 2}-k_{z}^{2}}\right)\left(1-\frac{k_{+}^{2}}{\Omega^{2}-k_{z}^{2}}\right)} \right]. \tag{26}\]
Note that these are exactly the same as in the case of dilepton emission [48], provided the dilepton invariant mass is replaced with the scalar boson mass. Nevertheless, as we will see below, the rate and the angular distribution of scalar emission will be very different.
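As a quick consistency check (an illustrative snippet with arbitrarily chosen parameters, not results from this work), one can verify numerically that the roots in Eq. (26) satisfy the energy conservation relation (21); here this is done for the annihilation channel, \(\lambda=-1\) and \(\eta=-1\):

```python
import numpy as np

def pz_roots(Omega, kz, n, nprime, qB, m):
    """Longitudinal momenta p_z^{(+-)} of Eq. (26) (assumes kz != 0)."""
    s = Omega**2 - kz**2
    En0, Enp0 = np.sqrt(m**2 + 2 * n * qB), np.sqrt(m**2 + 2 * nprime * qB)
    kminus, kplus = abs(En0 - Enp0), En0 + Enp0
    root = (Omega / abs(kz)) * np.sqrt((1 - kminus**2 / s) * (1 - kplus**2 / s))
    base = 1 + 2 * (n - nprime) * qB / s
    return 0.5 * kz * (base + root), 0.5 * kz * (base - root)

# annihilation kinematics: E_{n,p_z} + E_{n',p_z-k_z} = Omega
m, qB, n, nprime, kz, Omega = 1.0, 4.0, 2, 1, 3.0, 9.0
for pz in pz_roots(Omega, kz, n, nprime, qB, m):
    E1 = np.sqrt(pz**2 + m**2 + 2 * n * qB)
    E2 = np.sqrt((pz - kz)**2 + m**2 + 2 * nprime * qB)
    print(abs(E1 + E2 - Omega))        # vanishes up to rounding
```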
By using the analytical solutions in Eq. (26), we can also obtain the corresponding fermion Landau-level energies,
\[E_{n,p_{z}}\Big{|}_{p_{z}^{(\pm)}} = -\frac{\eta\Omega}{2}\left[1+\frac{2(n-n^{\prime})|qB|}{\Omega^{2}-k_{z}^{2}}\pm\frac{|k_{z}|}{\Omega}\sqrt{\left(1-\frac{k_{-}^{2}}{\Omega^{2}-k_{z}^{2}}\right)\left(1-\frac{k_{+}^{2}}{\Omega^{2}-k_{z}^{2}}\right)}\right], \tag{27}\] \[E_{n^{\prime},p_{z}-k_{z}}\Big{|}_{p_{z}^{(\pm)}} = \frac{\lambda\eta\Omega}{2}\left[1-\frac{2(n-n^{\prime})|qB|}{\Omega^{2}-k_{z}^{2}}\mp\frac{|k_{z}|}{\Omega}\sqrt{\left(1-\frac{k_{-}^{2}}{\Omega^{2}-k_{z}^{2}}\right)\left(1-\frac{k_{+}^{2}}{\Omega^{2}-k_{z}^{2}}\right)}\right]. \tag{28}\]
Having explicit analytical solutions for the longitudinal momentum, now we can rewrite the Dirac \(\delta\)-function in Eq. (20) as follows:
\[\delta\left(E_{n,p_{z}}-\lambda E_{n^{\prime},p_{z}-k_{z}}+\eta\Omega\right)= \sum_{s=\pm}\frac{2E_{n,p_{z}}E_{n^{\prime},p_{z}-k_{z}}}{\sqrt{\left(\Omega^{ 2}-k_{z}^{2}-k_{-}^{2}\right)\left(\Omega^{2}-k_{z}^{2}-k_{+}^{2}\right)}} \delta\left(p_{z}-p_{z}^{(s)}\right). \tag{29}\]
Finally, by integrating over \(p_{z}\) in Eq. (20), we derive the expression for the imaginary part of the scalar boson self-energy in the form of a convergent series over Landau levels:
\[\mathrm{Im}\left[\Sigma^{R}(\Omega,\mathbf{k})\right] = \frac{g^{2}}{2\pi\ell^{2}}\sum_{n>n^{\prime}}^{\infty}\frac{ \theta\left(\Omega^{2}-k_{z}^{2}-k_{+}^{2}\right)-\theta\left(k_{-}^{2}+k_{z} ^{2}-\Omega^{2}\right)}{\sqrt{\left(\Omega^{2}-k_{z}^{2}-k_{-}^{2}\right) \left(\Omega^{2}-k_{z}^{2}-k_{+}^{2}\right)}}h\left(n,n^{\prime}\right) \tag{30}\] \[\times\bigg{[}\left(\left(n+n^{\prime}\right)|qB|-\frac{1}{2} \left(\Omega^{2}-k_{z}^{2}\right)+2m^{2}\right)\left(\mathcal{I}_{0}^{n,n^{ \prime}}(\xi)+\mathcal{I}_{0}^{n-1,n^{\prime}-1}(\xi)\right)-\frac{2}{\ell^{2} }\mathcal{I}_{2}^{n-1,n^{\prime}-1}(\xi)\bigg{]}\] \[+ \frac{g^{2}}{4\pi\ell^{2}}\sum_{n=0}^{\infty}\frac{\theta\left( \Omega^{2}-k_{z}^{2}-4m^{2}-8n|qB|\right)}{\sqrt{\left(\Omega^{2}-k_{z}^{2} \right)\left(\Omega^{2}-k_{z}^{2}-4m^{2}-8n|qB|\right)}}h_{0}\left(n\right)\] \[\times\bigg{[}\left(2n|qB|-\frac{1}{2}\left(\Omega^{2}-k_{z}^{2} \right)+2m^{2}\right)\left(\mathcal{I}_{0}^{n,n}(\xi)+\mathcal{I}_{0}^{n-1,n- 1}(\xi)\right)-\frac{2}{\ell^{2}}\mathcal{I}_{2}^{n-1,n-1}(\xi)\bigg{]}.\]
Here we introduced the following functions made of the Fermi-Dirac distributions:
\[h\left(n,n^{\prime}\right) \equiv 2-\sum_{s_{1},s_{2}=\pm}n_{F}\left(\frac{\Omega}{2}+s_{1}\frac{ \Omega\left(n-n^{\prime}\right)|qB|}{\Omega^{2}-k_{z}^{2}}+s_{2}\frac{|k_{z}|}{ 2\left(\Omega^{2}-k_{z}^{2}\right)}\sqrt{\left(\Omega^{2}-k_{z}^{2}-k_{-}^{2} \right)\left(\Omega^{2}-k_{z}^{2}-k_{+}^{2}\right)}\right), \tag{31}\] \[h_{0}\left(n\right) \equiv h(n,n)=2-2\sum_{s_{2}=\pm}n_{F}\left(\frac{\Omega}{2}+s_{2} \frac{|k_{z}|}{2}\sqrt{1-\frac{4\left(m^{2}+2n|qB|\right)}{\Omega^{2}-k_{z}^{2} }}\right). \tag{32}\]
Notice that the second term in Eq. (30) is the contribution due to annihilation processes with \(n=n^{\prime}\).
The expression for the imaginary part of self-energy (30) is one of the main analytical results of this study. By substituting it into the definition in Eq. (2), we can calculate the differential emission rate of neutral bosons from a magnetized plasma. The corresponding numerical results will be presented and analyzed in the next section.
Note that the general structure of the expression in Eq. (30) resembles the photon polarization tensor obtained in Ref. [38]. However, there are some profound differences. Unlike spin-one photons, the bosons are spin-zero particles in the model at hand. As we discuss later in detail, the spinless nature of bosons strongly affects the angular dependence of the self-energy and, in turn, the corresponding angular distribution of boson emission. For example, the differential rate due to particle-splitting processes will be suppressed in the direction parallel to the magnetic field. In the case of photons, such emission was not only allowed but played a dominant role at small energies.
Before concluding this subsection, it is instructive to consider a simplified kinematic regime with \(\mathbf{k}_{\perp}=0\) (i.e., for \(\theta=0\) or \(\theta=\pi\)). It is the only case that was analyzed previously in the literature, see Ref. [32]. It corresponds to scalar boson emission in the direction of the magnetic field. Substituting \(\mathbf{k}_{\perp}=0\), the general result for self-energy in Eq. (30) reduces down to
\[\mathrm{Im}\left[\Sigma^{R}(\Omega,\mathbf{0},k_{z})\right]=-\frac{g^{2}}{8 \pi\ell^{2}}\frac{\left(\Omega^{2}-k_{z}^{2}-4m^{2}\right)}{\sqrt{\Omega^{2} -k_{z}^{2}}}\sum_{n=0}^{\infty}\alpha_{n}\frac{\theta\left(\Omega^{2}-k_{z}^{2 }-4m^{2}-8n|qB|\right)}{\sqrt{\Omega^{2}-k_{z}^{2}-4m^{2}-8n|qB|}}h_{0}\left(n \right), \tag{33}\]
where we introduced \(\alpha_{n}=2-\delta_{n,0}\) and took into account that \(\lim_{\xi\to 0}\left[\mathcal{I}_{0}^{n,n^{\prime}}(\xi)\right]=\delta_{n,n^{ \prime}}\) and \(\lim_{\xi\to 0}\left[\mathcal{I}_{2}^{n,n^{\prime}}(\xi)\right]=2(n+1)\delta_{n, n^{\prime}}\)[38]. Compared to the general result in Eq. (30), this expression for the self-energy is much simpler. More importantly, from a physics viewpoint, the kinematics of allowed processes is very restrictive at \(\mathbf{k}_{\perp}=0\). In particular, no one-to-two particle-splitting processes contribute in this case at all. Only the particle-antiparticle annihilation processes do (and only if \(M>2m\)). Since the same does not hold at any nonzero \(\mathbf{k}_{\perp}\), such a simplified regime is an exceptional outlier. Furthermore, as we will see in the next section, the particle-splitting processes contribute substantially to the total emission rate in some kinematic regimes.
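The simplified expression (33) is also convenient for a direct numerical check of the Landau-level sum; a sketch (illustrative code in units of the fermion mass, with the level sum truncated where the step function cuts it off) reads:

```python
import numpy as np

def n_fermi(E, T):
    return 1.0 / (np.exp(E / T) + 1.0)

def im_sigma_parallel(Omega, kz, qB, m, T, g=1.0):
    """Im Sigma^R(Omega, k_perp = 0, k_z) of Eq. (33): annihilation channel only."""
    s = Omega**2 - kz**2
    if s <= 4 * m**2:
        return 0.0
    n_max = int(s / (8 * qB)) + 1      # higher levels are removed by the theta function
    total = 0.0
    for n in range(n_max + 1):
        arg = s - 4 * m**2 - 8 * n * qB
        if arg <= 0:
            continue
        alpha_n = 1.0 if n == 0 else 2.0
        u = 0.5 * abs(kz) * np.sqrt(1 - 4 * (m**2 + 2 * n * qB) / s)
        h0 = 2.0 - 2.0 * (n_fermi(Omega / 2 + u, T) + n_fermi(Omega / 2 - u, T))
        total += alpha_n * h0 / np.sqrt(arg)
    return -g**2 * qB / (8 * np.pi) * (s - 4 * m**2) / np.sqrt(s) * total

# example: on-shell boson with M = 3m emitted along the field (arbitrary test values)
m, qB, T, M, kz = 1.0, 4.0, 5.0, 3.0, 2.0
print(im_sigma_parallel(np.sqrt(M**2 + kz**2), kz, qB, m, T))
```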
### Zero magnetic field limit
Here we verify that the result for the self-energy in Eq. (30) is consistent with the known zero-field limit. For our purposes, it is sufficient to consider only the case with \(\mathbf{k}_{\perp}=0\).
To consider the limit of vanishing magnetic field in Eq. (33), we introduce a continuous variable \(v=2n|qB|\) and replace the sum over \(n\) with an integral over \(v\). Then, we have
\[\mathrm{Im}\left[\Sigma^{R}(\Omega,\mathbf{k})\right]=-\frac{g^{2}}{8\pi} \frac{\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)}{\sqrt{\Omega^{2}-| \mathbf{k}|^{2}}}\theta\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)\int_{0 }^{v_{0}}\frac{dv}{\sqrt{v_{0}-v}}\left[1-\sum_{s_{2}=\pm}n_{F}\left(\frac{ \Omega}{2}+s_{2}\frac{|\mathbf{k}|\sqrt{v_{0}-v}}{\sqrt{\Omega^{2}-|\mathbf{k }|^{2}}}\right)\right], \tag{34}\]
where the upper limit of integration is \(v_{0}=\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)/4\). In the last expression, we also replaced \(|k_{z}|\) with \(|\mathbf{k}|\) in view of the Lorentz symmetry, which is restored in the absence of a magnetic field.
After introducing the new integration variable \(u=|\mathbf{k}|\sqrt{v_{0}-v}/\sqrt{\Omega^{2}-|\mathbf{k}|^{2}}\), we obtain
\[\mathrm{Im}\left[\Sigma^{R}(\Omega,\mathbf{k})\right]=-\frac{g^{2}}{4\pi}\frac{\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)}{|\mathbf{k}|}\theta\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)\int_{0}^{u_{0}}du\left[1-\sum_{s_{2}=\pm}n_{F}\left(\frac{\Omega}{2}+s_{2}u\right)\right], \tag{35}\]
where
\[u_{0}=\frac{|\mathbf{k}|\sqrt{\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}}}{2\sqrt{ \Omega^{2}-|\mathbf{k}|^{2}}}. \tag{36}\]
Finally, after integrating over \(u\), we derive
\[\mathrm{Im}\left[\Sigma^{R}(\Omega,\mathbf{k})\right]=-\frac{g^{2}}{8\pi}\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)\left[\frac{\sqrt{\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}}}{\sqrt{\Omega^{2}-|\mathbf{k}|^{2}}}+\frac{2T}{|\mathbf{k}|}\ln\frac{1+e^{-E_{+}/T}}{1+e^{-E_{-}/T}}\right]\theta\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right). \tag{37}\]
Note that \(E_{\pm}\equiv\Omega/2\pm u_{0}\) coincide with the definitions in Eq. (101) in Appendix A. The final result for the imaginary part of self-energy in Eq. (37) also agrees with the \(B=0\) expression given in Eq. (112).
When the scalar bosons are on the mass shell, i.e., \(\Omega^{2}=M^{2}+|{\bf k}|^{2}\), one has
\[\mathrm{Im}\left[\Sigma^{R}({\bf k})\right]\Big{|}_{\Omega^{2}=M^{2}+|{\bf k}|^ {2}}=-\frac{g^{2}}{8\pi}\left(M^{2}-4m^{2}\right)\left[\frac{\sqrt{M^{2}-4m^{2 }}}{\sqrt{M^{2}}}+\frac{2T}{|{\bf k}|}\ln\frac{1+e^{-E_{+}/T}}{1+e^{-E_{-}/T}} \right]\theta\left(M^{2}-4m^{2}\right). \tag{38}\]
As we see, this expression is nonvanishing only when \(M^{2}\geq 4m^{2}\). From a physics viewpoint, it indicates that the annihilation processes are the only ones contributing. It is not surprising since one-to-two particle-splitting processes (\(\psi\rightarrow\psi+\phi\) and \(\bar{\psi}\rightarrow\bar{\psi}+\phi\)) are forbidden without a background magnetic field. The latter is evident when considering the process in the rest frame of the boson. (Curiously, such one-to-two processes may be allowed when the masses of the initial and final fermions are different [56].) In the case of a nonzero magnetic field, in contrast, particle-splitting processes are allowed because the momentum conservation constraint in the plane perpendicular to the field is relaxed.
## IV Numerical results
Here, we use the imaginary part of the self-energy derived in the previous section to analyze the differential emission rate of neutral bosons from a magnetized plasma. Because of the elaborate expression in Eq. (30) and the complications due to the sum over Landau levels, the angular dependence of the rate in Eq. (2) is hard to comprehend analytically. Therefore, we study it with the help of numerical methods.
In the model at hand, two qualitatively different regimes exist. They are determined by the value of the scalar boson mass \(M\), which can be either greater or less than the fermion-antifermion threshold \(2m\). In the subthreshold regime (\(M<2m\)), no scalar boson production can occur without a background magnetic field at the leading order in coupling. The situation changes when \(B\neq 0\). The annihilation becomes possible when the scalar boson energy exceeds the threshold of \(2m\). More interestingly, the boson production via particle-splitting processes is allowed in the whole range of energies \(\Omega>M\).
Below, we will study both regimes by considering the following two representative cases: \(M=3m\) (suprathreshold) and \(M=m/3\) (subthreshold). In each case, we will study the angular dependence of the rate in detail for several representative values of the magnetic field and temperature. As we will see, the behavior of the differential rates will be very different, especially at small values of the polar angle \(\theta\).
To reduce the number of free parameters and simplify the analysis, we will express all dimensionful quantities in units of the fermion mass \(m\). We will consider two different values of the magnetic field, i.e., \(|qB|=(2m)^{2}\) (moderate field) and \(|qB|=(5m)^{2}\) (strong field), and two different temperatures, i.e., \(T=5m\) and \(T=15m\), that correspond to moderately relativistic and ultrarelativistic plasmas, respectively. Without loss of generality, we will use the Yukawa coupling \(g=1\) in numerical calculations below.
When calculating numerically the imaginary part of the self-energy (30), one needs to sum over the Landau-level indices \(n\) and \(n^{\prime}\). Since the corresponding double series is convergent, one may truncate the summation at a sufficiently large finite \(n_{\mathrm{max}}\). Its value will be determined by the largest energy scale in the problem, which will be set by either the temperature or the scalar boson energy \(\Omega\). The latter will be varied in a range from \(\Omega=M\) up to about \(\Omega\simeq 35m\) (for \(|qB|=4m^{2}\)) and \(\Omega\simeq 90m\) (for \(|qB|=25m^{2}\)). Thus, from general considerations, one should include at least a sufficient number of Landau levels to open the phase space for the processes with the largest energies. This leads to the following bound from below:
\[n_{\mathrm{max}}\gtrsim\left[\mathrm{max}\left\{\frac{T^{2}}{2|qB|},\frac{ \Omega^{2}}{2|qB|}\right\}\right], \tag{39}\]
where the square brackets represent the integer part.
### Moderate magnetic field, \(|{\bf qB}|=4{\bf m^{2}}\)
Let us start the study of the differential rate as a function of the angular coordinate \(\theta\) in the case of a moderately strong magnetic field \(|qB|=(2m)^{2}\). To achieve a high angular resolution, we will use the discretization steps of \(\Delta\theta=\pi/(2n_{\theta})\) with \(n_{\theta}=10^{3}\). The direction along the magnetic field corresponds to \(\theta=0\), and the perpendicular direction is \(\theta=\pi/2\). There is no need to consider \(\theta>\pi/2\), as the corresponding rates can be obtained using the symmetry with respect to mirror reflection in the \(xy\)-plane. Indeed, such a symmetry remains unbroken in the presence of a constant background magnetic field.
Representative numerical results for the differential rates are shown in Fig. 3 for two fixed temperatures, i.e., \(T=5m\) (left panels) and \(T=15m\) (right panels), as well as two choices of the scalar boson mass, i.e., \(M=3m\) (top panels)
and \(M=m/3\) (bottom panels). Different lines correspond to different energies of neutral scalar bosons. They satisfy the mass-shell condition: \(\Omega=\sqrt{M^{2}+k_{\perp}^{2}+k_{z}^{2}}\), where \(k_{\perp}=k\sin\theta\) and \(k_{z}=k\cos\theta\).
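A minimal sketch of the angular grid and of the on-shell kinematics entering such scans is given below. The grid follows the \(\Delta\theta=\pi/(2n_{\theta})\) convention quoted above; the helper function name is ours.

```python
import numpy as np

n_theta = 10**3                                     # angular resolution used in the text
theta = np.linspace(0.0, np.pi / 2, n_theta + 1)    # theta = 0 along B, pi/2 perpendicular;
                                                    # theta > pi/2 follows from mirror symmetry

def on_shell_momentum(Omega, M, theta):
    """Momentum components of an on-shell boson of energy Omega and mass M:
    Omega^2 = M^2 + k_perp^2 + k_z^2, with k_perp = k sin(theta), k_z = k cos(theta)."""
    k = np.sqrt(Omega**2 - M**2)
    return k * np.sin(theta), k * np.cos(theta)

# example: kinematics for Omega = 6m and M = 3m (in units of the fermion mass, m = 1)
k_perp, k_z = on_shell_momentum(Omega=6.0, M=3.0, theta=theta)
```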
By comparing the results for the two different temperatures in the left and right panels of Fig. 3, we see that the rates tend to grow with temperature, as expected. In the case of \(M=3m\), the growth is relatively weak at first, when the energy exceeds the threshold \(\Omega\gtrsim M\) only slightly. It becomes more pronounced at higher values of energy. From a different perspective, the average rates decrease with increasing scalar boson energy. However, one sees a substantial suppression only after the energy exceeds the plasma's temperature. To a large degree, such behavior is dominated by the annihilation processes.
It is worth noting that the contribution of the lowest Landau level to the total rate remains relatively modest across the whole range of scalar boson energies. It plays a significant role only in the suprathreshold case (\(M=3m\)) at small temperatures, when the scalar boson's energy is only slightly higher than its minimum value \(\Omega_{\rm min}=M\). This observation underscores the limitations of the so-called lowest Landau level approximation, which is often employed to obtain simple estimates in the strong field regime. As we see, in relativistic plasmas, relying on such an approximation would yield unreliable results.
The growth of rates with increasing temperature is more pronounced in the case of a subthreshold scalar boson mass, i.e., \(M=m/3\), as seen from the two bottom panels of Fig. 3. The qualitative behavior is also different, especially at small values of the polar angle \(\theta\). To understand this subthreshold regime, it is important to remember that the scalar production is possible only because of a nonzero magnetic field. Since \(M<2m\), neither annihilation nor (anti)particle-splitting processes can occur at \(\theta=0\), see Eq. (33) and related discussion. This is in stark contrast to the suprathreshold case in the two top panels of Fig. 3.
For both temperatures and both values of the scalar mass, the differential rates tend to grow on average as a function of \(\theta\). It implies that the scalar bosons are emitted predominantly in the directions perpendicular to the magnetic field. We can easily visualize the corresponding emission profiles using the polar plots in Fig. 4. According to our convention, the magnetic field points upwards. The six individual panels show the polar plots for emission rates of bosons with several fixed energies and the two mass choices: \(M=3m\) (top panels) and \(M=m/3\) (bottom panels). The red lines represent the partial contributions of the annihilation rates, the green lines represent the particle-splitting rates, and the blue lines give the total rates. We show the results only for one temperature, \(T=15m\). The results for another
Figure 3: Neutral scalar boson differential production rates for several different energies and two fixed temperatures: \(T=5m\) (left panels) and \(T=15m\) (right panels). The magnetic field is \(|qB|=4m^{2}\) and the scalar boson masses are \(M=3m\) (top panels) and \(M=m/3\) (bottom panels).
temperature (\(T=5m\)) are qualitatively similar but have different magnitudes and contain a slightly different admixture of annihilation and particle-splitting processes. Their relative contributions will become clear when we discuss the total rates below.
As seen from Fig. 4, both annihilation (red lines) and particle-splitting (green lines) processes tend to provide higher rates of scalar boson production in the directions perpendicular to the magnetic field. While having similar butterfly-shaped profiles, the relative magnitudes of the two types of contributions vary with model parameters. In the suprathreshold case \(M=3m\), annihilation dominates at almost all energies. In the subthreshold case \(M=m/3\), however, the particle-splitting processes contribute more at small energies, but annihilation overtakes them at large energies. It is interesting to draw attention to the special case of \(\Omega=1.5m\) when the boson mass is \(M=m/3\), which falls into the subthreshold regime with \(M<\Omega<2m\). In this case, particle-splitting processes are the only ones contributing to the total rate. It is the reason why the corresponding (bottom left) panel in Fig. 4 has only blue lines visible. (Technically, the green lines, with the exact same profile, hide behind the blue ones.)
Let us now turn to the total rate \(dR/d\Omega\) integrated over all angular directions, as defined in Eq. (4). It describes the production rate (per unit time and unit volume) of scalar bosons with energies between \(\Omega\) and \(\Omega+d\Omega\). Unlike the differential rate, its expression contains an extra power of momentum, which accounts for the available phase space. Clearly, such a phase space collapses when \(\Omega\) approaches \(M\) from above. Then, the rate \(dR/d\Omega\) should also vanish when \(\Omega\to M\). We will see below that it is indeed the case. The extra power of momentum in the definition also explains why \(dR/d\Omega\) does not start decreasing with energy until \(\Omega\) becomes several times the plasma's temperature.
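Schematically, once the differential rate has been tabulated on a \(\theta\) grid, the integrated rate and the ellipticity follow from simple quadratures. The sketch below is only illustrative: the \(\sin\theta\) measure, the overall momentum and solid-angle factors, and the \(v_{2}=-\langle\cos 2\theta\rangle\) convention (positive for emission perpendicular to \(\mathbf{B}\)) are our assumptions standing in for the precise definitions of Eqs. (2), (4), and (5), which are not reproduced here.

```python
import numpy as np

def integrated_rate_and_v2(theta, rate, Omega, M):
    """Schematic angular integration of a tabulated differential rate.
    Assumptions (not the paper's exact conventions): sin(theta) measure, an overall
    factor 2*pi*k from the azimuth and the phase-space momentum k = sqrt(Omega^2 - M^2),
    a factor 2 for the mirrored hemisphere, and v2 = -<cos(2*theta)> weighted by the rate."""
    k = np.sqrt(Omega**2 - M**2)
    w = np.sin(theta) * rate
    norm = np.trapz(w, theta)
    dR_dOmega = 2.0 * np.pi * k * 2.0 * norm
    v2 = -np.trapz(np.cos(2.0 * theta) * w, theta) / norm
    return dR_dOmega, v2

# toy butterfly-shaped profile peaked at theta = pi/2 (illustration only, not the actual rate)
theta = np.linspace(0.0, np.pi / 2, 1001)
toy_rate = np.sin(theta)**2 + 0.05
print(integrated_rate_and_v2(theta, toy_rate, Omega=6.0, M=3.0))
```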
For the case of the moderately strong field \(|qB|=4m^{2}\), the corresponding rates as functions of the energy are summarized in the two left panels of Fig. 5. The other two panels on the right show the ellipticity measure \(v_{2}\) for the scalar boson emission, formally defined by Eq. (5). In all the panels, the color coding represents temperature, with the blue for \(T=5m\) and the red for \(T=15m\). In addition to the total rates (filled circles) shown in the panels on the left, we also display the separate partial contributions due to annihilation (open diamonds) and particle-splitting
Figure 4: The angular profile of the scalar boson production rates for several different energies and fixed temperature \(T=15m\). The magnetic field is \(|qB|=4m^{2}\) and the scalar boson masses are \(M=3m\) (top panels) and \(M=m/3\) (bottom panels). Each panel contains separate contributions due to annihilation (red lines) and particle-splitting (green lines) processes, as well as the total rates (blue lines).
(open squares) processes. To guide the eye, we connected the points with different lines: solid (total rate), dotted (annihilation part), and dashed (particle-splitting part), respectively. For comparison, the dash-dotted lines represent the rates in the limit of vanishing magnetic field. As we argued before, such a limit is meaningful only for \(M=3m\) (suprathreshold case). For the subthreshold scalar mass \(M=m/3\), the rates vanish without a magnetic field.
The rates for all model parameters represented in Fig. 5 share many similar features. Overall, they have a tendency to grow with increasing temperature. This is easy to understand since the number densities of both occupied positive-energy states and unoccupied negative-energy states increase with temperature. The availability of (anti)particles in such states, in turn, opens the phase space for all relevant processes producing scalar bosons. On the other hand, as a function of energy, the rates grow at first, reach a maximum value around \(\Omega\sim 1.7T\), and then decrease. After passing the peak value, the behavior at high energies gradually approaches an exponential asymptote, i.e., \(dR/d\Omega\sim\exp{(-\Omega/T)}\).
By comparing the partial contributions of different types of processes in the two left panels of Fig. 5, we see that it is the annihilations rather than the particle splittings that dominate at sufficiently large energies. The interplay of the two is more subtle at low energies, where the relative contributions depend on the scalar boson mass. For the suprathreshold mass, \(M=3m\), the annihilation is more likely to dominate the total rate for (almost) all energies. For the subthreshold mass, \(M=m/3\), on the other hand, the particle-splitting processes give larger contributions in a range of small energies, \(\Omega\lesssim 1.7T\). Still, even for \(M=m/3\), the annihilation eventually takes over at higher energies.
Now let us turn to the results for the ellipticity parameter \(v_{2}\), shown in the two right panels of Fig. 5. In general, as we see, the values of \(v_{2}\) are positive and relatively large. At high energies, typical values of \(v_{2}\) are of the order of 0.2 to 0.3. The values tend to go down with increasing temperature, though. There are some qualitative differences between the cases of \(M=3m\) (suprathreshold) and \(M=m/3\) (subthreshold), especially in the range of small energies, i.e., \(\Omega\lesssim 1.7T\). In particular, for \(M=3m\), the ellipticity parameter \(v_{2}\) shows a wide range of variations at small energies. It can even take negative values. These variations come from a finite number of quantum transitions between Landau levels that produce large threshold spikes in some directions and thus dramatically affect \(v_{2}\). In contrast, for \(M=m/3\), the ellipticity parameter tends to grow by a factor of two or more with decreasing the energy from \(\Omega=2m\) down to \(\Omega=m/3\). Recall that, in this energy range, many particle-splitting processes contribute. They
Figure 5: The total rates and ellipticity of scalar boson emission from a magnetized plasma at two different temperatures: \(T=5m\) (blue lines) and \(T=15m\) (red lines). The magnetic field is \(|qB|=4m^{2}\) and the scalar boson masses are \(M=3m\) (top panels) and \(M=m/3\) (bottom panels).
do not allow scalar boson production in the direction \(\theta=0\) and, thus, tend to give large \(v_{2}\).
### Strong magnetic field, \(|\mathbf{qB}|=\mathbf{25m^{2}}\)
Now let us consider the case of a strong field, i.e., \(|qB|=(5m)^{2}\). As in the previous subsection, we will start from the representative numerical results for the differential rates as functions of the angular coordinate \(\theta\). The rates for several fixed values of the scalar boson energy are displayed in Fig. 6. It includes four panels with the results for two different temperatures, \(T=5m\) (left panels) and \(T=15m\) (right panels), and two different scalar boson masses, \(M=3m\) (top panels) and \(M=m/3\) (bottom panels).
The strong field results in Fig. 6 are qualitatively similar to those in the weaker field, presented earlier in Fig. 3. As before, the rates generally grow with temperature. Also, their dependence on the angular coordinate \(\theta\) is similar too: (i) on average, the rates tend to increase with \(\theta\) and (ii) the functional dependence around \(\theta=0\) changes in the same way when one goes from the suprathreshold (\(M=3m\)) to the subthreshold (\(M=m/3\)) scalar boson mass. By comparing the results in Figs. 3 and 6, we also find that the rates are considerably higher in the case of stronger field.
The emission profiles and relative contributions of the annihilation and particle-splitting processes in the case of the strong field, \(|qB|=25m^{2}\), remain about the same as in the weaker field, \(|qB|=4m^{2}\). Several representative profiles with characteristic butterfly shapes are displayed in six polar plots in Fig. 7. For the scalar mass \(M=3m\), the angular distribution of emission is particularly simple at small energies. One such case, for \(\Omega=6m\), is displayed in the top left panel of Fig. 7. At such low energy, the only allowed annihilation processes are those between the lowest Landau levels. As a result, the corresponding rate visualized by the red line has a very smooth profile. Interestingly, it is one of those special cases when annihilation has a slightly higher probability of producing a scalar boson in the direction parallel to the magnetic field. Nevertheless, the particle-splitting processes overcompensate due to their much higher probability of producing scalar bosons in the directions perpendicular to the magnetic field.
There are no surprises in the case of the subthreshold boson mass, \(M=m/3\). When \(M<\Omega<2m\), again only the particle-splitting processes contribute. It explains why only the blue-line profile is shown in the bottom left panel of Fig. 7, which corresponds to \(\Omega=1.5m\). With increasing the energy, the role of annihilation processes grows and they
Figure 6: Neutral scalar boson differential production rates for several different energies and two fixed temperatures: \(T=5m\) (left panels) and \(T=15m\) (right panels). The magnetic field is \(|qB|=25m^{2}\) and the scalar boson masses are \(M=3m\) (top panels) and \(M=m/3\) (bottom panels).
eventually dominate the total rate even for the subthreshold values of the boson mass. In fact, the emission profiles and relative contributions of different processes become very similar at large energies irrespective of the boson mass.
For the case of \(|qB|=25m^{2}\), the total rates \(dR/d\Omega\) integrated over the angular directions are shown in the two left panels of Fig. 8. The two right panels contain the data for the ellipticity measure \(v_{2}\) of the scalar boson production. As before, the results for the lower temperature, \(T=5m\), are represented by the blue lines and those for the higher temperature, \(T=15m\), are represented by the red lines. Additionally, the filled circles are used as plot markers for the total rate, the open diamonds for annihilation contributions, and the open squares for particle-splitting processes. In the suprathreshold case, \(M=3m\), we show additionally the zero-field rates, represented by the dash-dotted lines. (Recall that no nontrivial zero-field limit exists in the subthreshold case with \(M=m/3\).)
The energy dependence of the total rates in Fig. 8 is very similar to the weaker-field case in Fig. 5. The rates grow with increasing temperature. The dependence on the scalar boson energy vaguely resembles black-body radiation: the rates grow from zero to their maximum value around the energy \(\Omega\sim 1.7T\) and then decrease, gradually approaching the exponential asymptote, \(dR/d\Omega\sim\exp{(-\Omega/T)}\).
The relative contributions of the annihilation and particle-splitting processes can be read off from Fig. 8 too. While particle-splittings dominate in a range of small energies, \(\Omega\lesssim 1.7T\), the annihilation overwhelms the total rate at high energies, \(\Omega\gtrsim 1.7T\). In the case of larger (smaller) scalar mass \(M=3m\) (\(M=m/3\)), the corresponding switch of the two regimes occurs at slightly lower (higher) energies. Such a correlation is not surprising since the relative role of annihilation processes is larger (smaller) in the suprathreshold (subthreshold) case.
The ellipticity measure \(v_{2}\) of the scalar boson production is again positive and relatively large. Its values are in the same ballpark of \(0.2\) to \(0.3\). As in the case of the weaker field, \(v_{2}\) gets slightly suppressed with increasing the temperature. The prominent differences between the cases of \(M=3m\) (suprathreshold) and \(M=m/3\) (subthreshold) appear only at small energies, i.e., \(\Omega\lesssim 1.7T\).
Figure 7: The angular profile of the scalar boson production rates for several different energies and fixed temperature \(T=5m\). The magnetic field is \(|qB|=25m^{2}\) and the scalar boson masses are \(M=3m\) (top panels) and \(M=m/3\) (bottom panels). Each panel contains separate contributions due to annihilation (red lines) and particle-splitting (green lines) processes, as well as the total rates (blue lines).
## V Conclusions
In this paper, we have derived an analytic expression for the imaginary (absorptive) part of the scalar boson's self-energy within a strongly magnetized relativistic plasma. The model we consider involves a neutral scalar field that interacts with charged fermions through a Yukawa-type coupling. We use the expression for the imaginary part of the self-energy to calculate the differential production rate of scalar bosons. In view of the principle of detailed balance, this same quantity also determines the absorption rate of scalar bosons in the magnetized plasma.
As evident from the explicit expression we have derived, the production rate is determined by three distinct types of processes: particle-splitting (\(\psi\rightarrow\psi+\phi\)), antiparticle-splitting (\(\bar{\psi}\rightarrow\bar{\psi}+\phi\)), and particle-antiparticle annihilation (\(\psi+\bar{\psi}\rightarrow\phi\)). All such processes have been thoroughly analyzed, with careful consideration given to the effects of Landau-level quantization of charged fermions. In the context of a high-temperature relativistic plasma (i.e., \(T\gtrsim\sqrt{|qB|}\)), our findings reveal that a large number of Landau levels contributes to the rate. In essence, this implies that one cannot rely on the commonly employed lowest Landau level approximation even when the magnetic field is very strong compared to the scale set by the fermion mass.
The energy dependence of the rates exhibits a resemblance to a black body spectrum, featuring a peak at an intermediate energy level comparable to the plasma's temperature. In our study of several representative cases, we have found that the peak typically occurs at approximately \(\Omega\simeq 1.7T\). Also, the rates grow with increasing temperature. The influence of thermal effects can be readily understood. As the temperature rises, the number of occupied positive-energy states and unoccupied negative-energy states grows. It leads to a larger phase space for all the processes contributing to the scalar boson production.
The rates also exhibit growth with an increasing magnetic field, but the underlying physics is more subtle. One key aspect is the substantial relaxation of momentum conservation constraints provided by the background field. The case in point is the production of bosons through (anti)particle-splitting processes, which are prohibited in the absence of a magnetic field. Additionally, the high degeneracy of Landau levels likely plays a role in enhancing scalar boson production. As in the case of magnetic catalysis [57], one may argue that such degeneracy increases the average density of quantum states near small energies. In the case of a hot plasma, this effect translates into an increased
Figure 8: The total rates and ellipticity of scalar boson emission from a magnetized plasma at two different temperatures: \(T=5m\) (blue lines) and \(T=15m\) (red lines). The magnetic field is \(|qB|=25m^{2}\) and the scalar boson masses are \(M=3m\) (top panels) and \(M=m/3\) (bottom panels).
phase space for annihilation processes. By comparing the results for two representative field strengths, \(\left|qB\right|=4m^{2}\) and \(\left|qB\right|=25m^{2}\), as well as for \(B=0\), we see that the presence of a magnetic field enhances the average rates.
We also studied in detail the dependence of the differential production rate on the angular coordinate and the scalar boson energy. The butterflylike emission profiles indicate a higher likelihood of boson production in directions perpendicular to the magnetic field. This preference for perpendicular emission is reflected in the ellipticity measure, denoted as \(v_{2}\), which typically assumes positive values in the range of \(0.2\) to \(0.3\) at high scalar boson energies. At small energies, on the other hand, the values of \(v_{2}\) exhibit greater variability due to energy quantization of the low-lying Landau-level states. In this regime, isolated energy thresholds can lead to abrupt changes in the \(v_{2}\) values, rendering this characteristic less informative and of limited utility.
As stated in the Introduction, we do not try to address phenomenological applications in this study. Nevertheless, we cannot help but note that our findings regarding the production (or decay) rate of scalar bosons may have important implications for cosmology. In particular, they suggest that the primordial magnetic field might exert an even stronger influence on the magnetic warm inflation scenario than previously reported in Refs. [53; 54]. Indeed, now we can fully substantiate the claim that the presence of the magnetic field significantly amplifies the total boson decay rate. Furthermore, the rate far exceeds the contribution from the lowest Landau level, which was employed as an estimate in Ref. [54].
###### Acknowledgements.
The visit of J. J.-U. to Arizona State University was supported by the Universidad Nacional Autonoma de Mexico through Facultad de Ciencias, CGEP-AANILD, and DGAPA-UNAM under Grant No. PAPIIT-IN108123. The work of I. A. S. was supported by the U.S. National Science Foundation under Grant No. PHY-2209470.
## Appendix A Zero magnetic field
In this appendix, for comparison purposes, we derive the imaginary part of the scalar boson self-energy in the limit of vanishing magnetic field. Similar results at nonzero temperature can be found in the literature, e.g., see Refs. [56; 58].
At the leading order, the scalar boson self-energy is given by
\[\Sigma(k)=ig^{2}\int\frac{d^{4}p}{(2\pi)^{4}}\mathrm{Tr}\left[S(p)S(p-k)\right], \tag{A1}\]
which is the momentum space representation of a definition analogous to Eq. (7). In the absence of a background field, the fermion propagator reads
\[S(p)=i\frac{p\!\!\!/+m}{p^{2}-m^{2}+i\epsilon}. \tag{A2}\]
After calculating the Dirac trace and replacing the energy integration with the Matsubara sum, we derive
\[\Sigma\left(i\Omega_{m},\mathbf{k}\right)=4g^{2}T\sum_{n=-\infty}^{\infty}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{i\omega_{n}\left(i\omega_{n}-i\Omega_{m}\right)-\mathbf{p}\cdot\left(\mathbf{p}-\mathbf{k}\right)+m^{2}}{\left[\left(i\omega_{n}\right)^{2}-E_{p}^{2}\right]\left[\left(i\omega_{n}-i\Omega_{m}\right)^{2}-E_{p-k}^{2}\right]}, \tag{A3}\]
where we have introduced the notation for the fermion energies \(E_{p}=\sqrt{\mathbf{p}^{2}+m^{2}}\) and \(E_{p-k}=\sqrt{(\mathbf{p}-\mathbf{k})^{2}+m^{2}}\).
The zero-field result above is analogous to Eq. (16) in the main text. Similarly, we use Eq. (18) to compute the Matsubara sum and arrive at the following result:
\[\Sigma^{R}\left(\Omega,\mathbf{k}\right)=g^{2}\sum_{\eta,\lambda=\pm 1}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{n_{F}\left(E_{p}\right)-n_{F}\left(\lambda E_{p-k}\right)}{\lambda E_{p}E_{p-k}\left(E_{p}-\lambda E_{p-k}+\eta\Omega+i\eta\epsilon\right)}\left[\lambda E_{p}E_{p-k}-\mathbf{p}\cdot\left(\mathbf{p}-\mathbf{k}\right)+m^{2}\right], \tag{A4}\]
where we performed the analytical continuation to Minkowski space by replacing \(i\Omega_{m}\longrightarrow\Omega+i\epsilon\). To separate the real and imaginary parts, we utilize the Sokhotski formula,
\[\frac{1}{E_{p}-\lambda E_{p-k}+\eta\Omega+i\eta\epsilon}=\mathcal{P}\frac{1}{E_{p}-\lambda E_{p-k}+\eta\Omega}-i\eta\pi\delta\left(E_{p}-\lambda E_{p-k}+\eta\Omega\right). \tag{A5}\]
Then, the imaginary part of the self-energy is given by
\[\mathrm{Im}\left[\Sigma^{R}\left(\Omega,\mathbf{k}\right)\right]=-g^{2}\pi\sum_{\eta,\lambda=\pm 1}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{n_{F}\left(E_{p}\right)-n_{F}\left(\lambda E_{p-k}\right)}{\eta\lambda E_{p}E_{p-k}}\left[\lambda E_{p}E_{p-k}-\mathbf{p}\cdot\left(\mathbf{p}-\mathbf{k}\right)+m^{2}\right]\delta\left(E_{p}-\lambda E_{p-k}+\eta\Omega\right). \tag{A6}\]
The remaining integration over the loop momenta can be performed by switching to spherical coordinates,
\[\mathrm{Im}\left[\Sigma^{R}\left(\Omega,\mathbf{k}\right)\right] = -g^{2}\pi\sum_{\eta,\lambda=\pm 1}\int_{0}^{\infty}\int_{-1}^{1}\int_{0}^{2\pi}\frac{\mathbf{p}^{2}\ dp\ dx\ d\varphi}{(2\pi)^{3}}\frac{n_{F}\left(E_{p}\right)-n_{F}\left(\lambda E_{p-k}\right)}{\eta\lambda E_{p}E_{p-k}} \tag{A7}\] \[\times\left[\lambda E_{p}E_{p-k}-\mathbf{p}^{2}+|\mathbf{p}||\mathbf{k}|x+m^{2}\right]\delta\left(E_{p}-\lambda E_{p-k}+\eta\Omega\right)\] \[= -g^{2}\pi\sum_{\eta,\lambda=\pm 1}\int_{0}^{\infty}\int_{-1}^{1}\frac{\mathbf{p}^{2}\ dp\ dx}{(2\pi)^{2}}\frac{n_{F}\left(E_{p}\right)-n_{F}\left(\lambda E_{p-k}\right)}{\eta\lambda E_{p}E_{p-k}}\] \[\times\left[\lambda E_{p}E_{p-k}-\mathbf{p}^{2}+|\mathbf{p}||\mathbf{k}|x+m^{2}\right]|E_{p}+\eta\Omega|\frac{\delta\left(x-x_{0}\right)}{|\mathbf{p}||\mathbf{k}|},\]
where we used the properties of the Dirac \(\delta\)-function and took into account the following solution to the energy-conservation equation:
\[x_{0}=-\frac{\Omega^{2}-|\mathbf{k}|^{2}+2\eta E_{p}\Omega}{2|\mathbf{p}||\mathbf{k}|}. \tag{A8}\]
Changing the integration variable from \(p\) to energy \(E_{p}\), we derive
\[\mathrm{Im}\left[\Sigma^{R}\left(\Omega,\mathbf{k}\right)\right]=-\frac{g^{2}\pi}{(2\pi)^{2}}\int_{E_{-}}^{E_{+}}dE_{p}\frac{n_{F}\left(E_{p}\right)-n_{F}\left(E_{p}-\Omega\right)}{|\mathbf{k}|}\left(2m^{2}-\frac{\Omega^{2}-|\mathbf{k}|^{2}}{2}\right)\Theta\left(\Omega-E_{p}\right)\Theta\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right), \tag{A9}\]
where the integration limits are defined by
\[E_{\pm}\equiv\frac{\Omega}{2}\pm\frac{|\mathbf{k}|}{2}\sqrt{1-\frac{4m^{2}}{\Omega^{2}-|\mathbf{k}|^{2}}}. \tag{A10}\]
These were obtained by requiring that \(-1<x_{0}<1\). After integrating over the energy, the final result reads
\[\mathrm{Im}\left[\Sigma^{R}\left(\Omega,\mathbf{k}\right)\right]=-\frac{g^{2}}{8\pi}\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right)\left[\sqrt{1-\frac{4m^{2}}{\Omega^{2}-|\mathbf{k}|^{2}}}+\frac{2T}{|\mathbf{k}|}\ln\left(\frac{1+e^{-\beta E_{+}}}{1+e^{-\beta E_{-}}}\right)\right]\Theta\left(\Omega^{2}-|\mathbf{k}|^{2}-4m^{2}\right). \tag{A11}\]
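For completeness, the closed-form zero-field result above is straightforward to evaluate numerically; a minimal sketch (with \(k=|\mathbf{k}|>0\) and \(\beta=1/T\)) is given below.

```python
import math

def im_sigma_R_B0(Omega, k, m, T, g=1.0):
    """Imaginary part of the retarded self-energy at B = 0 (closed-form expression above);
    it vanishes below the pair threshold, Omega^2 - k^2 < 4 m^2."""
    s = Omega**2 - k**2
    if s <= 4.0 * m**2:
        return 0.0
    root = math.sqrt(1.0 - 4.0 * m**2 / s)
    E_plus = 0.5 * (Omega + k * root)
    E_minus = 0.5 * (Omega - k * root)
    beta = 1.0 / T
    thermal = (2.0 * T / k) * math.log((1.0 + math.exp(-beta * E_plus)) /
                                       (1.0 + math.exp(-beta * E_minus)))
    return -(g**2 / (8.0 * math.pi)) * (s - 4.0 * m**2) * (root + thermal)

# example: M = 3m boson emitted with |k| = 4m at T = 5m (all in units of m)
Omega = math.sqrt(3.0**2 + 4.0**2)
print(im_sigma_R_B0(Omega, k=4.0, m=1.0, T=5.0))
```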
|
2309.12537 | Run-and-tumble oscillator: moment analysis of stationary distributions | When it comes to active particles, even an ideal-gas model in a harmonic
potential poses a mathematical challenge. An exception is a run-and-tumble
model (RTP) in one-dimension for which a stationary distribution is known
exactly. The case of two-dimensions is more complex but the solution is
possible. Incidentally, in both dimensions the stationary distributions
correspond to a beta function. In three-dimensions, a stationary distribution
is not known but simulations indicate that it does not have a beta function
form. The current work focuses on the three-dimensional RTP model in a harmonic
trap. The main result of this study is the derivation of the recurrence
relation for generating moments of a stationary distribution. These moments are
then used to recover a stationary distribution using the Fourier-Legendre
expansion. | Derek Frydel | 2023-09-21T23:29:00Z | http://arxiv.org/abs/2309.12537v1 | # Run-and-tumble oscillator: moment analysis of stationary distributions
###### Abstract
When it comes to active particles, even an ideal-gas model in a harmonic potential poses a mathematical challenge. An exception is a run-and-tumble model (RTP) in one-dimension for which a stationary distribution is known exactly. The case of two-dimensions is more complex but the solution is possible. Incidentally, in both dimensions the stationary distributions correspond to a beta function. In three-dimensions, a stationary distribution is not known but simulations indicate that it does not have a beta function form. The current work focuses on the three-dimensional RTP model in a harmonic trap. The main result of this study is the derivation of the recurrence relation for generating moments of a stationary distribution. These moments are then used to recover a stationary distribution using the Fourier-Legendre expansion.
## I Introduction
An ideal gas of active particles in a harmonic trap appears, at first glance, like a simple toy model with ready solutions and useful insights. The fact that such solutions are still lacking, or only now in the making, highlights that active matter, even at the most basic level, is a challenge, and that experimentation with alternative formulations is needed and justified.
In this work we focus on stationary marginal distributions, which we designate as \(p\), of run-and-tumble particles (RTP) in a harmonic trap. In one [1; 2; 3; 4] and two dimensions [5], those distributions have a beta functional form. In three dimensions, no exact expression for a distribution is available. In this work, instead of obtaining an expression for \(p\) directly, we calculate moments of that distribution. The moments are generated by a recurrence relation obtained by transforming the Fokker-Planck equation. A stationary distribution \(p\) is then recovered from those moments using the Fourier-Legendre series expansion.
The main analysis in this work is carried out for a system at zero temperature and for a harmonic potential in a single direction \(u=\frac{1}{2}Kx^{2}\) (embedded in a higher dimension). This makes the analysis simpler since the system is effectively one-dimensional. To extend the results to finite temperatures, we use the convolution construction [5; 6], which is equivalent to Gaussian smearing of a distribution at zero temperature. It also turns out that the moments of a stationary distribution for a potential \(u=\frac{1}{2}Kx^{2}\) can be related to the moments of a stationary distribution for an isotropic potential \(u=\frac{1}{2}Kr^{2}\). This permits us to extend our analysis in a straightforward way to isotropic potentials.
To place the current work in a larger context, we mention a number of previous contributions to active particles in a harmonic potential. An extension of the RTP model in 1D and zero temperature to three discrete swimming velocities was considered in [3]. The RTP model in two dimensions (2D) with four discrete swimming velocities was investigated in [7]. A stationary distribution of active Brownian particles in 2D and at finite temperature, represented as a series expansion, was considered in [8]. Dynamics of active Brownian particles (ABP) was recently investigated in [11]. A unifying approach to the theoretical treatment of the ABP and AOUP models in a harmonic trap was carefully investigated in [10]. Rare events in the context of active particles in a harmonic potential were considered in [12]. Active particles in harmonic chains were considered in [13; 14; 15]. Experimental realizations of active particles in a harmonic trap are found in [16], using acoustic traps, and in [17], using optical tweezers. The entropy production rate of active particles in a harmonic trap was considered in [6; 18; 19; 20; 21; 22].
While an exact analysis of the RTP and ABP models in a harmonic trap can be challenging, the active Ornstein-Uhlenbeck particle (AOUP) model is more tractable. The AOUP model has been developed to capture the behavior of a passive particle in a bath of active particles [23; 9]. Stationary distributions of this model in a harmonic trap have a Gaussian functional form, the same as that for passive Brownian particles, but with an effective temperature. Theoretical aspects of the AOUP model have been extensively investigated in [24; 25].
This paper is organized as follows. In Sec. (II) we consider RTP particles in a harmonic trap in 1D. We consider distributions in a position and a velocity space to identify the presence of "nearly immobile" particles. In Sec. (III) we consider the RTP particles in a harmonic trap \(u=\frac{1}{2}Kx^{2}\) embedded in 2D. In Sec. (IV) we consider RTP particles embedded in 3D. By transforming the Fokker-Planck equation, we obtain a recurrence relation for generating moments of the stationary distribution. From the moments we then recover distributions using the Fourier-Legendre expansion. In Sec. (V) we extend the previous results to finite temperatures then in Sec. (VI) to isotropic harmonic potentials. In Sec. (VII) we summarize the work and provide concluding remarks.
## II RTP particles in 1D
We start with the simplest case: RTP particles in a harmonic trap \(u=\frac{1}{2}Kx^{2}\) in 1D [1; 2; 3; 4; 5]. Apart from looking into stationary distributions in velocity space, Sec. (II.1), this section reviews previously derived results.
In one-dimension, swimming orientations are limited to two values, \(v_{swim}=\mp v_{0}\) and the Fokker-Planck formulation
yields two coupled differential equations [5]:
\[\dot{p}_{+} =\frac{\partial}{\partial x}\left[\left(\mu Kx-v_{0}\right)p_{+} \right]+\frac{1}{2\tau}\left(p_{-}-p_{+}\right)\] \[\dot{p}_{-} =\frac{\partial}{\partial x}\left[\left(\mu Kx+v_{0}\right)p_{-} \right]+\frac{1}{2\tau}\left(p_{+}-p_{-}\right), \tag{1}\]
where \(p_{+}\) and \(p_{-}\) are the distributions of particles with forward and backward direction, \(\tau\) is the persistence time (that determines the average time a particle persists in a given direction), and \(\mu\) is the mobility. No thermal fluctuations are taken into account.
In a stationary state, \(\dot{p}_{\pm}=0\), and in dimensionless units, the two equations become
\[0 =\frac{\partial}{\partial z}\left[\left(z-1\right)p_{+}\right]+ \frac{\alpha}{2}\left(p_{-}-p_{+}\right)\] \[0 =\frac{\partial}{\partial z}\left[\left(z+1\right)p_{-}\right]+ \frac{\alpha}{2}\left(p_{+}-p_{-}\right). \tag{2}\]
where
\[z=\frac{\mu Kx}{v_{0}},\]
is dimensionless distance and
\[\alpha=\frac{1}{\tau\mu K}, \tag{3}\]
is the dimensionless rate of an orientational change. Note that in one-dimension the new direction of motion is chosen at the rate \(\alpha/2\) (rather than \(\alpha\)). The reason for this is that at an instance that a particle changes its direction, there is \(1/2\) probability it will select the same orientation. This problem does not arise for higher dimensions and \(\alpha\) is the actual rate at which a particle changes its orientation. This should be kept in mind when we later compare the results for different dimensions.
The two coupled equations in Eq. (2) can be combined into a single differential equation for the total distribution \(p=p_{+}+p_{-}\),
\[0=(2-\alpha)zp-(1-z^{2})p^{\prime}, \tag{4}\]
which when solved yields [1; 2; 3; 4; 5]
\[p=A(1-z^{2})^{\frac{\alpha}{2}-1}, \tag{5}\]
and where the normalization factor, that assures \(\int_{-1}^{1}dz\,p(z)=1\), is given by
\[A=\frac{\Gamma\left(\frac{\alpha}{2}+\frac{1}{2}\right)}{\sqrt{\pi}\Gamma\left(\frac{\alpha}{2}\right)}. \tag{6}\]
Note that in the absence of thermal fluctuations \(p\) is defined on \([-1,1]\) as a result of a swimming velocity having a fixed magnitude, which restricts how far a particle can move away from a trap center.
The distribution \(p\) in Eq. (5) can be either concave, with a majority of particles accumulated at the trap borders as a result of slow orientational change, or convex, with a majority of particles concentrated around a trap center as a result of fast orientational change. The crossover between the two behaviors occurs at \(\alpha=2\), at which point \(p\) is uniform on the interval \([-1,1]\).
In addition to \(p\), it is possible to obtain distributions for a specific swimming direction:
\[p_{\pm}=\frac{A}{2}\left(1\pm z\right)\left(1-z^{2}\right)^{\frac{\alpha}{2}- 1}. \tag{7}\]
The expression above can be verified if inserted into Eq. (2).
### distribution in \(w\)-space
For slow rates of orientational change, that is, for \(\alpha<2\), the accumulation of particles near the trap border takes the form of a divergence at \(z=\pm 1\), see Eq. (5). That divergence can be linked to the presence of "nearly immobile" particles accumulated at the trap border.
The existence of "nearly immobile" particles can be verified from a velocity distribution, manifested as a divergence at \(v=0\). In the overdamped regime, the two contributions to a velocity are a swimming velocity plus a contribution of a linear force of a harmonic trap, \(v=-\mu Kx\pm v_{0}\), in the dimensionless units given by
\[w=-z\pm 1, \tag{8}\]
where \(w=v/v_{0}\) is the dimensionless velocity.
A distribution in \(w\)-space can be inferred from a positional distribution in Eq. (7) by applying the change of variables suggested by Eq. (8). For particles with forward orientation, the substitution \(z=-w+1\) into \(p_{+}(z)\) leads to
\[p_{w}^{+}=\frac{A}{2}w^{\frac{\alpha}{2}-1}(2-w)^{\frac{\alpha}{2}},\ \ \ \mbox{defined on}\ \ \ 0<w<2.\]
The reason why the distribution for the forward swimming velocity is defined on \([0,2]\) can be understood from Eq. (8) and the fact that \(z\) is defined on \([-1,1]\). Given that \(p_{w}^{-}(w)=p_{w}^{+}(-w)\), a complete distribution defined on \([-2,2]\) is
\[p_{w}=\frac{A}{2}|w|^{\frac{\alpha}{2}-1}(2-|w|)^{\frac{\alpha}{2}}. \tag{9}\]
The divergence at \(w=0\) signals the presence of "nearly immobile" particles. We characterize these particles as "nearly immobile" since \(\lim_{e\to 0}\int_{-e}^{e}dw\,p_{w}=0\), which implies that there are no particles with zero velocity. Rather, there are particles whose velocity slowly converges to zero without ever attaining it.
Comparing Eq. (9) with Eq. (5) confirms that divergences in both distributions appear and disappear at the same value of \(\alpha\). This coextensiveness implies that the "nearly immobile" particles are concentrated around the trap borders at \(z=\pm 1\). Only at \(z=\pm 1\) can a particle attain zero velocity, and since \(\lim_{e\to 0}\int_{1-e}^{1}dz\,p(z)=\lim_{e\to 0}\int_{-1}^{-1+e}dz\,p(z)=0\), no particle can reach this position, except for \(\alpha=0\).
In Fig. (1) we plot \(p\) and \(p_{w}\) for three values of \(\alpha\). At the crossover, \(\alpha=2\), both distributions reduce to simple shapes: \(p\) is flat and \(p_{w}\) is triangular. Then for \(\alpha<2\), both distributions develop divergences.
### moment analysis
In this section, we briefly analyze even moments \(\langle z^{2n}\rangle\) of a distribution \(p(z)\) in Eq. (5). (Odd moments are zero due to even symmetry of \(p\)). The moments can be calculated directly from \(p\) using \(\langle z^{2n}\rangle=\int_{-1}^{1}dz\,p(z)z^{2n}\):
\[\langle z^{2n}\rangle=\frac{\Gamma\left(n+\frac{1}{2}\right)\Gamma\left( \frac{1}{2}+\frac{\alpha}{2}\right)}{\sqrt{\pi}\,\Gamma\left(\frac{1}{2}+ \frac{\alpha}{2}+n\right)},\quad\text{for }n=1,2,\ldots \tag{10}\]
The moments are monotonically decreasing with increasing \(n\). The infinite sum \(\sum_{n=0}^{\infty}\langle z^{2n}\rangle\) can be evaluated exactly and is given by
\[\sum_{n=0}^{\infty}\langle z^{2n}\rangle=\begin{cases}\frac{\alpha-1}{\alpha- 2}&\text{if }\alpha>2\\ \infty&\text{if }\alpha\leq 2,\end{cases} \tag{11}\]
where at the crossover the sum is seen to diverge. By representing the infinite sum by its generating function, we can connect this behavior to the divergence in \(p\):
\[\sum_{n=0}^{\infty}\langle z^{2n}\rangle=\left\langle\frac{1}{1-z^{2}}\right\rangle.\]
As particles accumulate at the borders, \(z\rightarrow\pm 1\) and the above expression diverges. This explains the divergence of the sum in Eq. (11).
## III RTP oscillator in 2D
We next consider an RTP oscillator with 1D geometry, \(u=\frac{1}{2}Kx^{2}\), but embedded in 2D space. To set up the problem, we start with a Fokker-Planck equation for RTP particles in an arbitrary confinement:
\[\dot{\rho}=-\nabla\cdot\left[\left(\mu\mathbf{F}+v_{0}\mathbf{n}\right)\rho\right]+\hat{L}\rho, \tag{12}\]
where \(\mathbf{n}\) is the unit vector designating an orientation of the swimming velocity \(v_{swim}=v_{0}\mathbf{n}\), in 2D defined as \(\mathbf{n}=(\cos\theta,\sin\theta)\), where \(\theta\) is the angle of the orientation. The evolution of \(\mathbf{n}\) is governed by the operator \(\hat{L}\) given by
\[\hat{L}\rho=\frac{1}{\tau}\left[-\rho+\frac{1}{2\pi}\int_{0}^{2\pi}d\theta\, \rho(x,\theta)\right].\]
The two terms imply that particles with a given orientation \(\theta\) vanish with the rate \(\tau^{-1}\) and reappear with the same rate at another location that depends on the marginal distribution
\[p=\int_{0}^{2\pi}d\theta\,\rho(x,\theta).\]
Note that \(\int_{0}^{2\pi}d\theta\,\hat{L}\rho=0\). This condition is necessary if the total number of particles is to be conserved.
For \(u=\frac{1}{2}Kx^{2}\), the external force is \(\mathbf{F}=-Kx\mathbf{e}_{x}\). In a steady-state, only the component of \(\mathbf{v}_{swim}\) in the \(x\) direction is relevant, \(\mathbf{v}_{swim}\cdot\mathbf{e}_{x}=v_{0}\cos\theta\). This results in an effectively one-dimensional system governed by the following stationary Fokker-Planck equation:
\[0=\frac{\partial}{\partial z}\left[\left(z-\cos\theta\right)\rho\right]- \alpha\rho+\frac{\alpha}{2\pi}p, \tag{13}\]
given in dimensionless units.
The above Fokker-Planck equation can be interpreted as representing an RTP model in 1D with a continuous distribution of swimming velocities, which constitutes a generalized RTP model [26]. For a truly 1D RTP model, the distribution of swimming velocities is \(P\propto\delta(v_{swim}+v_{0})+\delta(v_{swim}-v_{0})\). The Fokker-Planck equation in Eq. (13) represents a system with the following distribution of swimming velocities: \(P\propto 1/\sqrt{v_{0}^{2}-v_{swim}^{2}}\). See Appendix A in [27].
There is no straightforward procedure to reduce Eq. (13) to a differential equation for \(p\), but it is possible to infer such an equation from the moments of \(p\). (How to calculate these moments will be demonstrated when we analyze the system in 3D.) Because the moment formula in 2D was found to have a structure similar to that in 1D, it was, in turn, possible to infer that the differential equation for \(p\) should have the same structure as that for a system in 1D; see Eq. (4). For 2D, the differential equation for \(p\) was determined to be [5]
\[0=(1-2\alpha)zp-(1-z^{2})p^{\prime}, \tag{14}\]
where the solution is a beta distribution
\[p(z)=A(1-z^{2})^{\alpha-\frac{1}{2}}, \tag{15}\]
and where the normalization constant is given by
\[A=\frac{\Gamma(\alpha+1)}{\sqrt{\pi}\Gamma\left(\alpha+\frac{1}{2}\right)}. \tag{16}\]
### distribution in \(w\)-space
For a system embedded in 2D, the velocity component in the \(x\)-direction is \(v=-\mu Kx+v_{0}\cos\theta\), in reduced units given
Figure 1: Stationary distributions in \(z\)- and \(w\)-space for different values of \(\alpha\). Divergence emerges at the crossover \(\alpha=2\) and is linked to the presence of immobile particles at trap borders.
by
\[w=-z+\cos\theta. \tag{17}\]
Compare this with Eq. (8).
The distribution \(p_{w}\) can be obtained from Eq. (13) by substituting for \(p\) the expression given in Eq. (15), followed by the change of variables \(z=-w+\cos\theta\) and integration over all orientations. This procedure yields an inhomogeneous first-order differential equation
\[(1-\alpha)p_{w}+wp_{w}^{\prime}=-\frac{\alpha}{2\pi}I, \tag{18}\]
where
\[I(w)=2A\int_{-1}^{1-w}ds\,\left[1-s^{2}\right]^{\alpha-\frac{1}{2}}\left[1-(s +w)^{2}\right]^{-\frac{1}{2}}. \tag{19}\]
The solution is a distribution defined on \([-2,2]\)
\[p_{w}=\frac{\alpha}{2\pi}|w|^{\alpha-1}\left[\int_{|w|}^{2}dw^{\prime}\,w^{ \prime-\alpha}I(w^{\prime})\right]. \tag{20}\]
Although \(p_{w}\) has a more complicated form compared to that in Eq. (9) for a system in 1D, its general structure remains similar. The divergence at \(w=0\) comes from the factor \(|w|^{\alpha-1}\), which signals the existence of nearly immobile particles for \(\alpha<1\) and suggests a crossover at \(\alpha=1\). This, however, is not corroborated by \(p\) in Eq. (15), for which divergences at the trap border are present only for \(\alpha<1/2\).
The reason why divergences in \(p\) disappear at a lower value of \(\alpha\) is the averaging procedure used to obtain \(p\) from \(\rho\). Even though the distribution \(\rho\) exhibits divergences up to \(\alpha=1\), the averaging procedure \(p=\int_{0}^{2\pi}d\theta\,\rho\) smooths those divergences out, so that they survive in \(p\) only for \(\alpha<1/2\). In Appendix (A) we analyze distributions \(\rho\) in more detail to back up these claims.
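A minimal numerical sketch of Eq. (20) is given below. The inner integral \(I(w^{\prime})\) of Eq. (19) is computed after the substitution \(s=1-w^{\prime}-u^{2}\), which removes the inverse-square-root endpoint singularity; the implementation details are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def p_w_2d(w, alpha):
    """Velocity distribution of Eq. (20) for the RTP oscillator embedded in 2D
    (zero temperature), evaluated by nested quadrature."""
    A = gamma(alpha + 1.0) / (np.sqrt(np.pi) * gamma(alpha + 0.5))

    def I(wp):
        # inner integral of Eq. (19) after the substitution s = 1 - wp - u^2
        def f(u):
            s = 1.0 - wp - u**2
            one_minus_s2 = max(1.0 - s * s, 0.0)   # guards against round-off at s = -1
            return 2.0 * one_minus_s2**(alpha - 0.5) / np.sqrt(2.0 - u**2)
        val, _ = quad(f, 0.0, np.sqrt(2.0 - wp))
        return 2.0 * A * val

    outer, _ = quad(lambda wp: wp**(-alpha) * I(wp), abs(w), 2.0)
    return (alpha / (2.0 * np.pi)) * abs(w)**(alpha - 1.0) * outer

# a few values along the curves of Fig. 2 (qualitative check only)
for alpha in (0.5, 1.0, 2.0):
    print(alpha, [round(p_w_2d(w, alpha), 4) for w in (0.25, 0.5, 1.0, 1.5)])
```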
In Fig. (2) we plot a number of different distributions \(p_{w}\) for different values of \(\alpha\) calculated using Eq. (20), where the integral is evaluated numerically. Those distributions are compared with those obtained from simulations. Simulations were carried out using the Euler method for updating particle positions:
\[x(t+\Delta t)=x(t)+\left[v_{0}\cos\theta(t)-\mu Kx(t)\right]\Delta t,\]
with a new orientation \(\theta(t)\) drawn after a waiting time sampled from the exponential distribution \(P\propto e^{-t/\tau}\). For comparison, in Fig. (2) we also plot the corresponding distributions \(p\) below each \(p_{w}\).
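A minimal sketch of this simulation scheme, written in dimensionless units (\(\mu K=v_{0}=1\), so that \(\tau=1/\alpha\)), is given below. The per-step tumbling probability \(\alpha\,\Delta t\) reproduces the exponential waiting-time statistics in the small-\(\Delta t\) limit; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rtp_2d(alpha, n_part=10_000, t_max=20.0, dt=1e-3):
    """Euler scheme for RTP particles in the trap u = K x^2 / 2 embedded in 2D,
    in dimensionless units (mu*K = v0 = 1, so tau = 1/alpha).
    Returns final positions z, whose histogram approximates p(z)."""
    z = np.zeros(n_part)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_part)
    p_tumble = alpha * dt                  # tumbling probability per time step
    for _ in range(int(t_max / dt)):
        z += (np.cos(theta) - z) * dt      # dimensionless form of the Euler update
        tumble = rng.random(n_part) < p_tumble
        theta[tumble] = rng.uniform(0.0, 2.0 * np.pi, tumble.sum())
    return z

z = simulate_rtp_2d(alpha=1.0)
print("simulated <z^2>:", z.var(), " theory, Eq. (22):", 0.5 / (1.0 + 1.0))
```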
Unlike the distributions in Fig. (1) for the system in 1D, the peak at \(w=0\) does not fall to zero for \(\alpha\) above the crossover. We can calculate the height of \(p_{w}\) at \(w=0\) from Eq. (18) by setting \(w\) to zero. This yields \(p_{w}(0)=\frac{1}{2\pi}\frac{\alpha}{\alpha-1}I(0)\), which simplifies to
\[p_{w}(0)=\frac{1}{\alpha-1}\left[\frac{4^{\alpha}}{\pi}\frac{\alpha!\alpha!} {(2\alpha)!}\right]^{2}. \tag{21}\]
Rather than dropping abruptly to zero above the crossover, the peak height at \(w=0\) decreases gradually (algebraically) as a function of \(\alpha\).
For half integer values of \(\alpha\), it is possible to obtain exact expression for \(p_{w}\). Those expressions are derived in Appendix (B).
Note that the crossover in 2D occurs at \(\alpha=1\) while that in 1D at \(\alpha=2\). Yet if we recall the discussion below Eq. (3), the actual rate for a system in 1D is not \(\alpha\) but \(\alpha/2\). Therefore, considering the actual rates, the crossover in both dimensions occurs for the same rates.
### moment analysis
The moments of the distribution \(p(z)\) in Eq. (15) are obtained directly from the formula \(\langle z^{2n}\rangle=\int_{-1}^{1}dz\,p(z)z^{2n}\):
\[\langle z^{2n}\rangle=\frac{\Gamma\left(n+\frac{1}{2}\right)\Gamma(\alpha+1) }{\sqrt{\pi}\Gamma(\alpha+n+1)}. \tag{22}\]
The moments generate a monotonically decreasing sequence whose infinite sum is
\[\sum_{n=0}^{\infty}\langle z^{2n}\rangle=\left\langle\frac{1}{1-z^{2}}\right\rangle =\begin{cases}\frac{2\alpha}{2\alpha-1}&\text{if }\alpha>\frac{1}{2}\\ \infty&\text{if }\alpha\leq\frac{1}{2}.\end{cases} \tag{23}\]
The sum diverges at \(\alpha=\frac{1}{2}\). This is linked to divergences in \(p\) and does not represent a crossover. An actual crossover is determined from the behavior of \(p_{w}\).
## IV RTP particles in 3D: linear harmonic trap
For RTP particles in a harmonic trap in 3D, there is no available solution for a stationary distribution \(p\). Instead of trying to obtain such a distribution directly, in this section we focus on how to obtain exact expressions for the moments of
Figure 2: Distributions \(p_{w}\) for different values of \(\alpha\). Exact distributions (represented by lines) are obtained from Eq. (20). The circular symbols represent simulation data points. In addition, below each plot for \(p_{w}\) we plot a corresponding distribution \(p\) to emphasize qualitatively different behavior.
\(p\). The results of this section are the most important results of this work.
### moment analysis
The stationary Fokker-Planck equation for RTP particles in a harmonic potential \(u=\frac{1}{2}Kx^{2}\) embedded in 3D is obtained from Eq. (12) for the force \(\mathbf{F}=-Kx\mathbf{e}_{x}\) and
\[\hat{L}\rho=\frac{1}{\tau}\left[-\rho+\frac{1}{2}\int_{0}^{\pi}d\theta\,\sin \theta\,\rho(x,\theta)\right].\]
The resulting equation in reduced units is
\[0=\frac{\partial}{\partial z}\left[(z-\cos\theta)\,\rho\right]-\alpha\rho+ \frac{\alpha}{2}p, \tag{24}\]
with the marginal distribution defined as
\[p(z)=\int_{0}^{\pi}d\theta\,\sin\theta\,\rho(z,\theta). \tag{25}\]
The Fokker-Planck equation in Eq. (24) can be transformed into the following recurrence relation:
\[A_{l,m}=\frac{\alpha}{l+\alpha}A_{l,0}A_{0,m}+\frac{l}{l+\alpha}A_{l-1,m+1} \tag{26}\]
where
\[A_{l,m}=\langle z^{l}\cos^{m}\theta\rangle,\]
and the angular brackets indicate averaging procedure defined as \(\langle\dots\rangle=\int_{-1}^{1}dz\int_{0}^{\pi}d\theta\,\rho\sin\theta( \dots)\).
The recurrence relation reflects the structure of the differential equation from which it was obtained: it is first order, linear, with variable coefficients. The relation was obtained by multiplying the Fokker-Planck equation by \(z^{l}\cos^{m}\theta\) followed by integration \(\int_{-1}^{1}dz\int_{0}^{\pi}d\theta\,\sin\theta\) and written in its final form using integration by parts.
Since \(A_{l,0}=\langle z^{l}\rangle\), solving the recurrence relation would permit us to obtain moments. The initial condition of the recurrence relation is provided by the terms \(A_{0,m}\) which are easily evaluated
\[A_{0,m}=\frac{1}{2}\int_{0}^{\pi}d\theta\,\sin\theta\cos^{m}\theta=\begin{cases}\frac{1}{m+1}&\text{if $m$ even}\\ 0&\text{if $m$ odd}.\end{cases}\]
The recurrence relation cannot be solved for an arbitrary \(A_{l,m}\). Nonetheless, it is possible to reduce the relation to another recurrence relation in terms of \(A_{2n,0}=\langle z^{2n}\rangle\) only:
\[\langle z^{2n}\rangle=\frac{\alpha}{2n}\sum_{k=0}^{n-1}\frac{\langle z^{2k} \rangle}{2n-2k+1}\frac{(2k+1)_{\alpha-1}}{(2n+1)_{\alpha-1}}. \tag{27}\]
where \((x)_{n}=\frac{\Gamma(x+n)}{\Gamma(x)}\) is the rising factorial (Pochhammer symbol).
The recurrence relation in Eq. (27) is the central result of this section. Although it does not provide an exact expression for an arbitrary moment, it provides an analytically tractable procedure for obtaining such an expression recursively. A number of initial even moments generated from the recurrence relation in Eq. (27) are given in Table (1).
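The recurrence is easily implemented with exact symbolic arithmetic; the short sketch below, which reproduces the entries of Table (1), is one possible implementation.

```python
import sympy as sp

alpha = sp.symbols('alpha', positive=True)

def rtp3d_moments(n_max):
    """Even moments <z^(2n)> of the 3D RTP oscillator from the recurrence of Eq. (27).
    The Pochhammer ratio (2k+1)_{alpha-1}/(2n+1)_{alpha-1} is written as a ratio of
    Gamma functions and reduced with gammasimp."""
    mom = {0: sp.Integer(1)}
    for n in range(1, n_max + 1):
        s = sp.Integer(0)
        for k in range(n):
            ratio = sp.gammasimp(sp.factorial(2 * n) / sp.factorial(2 * k)
                                 * sp.gamma(2 * k + alpha) / sp.gamma(2 * n + alpha))
            s += mom[k] / (2 * n - 2 * k + 1) * ratio
        mom[n] = sp.cancel(sp.together(alpha / (2 * n) * s))
    return mom

for n, m in rtp3d_moments(3).items():
    print(f"<z^{2 * n}> =", sp.factor(m))
```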
By examining Table (1), we can verify that for \(\alpha=0\) the moments reduce to the simple general formula \(\langle z^{2n}\rangle=\frac{1}{2n+1}\), which can be linked to a uniform distribution for which \(\langle z^{2n}\rangle=\frac{1}{2}\int_{-1}^{1}dz\,z^{2n}\). This means that for finite \(\alpha\), \(p\) can only be convex, which, in turn, implies the absence of divergences at the trap borders.
To understand how a flat distribution arises for \(\alpha=0\) we should understand that for \(\alpha=0\) all particles are immobile and trapped at \(z=\cos\theta\), where the swimming velocity and the velocity due to harmonic potential cancel one another. As a result \(\rho=\frac{1}{2}\delta(z-\cos\theta)\). Averaged over all orientations, this yields:
\[\lim_{\alpha\to 0}p(z)=\frac{1}{2}\int_{0}^{\pi}d\theta\,\sin\theta\,\delta(z-\cos \theta)=\frac{1}{2}. \tag{28}\]
The averaging procedure completely smooths out the delta distribution.
In Table (2), we compare the second and fourth moments calculated for different dimensions. For 1D and 2D, these moments are obtained from the formulas in Eq. (10) and Eq. (22). The tendency is that increased dimensionality reduces the value of a given moment. The best way to understand this reduction is to think of each system as one-dimensional with a different distribution of swimming velocities (for a potential \(u=\frac{K}{2}x^{2}\) we actually consider the projection of a swimming velocity along the \(x\)-axis and not the true swimming velocity).
\begin{table}
\begin{tabular}{l l l} \hline \hline & \(\langle z^{2}\rangle\) & \(\langle z^{4}\rangle\) \\ \hline
1D & \(\frac{1}{1+\alpha}\) & \(\frac{3}{(1+\alpha)(3+\alpha)}\) \\
2D & \(\frac{1}{2}\frac{1}{1+\alpha}\) & \(\frac{1}{4}\frac{3}{(1+\alpha)(2+\alpha)}\) \\
3D & \(\frac{1}{3}\frac{1}{1+\alpha}\) & \(\frac{1}{15}\frac{18+5\alpha}{(1+\alpha)(2+\alpha)(3+\alpha)}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Second and fourth moments of the distribution \(p\) in different dimensions.
\begin{table}
\begin{tabular}{l} \hline \hline \(\langle z^{2}\rangle=\frac{1}{3}\frac{1}{1+\alpha}\) \\ \(\langle z^{4}\rangle=\frac{1}{15}\frac{18+5\alpha}{(1+\alpha)(2+\alpha)(3+\alpha)}\) \\ \(\langle z^{6}\rangle=\frac{1}{63}\frac{1080+378\alpha+35\alpha^{2}}{(1+\alpha)(2+\alpha)(3+\alpha)(4+\alpha)(5+\alpha)}\) \\ \(\langle z^{8}\rangle=\frac{1}{135}\frac{75600+28404\alpha+3780\alpha^{2}+175\alpha^{3}}{(1+\alpha)(2+\alpha)(3+\alpha)(4+\alpha)(5+\alpha)(6+\alpha)(7+\alpha)}\) \\ \(\langle z^{10}\rangle=\frac{1}{99}\frac{3265920+1259280\alpha+193644\alpha^{2}+13860\alpha^{3}+385\alpha^{4}}{(1+\alpha)(2+\alpha)(3+\alpha)(4+\alpha)(5+\alpha)(6+\alpha)(7+\alpha)(8+\alpha)(9+\alpha)}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Moments of a stationary distribution \(p\) obtained from a recurrence relation in Eq. (27).
### distribution in \(w\)-space
To obtain a distribution in \(w\)-space we follow a similar procedure to that used in Sec. (III.1). We transform Eq. (24) using the change of variables \(z=-w+\cos\theta\). The resulting equation is then integrated over all orientations. The procedure yields the first order inhomogeneous differential equation:
\[0=(1-\alpha)p_{w}+wp_{w}^{\prime}+\frac{\alpha}{2}\int_{-1}^{1-w}ds\,p(s), \tag{29}\]
for which the solution is
\[p_{w}=\frac{\alpha}{2}|w|^{\alpha-1}\left[\int_{|w|}^{2}dw^{\prime}\,w^{ \prime-\alpha}\int_{-1}^{1-w^{\prime}}ds\,p(s)\right]. \tag{30}\]
The solution permits us to obtain \(p_{w}\) from \(p\). The difference between this result and that in Eq. (20) is that here we do not have the exact expression for \(p\).
Even without knowing \(p\), Eq. (29) can be used to calculate \(p_{w}(0)\) by setting \(w=0\). This yields
\[p_{w}(0)=\frac{1}{2}\frac{\alpha}{\alpha-1}. \tag{31}\]
The expression diverges at \(\alpha=1\), indicating a divergence in \(p_{w}\) and the presence of nearly immobile particles. Compared with a similar result in Eq. (21) for a system in 2D, we see that the divergence occurs at the same value of \(\alpha\). This means that the location of the crossover is independent of the system dimension.
In Fig. (3) we plot \(p_{w}\) for different values of \(\alpha\) obtained using Eq. (30). The integrals are calculated numerically and \(p\) is calculated from the moments as explained in the next section.
### recovering \(p\) from moments
The recurrence relation in Eq. (27) permits fast computation of an arbitrary number of even moments of \(p\). In this section we present a procedure for recovering a distribution \(p\) from the moments based on the Fourier-Legendre expansion,
\[p(z)=\sum_{n=0}^{\infty}a_{n}P_{2n}(z), \tag{32}\]
where \(P_{m}\) are Legendre (orthogonal) polynomials. Like \(p\), Legendre polynomials are defined on \([-1,1]\). Due to even symmetry of \(p\), only even Legendre polynomials \(P_{2n}\) are required:
\[P_{2n}=2^{2n}\sum_{k=0}^{n}\frac{z^{2k}\,\Gamma\left(k+n+\frac{1}{2}\right)}{ \left(2k\right)!(2n-2k)!\Gamma\left(k-n+\frac{1}{2}\right)}. \tag{33}\]
The coefficients \(a_{n}\) in Eq. (32) can be determined from the orthogonality relation \(\int_{-1}^{1}dz\,P_{n}P_{m}=\frac{2}{2n+1}\delta_{mn}\) which leads to
\[a_{n}=\frac{4n+1}{2}\int_{-1}^{1}dz\,P_{2n}(z)p(z), \tag{34}\]
and in combination with Eq. (33) yields
\[a_{n}=\frac{4n+1}{2}\left[2^{2n}\sum_{k=0}^{n}\frac{\left\langle z^{2k} \right\rangle\Gamma\left(k+n+\frac{1}{2}\right)}{\left(2k\right)!(2n-2k)! \Gamma\left(k-n+\frac{1}{2}\right)}\right]. \tag{35}\]
The expansion in Eq. (32) together with the coefficients in Eq. (35) provides an exact formula for recovering \(p\) in terms of moments obtained from Eq. (27). Initial coefficients \(a_{n}\) are listed in Table (3).
Note that by setting \(\alpha=0\), \(a_{0}=1/2\) and all the remaining coefficients \(a_{n}\) are zero. This implies a uniform distribution in agreement with the results in Eq. (28) for the same limit.
Recovered distributions obtained from a truncated Fourier-Legendre series \(p=\sum_{n=0}^{N_{c}}a_{n}P_{2n}(z)\) for \(N_{c}=10\) are shown in Fig. (4).
The truncated series shows very good agreement with \(p\) obtained from simulations. The larger the \(\alpha\), the fewer terms of the expansion are needed. It is generally recognized that delta and square distributions are not well approximated by such a series. But since only the distribution at \(\alpha=0\) is flat (and does not need to be approximated), this limitation does not apply to our situation.
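A numerical sketch of the reconstruction is given below: the moments are generated from the recurrence of Eq. (27) (here in floating-point form), the coefficients follow Eq. (35), and the truncated series of Eq. (32) is then evaluated. The helper names are ours.

```python
import numpy as np
from scipy.special import gamma, gammaln, factorial, eval_legendre

def moments_3d(alpha, n_max):
    """Even moments <z^(2n)>, n = 0..n_max, from the recurrence of Eq. (27)."""
    mom = [1.0]
    for n in range(1, n_max + 1):
        s = sum(mom[k] / (2 * n - 2 * k + 1)
                * np.exp(gammaln(2 * n + 1) - gammaln(2 * k + 1)
                         + gammaln(2 * k + alpha) - gammaln(2 * n + alpha))
                for k in range(n))
        mom.append(alpha / (2 * n) * s)
    return mom

def a_coeff(n, mom):
    """Fourier-Legendre coefficient a_n of Eq. (35), built from the even moments."""
    total = sum(mom[k] * gamma(k + n + 0.5)
                / (factorial(2 * k) * factorial(2 * n - 2 * k) * gamma(k - n + 0.5))
                for k in range(n + 1))
    return 0.5 * (4 * n + 1) * 4**n * total

def p_reconstructed(z, alpha, n_cut=10):
    """Truncated Fourier-Legendre series of Eq. (32)."""
    mom = moments_3d(alpha, n_cut)
    return sum(a_coeff(n, mom) * eval_legendre(2 * n, z) for n in range(n_cut + 1))

z = np.linspace(-1.0, 1.0, 5)
print(p_reconstructed(z, alpha=2.0))
```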
Figure 3: Distributions \(p_{w}\) calculated from \(p\) using numerically evaluated Eq. (30). The height of the peak at \(w=0\) for \(\alpha>1\) is given by Eq. (31).
\begin{table}
\begin{tabular}{c} \hline \(a_{0}=\frac{1}{2}\) \\ \(a_{1}=-\frac{5}{4}\frac{\alpha}{\alpha+1}\) \\ \(a_{2}=-\frac{3}{16}\frac{16\alpha-24\alpha^{2}-9\alpha^{3}}{(\alpha+1)(\alpha+2)(\alpha+3)}\) \\ \(a_{3}=-\frac{13}{96}\frac{\alpha\left(288-496\alpha+120\alpha^{2}+120\alpha^{3}+15\alpha^{4}\right)}{(\alpha+1)(\alpha+2)(\alpha+3)(\alpha+4)(\alpha+5)}\) \\ \(a_{4}=-\frac{17}{768}\frac{\alpha\left(55296-105984\alpha+49280\alpha^{2}+8512\alpha^{3}-6720\alpha^{4}-1680\alpha^{5}-105\alpha^{6}\right)}{(\alpha+1)(\alpha+2)(\alpha+3)(\alpha+4)(\alpha+5)(\alpha+6)(\alpha+7)}\) \\ \hline \end{tabular}
\end{table}
Table 3: Coefficients of the Fourier-Legendre series in Eq. (32).
## V Thermal fluctuations
Up to this point, all the analysis and results were for zero temperature. To incorporate thermal fluctuations, we use the known result that, for active particles in a harmonic trap in contact with a thermal bath, the stationary distribution can be represented as a convolution [5]; for a system with 1D geometry it is given by
\[p_{T}(x)=\int dx^{\prime}\,p(x^{\prime})p_{eq}(x-x^{\prime}), \tag{36}\]
where \(p_{eq}(x)=\sqrt{\frac{\mu K}{2\pi D}}e^{-\mu Kx^{2}/2D}\) is the Boltzmann distribution of passive particles in a harmonic trap, and \(p\) is the stationary distribution at zero temperature.
A convolution construction applies to any combination of independent random processes [28; 29]. Generally, however, confinement leads to correlations between the random processes, as it gives rise to nonlinear terms in the Langevin equation even if these processes are originally independent. The exception is a harmonic potential, whose force is linear and therefore does not introduce nonlinear terms. See Appendix (C) for further discussion regarding this point.
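Numerically, the convolution in Eq. (36) is straightforward to evaluate on a grid. The following minimal sketch (Python with NumPy; a uniform grid symmetric around the origin is assumed, and the function name is ours) takes the zero-temperature distribution and the variance \(D/\mu K\) of the passive Boltzmann distribution as input:

```python
import numpy as np

def thermal_distribution(x, p, var_eq):
    """Finite-temperature distribution p_T of Eq. (36) on a uniform grid x.

    x must be symmetric around 0 and wide enough to hold the Gaussian tails;
    p is the zero-temperature distribution sampled on x, and var_eq = D/(mu*K)
    is the variance of the passive Boltzmann distribution p_eq.
    """
    dx = x[1] - x[0]
    p_eq = np.exp(-x**2 / (2.0 * var_eq)) / np.sqrt(2.0 * np.pi * var_eq)
    # discrete approximation of the convolution integral over x'
    return np.convolve(p, p_eq, mode="same") * dx
```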
Using Eq. (36), the moments of the distribution \(p_{T}\) are defined as
\[\langle z^{2n}\rangle_{T}=\int_{-\infty}^{\infty}dz\,z^{2n}\int_{-1}^{1}dz^{\prime}\,p(z^{\prime})p_{eq}(z-z^{\prime}), \tag{37}\]
assuming dimensionless units. Then using the identity \(z^{2n}=\left[(z-z^{\prime})+z^{\prime}\right]^{2n}\) together with binomial expansion, Eq. (37) yields
\[\langle z^{2n}\rangle_{T}=\sum_{k=0}^{n}\frac{(2n)!}{(2k)!(2n-2k)!}\langle z^ {2n-2k}\rangle\langle z^{2k}\rangle_{eq}. \tag{38}\]
And since moments \(\langle z^{2k}\rangle_{eq}\) can be calculated using the Boltzmann distribution, we finally get
\[\langle z^{2n}\rangle_{T}=\sum_{k=0}^{n}\frac{(2n)!B^{k}}{2^{k}k!(2n-2k)!} \langle z^{2n-2k}\rangle, \tag{39}\]
where \(B=\frac{\mu KD}{v_{0}^{2}}\) is the dimensionless diffusion constant.
Note that the moments for a finite temperature are given as an expansion in terms of moments at zero temperature. Since all terms in the expansion are positive, the effect of temperature is to increase the value of all the moments.
Using Eq. (38), the initial moments are given by
\[\langle z^{2}\rangle_{T} =\langle z^{2}\rangle+\langle z^{2}\rangle_{eq}\] \[\langle z^{4}\rangle_{T} =\langle z^{4}\rangle+6\langle z^{2}\rangle\langle z^{2}\rangle_{ eq}+\langle z^{4}\rangle_{eq}\] \[\langle z^{6}\rangle_{T} =\langle z^{6}\rangle+15\langle z^{4}\rangle\langle z^{2}\rangle _{eq}+15\langle z^{2}\rangle\langle z^{4}\rangle_{eq}+\langle z^{6}\rangle_{eq}\]
Note that the two contributions of the second moment are completely additive. Using Eq. (39), the initial moments in the actual units are given by
\[\langle x^{2}\rangle_{T} =\langle x^{2}\rangle+\frac{k_{B}T}{K}\] \[\langle x^{4}\rangle_{T} =\langle x^{4}\rangle+6\langle x^{2}\rangle\frac{k_{B}T}{K}+3\left(\frac{k_{B}T}{K}\right)^{2}\] \[\langle x^{6}\rangle_{T} =\langle x^{6}\rangle+15\langle x^{4}\rangle\frac{k_{B}T}{K}+45\langle x^{2}\rangle\left(\frac{k_{B}T}{K}\right)^{2}+15\left(\frac{k_{B}T}{K}\right)^{3}\]
where we used \(D=\mu k_{B}T\). This result shows the contribution of the temperature more clearly.
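In practice, Eq. (39) provides a direct way to convert the zero-temperature moments into finite-temperature ones. A minimal Python sketch (the function name is illustrative):

```python
from math import factorial

def thermal_moments(moments0, B):
    """Finite-temperature even moments via Eq. (39).

    moments0[k] = <z^(2k)> at zero temperature, B = mu*K*D/v0^2.
    Returns <z^(2n)>_T for n = 0, ..., len(moments0) - 1.
    """
    result = []
    for n in range(len(moments0)):
        s = sum(factorial(2 * n) * B**k
                / (2**k * factorial(k) * factorial(2 * n - 2 * k))
                * moments0[n - k]
                for k in range(n + 1))
        result.append(s)
    return result
```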
## VI Isotropic harmonic trap
The previous analysis was done for a harmonic potential in a single direction, \(u=\frac{1}{2}Kx^{2}\), and it is not clear how and if the obtained results apply to an isotropic potential \(u=\frac{1}{2}Kr^{2}\). In this section we extend the previous results to such an isotropic potential.
To establish a relation between the moments for a linear potential, \(\langle x^{2n}\rangle\), and the moments of an isotropic potential, \(\langle r^{2n}\rangle\), we first consider the Boltzmann distribution, \(p_{eq}(r)\propto e^{-\frac{\mu Kr^{2}}{2D}}\), to see how the respective moments are related in this case. For an arbitrary dimension \(d\), the moments are easily evaluated and are given by
\[\langle r^{2n}\rangle=\left(\frac{2D}{\mu K}\right)^{n}\frac{\Gamma\left(\frac {d}{2}+n\right)}{\Gamma\left(\frac{d}{2}\right)}. \tag{42}\]
Note that \(\langle x^{2n}\rangle=\langle r^{2n}\rangle_{d=1}\) so that we can write
\[\langle x^{2n}\rangle=\left(\frac{2D}{\mu K}\right)^{n}\frac{\Gamma\left( \frac{1}{2}+n\right)}{\Gamma\left(\frac{1}{2}\right)}.\]
This permits us to represent Eq. (42) as
\[\langle r^{2n}\rangle=\frac{\Gamma\left(\frac{1}{2}\right)\Gamma\left(n+\frac{d}{ 2}\right)}{\Gamma\left(\frac{d}{2}\right)\Gamma\left(n+\frac{1}{2}\right)} \langle x^{2n}\rangle, \tag{43}\]
which establishes a relation between \(\langle x^{2n}\rangle\) and \(\langle r^{2n}\rangle\).
The relation in Eq. (43) was derived by considering the equilibrium distribution. We next verify that the same relation applies for RTP particles. Since we know that a stationary distribution of RTP particles in an isotropic harmonic potential in 2D is [5]
\[p(s)=\frac{\alpha}{\pi}(1-s^{2})^{\alpha-1}, \tag{44}\]
where we introduce a dimensionless variable \(s=\mu Kr/v_{0}\), we can calculate the moments \(\langle s^{2n}\rangle=\int_{0}^{1}ds\,2\pi sp(s)s^{2n}\), which can then be compared with the moments in Eq. (22) for a linear harmonic potential. The comparison recovers the formula in Eq. (43) for \(d=2\).
The verification of Eq. (43) for \(d=3\) is more intricate and the details are relegated to Appendix (D). It leads to the following relation
\[\langle r^{2n}\rangle=(2n+1)\langle x^{2n}\rangle \tag{45}\]
which also agrees with Eq. (43) for \(d=3\). Consequently, the relation in Eq. (43) is general and applies to passive and RTP particles in a harmonic potential.
Combining the relation in Eq. (45) with the recurrence relation in Eq. (27), we next get the recurrence relation for the moments of a stationary distribution \(p(s)\) in an isotropic harmonic potential in 3D
\[\langle s^{2n}\rangle=\frac{\alpha}{2n}\sum_{k=0}^{n-1}\frac{\langle s^{2k} \rangle}{2n-2k+1}\frac{(2k+2)_{\alpha-2}}{(2n+2)_{\alpha-2}}. \tag{46}\]
The central result of this section is Eq. (43), which establishes a relation between moments \(\langle r^{2n}\rangle\) and \(\langle x^{2n}\rangle\). This relation is then used to determine the recurrence relation in Eq. (46).
### recovering \(p\) from moments
To recover a distribution \(p(s)\) for an isotropic harmonic trap in 3D from the moments in Eq. (46), we are going to use the Fourier-Legendre expansion, as was done in Sec. (IV.3). However, since the normalized function is \(4\pi s^{2}p(s)\), we expand this quantity rather than \(p(s)\). The resulting expansion is
\[4\pi s^{2}p(s)=2\sum_{n=0}^{\infty}a_{n}P_{2n}(s). \tag{47}\]
The factor \(2\) in front of the sum comes from the fact that \(p(s)\) is defined on \([0,1]\) while the polynomials \(P_{n}\) are defined on \([-1,1]\).
The coefficients \(a_{n}\) in the expansion are the same as those in Eq. (35) but defined in terms of \(\langle s^{2n}\rangle\):
\[a_{n}=\frac{4n+1}{2}\left[2^{2n}\sum_{k=0}^{n}\frac{\langle s^{2k}\rangle \,\Gamma\left(k+n+\frac{1}{2}\right)}{(2k)!(2n-2k)!\,\Gamma\left(k-n+\frac{1}{ 2}\right)}\right]. \tag{48}\]
Fig. (5) compares distributions \(p(s)\) obtained using the truncated Fourier-Legendre expansion with those obtained from a simulation.
The plots indicate that for \(\alpha>1\) the distributions \(p(s)\) vanish at \(s=1\), confirming \(\alpha=1\) to be a point of crossover.
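The simulation data used for such comparisons can be generated by a direct integration of the RTP dynamics implied by the Fokker-Planck equation of Appendix (D): in reduced units the position relaxes as \(\dot{\mathbf{s}}=-\mathbf{s}+\mathbf{n}\), while the orientation \(\mathbf{n}\) is redrawn uniformly on the unit sphere with rate \(\alpha\). The following Python sketch (simple Euler scheme; all parameter values and function names are illustrative) is one possible implementation:

```python
import numpy as np

def sample_rtp_3d(alpha, dt=1e-3, n_steps=2_000_000, burn_in=100_000, seed=0):
    """Euler integration of a single RTP in an isotropic harmonic trap,
    in reduced units s = mu*K*r/v0 and t' = mu*K*t (tumble rate alpha).
    Returns samples of the radial coordinate s = |s| after burn-in."""
    rng = np.random.default_rng(seed)

    def random_orientation():
        v = rng.normal(size=3)
        return v / np.linalg.norm(v)

    s = np.zeros(3)
    n = random_orientation()
    samples = np.empty(n_steps - burn_in)
    for step in range(n_steps):
        s += (-s + n) * dt                    # ds/dt' = -s + n
        if rng.random() < alpha * dt:         # tumble with probability alpha*dt
            n = random_orientation()
        if step >= burn_in:
            samples[step - burn_in] = np.linalg.norm(s)
    return samples
```

A histogram of the returned samples approximates \(4\pi s^{2}p(s)\) up to normalization.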
## VII Summary and conclusion
The central result of this work is the recurrence relation for generating moments of a stationary distribution \(p\) for RTP particles in a harmonic trap in three dimensions. As there is no available exact expression for \(p\) in this dimension, this approach provides an alternative, analytically tractable route.
For the potential \(u=\frac{1}{2}Kx^{2}\) the recurrence relation in dimensionless parameters is given in Eq. (27). This result is specific for a system embedded in 3D space but it can be generalized to any dimension. A generalized form, valid for any dimension \(d\), is given by
\[\langle z^{2n}\rangle=\frac{\alpha}{2n}\sum_{k=0}^{n-1}\langle z^{2k}\rangle \frac{(2k+1)_{\alpha-1}}{(2n+1)_{\alpha-1}}c_{2n-2k}, \tag{49}\]
where \(c_{2n}=A_{0,2n}\) and is given by
\[c_{2n}=\frac{\Gamma\left(\frac{d}{2}\right)\Gamma\left(n+\frac{1}{2}\right)} {\Gamma\left(\frac{1}{2}\right)\Gamma\left(n+\frac{d}{2}\right)}. \tag{50}\]
Note that the parameter of dimensionality only enters via the coefficient \(c_{2n}\).
The general formula in Eq. (49) can be verified. For \(d=3\), it recovers the result in Eq. (27). For \(d=1\) and \(d=2\), Eq. (49) can be solved for \(\langle z^{2n}\rangle\), with solutions found to agree with Eq. (10) and Eq. (22).
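The recurrence in Eqs. (49) and (50) is also easy to evaluate numerically for any dimension \(d\). A minimal Python sketch (function names are ours; since the \(k=0\) term contains \((1)_{\alpha-1}=\Gamma(\alpha)\), the sketch assumes \(\alpha>0\), with the uniform-distribution limit \(\alpha\to 0\) approached smoothly):

```python
from math import gamma

def c_coeff(m, d):
    """Coefficient c_{2m} of Eq. (50)."""
    return gamma(d / 2) * gamma(m + 0.5) / (gamma(0.5) * gamma(m + d / 2))

def pochhammer(x, a):
    """Rising factorial (x)_a = Gamma(x + a) / Gamma(x)."""
    return gamma(x + a) / gamma(x)

def even_moments(alpha, d, N):
    """Even moments <z^(2n)>, n = 0, ..., N, from the recurrence of Eq. (49)."""
    m = [1.0]
    for n in range(1, N + 1):
        s = sum(m[k] * pochhammer(2 * k + 1, alpha - 1)
                / pochhammer(2 * n + 1, alpha - 1) * c_coeff(n - k, d)
                for k in range(n))
        m.append(alpha / (2 * n) * s)
    return m
```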
Using the relation in Eq. (43), we can obtain a similar general recurrence relation for the moments of a stationary distribution for an isotropic harmonic potential:
\[\langle s^{2n}\rangle=\frac{\alpha}{2n}\sum_{k=0}^{n-1}\langle s^{2k}\rangle\frac {(2k+2)_{\alpha-2}}{(2n+2)_{\alpha-2}}c_{2n-2k}. \tag{51}\]
The general recurrence formulas in Eq. (49) and in Eq. (51) permit us to better understand the role of system dimension. The fact that in \(d=3\) the recurrence relation cannot be solved in closed form implies a more complex functional form of \(p(z)\). This might help to explain why in this dimension there is no corresponding trivial differential equation for \(p(z)\)[5]. The idea of a function without a corresponding differential equation was first put forward in 1887 by Hölder [30], who considered the Euler gamma function. In 1900 Hilbert conjectured that the Riemann zeta function is another example [31]. In the 1920s this conjecture was proven to be correct [32], and in 2015 it was shown that the Riemann zeta function formally satisfies an infinite-order linear differential equation [33].
Another important aspect of this work is the determination of a crossover at \(\alpha=1\), regardless of the system dimension. The importance of a crossover is that it indicates when to expect the presence of "nearly immobile" particles accumulated near a trap border. Since \(\alpha=\frac{1}{\tau\mu K}\) is the ratio of two time scales, the persistence time \(\tau\) during which an active particle retains its orientation and the time \(1/\mu K\) a particle needs to reach a trap border, the crossover value gives a way to predict the shape of a distribution once we know \(\alpha\). If a typical persistence time for an E. coli bacterium is \(\tau\sim 1\,s\) and a typical velocity is \(v_{0}=40\,\mu m\,s^{-1}\)[34], then we should expect the trap size to be \(v_{0}/\mu K\lesssim 40\,\mu m\) in order to see accumulation of particles at a trap border.
The most obvious extension of the "recurrence" method is to apply it to other types of active particles, for example, the ABP model. Since the Fokker-Planck equation for the ABP system is different, one expects a different recurrence relation. It is not clear if the methodology is extendable to other types of external potentials or to simple interacting systems such as the Kuramoto model [35], known to undergo a phase transition, or the one-dimensional asymmetric exclusion process model [36; 37].
###### Acknowledgements.
D.F. acknowledges financial support from FONDECYT through grant number 1201192.
## VIII Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## Appendix A exact distributions \(\rho\) for RTP particles in a harmonic trap in 2D
In this section we solve the Fokker-Planck equation in Eq. (13) for \(\rho\) by substituting for \(p\) the expression in Eq. (15). This permits us to recast Eq. (13) as a first-order inhomogeneous differential equation:
\[0=(z-\cos\theta)\rho^{\prime}+(1-\alpha)\rho+\frac{\alpha A}{2\pi}(1-z^{2})^{ \alpha-\frac{1}{2}}. \tag{52}\]
By multiplying the above equation by the integrating factor
\[e^{\int_{-1}^{z}dy\frac{1-\alpha}{y-\cos\theta}}=\left(\frac{\cos\theta+1}{\cos\theta-z}\right)^{\alpha-1},\]
the solution can be represented as
\[\rho_{L}=A\frac{\alpha}{2\pi}\left|\cos\theta-z\right|^{\alpha-1}\int_{-1}^{z }dz^{\prime}\left(1-z^{\prime 2}\right)^{\alpha-\frac{1}{2}}\left|\cos\theta-z^{ \prime}\right|^{-\alpha}, \tag{53}\]
for the domain \(-1<z<\cos\theta\), and
\[\rho_{R}=A\frac{\alpha}{2\pi}\left|\cos\theta-z\right|^{\alpha-1}\int_{z}^{1 }dz^{\prime}\left(1-z^{\prime 2}\right)^{\alpha-\frac{1}{2}}\left|\cos\theta-z^{ \prime}\right|^{-\alpha}, \tag{54}\]
for the domain \(\cos\theta<z<1\).
The difference between the solutions in each domain lies in the limits of an integral. The limits ensure that the distribution in Eq. (53) vanishes at \(z=-1\) and the distribution in Eq. (54) vanishes at \(z=1\). Except for \(\rho(z,0)\) and \(\rho(z,\pi)\), for any other orientation \(\theta\), \(\rho\) vanishes at both \(z=\pm 1\). Divergence in \(\rho\) comes from the pre-factor \(\left|\cos\theta-z\right|^{\alpha-1}\). This implies that the divergence for each \(\rho\) is localized at \(z=\cos\theta\) and the crossover corresponds to \(\alpha=1\) -- as verified by the behavior of \(p_{w}\).
The height of the distribution at \(z=\cos\theta\) for \(\alpha>1\), when a divergence disappears, can easily be evaluated from Eq. (52):
\[\rho(z=\cos\theta,\theta)=\frac{A}{2\pi}\frac{\alpha}{\alpha-1}(1-\cos^{2} \theta)^{\alpha-\frac{1}{2}}. \tag{55}\]
Note that the point \(z=\cos\theta\) does not represent a maximal value of \(\rho\) for \(\alpha>1\). The actual peak is shifted toward \(z=0\) as shown in Fig. (6).
The solutions in Eq. (53) and Eq. (54) for \(\cos\theta=0\) become
\[\rho\propto\left|z\right|^{\alpha-1}\left(1-z^{2}\right)^{\alpha+\frac{1}{2} }{}_{2}F_{1}\left(\frac{\alpha+1}{2},\frac{2\alpha+1}{2},\frac{2\alpha+3}{2},1-z^{2}\right), \tag{56}\]
and for \(\cos\theta=\pm 1\)
\[\rho\propto\left(1-z^{2}\right)^{\alpha-1}(1\pm z)^{\frac{3}{2} }{}_{2}F_{1}\left(\frac{1}{2},\frac{2\alpha+1}{2},\frac{2\alpha+3}{2},\frac{1 \pm z}{2}\right). \tag{57}\]
Both solutions are for the full domain \([-1,1]\).
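For reference, the two closed-form solutions above can be evaluated directly with standard hypergeometric routines; the overall constants follow from the normalization \(\int_{-1}^{1}dz\,\rho(z,\theta)=\frac{1}{2\pi}\) used in Fig. (6). A short Python sketch (SciPy assumed; function names are ours, and the integration interval is split at \(z=0\) to keep quadrature nodes away from the possible singular points):

```python
import numpy as np
from scipy.special import hyp2f1
from scipy.integrate import quad

def rho_cos0(z, alpha):
    """Unnormalized Eq. (56): rho(z, theta) for cos(theta) = 0."""
    return (np.abs(z) ** (alpha - 1) * (1 - z**2) ** (alpha + 0.5)
            * hyp2f1((alpha + 1) / 2, (2 * alpha + 1) / 2,
                     (2 * alpha + 3) / 2, 1 - z**2))

def rho_cos1(z, alpha, sign=+1):
    """Unnormalized Eq. (57): rho(z, theta) for cos(theta) = +1 or -1."""
    return ((1 - z**2) ** (alpha - 1) * (1 + sign * z) ** 1.5
            * hyp2f1(0.5, (2 * alpha + 1) / 2,
                     (2 * alpha + 3) / 2, (1 + sign * z) / 2))

def normalize(rho_unnorm, alpha, **kwargs):
    """Rescale so that the integral over [-1, 1] equals 1/(2*pi)."""
    f = lambda z: rho_unnorm(z, alpha, **kwargs)
    norm = quad(f, -1.0, 0.0)[0] + quad(f, 0.0, 1.0)[0]
    return lambda z: f(z) / (2.0 * np.pi * norm)
```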
It is interesting to compare the distributions given above with the distributions for the three-state RTP model in [3]. The three-state RTP model is an extension of the RTP model in 1D considered in Sec. (II) that includes the zero swimming velocity, \(v_{swim}=-v_{0},0,v_{0}\). The resulting stationary distribution \(p\) has three divergences at \(z=-1,0,1\). The divergences at different position correspond to different swimming velocities, for example, the divergence at \(z=0\) is linked to particles
with zero velocity. The exact solutions for \(p_{\pm}\) and \(p_{0}\) are expressed in terms of the hypergeometric functions, similar to the solutions in Eq. (43) and Eq. (44). This suggests that the analytical complexity quickly rises if we move away from the two-state model.
In Sec. (III.1) we calculated distributions in \(w\)-space for all particles, that is, averaged over all orientations. But knowing \(\rho\), it is now possible to calculate distributions in \(w\)-space corresponding to a given orientation. To obtain such distributions, we transform \(\rho\) using the change of variables \(z=-w+\cos\theta\).
The distributions \(\rho(w,\theta)\) are plotted in Fig. (7).
Note that all divergences, regardless of the orientation of motion, are at \(w=0\), which signals the presence of nearly immobile particles. For \(\alpha>1\), nearly immobile particles disappear, manifested by the disappearance of divergences. Another observation is that for \(\alpha>1\), all the peaks, originally at \(w=0\), start to shift away from \(w=0\) toward \(w\to\cos\theta\), that is, the swimming velocity of particles with orientation \(\theta\). Only the distributions with orientation \(\theta=\pm\pi/2\) remain centered around \(w=0\).
Note that the domain of a distribution \(\rho(w,\theta)\) depends on \(\cos\theta\) and is given by \(w\in(-1+\cos\theta,1+\cos\theta)\).
## Appendix B exact results for \(p_{w}\)
For half integer values of \(\alpha\), the integral in Eq. (20) can be evaluated exactly. For the first three values, \(\alpha=\frac{1}{2},\frac{3}{2},\frac{5}{2}\), the distribution can be represented as
\[p_{w}=\frac{a_{\alpha}}{\pi}\sqrt{\frac{2-w}{w}}+b_{\alpha}\left[\frac{\sin^{ -1}(1-w)}{2\pi}+\frac{1}{4}\right], \tag{45}\]
where \(a_{\alpha}\) and \(b_{\alpha}\) are polynomials given in Table (4).
Fig. (8) compares these analytical results with simulation data points.
## Appendix C Convolution of probability distributions
To show explicitly that the convolution of probability distributions works for particles in a harmonic potential \(u=Kr^{2}/2\) whose dynamics is governed by \(N\) independent forces \(\mathbf{f}_{i}\), we consider the Langevin equation. For an unconfined system the
Figure 8: Distributions \(p_{w}\) for different values of \(\alpha\). Exact distributions (represented by lines) are obtained from Eq. (45). The circular symbols represent simulation data points.
Figure 6: Distributions \(\rho(z,\theta)\) for three swimming orientations: \(\cos\theta=0,\frac{1}{2},1--\) red, blue, green points, respectively. Each distribution integrates to \(\int_{-1}^{1}dz\rho(z,\theta)=\frac{1}{2\pi}\). All circular symbols represent simulation data points, and the lines represent exact results obtained using Eq. (46) and Eq. (47).
Figure 7: Distributions \(\rho(w,\theta)\) for three swimming orientations: \(\cos\theta=0,\frac{1}{2},1\) -- red, blue, green points, respectively. All circular symbols represent simulation data points, and the lines represent exact results obtained using Eq. (46) and Eq. (47).
Langevin equation is
\[\dot{\mathbf{r}}=\mu\sum_{i=1}^{N}\mathbf{f}_{i}, \tag{10}\]
where \(\langle\mathbf{f}_{i}(t)\cdot\mathbf{f}_{j}(t^{\prime})\rangle=f_{i}^{2}\delta_{ij}\,\tau_{i}^{-1}e^{-|t-t^{\prime}|/\tau_{i}}\), and we assumed exponential memory such that in the limit \(\tau_{i}\to 0\), \(\tau_{i}^{-1}e^{-|t-t^{\prime}|/\tau_{i}}\rightarrow\delta(t-t^{\prime})\). For particles in a harmonic trap the Langevin equation is
\[\dot{\mathbf{r}}=\mu\sum_{i=1}^{N}\mathbf{f}_{i}-\mu K\mathbf{r}, \tag{11}\]
and can be solved to yield
\[\mathbf{r}=\mu\sum_{i=1}^{N}\int_{-\infty}^{t}dse^{-\mu K(t-s)}\mathbf{f}_{i}( s). \tag{12}\]
Using the above solution, the Langevin equation in Eq. (11) can be represented as
\[\dot{\mathbf{r}}=\mu\sum_{i=1}^{N}\mathbf{\tilde{f}}_{i} \tag{13}\]
where the new forces are defined as
\[\mathbf{\tilde{f}}_{i}=\mathbf{f}_{i}-\mu K\int_{-\infty}^{t}dse^{-\mu K(t-s) }\mathbf{f}_{i}(s). \tag{14}\]
The new forces can be shown to be independent:
\[\langle\mathbf{\tilde{f}}_{i}(t)\cdot\mathbf{\tilde{f}}_{j}(t^{\prime}) \rangle=f_{i}^{2}\delta_{ij}M(t-t^{\prime}), \tag{15}\]
where \(M\) is a resulting new memory function.
## Appendix D computation of moments for an isotropic harmonic potential in 3D
In this section we provide an explicit verification of the relation in Eq. (43) for the RTP particles in a harmonic trap in 3D. We start with the stationary FP equation
\[0=\nabla\cdot[(\mu K\mathbf{r}-v_{0}\mathbf{n})\,\rho]+\hat{L}\rho,\]
where the operator \(\hat{L}\) for the RTP model in 3D is given by
\[\hat{L}\rho=\frac{1}{\tau}\left[-\rho+\frac{1}{2}\int_{0}^{\pi}d\theta\,\sin \theta\rho(x,\theta)\right].\]
Since \(\rho\) depends on the relative orientation of the vectors \(\mathbf{r}\) and \(\mathbf{n}\), we fix \(\mathbf{n}\) along the \(x\)-axis, \(\mathbf{n}=\mathbf{e}_{x}\), and the FP equation becomes
\[0=\mu K\mathbf{r}\cdot\nabla\rho+\mu K\rho(\nabla\cdot\mathbf{r})-v_{0}\frac {\partial\rho}{\partial x}+\hat{L}\rho. \tag{16}\]
In spherical coordinates: \(\frac{\partial\rho}{\partial x}=\cos\theta\frac{\partial\rho}{\partial r}- \frac{\sin\theta}{r}\frac{\partial\rho}{\partial\theta}\), \(\mathbf{r}\cdot\nabla\rho=r\frac{\partial\rho}{\partial r}\), and \(\nabla\cdot\mathbf{r}=d\). The FP equation in reduced units \(s=\mu Kr/v_{0}\) can now be expressed
\[0=s\rho^{\prime}+3\rho-\cos\theta\rho^{\prime}+\frac{\sin\theta}{s}\frac{ \partial\rho}{\partial\theta}-\alpha\rho+\frac{\alpha}{2}p, \tag{17}\]
where
\[p=\int_{0}^{\pi}d\theta\,\sin\theta\rho(z,\theta).\]
By multiplying the above equation by \(s^{l}\cos^{m}\theta\) and then integrating it as \(4\pi\int_{0}^{\infty}dss^{2}\int_{0}^{\pi}d\theta\,\sin\theta\), we get the following recurrence relation:
\[B_{l,m}=\frac{\alpha}{l+\alpha}B_{l,0}B_{0,m}+\frac{l-m}{l+\alpha}B_{l-1,m+1} +\frac{m}{l+\alpha}B_{l-1,m-1}. \tag{18}\]
where \(B_{l,m}=\langle s^{l}\cos^{m}\theta\rangle\), and the initial condition is \(B_{0,0}=1\). The recurrence relation in Eq. (18) is used to solve for \(B_{2n,0}=\langle s^{2n}\rangle\) that satisfies the relation
\[\langle s^{2n}\rangle=(2n+1)\langle z^{2n}\rangle, \tag{19}\]
where \(\langle z^{2n}\rangle\) is defined in Eq. (27). Consequently, the relation in Eq. (43) is verified for the RTP model.
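The recurrence in Eq. (18) is also convenient for a direct symbolic or numerical evaluation. The sketch below (Python with exact rational arithmetic via `fractions`; function name is ours) assumes, in addition to \(B_{0,0}=1\), that the purely orientational moments are those of a uniform distribution on the sphere, \(B_{0,m}=1/(m+1)\) for even \(m\) and \(0\) for odd \(m\):

```python
from fractions import Fraction

def s_moments_isotropic_3d(alpha, N):
    """Even moments <s^(2n)> = B_{2n,0}, n = 0..N, from the recurrence in Eq. (18)."""
    alpha = Fraction(alpha)
    L = 2 * N
    # Row l = 0: orientational moments of a uniform distribution on the sphere.
    B = [[Fraction(1, m + 1) if m % 2 == 0 else Fraction(0)
          for m in range(L + 2)]]
    for l in range(1, L + 1):
        m_max = L - l + 1
        row = [Fraction(0)] * (m_max + 1)
        row[0] = B[l - 1][1]          # the m = 0 case reduces to B_{l,0} = B_{l-1,1}
        for m in range(1, m_max + 1):
            row[m] = (alpha * row[0] * B[0][m]
                      + (l - m) * B[l - 1][m + 1]
                      + m * B[l - 1][m - 1]) / (l + alpha)
        B.append(row)
    return [B[2 * n][0] for n in range(N + 1)]
```

For instance, this yields \(\langle s^{2}\rangle=1/(\alpha+1)\), consistent with \(\langle s^{2n}\rangle=(2n+1)\langle z^{2n}\rangle\) and the moments obtained from Eq. (27).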
|
2309.06542 | In-plane structure of the electric double layer in the primitive model
using classical density functional theory | The electric double layer (EDL) has a pivotal role in screening charges on
surfaces as in supercapacitor electrodes or colloidal and polymer solutions.
Its structure is determined by correlations between the finite-sized ionic
charge carriers of the underlying electrolyte and, this way, these correlations
affect the properties of the EDL and of applications utilizing EDLs. We study
the structure of EDLs within classical density functional theory (DFT) in order
to uncover whether a structural transition in the first layer of the EDL that
is driven by changes in the surface potential depends on specific particle
interactions or has a general footing. This transition has been found in
full-atom simulations. Thus far, investigating the in-plane structure of the
EDL for the primitive model (PM) using DFT proved a challenge. We show here
that the use of an appropriate functional predicts the in-plane structure of
EDLs in excellent agreement with molecular dynamics (MD) simulations. This
provides the playground to investigate how the structure factor within a layer
parallel to a charged surface changes as function of both the applied surface
potential and its separation from the surface. We discuss pitfalls in properly
defining an in-plane structure factor and fully map out the structure of the
EDL within the PM for a wide range of electrostatic electrode potentials.
However, we do not find any signature of a structural crossover and conclude
that the previously reported effect is not fundamental but rather occurs due to
the specific force field of ions used in the simulations. | Peter Cats, Andreas Härtel | 2023-09-12T19:35:12Z | http://arxiv.org/abs/2309.06542v2 | In-plane structure of the electric double layer in the primitive model using classical density functional theory
###### Abstract
The electric double layer (EDL) has a pivotal role in screening charges on surfaces as in supercapacitor electrodes or colloidal and polymer solutions. Its structure is determined by correlations between the finite-sized ionic charge carriers of the underlying electrolyte and, this way, these correlations affect the properties of the EDL and of applications utilizing EDLs. We study the structure of EDLs within classical density functional theory (DFT) in order to uncover whether a structural transition in the first layer of the EDL that is driven by changes in the surface potential depends on specific particle interactions or has a general footing. This transition has been found in full-atom simulations. Thus far, investigating the in-plane structure of the EDL for the primitive model (PM) using DFT proved a challenge. We show here that the use of an appropriate functional predicts the in-plane structure of EDLs in excellent agreement with molecular dynamics (MD) simulations. This provides the playground to investigate how the structure factor within a layer parallel to a charged surface changes as function of both the applied surface potential and its separation from the surface. We discuss pitfalls in properly defining an in-plane structure factor and fully map out the structure of the EDL within the PM for a wide range of electrostatic electrode potentials. However, we do not find any signature of a structural crossover and conclude that the previously reported effect is not fundamental but rather occurs due to the specific force field of ions used in the simulations.
## I Introduction
Electrolytes can be found almost anywhere, and therefore attract a lot of interest across a wide variety of disciplines [1; 2; 3; 4; 5; 6]. They have been studied for more than a century [7; 8; 9; 10], but there is still uncharted territory to discover [11; 12; 13]. A particular focus lies on investigating the electric double layer (EDL), _i.e._ the electrode-electrolyte interface, where surface charges are screened by mobile ionic charges from the electrolyte. This ability of ions to screen each other is closely related to structure formation in the EDL that is driven by an interplay between electrostatic and steric particle interactions [14; 15]. Thus, understanding EDLs implies understanding their structure, which we study in this work.
The structure of a liquid electrolyte can be seen from its particle's and charge's distributions. These density profiles measure the local density and, in the electrolyte's bulk, they do not show any structure, thus, they are constant. This changes in the vicinity of a flat electrode, where the flat boundary induces layering and ordering of particles and charges. Here, density profiles are typically resolved perpendicular to the flat electrode, reflecting translational symmetry along the wall. Classical density functional theory (DFT) predicts these perpendicular density profiles of ions very well [16; 17; 18; 19; 20; 21]. In this manuscript we take another perspective and focus on the in-plane structure of the EDL, _i.e._ the distribution of ions parallel to the interface.
Merlet and co-workers [22] studied the in-plane structure of the EDL for a model capacitor consisting of liquid butylmethylimidazolium hexafluorophosphate and graphite electrodes. They performed molecular dynamics (MD) simulations using sophisticated force fields and found hexagonal in-plane ordering for certain electrode potentials by considering an "in-plane" structure factor. We will discuss possible pitfalls in introducing a well-defined in-plane structure factor in this work. To clarify whether this ordering effect is a consequence of the specific force field or a more fundamental property that also occurs in the primitive model of electrolytes, the same system has been mapped onto the primitive model a few years later and the in-plane structure has been studied by employing both DFT and MD [23]. For their DFT study, however, the authors used a rather simple mean-field description that did not perform well across almost all parameters considered. The question, therefore, remains whether the observed ordering effect can be found in the primitive model of electrolytes as well and whether DFT, when using a more sophisticated approach, can accurately predict the in-plane structure of the EDL in the primitive model. These questions will be answered shortly.
In this work, we first precisely define an in-plane structure factor in section II. In section III, then, we introduce the physical systems of interest and the model we use. We consider the same system as studied in Ref. [23], described in section III, but we now consider a DFT functional constructed from the mean-spherical approximation (MSA) that has been proven to work well for primitive model aqueous electrolytes at room temperature [24; 18; 20]. Using this functional, we calculate the density profiles and in-plane structure factor for our system and discuss their agreement with results from previous DFT work and MD simulations in section IV. In section V we present structure factors across the whole
EDL for a wide region of electrode potentials and discuss structural changes that depend on the potential, as found previously [22]. We conclude in section VI.
## II The structure factor
The structure factor for a multi-component inhomogeneous system can be defined as \(S_{ij}(\vec{k})=\langle\frac{1}{N}\rho_{\mathbf{k}}^{i}\rho_{-\mathbf{k}}^{j}\rangle\) with the Fourier components \(\rho_{\mathbf{k}}^{i}=\sum_{n=1}^{N_{i}}\exp(-i\mathbf{k}\cdot\mathbf{r}_{n})\) of the microscopic density. Equivalently, it can be written in the form [25]
\[S_{ij}(\mathbf{k})=\frac{N_{i}}{N}\delta_{ij}+ \tag{1}\] \[\frac{1}{N}\int\mathrm{d}\mathbf{r}\int\mathrm{d}\mathbf{r}^{ \prime}\rho_{i}(\mathbf{r})h_{ij}(\mathbf{r},\mathbf{r}^{\prime})\rho_{j}( \mathbf{r}^{\prime})\exp(-i\mathbf{k}\cdot(\mathbf{r}-\mathbf{r}^{\prime})),\]
where \(N_{i}\) is the number of particles of species \(i\) in the system, \(N=\sum_{j}N_{j}\) is the total number of particles, \(\rho_{j}(\mathbf{r})\) is the density profile of species \(j\) as function of the position \(\mathbf{r}\), and \(h_{ij}(\mathbf{r},\mathbf{r}^{\prime})\) is the total correlation function between particles of species \(i\) and \(j\) located at positions \(\mathbf{r}\) and \(\mathbf{r}^{\prime}\), respectively. The structure factor depends on a wave vector \(\mathbf{k}\) that is the scattering vector if \(S\) describes the scattering of a beam. In the bulk with constant bulk density \(\rho_{i}(\mathbf{r})\equiv\rho_{i}=N_{i}/V\) in a volume \(V\), Eq. (1) reduces to
\[S_{ij}(k)=\frac{\rho_{i}}{\rho_{\mathrm{tot}}}\left[\delta_{ij}+\rho_{j}\int \mathrm{d}\mathbf{r}\,h_{ij}(r)e^{-i\mathbf{k}\cdot\mathbf{r}}\right], \tag{2}\]
where \(\rho_{\mathrm{tot}}=\sum_{i}\rho_{i}\), \(r=|\mathbf{r}|\), and \(k=|\mathbf{k}|\) is the radial wave number in spherical coordinates.
As we aim to study in-plane structure in the presence of a flat wall, it is useful to adapt Eq. (2) to a cylindrical geometry with coordinates \((u,\theta,z)\) that respect the presence of a wall perpendicular to the \(z\) direction. With \(q\) denoting the radial wave number corresponding to the radial in-plane coordinate \(u\) in cylindrical coordinates and \(k_{z}\) the wave number in the \(z\)-direction, Eq. (2) can equivalently be written as
\[S_{ij}(q,k_{z})=\frac{\rho_{i}}{\rho_{\mathrm{tot}}}\left[\delta_{ij}+\rho_{j }\int_{-\infty}^{\infty}\mathrm{d}z\,\hat{h}_{ij}(z,q)e^{-ik_{z}z}\right], \tag{3}\]
where we defined the Hankel-transformed total correlation function
\[\hat{h}_{ij}(z,q)=\int_{0}^{2\pi}\mathrm{d}\theta\int_{0}^{\infty}\mathrm{d} u\,u\,h_{ij}(u,z)e^{-iqu\cos\theta}, \tag{4}\]
which is a Fourier-transformed version of the total correlation function \(h_{ij}(\mathbf{r})\) in only two dimensions.
In a homogeneous isotropic bulk system, one can argue that
\[S_{ij}(k) =S_{ij}(q=k,k_{z}=0)\equiv S_{ij}^{*}(q); \tag{5}\] \[h_{ij}(r) =h_{ij}(u=r,z=0)\equiv h_{ij}^{*}(u), \tag{6}\]
_i.e._ it does not matter in which direction one is looking. However, one can use these relations and combine Eqs. (5) and (3) to find
\[S_{ij}^{*}(q)=\frac{\rho_{i}}{\rho_{\mathrm{tot}}}\left[\delta_{ij}+\rho_{j} \int_{-\infty}^{\infty}\mathrm{d}z\,\hat{h}_{ij}(z,q)\right] \tag{7}\]
in bulk.
### Structure Factor in the Plane
Let us now confine the integration limits in Eq. (7) and consider a slab of thickness \(L^{\prime}\) in the \(z\)-direction around (any) \(z_{0}\) in the bulk, _i.e._\(z\in[z_{0}-L^{\prime}/2,z_{0}+L^{\prime}/2]\). Then Eq. 7 becomes
\[S_{ij}^{*}(q;L^{\prime})=\frac{\rho_{i}}{\rho_{\mathrm{tot}}}\left[\delta_{ij} +\rho_{j}\int_{-L^{\prime}/2}^{L^{\prime}/2}\mathrm{d}z\,\hat{h}_{ij}(z+z_{0}, q)\right]. \tag{8}\]
This equation represents the in-plane structure factor calculated only within the slab. Clearly, in the limit
\[\lim_{L^{\prime}\to\infty}S_{ij}^{*}(q;L^{\prime})=S_{ij}(k), \tag{9}\]
_i.e._\(S_{ij}^{*}(q;L^{\prime})\) converges to the bulk structure factor \(S_{ij}(k)\), as expected. In a similar fashion, we can confine the integration limits in Eq. (1). For this purpose, we first restrict the particle number to the same slab and replace \(N_{i}\) and \(N\) by \(n_{i}\) and \(n_{\mathrm{tot}}\), respectively, where
\[n_{i}(z_{0},L^{\prime})=\int_{z_{0}-L^{\prime}/2}^{z_{0}+L^{\prime}/2}\mathrm{d }z\,\rho_{i}(z) \tag{10}\]
is the number of particles of species \(i\) in the slab of thickness \(L^{\prime}\) centered around \(z_{0}\) per unit area, and \(n_{\mathrm{tot}}(z_{0},L^{\prime})=\sum_{i}n_{i}(z_{0},L^{\prime})\). Then, confining the integration limits in the \(z\)-direction, the in-plane structure factor for an inhomogeneous system (still homogeneous in \(x\) and \(y\) directions) reads
\[S_{ij}^{*}(q,k_{z};z_{0},L^{\prime})=\frac{n_{i}(z_{0},L^{\prime})}{n_{\mathrm{ tot}}(z_{0},L^{\prime})}\delta_{ij}+\frac{1}{n_{\mathrm{tot}}(z_{0},L^{ \prime})}\int_{z_{0}-L^{\prime}/2}^{z_{0}+L^{\prime}/2}\mathrm{d}z\int_{z_{0}-L ^{\prime}/2}^{z_{0}+L^{\prime}/2}\mathrm{d}z^{\prime}\rho_{i}(z)\hat{h}_{ij}(z, z^{\prime},q)\rho_{j}(z^{\prime})e^{-ik_{z}(z-z^{\prime})}, \tag{11}\]
where \(\hat{h}_{ij}(z,z^{\prime},q)\) is the inhomogeneous form of \(\hat{h}_{ij}(z,q)\), because in bulk \(\hat{h}(z,z^{\prime},q)=\hat{h}(z-z^{\prime},q)\equiv\hat{h}(z,q)\). It is important to notice that Eq. (11) is the three-dimensional structure factor determined only in a finite slab of thickness \(L^{\prime}\) around \(z_{0}\). Taking the limit of vanishing \(L^{\prime}\) causes the integral to vanish, _i.e._ the integral naturally depends on the integration limits, while taking the limit of \(L^{\prime}\) to infinity returns Eq. (1).
In order to make practical use of Eq. (11), we first need to simplify it. Motivated by Eq. (5) and (8), we further study the case in which one sets \(k_{z}=0\). We have in mind the idea of a test particle placed at \(z_{0}\) in the center of the slab. Accordingly, we replace the first density profile in the integrand by \(\rho_{i}(z)=n_{i}(z_{0},L^{\prime})\delta(z-z_{0})\). By doing so, we can reduce Eq. (11) to
\[S^{*}_{ij}(q;z_{0},L^{\prime})= \frac{n_{i}(z_{0},L^{\prime})}{n_{\rm tot}(z_{0},L^{\prime})}H_{ij}(q;z_{0},L^{\prime}); \tag{12}\] \[H_{ij}(q;z_{0},L^{\prime})= \delta_{ij}+\int_{z_{0}-L^{\prime}/2}^{z_{0}+L^{\prime}/2}{\rm d}z\,\hat{h}_{ij}(z_{0},z,q)\rho_{j}(z), \tag{13}\]
which in the limit \(z_{0}\to\infty\) (in bulk) is identical to Eq. (8). The quantity \(H_{ij}\), the normalized structure factor, is introduced for convenience, which will become clear in section IV and in appendix B. Note that Eq. (12) is an approximation of Eq. (11), made for practical purposes.
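On a discretized \(z\)-grid, Eqs. (12) and (13) amount to a single weighted sum over the slab. A minimal Python sketch (NumPy assumed; array and function names are ours), which takes the density profiles and the Hankel-transformed total correlation function as input:

```python
import numpy as np

def normalized_structure_factor(z, rho, h_hat_z0, iz0, half_width):
    """H_ij(q; z0, L') of Eq. (13) on a uniform z-grid.

    z          : (n_z,) grid of positions
    rho        : (n_sp, n_z) density profiles rho_j(z)
    h_hat_z0   : (n_sp, n_sp, n_z, n_q) values of h_ij(z0, z, q) with z0 fixed
    iz0        : grid index of z0
    half_width : L'/2
    The in-plane structure factor of Eq. (12) follows by multiplying
    H[i, j] with n_i(z0, L') / n_tot(z0, L').
    """
    dz = z[1] - z[0]
    mask = np.abs(z - z[iz0]) <= half_width
    n_sp, _, _, n_q = h_hat_z0.shape
    H = np.zeros((n_sp, n_sp, n_q))
    for i in range(n_sp):
        for j in range(n_sp):
            integrand = h_hat_z0[i, j][mask, :] * rho[j][mask, None]
            H[i, j] = (i == j) + integrand.sum(axis=0) * dz
    return H
```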
### The Total Correlation Function in DFT
In order to calculate any of the previous in-plane structure factors, one has to have access to the total correlation function \(\hat{h}_{ij}(z,z^{\prime},q)\). This quantity can be determined via the Hankel-transformed Ornstein-Zernike (OZ) equation [26; 27; 23]
\[\hat{h}_{ij}(z,z^{\prime},q)=\hat{c}_{ij}^{(2)}(z,z^{\prime},q)+ \tag{14}\] \[\sum_{k}\int{\rm d}z^{\prime\prime}\hat{h}_{ik}(z,z^{\prime\prime },q)\rho_{k}(z^{\prime\prime})\hat{c}_{kj}^{(2)}(z^{\prime\prime},z^{\prime},q),\]
in which the Hankel transform is defined by
\[\hat{f}(q) =\int_{0}^{\infty}{\rm d}u\,u\int_{0}^{2\pi}{\rm d}\theta f(u)e^ {iqu\cos\theta} \tag{15}\] \[=2\pi\int_{0}^{\infty}{\rm d}u\,uJ_{0}(qu)f(u), \tag{16}\]
with \(J_{0}(qu)\) the zeroth-order Bessel function of the first kind. Eq. (14) can be solved iteratively via Picard iterations.
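For completeness, a minimal sketch of such a Picard solver at a single fixed wave number \(q\) is given below (Python with NumPy; the discretized direct correlations \(\hat{c}^{(2)}_{ij}(z,z^{\prime},q)\) and the density profiles are assumed to be available as arrays, and the mixing parameter is illustrative):

```python
import numpy as np

def solve_oz_picard(c_hat, rho, dz, mixing=0.1, tol=1e-10, max_iter=10_000):
    """Picard iteration of the inhomogeneous OZ relation, Eq. (14), at fixed q.

    c_hat : (n_sp, n_sp, n_z, n_z) Hankel-transformed direct correlations
    rho   : (n_sp, n_z) density profiles on the same z-grid
    Returns h_hat with the same shape as c_hat.
    """
    n_sp = c_hat.shape[0]
    h_hat = c_hat.copy()                          # start from h = c
    for _ in range(max_iter):
        h_new = c_hat.copy()
        for i in range(n_sp):
            for j in range(n_sp):
                for k in range(n_sp):
                    # int dz'' h_ik(z, z'') rho_k(z'') c_kj(z'', z')
                    h_new[i, j] += dz * (h_hat[i, k] * rho[k][None, :]) @ c_hat[k, j]
        h_new = (1.0 - mixing) * h_hat + mixing * h_new
        if np.max(np.abs(h_new - h_hat)) < tol:
            return h_new
        h_hat = h_new
    return h_hat
```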
The question whether we can detect certain (crystalline-like) in-plane structures is already answered in Eq. (14), because, although we have an inhomogeneous system on the \(z\)-axis, we assume translational and rotational symmetry in the plane perpendicular to the z-axis. Hence, with this approach, one is not able to find a respective order in the \(xy\)-plane. However, the approach gives access to structure in the \(xy\)-plane and, thus, would allow for detecting a precursor of an ordering transition, similar to the total correlation structure predicted by the OZ equation in a homogeneous bulk system: For hard-disk systems it is known that there is a signature in the second peak of the ("homogeneous bulk") total correlation function close to the freezing point [28; 29; 30]. In essence, when we restrict our study to only the first layer next to the surface, we have a hard-disk like system and the question therefore is whether we can observe these subtle signatures in the in-plane structure factor and total correlation function.
### Charge-charge and density-density structure
For convenience, and for future reference, let us define the charge-charge (ZZ) and number-number (NN) structure factors by, respectively, [25]
\[S_{\rm ZZ}=\sum_{ij}Z_{i}Z_{j}S_{ij} \tag{17}\]
and
\[S_{\rm NN}=\sum_{ij}S_{ij}, \tag{18}\]
where the summations are over the number of species.
## III Modelling the system
The system under consideration is an electrolyte at temperature \(T\) that is confined between two parallel flat hard walls located at \(z=0\) and \(z=H\) at which electrostatic surface potentials \(\Phi_{0}\) and \(\Phi_{H}\) are applied, respectively, as depicted in Fig. 1. The electrolyte is in chemical equilibrium with an ion reservoir at chemical potentials \(\mu_{\pm}\) (corresponding to bulk densities \(\rho_{0,\pm}\)) for the cations and anions, respectively [31]. The respective non-electrostatic ion-electrode interaction potential between ions of species \(i\) and the wall at \(z=0\) is
\[\beta V_{\rm ext}^{i}(z)=\begin{cases}\infty&\quad\text{for }z\leq d_{i}/2;\\ 0&\quad\text{for }z>d_{i}/2,\end{cases} \tag{19}\]
where \(\beta=1/k_{\rm B}T\) with \(k_{\rm B}\) Boltzmann's constant. The electrostatic part of the ion-electrode interactions is treated via the Poisson equation
\[\partial_{z}^{2}\Phi(z)=-\frac{\rho_{\rm Z,tot}(z)}{\varepsilon_{\rm r}\varepsilon_{0}}, \tag{20}\]
where \(\rho_{\rm Z,tot}(z)=\rho_{\rm Z}(z)+\delta(z)\sigma_{0}+\delta(z-H)\sigma_{H}\) denotes the total charge density in the system, _i.e._ the charge density of the mobile ions \(\rho_{\rm Z}(z)\) plus the surface charge densities \(\sigma_{0}\) and \(\sigma_{H}\) of the respective walls. Note already that we use an external potential that only depends on
the out-of-plane coordinate \(z\) without variations in the \(xy\)-plane.
For the electrolyte we use the Primitive Model (PM), in which the ions are modelled as charged hard spheres of diameters \(d_{j}\) residing in a continuum dielectric medium, characterized by the Bjerrum length \(\lambda_{\rm B}=\beta e^{2}/4\pi\varepsilon_{r}\varepsilon_{0}\), where \(e\) is the proton charge and \(\varepsilon_{r}\varepsilon_{0}\) the dielectric permittivity of the medium. The ion-ion interaction potential for two ions separated by a distance \(r\) is then given by
\[\beta u_{ij}^{\rm PM}(r)=\begin{cases}\infty&\text{for $r\leq d_{ij}$;}\\ Z_{i}Z_{j}\frac{\lambda_{\rm B}}{r}&\text{for $r>d_{ij}$,}\end{cases} \tag{21}\]
where \(Z_{j}\) denotes the valency of the ions of species \(j\), and \(d_{ij}=(d_{i}+d_{j})/2\) the common diameter of species \(i\) and \(j\). Specifically, we consider a system with ion diameters \(d_{-}=0.506\) nm and \(d_{+}=0.618\) nm, valencies \(Z_{\pm}=\pm 0.785\), and Bjerrum length \(\lambda_{\rm B}=4.17\) nm (\(T=400\) K and \(\epsilon_{r}=10\)), mimicking the ionic liquid of Ref. [22]. The system size we use for most of the DFT calculations is \(H=10d_{-}\), the only exception being the calculation of the density profiles when comparing to the simulation data for which we consider the system size \(H=12.32\) nm, in accordance with Ref. [23]. Typically, the density profiles in our system decay to bulk values within \(5\)\(d_{-}\) (\(\approx 2.5\) nm), as can be seen from Fig. 2.
We tackle this model using DFT with the MSAc implementation, the details of which can be found in our previous works [17; 18; 19; 20] and are summarized in appendix A. In short, DFT is an exact theoretical framework to access the structure and thermodynamics of a given system in an external potential \(V_{\rm ext}^{j}\)[32]. The main object within DFT is the grand potential functional \(\Omega[\{\rho\}]\) of the densities \(\{\rho\}\), which typically is not known exactly. A main problem in DFT is setting up accurate functionals for specific systems. The MSAc functional reads
\[\Omega[\{\rho\}]=\mathcal{F}_{\rm id}[\{\rho\}]+\mathcal{F}_{\rm ex }^{\rm HS}[\{\rho\}]+\mathcal{F}_{\rm ex}^{\rm MFC}[\{\rho\}]+\mathcal{F}_{ \rm ex}^{\rm MSAc}[\{\rho\}]-A\sum_{j}\int\mathrm{d}z\,\rho_{j}(z)(\mu_{j}-V _{\rm ext}^{j}(z))-A\Phi_{0}\sigma_{0}-A\Phi_{H}\sigma_{H}, \tag{22}\]
where \(\mathcal{F}_{\rm id}[\{\rho\}]\) is the ideal Helmholtz free energy functional, \(\mathcal{F}_{\rm ex}^{\rm HS}[\{\rho\}]\) the excess Helmholtz free energy functional that deals with the hard-sphere interactions for which we use the White Bear mark II (WBII) version of fundamental measure theory (FMT) [33; 34], \(\mathcal{F}_{\rm ex}^{\rm MFC}[\{\rho\}]\) is the mean-field functional for the electrostatic (Coulombic) interactions, and \(\mathcal{F}_{\rm ex}^{\rm MSAc}[\{\rho\}]\) is the beyond mean-field functional for the electrostatic interactions that makes use of the direct correlation functions of MSA [24; 35; 36; 37; 38; 20]. The surface area \(A\) is eventually factored out and does not play a role in the DFT calculations. The grand potential functional is minimized by the equilibrium density profiles \(\rho_{j}^{\rm eq}(z)\),
\[\frac{\delta\Omega[\{\rho\}]}{\delta\rho(z)}\bigg{|}_{\rho_{j}(z)=\rho_{j}^{ \rm eq}(z)}=0. \tag{23}\]
From minimizing the grand potential, one only gains access to \(\rho_{i}(z)\) and no information to what happens in the
Figure 1: The system under consideration. Two planar hard walls are located at \(z=0\) and \(z=H\) with surface potentials \(\Phi_{0}\) and \(\Phi_{H}\), respectively. The surfaces confine an electrolyte at temperature \(T\) in chemical equilibrium with a reservoir at chemical potentials \(\mu_{\pm}\) with which it can exchange ions. The electrolyte is modelled in the Primitive Model (PM) with ions as charged hard spheres of diameters \(d_{\pm}\) that carry charges \(\pm 0.785\)e in their centre. The ions move in a continuum medium, characterized by the relative dielectric permittivity \(\varepsilon_{r}\).
plane, because all applied external potentials have translational symmetry and we do not break this symmetry in our calculations. In general, DFT could predict also density profiles that are inhomogeneous in the \(xy\) plane but the numerical calculations would be too expensive [39]. We obtain in-plane structure via the OZ equation as explained in the previous section, for which we need the direct pair-correlation function. This, however, follows from the grand potential function via
\[c_{ij}^{(2)}(\mathbf{r},\mathbf{r}^{\prime})=-\beta\frac{\delta^{2}\mathcal{F}_{\rm ex}[\{\rho\}]}{\delta\rho_{i}(\mathbf{r})\delta\rho_{j}(\mathbf{r}^{\prime})} \tag{24}\] \[= c_{ij}^{(2),{\rm HS}}(\mathbf{r},\mathbf{r}^{\prime})+c_{ij}^{(2),{\rm MFC}}(\mathbf{r},\mathbf{r}^{\prime})+c_{ij}^{(2),{\rm MSAc}}(\mathbf{r},\mathbf{r}^{\prime}),\]
which naturally contains a HS contribution, a MFC contribution, and a MSAc contribution. Specifically, we are interested in the Hankel-transformed direct-correlation function (see Eq. (14)). The Hankel-transformed \(c_{ij}^{(2),{\rm HS}}(\mathbf{r},\mathbf{r}^{\prime})\) for the Rosenfeld-version of FMT is well described in Ref. [27] for a single-component system. We straightforwardly generalized that approach for the WBII-version of FMT including a tensor correction [40] to more accurately incorporate dense multicomponent systems. The MFC contribution is simply given by
\[c_{ij}^{(2),{\rm MFC}}(z,z^{\prime},q)=-2\pi\lambda_{\rm B}Z_{i}Z_{j}\frac{e^ {-q|z-z^{\prime}|}}{q}, \tag{25}\]
and for the MSAc contribution we numerically Hankel transformed the MSAc direct correlation function, whose explicit form can, for example, be found in Ref. [19].
To summarize our approach, we construct a grand potential functional for the PM electrolyte confined between two planar hard surfaces at which we apply surface potentials. This gives us both access to the density profiles perpendicular to the surfaces \(\rho_{j}(z)\) as well as the direct correlation function \(c_{ij}^{(2)}(z,z^{\prime},u)\), where \(u\) is the in-plane radial component. Both these quantities are used to calculate the total correlation function in Hankel space \(\hat{h}_{ij}(z,z^{\prime},q)\), where \(q\) is the radial component of the wave vector in the plane. Consequently, we can calculate the in-plane structure factor \(S(q,z_{0},L^{\prime})\) within a slab of thickness \(L^{\prime}\) centered around \(z_{0}\).
## IV Success of our approach
In Ref. [23], the authors consider an ionic liquid in the PM, similar to the one studied in Ref. [22], where in-plane structure was found in simulations using specific force fields. If in-plane structure were to form, one would expect this to show up in the "in-plane" structure factor (as demonstrated in Ref. [22]). For the following discussion, we consider \(S_{ij}^{*}(q;z)\) from Eq. (12) as the in-plane structure factor and \(H_{ij}(q;z)\) from Eq. (13) as the normalized in-plane structure factor [41].
First let us show the density profiles at the positively charged hard wall and compare those with the MD simulation data given in Ref. [23]. This comparison is presented in Fig. 2. Note here that the MD simulations were performed within the canonical ensemble, while the DFT calculations are performed within the grand-canonical ensemble. Hence, we matched the number of particles in the system and the charge on the walls within DFT with the values given by the simulations. The corresponding numbers used as input parameters for DFT (the concentration \(c\) and the surface potentials \(\Phi_{0}\) and \(\Phi_{H}\) at the walls located at \(z=0\) and \(z=H\)) are given within each panel in Fig. 2. [42] Overall, the MD and the MSAc DFT density profiles agree reasonably well.
Let us then consider the structure factors, which are calculated in MD [23] via the definition
\[S_{ij}^{\rm MD}(\mathbf{q};z_{0},L^{\prime})=\left\langle\frac{1}{N_{\rm tot}} \sum_{n\in\Gamma_{i}}\sum_{m\in\Gamma_{j}}\exp[-i\mathbf{q}\cdot(\mathbf{r}_{ n}-\mathbf{r}_{m})]\right\rangle, \tag{26}\]
where \(N_{\rm tot}\) denotes the total number of particles within the slab \([z_{0}-L^{\prime}/2,z_{0}+L^{\prime}/2]\), and \(\Gamma_{i}\) the set of particles of species \(i\) within the slab. We define \(\mathbf{q}=(q_{x},q_{y},0)\) as the in-plane wave vector with \(q=|\mathbf{q}|\) the radial wave number defined before. Eq. (26) is comparable to the structure factor calculated from Eq. (12). To demonstrate this, we compare results from MD using Eq. (26) (circles) with MSAc using Eq. (13) (solid lines) in Fig. 3. For this purpose, we define, in accordance with Eq. (12), \(H_{ij}^{\rm MD}=\frac{N_{\rm tot}}{N_{i}}\,S_{ij}^{\rm MD}\). The blue lines depict the \(H_{++}\) component, the orange line the \(H_{--}\) component, and the yellow line the \(H_{+-}\) component. Note that we cannot show the \(H_{-+}\) values for MD, because it was omitted in Ref. [23], due to poor statistics. The MSAc functional performs a lot better than the MFC functional (not shown here, but can be found in Ref. [23]), and the agreement between MD and MSAc is satisfying. Since MD did not provide the \(S_{-+}\) components due to poor statistics, we cannot properly compare the \(S_{\rm ZZ}\) and \(S_{\rm NN}\) against simulations. However, given the agreement between DFT and MD, as seen in Fig. 3, we presume satisfying agreement for \(S_{\rm ZZ}\) and \(S_{\rm NN}\) as well. The corresponding charge-charge and number-number structure factors \(S_{\rm ZZ}\) and \(S_{\rm NN}\) from DFT, respectively, are plotted in Fig. 4. Note the shift in the first maximum in \(S_{\rm ZZ}\), which will be discussed in the following section V.
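For reference, the estimator of Eq. (26) for a single configuration can be written compactly as follows (Python with NumPy; averaging over configurations and over directions of \(\mathbf{q}\) with equal modulus is left to the caller, and the function name is ours):

```python
import numpy as np

def structure_factor_slab(positions, species, i, j, z0, Lp, q_vectors):
    """S^MD_ij(q; z0, L') of Eq. (26) from one configuration.

    positions : (N, 3) particle coordinates
    species   : (N,) species labels
    q_vectors : (M, 2) in-plane wave vectors (q_x, q_y)
    """
    in_slab = np.abs(positions[:, 2] - z0) <= 0.5 * Lp
    n_tot = np.count_nonzero(in_slab)
    xy_i = positions[in_slab & (species == i), :2]
    xy_j = positions[in_slab & (species == j), :2]
    rho_i = np.exp(-1j * xy_i @ q_vectors.T).sum(axis=0)   # rho_q for species i
    rho_j = np.exp(-1j * xy_j @ q_vectors.T).sum(axis=0)
    return (rho_i * np.conj(rho_j)).real / n_tot
```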
## V The structure changes with the surface potential
Now we have a well-performing formalism at hand to investigate the structure factor as a function of the surface potential \(\Phi_{0}\). In Fig. 5(a)-(d) we show the normalized in-plane structure factors \(H_{ij}(q)\) for varying surface potential \(\Phi_{0}\). The surface plots show \(H_{ij}(q)\) on the \(z\)-axis, the wave number \(q\) on the \(x\)-axis, and the surface
Figure 3: Structure factors \(H_{ij}(q)\) from DFT (MSAc functional) and MD, via Eqs. (13) and (26) respectively, both for the same parameters and density profiles as given in Fig. 2; the potential increases from (a) \(\Phi_{0}=-0.0104\) V via (b) \(\Phi_{0}=0.0402\) V to (c) \(\Phi_{0}=0.2524\) V. The values \(z_{0}\) and \(L^{\prime}\) (values according to the numerical grid) are chosen such that \(z_{0}\) lies in between the contact value of the smallest ions (at \(d_{-}/2\)) and the first minimum (closest to the wall) in the total density profile (see Fig. 2). In addition, values for the prefactors \(n_{i}(z_{0},L^{\prime})/n_{\rm tot}(z_{0},L^{\prime})\) are presented (the prefactors appear in Eq. (12)).
Figure 2: Density profiles \(\rho_{i}\), normalized to bulk density \(\rho_{\rm b}\), for separations \(z\) from the wall located at \(z_{0}=0\); \(d_{-}\) is the diameter of the negatively charged ions. In blue the cation density profile \(\rho_{+}\) is given, in red the anion density profile \(\rho_{-}\), and in yellow the total density profile \(\rho_{\rm tot}\). The inset in each panel shows the corresponding system parameters, _i.e._, particle numbers \(N_{\pm}\) in the volume \(HA\) with wall separation \(H\), wall charge densities \(\sigma_{0}\) and \(\sigma_{H}\) at the respective walls, and corresponding bulk concentration \(c\) and wall potentials \(\Phi_{0}\) and \(\Phi_{H}\) used in the MSAc functional. The surface potential is smallest in (a), increased a bit in (b), and is largest in (c). The Bjerrum length is \(\lambda_{\rm B}=4.17\) nm.
potential \(\Phi_{0}\) on the \(y\)-axis. By plotting \(H_{ij}\), one emphasizes the role that the cat- and anions play at a given surface potential; at positive potentials \(\Phi_{0}\gtrsim 0.5\) V the (negative) anions dominate (see \(H_{+-}\) and \(H_{--}\)), while at negative potentials \(\Phi_{0}\lesssim-0.5\) V the (positive) cations dominate (see \(H_{-+}\) and \(H_{++}\)). To exemplify this, we plotted the average packing fraction of the cations and anions in the first layer to the electrode in Fig. 6 as function of the applied surface potential \(\Phi_{0}\).
In Ref. [22], the authors found an ordered phase at small surface potentials resulting in large values for the peak height of the structure factor. As mentioned before, we cannot find this ordered structure directly due to the construction of our theoretical approach. Nevertheless, we would expect to find precursors of an ordering transition, if it exists. In Fig. 7(a) and (b) we plot the number-number and charge-charge structure factor \(S_{\rm NN}\) and \(S_{\rm ZZ}\), respectively, as a function of the surface potential \(\Phi_{0}\) and wave number \(q\). As in Ref. [22], there is a small bump in \(S_{\rm ZZ}\) at small surface potentials (\(q\approx\pi/d_{-}\)). Interestingly, this maximum (or bump) in Fig. 7(b) is located around \(q=0.45*2\pi/d_{-}=2\pi/(d_{-}+d_{+})\) for vanishing surface potentials, and shifts to \(q=2\pi/d_{-}\) and \(q=0.81*2\pi/d_{-}=2\pi/d_{+}\) at positive and negative potentials, respectively. The reason is that at small surface potentials the difference in the number of cations and anions within the first layer is small (see Fig. 2(a)) and therefore, within the plane, each cation (anion) is approximately surrounded by an anion (cation), so the charge-charge structure spans two ion diameters, resulting in a peak at \(q=2\pi/(d_{-}+d_{+})\). At larger (absolute) surface potentials, the first layer gets fully filled with counterions, and therefore the charge-charge structure couples to the number-number structure, causing a peak in \(S_{\rm ZZ}\) at the inverse ion diameter \(2\pi/d_{j}\). Hence, we do find a structural change in \(S_{\rm ZZ}\) in the first layer near the electrode from a diffuse EDL at small surface potentials to a dense-packed EDL at larger surface potentials.
It might be interesting to compare the first layer of the EDL next to the electrode with a two-dimensional system of hard disks. The size ratio between the positive and negative ions is \(d_{+}/d_{-}=0.618/0.506\approx 1.22\). For this size ratio, binary hard disks are not expected to form a crystalline-like phase [43]. However, Fig. 6 shows that at sufficiently large potentials mainly one species of particles fills the first layer next to the electrode. At large absolute potentials the system exceeds the packing fraction where freezing would be expected: For monodisperse hard disks in two dimensions this is \(\eta^{\rm 2d}=0.68\)[44], which would correspond to \(\eta\approx 0.46\) for an ideal layer of hard spheres where all spheres are located exactly in the center of the slab of size \(L^{\prime}=d_{+}/2\). This is further reflected in Fig. 8, where the height of the first peak \(S_{ij}(q^{*})\) is shown for both \(S_{\rm NN}\) and \(S_{\rm ZZ}\) from Fig. 7. At sufficiently large potentials \(\Phi_{0}\) the peak exceeds values of \(2.8-3.1\), which, for hard spheres in bulk, typically happens close to the freezing transition [29].
## VI Conclusion
In continuation of previous work [22; 23], we investigated the in-plane structure of EDLs at charged electrodes. We properly derived the equation for the in-plane structure factor; we used a slab of a certain thickness in which the structure factor is calculated. In comparison to previous work, satisfying agreement was found within the primitive model of electrolytes (PM) between MD simulations [23] and our approach, where we made use of DFT with the MSAc functional. This allowed us to further explore the features of the in-plane structure factor as a function of the electrostatic surface potential. At positive potentials the structure factor is dominated by the anions, while at negative potentials it is dominated by the cations. This finding is in line with expectations and with previous work from Ref. [20] in which the same trend was shown in the differential capacitance. Less trivially, we found that for the charge-charge structure factor
Figure 4: The number-number (NN) and charge-charge (ZZ) structure factor calculated via Eqs. (18) and (17), respectively, corresponding to the structure factors shown in Fig. 2, and obtained from MSAc for the system parameters given in Figs. 2 and 3.
\(S_{\rm ZZ}\) at low surface potentials, where neither the cations nor anions dominate, the maximum of the structure factor is located around \(2\pi/(d_{-}+d_{+})\), _i.e._ each coion is surrounded by a counterion. At larger surface potentials, however, the location of the maximum converges to \(2\pi/d_{j}\), _i.e._ the first layer is filled with only counterions. This clearly demonstrates that one is able to distinguish between a diffuse EDL and a dense-packed EDL, using our approach.
The primitive model that we studied here has its known shortcomings. One of the crucial simplifications of the model is considering an implicit solvent. The reason for this is simplicity, as the solvent molecules, in particular water molecules, have extremely complex interactions and are therefore hard to model accurately. One implication of considering an implicit solvent in our case is that the dielectric permittivity is constant throughout the system, which is certainly not true for real electrolytes near charged surfaces, where solvent molecules (and, to some extent, ions) have the tendency to orient along the electric field [45; 46]. However, there is not a simple theory to take this effect into account. In this manuscript, we want to keep the model as simple as possible while still capturing important features of the real system. Note that the simulations by Merlet et al. [22] do not have a solvent. Another simplification in our model is that we do not explicitly consider an inhomogeneous surface-charge-density distribution. Doing so would require three-dimensional DFT calculations, which would defeat the purpose of doing DFT in the first place, as three-dimensional DFT is computationally very expensive.
Overall, we explored the EDL from a novel perspective
Figure 5: (a)-(d) The normalized structure factor \(H_{ij}(q)\) for varying surface potential \(\Phi_{0}\), obtained via Eq. (13) from DFT using the MSAc functional. Note the different color schemes in each panel.
and presented how to get access to the in-plane structure within the framework of DFT. Although the in-plane structure factor is a tedious object, we showed that it can provide interesting insight into the EDL. One of the most prominent features of the bulk structure factor, which we have not yet mentioned, is that it is a measurable quantity. The main question, therefore, is whether the in-plane structure factor studied theoretically in this manuscript is also measurable. If so, we have established a direct connection between the density profiles, which are very hard (if not impossible) to measure [47], and the measurable in-plane structure factor. This would open a doorway for a much more thorough understanding of the EDL.
## Acknowledgements
We acknowledge funding from the German Research Foundation (DFG) through Project Number 406121234. We acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through Grant No. INST 39/963-1 FUGG (bwForCluster NEMO).
## Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2309.07884 | Single-soft emissions for amplitudes with two colored particles at three
loops | We compute the three-loop correction to the universal single-soft emission
current for the case of scattering amplitudes with two additional color-charged
partons. We present results valid for QCD and $\mathcal{N}=4$ super-symmetric
Yang-Mills theory. To achieve our results we develop a new integrand expansion
technique for scattering amplitudes in the presence of soft emissions.
Furthermore, we obtain contributions from single final-state parton matrix
elements to the Higgs boson and Drell-Yan production cross section at
next-to-next-to-next-to-next-to leading order (N$^4$LO) in perturbative QCD in
the threshold limit. | Franz Herzog, Yao Ma, Bernhard Mistlberger, Adi Suresh | 2023-09-14T17:30:41Z | http://arxiv.org/abs/2309.07884v1 | # Single-soft emissions for amplitudes with two colored particles at three loops
###### Abstract
We compute the three-loop correction to the universal single-soft emission current for the case of scattering amplitudes with two additional color-charged partons. We present results valid for QCD and \(\mathcal{N}=4\) super-symmetric Yang-Mills theory. To achieve our results we develop a new integrand expansion technique for scattering amplitudes in the presence of soft emissions. Furthermore, we obtain contributions from single final-state parton matrix elements to the Higgs boson and Drell-Yan production cross section at next-to-next-to-next-to-next-to leading order (N\({}^{4}\)LO) in perturbative QCD in the threshold limit.
## 1 Introduction
A remarkable property of gauge theory scattering amplitudes is that they factorize in infrared limits. Infrared limits are generally characterized by soft and/or collinear momentum configurations, and typically lead to singularities or poles in the amplitude. In turn these singularities are responsible for the infrared divergences encountered in both loop and phase space integrals, which typically appear in intermediate stages of the computation of physical quantities.
The infrared limits are of great interest from both a practical as well as theoretical perspective. For one, they are an important ingredient for building infrared subtraction schemes for higher-order QCD cross section calculations [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. They are also responsible for potentially large logarithmic corrections in a variety of observables, and as such enter as crucial ingredients for resummations [15; 16; 17; 18]. Finally, infrared limits are of fundamental interest in the study of the mathematical properties of scattering amplitudes as they constrain their analytic structure. In this context, infrared limits have played an important role in the analytic bootstrap program [19; 20; 21].
In this work, we will focus on the limit of scattering amplitudes involving three colored partons in which a single external gluon momentum becomes soft. It is well known that
the all-order \(n\)-point amplitude factorizes in this limit into an \(n-1\)-point amplitude times the action of a soft current as an operator in color space on the latter. The corresponding universal factorization limit has been known for a long time at the tree level [22]. At the one-loop level the soft limit was first extracted from color-ordered amplitudes [23; 24; 25] before the full color operator structure of the soft current was uncovered [26]. The two-loop soft current was extracted to finite order in the dimensional regulator \(\epsilon\) in ref. [27] by taking the soft limit of the two-loop splitting function. These results were extended to higher orders in \(\epsilon\) in ref. [28; 29], allowing the two-loop soft current to be used in the calculation of the threshold approximation of the N\({}^{3}\)LO Higgs boson cross section [30]. The two-loop order is also the first order where the soft limit can lead to color correlations of three hard partons, and the calculation of the corresponding current was presented in ref. [31]. Beyond the single-soft emission current, the double-soft current has also been known at tree level for quite some time [32] and more recently also at the one-loop level [33; 34; 35]. Finally, the triple soft limit is known at tree-level [36; 37].
The main methods used so far for calculating soft currents have been either extractions from amplitude calculations, or direct calculations via the Wilson line/SCET formalism. In this work, we introduce a new infrared subgraph expansion approach which we employ directly at the integrand level of the full amplitude. Using this new technique we circumvent the very challenging task of computing a full three-loop scattering amplitude. An infrared-subgraph-finding algorithm, which can be seen as a generalization of the expansion-by-subgraph approach [38; 39] from Euclidean space to Minkowski space, was recently developed in the context of on-shell expansions for wide-angle scattering [40]. Here we outline how to employ the same algorithm to find the set of infrared subgraphs contributing to the soft expansion. We present a general strategy for the soft expansion at arbitrary loop order with emphasis on the single-soft case. In particular, we provide criteria to identify infrared subgraphs which are in one-to-one correspondence with regions identified by the method-of-regions approach in parametric space, as implemented in [41; 42; 43; 44; 45]. The calculation of the three-loop soft current not only represents a new result, but also serves as a proof of concept demonstrating the potential of the expansion-by-subgraph approach in a highly non-trivial example. Indeed, this approach has been employed before in next-to-soft expansions of the Drell-Yan cross section at NNLO [46], as well as at higher orders in the soft expansion in double-real-virtual and real-virtual squared corrections to the N\({}^{3}\)LO Higgs boson cross section [47; 48; 49]; however this marks the first application of a fully systematic and automated approach in momentum space.
Our approach facilitates the generation of an integrand in the soft limit of scattering amplitudes. The challenging task of actually performing the loop integration remains. To this end, we employ integration-by-parts (IBP) identities [50; 51; 52] to express our amplitudes in terms of soft master integrals. We then introduce another scale into these soft MIs by completing the square of a certain subset of propagators, which are linear in the loop momentum. The resulting integrals can be identified as _collinear_ MIs and contain the original soft MIs in their soft limits. We solve these collinear MIs via the method of differential equations [53; 54; 55; 56; 57] in terms of harmonic polylogarithms [58; 59] up to weight eight. Apart from one simple integral, we find that the soft boundary integrals are fully
determined from regularity and consistency conditions [60; 61; 62] of the system of differential equations.
The main result of this article is the extraction of three-loop QCD corrections to the single-soft emission current acting on two additional colored partons. In addition, we use our techniques to perform a computation of the single-soft limit of the stress tensor multiplet three-point form factor in \(\mathcal{N}=4\) sYM theory based on an integrand provided in ref. [63]. Our calculation explicitly confirms the principle of maximal transcendentality [64; 65] for the contribution to the single-soft emission current considered in this article. Furthermore, we use our newly obtained results for the soft current at three loops to derive contributions to the Higgs boson and Drell-Yan production cross section at N\({}^{4}\)LO in perturbative QCD due to single real emission contributions in the soft limit.
The remainder of this work is organized as follows. In section 2 we introduce notation and the main results for the three-loop single-soft current with two colored partons. We describe the steps of our calculation in section 3. In section 4 we provide a detailed description of our new integrand expansion technique. In section 5 we discuss the universal pole structure of the soft limit of our newly computed scattering amplitudes as a consistency check. Next, we discuss results for the three-loop form factor in \(\mathcal{N}=4\) sYM theory in section 6. Furthermore, we present the threshold limit to single-real emission contributions to the Higgs boson and Drell-Yan production cross section at threshold at N\({}^{4}\)LO in QCD perturbation theory in section 7. Finally, we conclude in section 8.
## 2 Single-soft current up to three loops in QCD
We consider a scattering amplitude \(\mathcal{A}\) in which the momentum \(q\) of a single gluon is very low-energetic (i.e. the gluon is soft). In this single-soft limit, scattering amplitudes factorize into a universal operator acting on the scattering amplitude without the soft gluon [66; 22].
\[\lim_{q\to 0}\mathcal{A}_{p_{1}p_{2}\ldots p_{n}q}=\mathbf{J}(q)\mathcal{A}_{ p_{1}p_{2}\ldots p_{n}}. \tag{1}\]
The operator \(\mathbf{J}\) is referred to as the _single-soft emission current_ and acts on the colored degrees of freedom of the scattering amplitude \(\mathcal{A}_{p_{1}p_{2}\ldots p_{n}}\). In general, this operator will correlate emissions from all color-charged particles of the scattering amplitude. In this article, we determine the contribution to the soft-emission current that correlates two color-charged particles through three loops in perturbative QCD and \(\mathcal{N}=4\) sYM theory. Our result is exact for scattering amplitudes involving only two color-charged external particles on top of the emitted soft gluon. Furthermore, our results represent an important first step in determining \(\mathbf{J}(q)\) to third order in the coupling constant.
Up to three-loop order, the single-soft emission current can be decomposed as follows.
\[\mathbf{J}(q) = \frac{ig_{S}}{C_{A}}\epsilon_{\mu}^{a}(q)\sum_{i\neq j}\left( \frac{p_{i}^{\mu}}{p_{i}\cdot q}-\frac{p_{j}^{\mu}}{p_{j}\cdot q}\right)\left[ f^{abc}\mathbf{T}_{i}^{b}\mathbf{T}_{j}^{c}K_{2}(q,p_{i},p_{j})\right. \tag{2}\] \[\left.+iC_{A}\left(d_{4A}^{abcd}K_{4A}(q,p_{i},p_{j})+n_{f}d_{4F}^ {abcd}K_{4F}(q,p_{i},p_{j})\right)\left\{\mathbf{T}_{i}^{b},\mathbf{T}_{i}^{c }\right\}\mathbf{T}_{j}^{d}\right].\]
The boldface notation above indicates an operator acting on the color structure of the amplitude. Note that the general structure of \({\bf J}(q)\) can be more complex if its action on amplitudes with more than two colored particles is considered [31]. We work using dimensional regularization as a framework to regulate ultraviolet and infrared singularities, with \(\epsilon\) as the dimensional regulator related to the spacetime dimension via \(d=4-2\epsilon\). The number of quark flavors is given by \(n_{f}\). The index \(i\) sums over all color-charged particles of the scattering amplitude (to which the current is applied). The factors \(K_{X}\) are scalar quantities that can be expanded in perturbative QCD as follows.
\[K_{X}(q,p_{i},p_{j})=\sum_{o=0}^{\infty}a_{S}^{o}\left(\frac{(-2qp_{i}-i0)(-2 qp_{j}-i0)}{(-2p_{i}p_{j}-i0)\mu^{2}}\right)^{-o\epsilon}K_{X}^{(o)}. \tag{3}\]
The scalar products of the momenta appearing above are equipped with an infinitesimal imaginary part, inherited from Feynman's \(i0\) prescription. It is our convention that all momenta are treated as incoming. Consequently, all scalar products are positive such that the term in the bracket in eq. (3) above introduces imaginary parts to the scattering amplitude. If one computes other configurations of scattering of particles (incoming and outgoing), then the corresponding soft-emission current can be derived by the appropriate crossing and analytic continuation according to the \(i0\) prescription indicated earlier. Above, \(a_{S}\) is related to the bare strong coupling constant \(\alpha_{S}^{0}\) by some universal factors.
\[a_{S}=\frac{\alpha_{S}^{0}}{\pi}\left(\frac{4\pi}{\mu^{2}}\right)^{\epsilon}e^ {-\gamma_{E}\epsilon},\hskip 28.452756pt\gamma_{E}=0.577216\ldots. \tag{4}\]
The coefficients \(K_{X}^{(o)}\) have been computed in perturbative QCD at one loop [23; 24; 25; 26] and two loops [27; 28; 29; 31], where only the terms \(K_{2}^{(o)}\) are non-zero. From three loops, non-vanishing contributions from \(K_{4A}^{(3)}\) and \(K_{4F}^{(3)}\) emerge. The color tensors \(d_{4A}^{abcd}\) and their contractions are defined as follows:
\[C_{4}^{R_{1}R_{2}}=d_{R_{1}}^{abcd}d_{R_{2}}^{abcd},\hskip 28.452756ptd_{R}^{abcd}=\frac{1}{4!}\left[\text{Tr}\big{(}T_{R}^{a}T_{R}^{b}T_{R}^{c}T_{R}^{d}\big{)}+\text{symmetric permutations}\right]\,. \tag{5}\]
Above \(T_{R}^{a}\) are the generators in representation \(R\) of a general compact semi-simple Lie algebra; explicitly for the fundamental and adjoint representation we find:
\[T_{F,\,ij}^{a}=T_{ij}^{a}\,,\hskip 28.452756ptT_{A,\,ij}^{a}=-if^{aij}. \tag{6}\]
The labels \(A\) and \(F\) refer to the adjoint and fundamental representations respectively. For \(SU(n_{c})\) we can express the quartic Casimirs in terms of the number of colors \(n_{c}\):
\[C_{4}^{AA}=\frac{n_{c}^{2}}{24}(n_{c}^{2}-1)(36+n_{c}^{2}),\quad C_{4}^{AF}= \frac{n_{c}}{48}(n_{c}^{2}-1)(6+n_{c}^{2}),\quad C_{4}^{FF}=\frac{(n_{c}^{2}-1 )(18-6n_{c}^{2}+n_{c}^{4})}{96n_{c}^{2}}. \tag{7}\]
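For orientation, the following short Python snippet (purely illustrative, not part of the ancillary files) evaluates eq. (7) for physical QCD, \(n_{c}=3\), giving \(C_{4}^{AA}=135\), \(C_{4}^{AF}=15/2\) and \(C_{4}^{FF}=5/12\):

```python
from fractions import Fraction as F

def quartic_casimirs(nc):
    """Evaluate the SU(n_c) quartic Casimir contractions of eq. (7)."""
    nc2 = nc * nc
    c4_aa = F(nc2, 24) * (nc2 - 1) * (36 + nc2)
    c4_af = F(nc, 48) * (nc2 - 1) * (6 + nc2)
    c4_ff = F((nc2 - 1) * (18 - 6 * nc2 + nc2 * nc2), 96 * nc2)
    return c4_aa, c4_af, c4_ff

print(quartic_casimirs(3))  # QCD: (Fraction(135, 1), Fraction(15, 2), Fraction(5, 12))
```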
One of the main results of this article is the explicit computation of the coefficients \(K_{X}^{(o)}\) through three-loop order. These coefficients are stated here explicitly and are also provided as ancillary files to the arXiv submission of this article. Their computation will be discussed in more detail later.
\[K_{2}^{(0)} = 1.\]
\[K_{2}^{(1)} = -C_{A}\frac{e^{\gamma_{E}\epsilon}\Gamma(1-\epsilon)^{3}\Gamma(\epsilon+1)^{2}}{4\epsilon^{2}\Gamma(1-2\epsilon)}\] \[= C_{A}\Bigg{[}-\frac{1}{4\epsilon^{2}}-\frac{\zeta_{2}}{8}+\frac{7\zeta_{3}\epsilon}{12}+\frac{39\zeta_{4}\epsilon^{2}}{64}+\left(\frac{7\zeta_{2}\zeta_{3}}{24}+\frac{31\zeta_{5}}{20}\right)\epsilon^{3}+\left(\frac{1555\zeta_{6}}{512}-\frac{49\zeta_{3}^{2}}{72}\right)\epsilon^{4}\] \[+\left(-\frac{91}{64}\zeta_{3}\zeta_{4}+\frac{31\zeta_{2}\zeta_{5}}{40}+\frac{127\zeta_{7}}{28}\right)\epsilon^{5}+\left(-\frac{49}{144}\zeta_{2}\zeta_{3}^{2}-\frac{217\zeta_{5}\zeta_{3}}{60}+\frac{37009\zeta_{8}}{4096}\right)\epsilon^{6}\Bigg{]}+{\cal O}(\epsilon^{7}).\]
\[K_{2}^{(2)} = C_{A}^{2}\Bigg{[}\frac{1}{32\epsilon^{4}}-\frac{11}{192\epsilon ^{3}}+\left(\frac{\zeta_{2}}{16}-\frac{67}{576}\right)\frac{1}{\epsilon^{2}}+ \left(-\frac{11\zeta_{2}}{192}-\frac{11\zeta_{3}}{96}-\frac{193}{864}\right) \frac{1}{\epsilon}\] \[-\frac{67\zeta_{2}}{576}+\frac{341\zeta_{3}}{288}+\frac{7\zeta_ {4}}{128}-\frac{571}{1296}\] \[+\left(-\frac{7}{96}\zeta_{3}\zeta_{2}-\frac{139\zeta_{2}}{864}+ \frac{2077\zeta_{3}}{864}+\frac{2035\zeta_{4}}{768}-\frac{247\zeta_{5}}{160}- \frac{1705}{1944}\right)\epsilon\] \[+\left(-\frac{205\zeta_{3}^{2}}{288}+\frac{341\zeta_{2}\zeta_{3} }{288}+\frac{1597\zeta_{3}}{324}-\frac{109\zeta_{2}}{324}+\frac{12395\zeta_{4 }}{2304}+\frac{5621\zeta_{5}}{480}-\frac{3307\zeta_{6}}{768}-\frac{5107}{2916 }\right)\epsilon^{2}\] \[+\left(-\frac{10571\zeta_{3}^{2}}{864}+\frac{2077\zeta_{2}\zeta_ {3}}{864}-\frac{509\zeta_{4}\zeta_{3}}{384}+\frac{37427\zeta_{3}}{3888}-\frac {2411\zeta_{2}}{3888}+\frac{41105\zeta_{4}}{3456}-\frac{219\zeta_{2}\zeta_{5}} {160}\right.\] \[+\frac{34237\zeta_{5}}{1440}+\frac{42361\zeta_{6}}{1024}-\frac{4 573\zeta_{7}}{224}-\frac{15313}{4374}\right)\epsilon^{3}\] \[+\epsilon^{4}\left(-\frac{5\zeta_{5,3}}{2}-\frac{845}{288}\zeta_ {2}\zeta_{3}^{2}-\frac{64387\zeta_{3}^{2}}{2592}+\frac{1381\zeta_{2}\zeta_{3} }{324}-\frac{63085\zeta_{4}\zeta_{3}}{1152}-\frac{29\zeta_{5}\zeta_{3}}{240}\right.\] \[\left.+\frac{226405\zeta_{3}}{11664}-\frac{14785\zeta_{2}}{11664} +\frac{119135\zeta_{2}}{5184}+\frac{5621\zeta_{2}\zeta_{5}}{480}+\frac{27187 \zeta_{5}}{540}+\frac{258017\zeta_{6}}{3072}+\frac{90101\zeta_{7}}{672}\right.\] \[\left.-\frac{1264777\zeta_{8}}{18432}-\frac{45931}{6561}\right) \Bigg{]}\] \[+n_{f}C_{A}\Bigg{[}\frac{1}{96\epsilon^{3}}+\frac{5}{288 \epsilon^{2}}+\left(\frac{\zeta_{2}}{96}+\frac{19}{864}\right)\frac{1}{ \epsilon}+\frac{5\zeta_{2}}{288}-\frac{31\zeta_{3}}{144}+\frac{65}{2592}\] \[+\left(-\frac{35\zeta_{2}}{864}-\frac{155\zeta_{3}}{432}-\frac{18 5\zeta_{4}}{384}+\frac{211}{7776}\right)\epsilon\] \[+\left(-\frac{31}{144}\zeta_{3}\zeta_{2}-\frac{367\zeta_{2}}{2592 }-\frac{497\zeta_{3}}{648}-\frac{925\zeta_{4}}{1152}-\frac{511\zeta_{5}}{240} +\frac{665}{23328}\right)\epsilon^{2}\] \[+\left(\frac{961\zeta_{3}^{2}}{432}-\frac{155\zeta_{2}\zeta_{3} }{432}-\frac{5255\zeta_{3}}{388}-\frac{3083\zeta_{2}}{7776}-\frac{8915\zeta_{ 4}}{3456}-\frac{511\zeta_{5}}{144}-\frac{3851\zeta_{6}}{512}+\frac{2059}{69 984}\right)\epsilon^{3}\] \[+\left(\frac{4805\zeta_{3}^{2}}{1296}-\frac{65\zeta_{2}\zeta_{3}}{ 648}+\frac{5735\zeta_{4}\zeta_{3}}{576}-\frac{15623\zeta_{3}}{5832}-\frac{20503 \zeta_{2}}{23328}-\frac{55225\zeta_{4}}{10368}-\frac{511\zeta_{2}\zeta_{5}}{240}\right.\] \[\left.-\frac{9917\zeta_{5}}{1080}-\frac{19255\zeta_{6}}{1536}-\frac {8191\zeta_{7}}{336}+\frac{6305}{209952}\right)\epsilon^{4}\Bigg{]}+{\cal O}( \epsilon^{5}).\]
\[K_{2}^{(3)} = C_{A}^{3}\Bigg{[}-\frac{1}{384\epsilon^{6}}+\frac{11}{768 \epsilon^{5}}+\left(\frac{119}{20736}-\frac{3\zeta_{2}}{256}\right)\frac{1}{ \epsilon^{4}}+\left(\frac{649\zeta_{2}}{13824}+\frac{\zeta_{3}}{96}-\frac{151 7}{31104}\right)\frac{1}{\epsilon^{3}} \tag{11}\]
\[+\left(\frac{2501\zeta_{2}}{41472}-\frac{2101\zeta_{3}}{6912}-\frac{14 87\zeta_{4}}{18432}-\frac{7271}{31104}\right)\frac{1}{\epsilon^{2}}\] \[+\left(\frac{11\zeta_{3}\zeta_{2}}{1152}+\frac{437\zeta_{2}}{62208 }+\frac{2575\zeta_{3}}{2304}-\frac{22583\zeta_{4}}{36864}+\frac{49\zeta_{5}}{1 60}-\frac{446705}{559872}\right)\frac{1}{\epsilon}+\frac{293\zeta_{3}^{2}}{2304}\] \[-\frac{2453\zeta_{2}\zeta_{3}}{4608}+\frac{203705\zeta_{3}}{31104} -\frac{12911\zeta_{2}}{186624}+\frac{493381\zeta_{4}}{110592}-\frac{26543\zeta_{ 5}}{3840}+\frac{445679\zeta_{6}}{442368}-\frac{8206861}{3359232}\] \[+\left(-\frac{17149\zeta_{3}^{2}}{13824}+\frac{21031\zeta_{2} \zeta_{3}}{13824}+\frac{43\zeta_{4}\zeta_{3}}{288}+\frac{2330483\zeta_{3}}{93 312}-\frac{403379\zeta_{2}}{1119744}+\frac{1228523\zeta_{4}}{55296}\right.\] \[+\frac{9773\zeta_{2}\zeta_{5}}{5760}+\frac{262597\zeta_{5}}{11520 }-\frac{25965643\zeta_{6}}{884736}+\frac{151631\zeta_{7}}{16128}-\frac{48027739 }{6718464}\right)\epsilon\] \[+\left(-\frac{469\zeta_{5,3}}{90}+\frac{10045\zeta_{2}\zeta_{3}^ {2}}{4608}-\frac{920995\zeta_{3}^{2}}{13824}+\frac{71831\zeta_{2}\zeta_{3}}{6 912}+\frac{388289\zeta_{4}\zeta_{3}}{36864}-\frac{9907\zeta_{5}\zeta_{3}}{1920}\right.\] \[+\frac{15854467\zeta_{3}}{186624}-\frac{5363867\zeta_{2}}{6718464 }+\frac{42678481\zeta_{4}}{497664}-\frac{71533\zeta_{2}\zeta_{5}}{7680}+\frac {82837\zeta_{5}}{640}\] \[+\frac{112195243\zeta_{6}}{884736}-\frac{1343045\zeta_{7}}{8064}+ \frac{3738034847\zeta_{8}}{53084160}-\frac{2482106477}{120932352}\right) \epsilon^{2}\Bigg{]}\] \[+\ C_{A}^{2}n_{f}\Bigg{[}-\frac{1}{384\epsilon^{5}}+\frac{43}{10 368\epsilon^{4}}+\left(\frac{895}{31104}-\frac{59\zeta_{2}}{6912}\right) \frac{1}{\epsilon^{3}}+\left(-\frac{31\zeta_{2}}{20736}+\frac{239\zeta_{3}}{3 456}+\frac{2603}{31104}\right)\frac{1}{\epsilon^{2}}\] \[+\left(\frac{3265\zeta_{2}}{62208}-\frac{4945\zeta_{3}}{10368}+ \frac{2437\zeta_{4}}{18432}+\frac{24169}{139968}\right)\frac{1}{\epsilon}\] \[+\frac{271\zeta_{3}\zeta_{2}}{2304}-\frac{3925\zeta_{2}}{186624}- \frac{2513\zeta_{3}}{1152}-\frac{33109\zeta_{4}}{18432}+\frac{7799\zeta_{5}}{5 760}+\frac{397699}{1679616}\] \[+\left(-\frac{4969\zeta_{3}^{2}}{6912}-\frac{1595\zeta_{2}\zeta_ {3}}{2304}-\frac{720299\zeta_{3}}{93312}-\frac{228895\zeta_{2}}{279936}-\frac {1168171\zeta_{4}}{165888}-\frac{187753\zeta_{5}}{17280}+\frac{2476865\zeta_{6} }{442368}\right.\] \[\left.-\frac{22273}{373248}\right)\epsilon+\left(\frac{404075 \zeta_{3}^{2}}{20736}-\frac{78295\zeta_{2}\zeta_{3}}{20736}-\frac{121555\zeta_ {4}\zeta_{3}}{18432}-\frac{3316207\zeta_{3}}{139968}-\frac{17477627\zeta_{2}}{3 359232}\right.\] \[\left.-\frac{15232813\zeta_{4}}{497664}+\frac{7063\zeta_{2}\zeta _{5}}{3840}-\frac{52115\zeta_{5}}{1152}-\frac{7659793\zeta_{6}}{1327104}+ \frac{13871\zeta_{7}}{448}-\frac{125652667}{60466176}\right)\epsilon^{2}\Bigg{]}\] \[+\ C_{F}C_{A}n_{f}\Bigg{[}\frac{1}{576\epsilon^{3}}+\left(\frac{ 55}{3456}-\frac{\zeta_{3}}{72}\right)\frac{1}{\epsilon^{2}}+\left(\frac{ \zeta_{2}}{384}-\frac{19\zeta_{3}}{432}-\frac{\zeta_{4}}{48}+\frac{1819}{20736 }\right)\frac{1}{\epsilon}\] \[-\frac{1}{48}\zeta_{3}\zeta_{2}+\frac{67\zeta_{2}}{2304}-\frac{13 85\zeta_{3}}{5184}-\frac{19\zeta_{4}}{288}-\frac{7\zeta_{5}}{72}+\frac{45967} {124416}\] \[+\left(\frac{17\zeta_{3}^{2}}{18}-\frac{19\zeta_{2}\zeta_{3}}{288}- \frac{50495\zeta_{3}}{31104}+\frac{3547\zeta_{2}}{13824}-\frac{16237\zeta_{4}}{2 7648}-\frac{133\zeta_{5}}{432}-\frac{101\zeta_{6}}{384}+\frac{1007179}{74696} \right)\epsilon\] 
\[+\left(\frac{323\zeta_{3}^{2}}{108}-\frac{809\zeta_{2}\zeta_{3}}{3 456}+\frac{599\zeta_{4}\zeta_{3}}{128}-\frac{1661303\zeta_{3}}{186624}+ \frac{99931\zeta_{2}}{82944}-\frac{635899\zeta_{4}}{165888}\right.\] \[+\left.-\frac{7\zeta_{2}\zeta_{5}}{48}-\frac{70417\zeta_{5}}{25920 }-\frac{1919\zeta_{6}}{2304}-\frac{49\zeta_{7}}{72}+\frac{20357263}{4478976} \right)\epsilon^{2}\Bigg{]}+\mathcal{O}(\epsilon^{3})\] \[+\ C_{A}n_{f}^{2}\Bigg{[}-\frac{1}{1296\epsilon^{4}}-\frac{5}{1944 \epsilon^{3}}+\left(-\frac{\zeta_{2}}{864}-\frac{1}{216}\right)\frac{1}{ \epsilon^{2}}+\left(-\frac{5\zeta_{2}}{1296}+\frac{65\zeta_{3}}{1296}-\frac{11 1}{2187}\right)\frac{1}{\epsilon}\]
\[+\frac{11\zeta_{2}}{432}+\frac{325\zeta_{3}}{1944}+\frac{1229\zeta_{4} }{6912}+\frac{10}{6561}+\Bigg{(}\frac{65\zeta_{3}\zeta_{2}}{864}+\frac{187\zeta_ {2}}{1458}+\frac{37\zeta_{3}}{72}+\frac{6145\zeta_{4}}{10368}\] \[+\frac{2521\zeta_{5}}{2160}+\frac{190}{6561}\Bigg{)}\epsilon+ \Bigg{(}-\frac{4225\zeta_{3}^{2}}{2592}+\frac{325\zeta_{2}\zeta_{3}}{1296}+ \frac{2632\zeta_{3}}{2187}+\frac{1058\zeta_{2}}{2187}+\frac{9355\zeta_{4}}{3456}\] \[+\frac{2521\zeta_{5}}{648}+\frac{999593\zeta_{6}}{165888}+\frac{ 6614}{59049}\Bigg{)}\epsilon^{2}\Bigg{]}.\]
\[K_{4A}^{(0)} = K_{4A}^{(1)}=K_{4A}^{(2)}=0. \tag{12}\] \[K_{4A}^{(3)} = \left(\frac{\zeta_{2}\zeta_{3}}{8}+\frac{\zeta_{5}}{16}\right) \frac{1}{\epsilon}+\frac{3\zeta_{3}^{2}}{4}-\frac{\zeta_{3}}{12}+\frac{\zeta_ {2}}{4}-\frac{55\zeta_{5}}{24}+\frac{235\zeta_{6}}{64}\] \[+\left(-\frac{77\zeta_{3}^{2}}{12}+\frac{139\zeta_{4}\zeta_{3}}{ 32}+\frac{53\zeta_{3}}{72}+\frac{13\zeta_{2}}{8}-\frac{13\zeta_{4}}{16}+\frac {187\zeta_{2}\zeta_{5}}{32}-\frac{335\zeta_{5}}{72}-\frac{55\zeta_{6}}{8}+ \frac{63\zeta_{7}}{4}\right)\epsilon\] \[+\left(-\frac{459\zeta_{5,3}}{20}+\frac{15}{8}\zeta_{2}\zeta_{3}^ {2}-\frac{469\zeta_{3}^{2}}{36}-\frac{19\zeta_{2}\zeta_{3}}{8}-\frac{77\zeta _{4}\zeta_{3}}{4}-\frac{343\zeta_{5}\zeta_{3}}{16}+\frac{665\zeta_{3}}{108}\right.\] \[\left.+\frac{97\zeta_{2}}{12}+\frac{157\zeta_{4}}{12}-\frac{55 \zeta_{2}\zeta_{5}}{16}-\frac{3163\zeta_{5}}{216}-\frac{335\zeta_{6}}{24}- \frac{1705\zeta_{7}}{16}+\frac{1306649\zeta_{8}}{5760}\right)\epsilon^{2}+ \mathcal{O}(\epsilon^{3}).\]
\[K_{4F}^{(0)} = K_{4F}^{(1)}=K_{4F}^{(2)}=0. \tag{14}\] \[K_{4F}^{(3)} = -\frac{\zeta_{2}}{2}+\frac{\zeta_{3}}{6}+\frac{5\zeta_{5}}{6}+\left(\frac{7\zeta_{3}^{2}}{3}-\frac{47\zeta_{3}}{36}-\frac{15\zeta_{2}}{4}+\frac{13\zeta_{4}}{8}+\frac{25\zeta_{5}}{18}+\frac{5\zeta_{6}}{2}\right)\epsilon\] \[+\left(\frac{35\zeta_{3}^{2}}{9}+\frac{19\zeta_{2}\zeta_{3}}{4}+7\zeta_{4}\zeta_{3}-\frac{1471\zeta_{3}}{108}-\frac{239\zeta_{2}}{12}-\frac{589\zeta_{4}}{24}+\frac{5\zeta_{2}\zeta_{5}}{4}+\frac{1423\zeta_{5}}{108}\right.\] \[\left.+\frac{25\zeta_{6}}{6}+\frac{155\zeta_{7}}{4}\right)\epsilon^{2}+\mathcal{O}(\epsilon^{3}).\]
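As a simple cross-check of the closed one-loop expression for \(K_{2}^{(1)}\) quoted above, the quoted Laurent coefficients can be reproduced numerically. The following Python sketch (using mpmath; purely illustrative and not part of the ancillary material) Taylor-expands the combination of Gamma functions and compares the first coefficients against \(-\zeta_{2}/8\), \(7\zeta_{3}/12\) and \(39\zeta_{4}/64\):

```python
from mpmath import mp, gamma, exp, euler, zeta, taylor, mpf

mp.dps = 30

# f(eps) = exp(gamma_E eps) * Gamma(1-eps)^3 * Gamma(1+eps)^2 / Gamma(1-2 eps),
# so that K_2^(1) = -C_A * f(eps) / (4 eps^2).
f = lambda e: exp(euler * e) * gamma(1 - e)**3 * gamma(1 + e)**2 / gamma(1 - 2 * e)

c = taylor(f, 0, 6)   # Taylor coefficients of f around eps = 0

# The coefficient of eps^k in K_2^(1)/C_A is -c[k+2]/4:
print(-c[2] / 4, -zeta(2) / 8)             # eps^0 coefficient
print(-c[3] / 4, mpf(7) / 12 * zeta(3))    # eps^1 coefficient
print(-c[4] / 4, mpf(39) / 64 * zeta(4))   # eps^2 coefficient
```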
## 3 Calculation of the soft limit of scattering amplitudes
In order to compute three-loop corrections to the single-soft emission current, we calculate the soft limit of physical scattering amplitudes. In particular, we compute the limit of the scattering amplitude involving a Higgs boson and three gluons, as well as the limit of the amplitude involving an off-shell transverse photon, a gluon, and a quark-antiquark pair. These scattering amplitudes at three-loop order are contributions to the production cross section of a Higgs boson or vector boson in association with a jet at the LHC at N\({}^{3}\)LO in perturbative QCD, for example. For the purposes of our calculation, we work in a kinematic configuration where the color singlet boson (with momentum \(p_{1}\)) decays to three color charged partons (with momenta \(p_{2}\), \(p_{3}\), and \(p_{4}\)).
\[h(p_{1})\to g(p_{2})\,g(p_{3})\,g(p_{4}),\hskip 28.452756pt\gamma^{*}(p_{1}) \to q(p_{2})\,\bar{q}(p_{3})\,g(p_{4}). \tag{15}\]
These scattering amplitudes were computed through two-loop order in refs. [67; 68; 69; 70; 71; 72], and first results for planar contributions to the virtual photon amplitudes have appeared recently
in ref. [73]. However, a complete set of three-loop amplitudes is still elusive. In the case of Higgs boson scattering amplitudes, we work in the limit where the top quark is treated as infinitely massive and its degrees of freedom are integrated out [74; 75; 76; 77]. As a result, a dimension-five operator is introduced that couples [78; 79; 80; 81] the Higgs boson directly to the gluon field strength. We treat all quarks as massless and work with \(n_{f}\) light quark degrees of freedom.
We create the integrands for our desired scattering amplitudes by generating Feynman diagrams using the QGRAF program [82] and dressing them with QCD Feynman rules. To facilitate our computation, we define suitable projectors to Lorentz tensor structures whose scalar coefficients we compute in the so-called conventional dimensional regularization (CDR) scheme employing standard methods.
Once we obtain scalar integrals, it is our goal to avoid the complexity of computing full, three-loop scattering amplitudes. Instead, we develop a new technique, based on the method of regions [83], to expand the scalar integrands around the limit of the gluon momentum \(p_{4}\) becoming soft (i.e. \(p_{4}\to 0\)). We identify that the contribution to the single-soft emission current is provided by the _maximally soft regions_ of our integrals. In these regions, all the loop momenta are equally as soft as the external soft gluon with momentum \(p_{4}\). We keep only the first term in the expansion for both the Higgs boson and virtual photon amplitudes. More details regarding this expansion technique will be discussed in section 4.
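To make the counting underlying this maximally soft region concrete, a small schematic Python snippet (an illustration of the scaling assignment only; function and variable names are ours and not part of any ancillary code) can be used to read off the leading \(\lambda\) power of a given propagator, with the hard momenta \(p_{2},p_{3}\) scaling as \(\lambda^{0}\) and \(p_{4}\) as well as all loop momenta scaling as \(\lambda^{1}\):

```python
from itertools import combinations_with_replacement

# lambda-weights in the maximally soft region: hard external momenta scale as
# lambda^0, the soft gluon momentum p4 and all loop momenta scale as lambda^1.
weight = {'p2': 0, 'p3': 0, 'p4': 1, 'p5': 1, 'p6': 1, 'p7': 1}
massless_external = {'p2', 'p3', 'p4'}   # their squares vanish exactly

def propagator_scaling(momenta):
    """Leading power of lambda of (sum_i q_i)^2 for the given list of momenta."""
    powers = []
    for a, b in combinations_with_replacement(momenta, 2):
        if a == b and a in massless_external:
            continue  # q^2 = 0 for on-shell massless external momenta
        powers.append(weight[a] + weight[b])
    return min(powers)

print(propagator_scaling(['p2', 'p5']))        # (p2 + p5)^2 ~ lambda
print(propagator_scaling(['p5', 'p6', 'p4']))  # (p5 - p6 - p4)^2 ~ lambda^2
print(propagator_scaling(['p2', 'p3']))        # (p2 + p3)^2 ~ lambda^0 (hard)
```

In this counting, eikonal-type denominators such as \(2p_{2}\cdot p_{5}\) scale as \(\lambda\), while purely soft denominators such as \((p_{5}-p_{6}-p_{4})^{2}\) scale as \(\lambda^{2}\).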
Once an expanded scalar integrand is obtained, we use standard multi-loop techniques in order to integrate over all the loop momenta. First, we use integration-by-parts (IBP) identities [50; 51; 52] in the form of the Laporta algorithm [50] to relate all the soft Feynman integrals to a set of soft master integrals. These soft master integrals only depend on the external kinematics via an overall multiplicative prefactor [84]. For example, one soft integral contributing at two-loop order is given by
\[I = \int\frac{d^{d}p_{5}}{(2\pi)^{d}}\frac{d^{d}p_{6}}{(2\pi)^{d}} \frac{1}{[p_{5}^{2}][(p_{5}-p_{6})^{2}][(p_{5}-p_{6})^{2}][(p_{5}-p_{6}-p_{4} )^{2}]}\] \[\times \frac{1}{[2p_{2}p_{5}][-2p_{3}p_{6}][-2p_{3}p_{5}][2p_{2}p_{4}+2p_ {2}p_{6}]}.\]
All propagators involving the hard external momenta \(p_{2}\) and \(p_{3}\) were linearized in the expansion procedure. Consequently, it is now possible to read off the integer power of the dependence of the integral on \(p_{2}\) and \(p_{3}\) directly from the integrand (see for example refs. [48; 85] for details). It is exactly this property, in addition to knowing the overall energy dimension of the integral, that fixes all kinematic dependence of the integral and determines it up to a function in the space-time dimension \(d\):
\[I=(2p_{2}p_{3})^{-1+2\epsilon}(2p_{2}p_{4})^{-1-2\epsilon}(2p_{3}p_{4})^{-1-2 \epsilon}F(\epsilon). \tag{10}\]
Especially at three-loop order, computing the remaining soft master integrals using straightforward integration techniques is challenging. Thus, we follow a different path and temporarily undo our soft expansion for all those propagators depending on one of the hard external momenta, \(p_{3}\).
\[\frac{1}{[-2p_{3}p_{6}][-2p_{3}p_{5}]}\ \to\ \frac{1}{[(p_{6}-p_{3})^{2}][(p_{5}-p_{3})^{2}]}. \tag{11}\]
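This partial undoing is consistent with the soft expansion: since \(p_{3}^{2}=0\) and the loop momenta scale as \(\lambda\) in the maximally soft region, the quadratic propagators reduce back to the linearized ones at leading power, for example

\[(p_{6}-p_{3})^{2}=p_{6}^{2}-2p_{3}\cdot p_{6}=-2p_{3}\cdot p_{6}\left[1+\mathcal{O}(\lambda)\right],\]

so that the resulting integrals contain the original soft master integrals in their soft limits.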
It is now no longer possible to read off the dependence of the integral on \(p_{3}\) from the integrand, and the result will consequently be a nontrivial function of the dimensionless ratio
\[w=\frac{s_{24}}{s_{23}},\hskip 28.452756pts_{ij}=(p_{i}+p_{j})^{2}. \tag{12}\]
We now apply the method of differential equations [53; 54; 55; 56; 57] to determine our integrals as a function of \(w\). To accomplish this, we transform the differential equations into the canonical form [53] using algorithmic techniques [86]. The solutions to these differential equations can be expressed in terms of harmonic polylogarithms in \(w\)[58]. However, the differential equations determine the master integrals only up to boundary constants. To constrain them, we first compute differential equations for master integrals undoing our soft-expansion for propagators involving \(p_{2}\) and \(p_{3}\) separately. We then demand that the solutions are consistent among themselves when taking the strict soft limit \(w\to 0\). Demanding consistency relations from our system of differential equations, as in refs. [60] and [61], relates all the required boundary conditions to one soft master integral, which is easily computed using direct integration in parameter space. Consequently, we determine all soft master integrals through transcendental weight eight.
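As a schematic illustration of this order-by-order construction (a toy example with a single function and the single letter \(1+w\); the actual masters satisfy a coupled system whose iterated integrals are the harmonic polylogarithms mentioned above), the following Python/sympy sketch integrates an \(\epsilon\)-factorized equation iteratively in \(\epsilon\):

```python
import sympy as sp

w, t, ep = sp.symbols('w t epsilon', positive=True)

# Toy canonical equation d M/d w = -ep/(1+w) * M with boundary value M(0) = 1,
# solved order by order in ep by iterated integration over the letter 1/(1+w).
letter = -1 / (1 + w)

coeffs = [sp.Integer(1)]          # M = sum_k ep^k M_k(w), with M_0 = 1
for k in range(1, 4):
    integrand = (letter * coeffs[k - 1]).subs(w, t)
    coeffs.append(sp.integrate(integrand, (t, 0, w)))

M = sum(ep**k * c for k, c in enumerate(coeffs))
print(sp.expand(M))
# 1 - epsilon*log(w + 1) + epsilon**2*log(w + 1)**2/2 - epsilon**3*log(w + 1)**3/6
```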
The soft master integrals, which serve as building blocks for the computation of the single-soft emission current at three loops, are one of the main results of this article. In total, we compute 50 soft master integrals and label them with an index \(i\):
\[M^{i}\equiv(4\pi)^{-3\epsilon}e^{3\gamma_{\rm E}\epsilon}\left(\frac{(-s_{24} )(-s_{34})}{(-s_{23})}\right)^{3\epsilon}\int\frac{d^{d}p_{5}}{(2\pi)^{d}} \frac{d^{d}p_{6}}{(2\pi)^{d}}\frac{d^{d}p_{7}}{(2\pi)^{d}}\left.\mathcal{I}_{ i}\right|_{s_{23}=s_{24}=s_{34}=1}. \tag{13}\]
Above, we set all kinematic Mandelstam invariants to unity and remove non-rational dependence on them via a universal prefactor. Furthermore, we anticipate the renormalization of the strong coupling constant and absorb some \(\overline{\rm MS}\) scheme prefactors. The integrand \(\mathcal{I}_{i}\) is a rational function of the Lorentz invariant scalar products of the internal loop momenta \(\{p_{5},p_{6},p_{7}\}\) and the external parton momenta \(\{p_{2},p_{3},p_{4}\}\). The soft integrals are related to canonical soft master integrals (i.e. functions of pure transcendental weight) by a linear transformation of the vector of soft master integrals.
\[\vec{M}=T_{\rm can.}(\epsilon)\cdot\vec{M}_{c}. \tag{14}\]
The matrix \(T_{\rm can.}(\epsilon)\) depends only on the dimensional regulator and rational numbers. We provide the integrands \(\mathcal{I}_{i}\), the transformation matrix \(T_{\rm can.}\), and the solutions to the canonical master integrals \(M^{i}_{c}\) in ancillary files together with the arXiv submission of this article.
Having calculated the strict soft limit of our scattering amplitudes, we can now extract the coefficients \(K^{(o)}_{X}\) of eq. (3) by identifying
\[\lim_{\begin{subarray}{c}\text{maximally soft}\\ p_{4}\to 0\end{subarray}}\mathcal{A}_{p_{1}\to p_{2}p_{3}p_{4}}=\mathbf{J}(p_{4}) \mathcal{A}^{(0)}_{p_{1}\to p_{2}p_{3}}. \tag{15}\]
Above, \(\mathcal{A}^{(0)}\) is the tree-level scattering amplitude for the \(2\to 1\) Born process not involving the soft gluon with momentum \(p_{4}\). We find complete agreement between the coefficients of the single-soft emission current extracted from our \(h\to ggg\) and \(\gamma^{*}\to q\bar{q}g\) amplitudes, which serves as a strong consistency check of our computation and of the color structure identified in eq. (2).
## 4 Regions in the soft expansion
In this section, we develop a method for the expansion of scattering amplitudes in the limit of a set of partons becoming very low energetic (i.e. soft). The decay processes we introduced in section 3 are a specific case to which we apply this new technology. First, we introduce a general set-up which contains our expansion as a special case. Next, we explain how to identify the subgraphs which correspond to the regions of the expansion. Finally, we discuss the particular soft expansion of Feynman diagrams in our concrete setting.
To set up our expansion technique, we divide the external momenta of a Feynman graph into the following three categories:
1. \(K\) massless momenta \(p_{i}\),
2. \(L\) off-shell momenta \(q_{j}\),
3. \(M\) soft massless momenta \(l_{k}\).
We define the soft expansion of a graph as an expansion around the limit \(l_{k}\to 0\). Scalar products involving at least one of the momenta \(l_{k}\) are consequently much smaller than scalar products not involving any \(l_{k}\). Introducing a small parameter \(\lambda\) and a hard reference scale \(Q\), we find
\[p_{i}^{2}=0\ \ (i=1,\ldots,K),\quad q_{j}^{2}\sim Q^{2}\ \ (j=1,\ldots,L),\quad l_{k}^{2}=0\ \ (k=1,\ldots,M), \tag{4.1a}\] \[p_{i_{1}}\cdot p_{i_{2}}\sim Q^{2}\ \ (i_{1}\neq i_{2}),\quad p_{i}\cdot l_{k}\sim q_{j}\cdot l_{k}\sim\lambda Q^{2},\quad l_{k_{1}}\cdot l_{k_{2}}\sim\lambda^{2}Q^{2}\ \ (k_{1}\neq k_{2}). \tag{4.1b}\]
Our strategy of identifying the regions in the soft expansion is based on the observation that each region \(R\) (see figure 1(a)) must conform with the solutions of the Landau equations [87]. Furthermore, once all the external soft momenta \(l_{k}\) are removed from \(R\), the resulting configuration \(R^{\prime}\) (see figure 1(b)) must be a region in the on-shell expansion developed in ref. [40]. In other words, the regions in the soft expansion can be derived from those in the on-shell expansion with additional requirements. The regions in the on-shell expansion have been studied in detail in ref. [40], and in particular, a graph-finding algorithm was developed to obtain the complete list of regions from the given Feynman graph. Here, we aim to leverage this knowledge to straightforwardly identify the regions in the soft expansion. To this end, we will first review the graph-finding algorithm for the on-shell expansion and then delve into the generic configuration of regions in the soft expansion.
### The graph-finding algorithm for on-shell expansions
In the context of an on-shell expansion of wide-angle scattering, the Feynman graphs feature on-shell external momenta \(p_{1},\ldots,p_{K}\) and off-shell external momenta \(q_{1},\ldots,q_{L}\), satisfying
\[p_{i}^{2}\sim\lambda Q^{2}\ \ (i=1,\ldots,K),\quad q_{j}^{2}\sim Q^{2}\ \ (j=1,\ldots,L),\quad p_{i_{1}}\cdot p_{i_{2}}\sim Q^{2}\ \ (i_{1}\neq i_{2}). \tag{4.2}\]
In contrast to the soft expansion defined in eq. (4.1), there are no soft external momenta present here, and every on-shell momentum \(p_{i}\) is slightly off its corresponding lightcone.
A graph-finding algorithm has been provided in ref. [40], along with a rigorous proof demonstrating that this algorithm generates all the regions for the on-shell expansion. This allows us to comprehend the structures of these regions and derive them more efficiently by circumventing the approach of constructing Newton polytopes in ref. [41; 42; 43; 44].
A key concept in this graph-finding algorithm is that of _mojetic graphs_. We call a graph mojetic if it becomes _one-vertex irreducible_ after connecting all of its external edges to an auxiliary vertex. Note that a graph is called one-vertex irreducible if it remains connected after the removal of any one of its vertices. The algorithm can then be described by the following steps.
* _Step 1_: For each nearly on-shell external momentum \(p_{i}\) (\(i=1,\ldots,K\)), we draw a cut through a set of edges \(\{e_{c}\}\) such that: (1) \(\{e_{c}\}\) disconnects the graph \(G\) into two connected subgraphs, one of which, denoted by \(\widehat{\gamma}_{i}\), is attached by \(p_{i}\) only; (2) the graph \(\gamma_{i}\equiv\widehat{\gamma}_{i}\cup\{e_{c}\}\) is mojetic.
* _Step 2_: For all possible sets \(\{\gamma_{1},\ldots,\gamma_{K}\}\), we overlay the graphs \(\gamma_{1},\ldots,\gamma_{K}\) and associate the edges \(e\in G\) to certain subgraphs as follows. If \(e\) has been assigned to two or more \(\gamma_{i}\), it belongs to the soft subgraph \(S\); if \(e\) has been assigned to exactly one \(\gamma_{i}\), it belongs to the jet subgraph \(J_{i}\); if \(e\) has not been assigned to any \(\gamma_{i}\), it belongs to \(H\). Let us also denote \(J\equiv\cup_{i=1}^{K}J_{i}\).
* _Step 3_: We now require that the result obtained in _Step 2_ satisfies the following three further conditions: (i) each jet subgraph \(J_{i}\) is connected; (ii) the hard subgraph \(H\) is connected; (iii) each of the \(K\) subgraphs \(H\cup J\setminus J_{i}\) (\(i=1,\ldots,K\)) is mojetic. The region would be ruled out if any of these conditions are not satisfied. A schematic implementation of these building blocks is sketched below.

Figure 1: The partitioning of a generic wide-angle scattering graph into infrared subgraphs corresponding to a particular region (a) \(R\) in the soft expansion, eq. (4.1), and (b) \(R^{\prime}\) in the on-shell expansion, eq. (4.2). The doubled lines connecting different blobs represent any number of propagators.
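A schematic implementation of two basic building blocks of this procedure, the one-vertex-irreducibility test entering the definition of mojetic graphs and the edge classification of _Step 2_, could look as follows (a Python/networkx sketch for illustration only; the data structures and function names are ours and do not reflect the actual implementation of ref. [40]).

```python
import networkx as nx

def is_one_vertex_irreducible(graph):
    """Connected and remains connected after removing any single vertex."""
    if not nx.is_connected(graph):
        return False
    for v in list(graph.nodes):
        h = graph.copy()
        h.remove_node(v)
        if h.number_of_nodes() > 0 and not nx.is_connected(h):
            return False
    return True

def is_mojetic(graph, external_nodes):
    """Attach all external legs to one auxiliary vertex and test 1VI."""
    g = graph.copy()
    g.add_node('aux')
    for v in external_nodes:
        g.add_edge('aux', v)
    return is_one_vertex_irreducible(g)

def classify_edges(edges, gammas):
    """Step 2: edges lying in >= 2 of the gamma_i form S, in exactly one form
    J_i, in none form H.  `gammas` maps i -> set of edges of gamma_i; edges
    must be given in one consistent (e.g. sorted-tuple) form."""
    S, H, J = set(), set(), {i: set() for i in gammas}
    for e in edges:
        hits = [i for i, g in gammas.items() if e in g]
        if len(hits) >= 2:
            S.add(e)
        elif len(hits) == 1:
            J[hits[0]].add(e)
        else:
            H.add(e)
    return H, J, S

# Example: a triangle with one external leg per corner is mojetic.
tri = nx.Graph([(1, 2), (2, 3), (3, 1)])
print(is_mojetic(tri, [1, 2, 3]))   # True
```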
Let us illustrate how this algorithm works through the following example of a \(3\times 2\) fishnet graph, which has four on-shell external momenta \(p_{1}\), \(p_{2}\), \(p_{3}\) and \(p_{4}\). A choice of the graphs \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma_{3}\) and \(\gamma_{4}\), which satisfy the conditions outlined in _Step 1_, is shown below. Note that in each figure, the edges \(\{e_{c}\}\) are cut by the dotted curve.
[Figure: a choice of the subgraphs \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma_{3}\) and \(\gamma_{4}\) for the \(3\times 2\) fishnet graph, with the cut edges \(\{e_{c}\}\) indicated by dotted curves.]
We now specialize to the process of eq. (3.1), in which the external gluon with momentum \(p_{4}\) is soft. The kinematic limit can be summarized as
\[p_{1}^{2}\sim Q^{2},\qquad p_{2}^{2}=p_{3}^{2}=p_{4}^{2}=0, \tag{4.5a}\] \[p_{1}\cdot p_{4}\sim p_{2}\cdot p_{4}\sim p_{3}\cdot p_{4}\sim \lambda Q^{2},\qquad p_{1}\cdot p_{2}\sim Q^{2}, \tag{4.5b}\]
which is a special case of soft expansion described in the beginning of this section in eq. (4.1).
Note that in this particular case, where \(p_{4}\) is the unique soft external momentum, additional requirements on the configurations of \(H\), \(J\), and \(S\) are needed [87], giving the following possibilities.
* If there are no internal soft propagators, then there can be at most one nontrivial\({}^{1}\) jet \(J_{i}\) (\(i=2\) or \(3\)), to which \(p_{4}\) is directly attached. In the special case that neither \(J_{2}\) nor \(J_{3}\) is nontrivial, the region is referred to as the _hard region_, where all the loop momenta are equally off shell. Footnote 1: A jet is called nontrivial if it has one or more edges.
* If there are internal soft propagators, then each component of \(S\) must be adjacent to both \(J_{2}\) and \(J_{3}\). In addition, \(p_{4}\) must enter a soft vertex. In general, such regions are depicted in figure 2, where the hard, jet, and soft subgraphs satisfy:
* the hard subgraph \(H\) is connected and attached by \(p_{1}\);
* the jet subgraphs \(J_{2}\) and \(J_{3}\) are both connected and adjacent to \(H\), and are attached by \(p_{2}\) and \(p_{3}\) respectively;
* the soft subgraph \(S\) is attached by \(p_{4}\), and each of its connected components is adjacent to both \(J_{2}\) and \(J_{3}\).
Figure 2: The general configuration of regions in the process of eq. (3.1), where there are internal soft propagators. The external momenta \(p_{1}\), \(p_{2}\), \(p_{3}\), and \(p_{4}\) attach to \(H\), \(J_{2}\), \(J_{3}\), and \(S\), respectively.
This is illustrated below with some examples of regions and of configurations that do not correspond to regions.
[Figure: example configurations for the process of eq. (3.1), showing both valid regions and configurations that are not regions.]
Our analysis above is sufficient to develop soft expansions for the \(p_{1}\to p_{2}p_{3}p_{4}\) process, namely, eq. (4.5). Based on the findings of ref. [87], our method can be readily extended to soft expansions for generic wide-angle scattering, eq. (4.1).
## 5 Renormalization and infrared pole structure
In this section, we briefly describe the renormalization and subtraction of singularities of the bare emission current. In general, the infrared and ultraviolet singularities of a scattering amplitude computed in perturbative QCD can be subtracted to yield a finite remainder using the following definitions.
\[\mathcal{A}_{f}(\alpha_{S}(\mu^{2}),\{p_{i}\})=\mathbf{Z}(\alpha_{S}(\mu^{2}), \{p_{i}\},\epsilon)\ \mathbf{Z}_{\rm UV}\ \mathcal{A}(\{p_{i}\},\epsilon). \tag{5.1}\]
The factor \(\mathbf{Z}_{\rm UV}\) implements the renormalization of the strong coupling constant in the \(\overline{\rm MS}\) scheme and \(\epsilon\) is the dimensional regulator related to the space-time dimension by \(d=4-2\epsilon\).
\[\mathbf{Z}_{\rm UV}\alpha_{S}=\alpha_{S}(\mu^{2})\left(\frac{\mu^{2}}{4\pi} \right)^{-\epsilon}e^{\gamma_{E}\epsilon}Z_{\alpha_{S}}. \tag{5.2}\]
The factor \(Z_{\alpha_{S}}\) is given in terms of the \(\beta\)-function [88; 89; 90; 91; 92; 93] through three loops in QCD by the following expression.
\[Z_{\alpha_{S}} = 1-\frac{\alpha_{S}(\mu^{2})}{\pi}\frac{1}{\epsilon}\beta_{0}+ \left(\frac{\alpha_{S}(\mu^{2})}{\pi}\right)^{2}\left(\frac{1}{\epsilon^{2}} \beta_{0}^{2}-\frac{1}{2\epsilon}\beta_{1}\right)\] \[-\left(\frac{\alpha_{S}(\mu^{2})}{\pi}\right)^{3}\left(\frac{1}{ \epsilon^{3}}\beta_{0}^{3}-\frac{7}{6\epsilon^{2}}\beta_{0}\beta_{1}+\frac{1} {3\epsilon}\beta_{2}\right)+\mathcal{O}(\alpha_{S}^{4}).\]
The factor \(\mathbf{Z}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)\) is an operator in color space and implements the universal subtraction of infrared and collinear singularities of loop amplitudes [94; 95; 96; 97; 98; 99; 100; 101]. It can be expressed in terms of the _soft anomalous dimension matrix_\(\mathbf{\Gamma}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)\) by the following path ordered exponential.
\[\mathbf{Z}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)=\mathcal{P}\exp\left\{- \frac{1}{4}\int_{0}^{\mu^{2}}\frac{d\mu^{\prime 2}}{\mu^{\prime 2}}\mathbf{ \Gamma}(\alpha_{S}(\mu^{\prime 2}),\{p_{i}\},\epsilon)\right\}\,, \tag{5.4}\]
with
\[\mathbf{\Gamma}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)=\sum_{i\neq j} \mathbf{T}_{i}^{a}\mathbf{T}_{j}^{a}\Gamma_{\rm cusp}(\alpha_{S}(\mu^{2}))\log \frac{-s_{ij}}{\mu^{2}}+\frac{1}{2}\sum_{i}\mathds{1}\gamma_{c}^{R_{i}}+ \mathbf{\Delta}(\alpha_{S}(\mu^{2}),\{p_{i}\}). \tag{5.5}\]
Above, \(\Gamma_{\rm cusp}\) refers to the cusp anomalous dimension [99], which is currently known exactly through four-loop order [102; 103], and approximately at five loops [104]. Furthermore, \(\gamma_{c}^{R}\) is the collinear anomalous dimension, obtained through four-loop order in refs. [105; 103]. The formula above was derived and calculated through three-loop order in ref. [94] and verified in \(\mathcal{N}=4\) super Yang-Mills theory [106] and QCD [107; 108; 109]. In ref. [101], its general structure was determined at four-loop order. The term \(\mathbf{\Delta}(\alpha_{S}(\mu^{2}),\{p_{i}\})\) is known as the correction to the dipole formula and starts at three-loop order. As the name suggests,
it contains contributions where the color operator acts on more than two color-charged external particles simultaneously for the first time. This term can be further decomposed as follows.
\[\mathbf{\Delta}(\alpha_{S}(\mu^{2}),\{p_{i}\})=\left(\frac{\alpha_{S}(\mu^{2})}{\pi} \right)^{3}\left[\mathbf{\Delta}_{3}^{(3)}+\mathbf{\Delta}_{4}^{(3)}(\{p_{i}\})\right]+ \mathcal{O}\left(\alpha_{S}^{4}\right). \tag{5.6}\]
The expression \(\mathbf{\Delta}_{4}^{(3)}(\{p_{i}\})\) is known as the quadruple correction and involves color correlations among four or more different color-charged particles. Consequently, this term will not contribute to the scattering amplitudes we consider here. The term \(\mathbf{\Delta}_{3}^{(3)}\) relates three different color-charged external particles and is explicitly given by
\[\mathbf{\Delta}_{3}^{(3)}=\frac{1}{4}C\,f_{abe}f_{cde}\sum_{i\neq j,i\neq k,j\neq k }\left\{\mathbf{T}_{i}^{a},\mathbf{T}_{i}^{d}\right\}\mathbf{T}_{j}^{b}\mathbf{ T}_{k}^{c}\,, \tag{5.7}\]
with the constant \(C=\zeta_{5}+2\zeta_{3}\zeta_{2}\). The color operators \(\mathbf{T}_{i}^{a}\) are defined below, via their actions on an outgoing quark, antiquark and gluon.
\[\mathbf{T}_{i}^{a}\epsilon(p_{i})_{b}^{\mu} = -if^{abc}\epsilon_{c}^{\mu}(p_{i}).\] \[\mathbf{T}_{i}^{a}\bar{u}_{k}(p_{i}) = T_{jk}^{a}\bar{u}_{j}(p_{i}).\] \[\mathbf{T}_{i}^{a}u_{k}(p_{i}) = -T_{kj}^{a}u_{j}(p_{i}). \tag{5.8}\]
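As a small numerical illustration of the adjoint action in the first line of eq. (5.8) (a toy check for \(SU(2)\), where \(f^{abc}=\epsilon^{abc}\); not part of the setup used in this article), one can verify that the matrices \((T_{A}^{a})_{bc}=-if^{abc}\) satisfy the commutation relations \([T^{a},T^{b}]=if^{abc}T^{c}\):

```python
import numpy as np

# Adjoint generators (T_A^a)_{bc} = -i f^{abc}; for SU(2), f^{abc} = epsilon^{abc}.
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c] = 1.0
    eps[a, c, b] = -1.0
T = [-1j * eps[a] for a in range(3)]

# Check the Lie algebra [T^1, T^2] = i f^{123} T^3 (indices 0,1,2 here).
comm = T[0] @ T[1] - T[1] @ T[0]
print(np.allclose(comm, 1j * T[2]))   # True
```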
We are particularly interested in scattering amplitudes involving three final-state gluons (\(ggg\)) or the combination of a final-state quark-antiquark pair and a gluon (\(q\bar{q}g\)). With the definitions above, we can now evaluate the action of the operator given in eq. (5.5) on such an amplitude.
\[\mathbf{\Gamma}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)\mathcal{A}_{ggg} \tag{5.9}\] \[=\left[-C_{A}\Gamma_{\text{cusp.}}\left(\log\frac{-s_{12}}{\mu^{2 }}+\log\frac{-s_{13}}{\mu^{2}}+\log\frac{-s_{23}}{\mu^{2}}\right)+\frac{3}{2} \gamma_{c}^{A}\right.\] \[\left.\hskip 113.811024pt-\frac{C}{8}\left(\frac{\alpha_{S}(\mu^{2 })}{\pi}\right)^{3}\left(C_{A}^{3}-24\frac{C_{4}^{AA}}{d_{A}T_{A}}\right) \right]\mathcal{A}_{ggg}.\] \[\mathbf{\Gamma}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)\mathcal{A}_{ q\bar{q}g}\] \[=\left[-\Gamma_{\text{cusp.}}\left(-(C_{A}-2C_{F})\log\frac{-s_{ 12}}{\mu^{2}}+C_{A}\log\frac{-s_{13}}{\mu^{2}}+C_{A}\log\frac{-s_{23}}{\mu^{2} }\right)\right.\] \[\left.\hskip 113.811024pt+\frac{1}{2}\gamma_{c}^{A}+\gamma_{c}^{F} -\frac{C}{8}\left(\frac{\alpha_{S}(\mu^{2})}{\pi}\right)^{3}\left(C_{A}^{3}- 24\frac{C_{4}^{AF}}{d_{A}T_{F}}\right)\right]\mathcal{A}_{q\bar{q}g}.\]
Note that the formulae above are valid up to three loops. The action of the soft anomalous dimension operator on our amplitudes is diagonal in color space such that the subtraction of infrared singularities becomes multiplicative. We now want to make use of the factorization introduced in eq. (2.1) in order to simplify the subtraction of infrared poles. By rewriting eq. (2.1) for finite amplitudes, we find
\[\mathbf{Z}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon)\mathbf{Z}_{\text{UV}} \mathcal{A}(\{p_{i}\},\epsilon)\]
\[=\left[{\bf Z}_{J}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon){\bf Z}_{ \rm UV}{\bf J}(p_{4})\right]\times\left[{\bf Z}(\alpha_{S}(\mu^{2}),\{p_{i}\}, \epsilon){\bf Z}_{\rm UV}{\cal F}_{1\to 23}{\cal A}_{1\to 23}^{(0)} \right]. \tag{11}\]
The action of \({\bf\Gamma}\) on the amplitudes \({\cal A}_{1\to 23}\) is given by
\[{\bf\Gamma}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon){\cal A}_{1 \to 23}=\left[-2C_{R}\Gamma_{\rm cusp.}\log\frac{-s_{12}}{\mu^{2}}+\gamma_{c}^{R }\right]{\cal A}_{1\to 23}. \tag{12}\]
Above, the sub- or superscript \(R\) indicates the color representations of the colored particles of \({\cal A}_{1\to 23}\). This result can now be used in order to find
\[{\bf\Gamma}_{J}(\alpha_{S}(\mu^{2}),\{p_{i}\},\epsilon){\bf Z}_{\rm UV}{\bf J}(p_{4})\] \[=\left[C_{A}\Gamma_{\rm cusp.}\left(\log\frac{-s_{12}}{\mu^{2}}-\log\frac{-s_{13}}{\mu^{2}}-\log\frac{-s_{23}}{\mu^{2}}\right)+\frac{1}{2}\gamma_{c}^{A}\right.\] \[\left.-\frac{C}{8}\left(\frac{\alpha_{S}(\mu^{2})}{\pi}\right)^{3}\left(C_{A}^{3}-24\frac{C_{4}^{AR}}{d_{A}T_{R}}\right)\right]{\bf Z}_{\rm UV}{\bf J}(p_{4}).\]
Next, we perform the integration over \(\mu^{\prime 2}\) in eq. (5.4) and consequently obtain the necessary ingredients to remove all infrared singularities of the single-soft emission current. Indeed, we find that our results are finite once the subtraction procedure of eq. (11) is complete. The fact that the poles of the soft emission current computed from the \(\gamma^{*}\to q\bar{q}g\) amplitude and the \(h\to ggg\) amplitude agree with the prediction based on the soft anomalous dimension matrix discussed here is a robust cross-check of our results.
## 6 The soft current in \({\cal N}=4\) super Yang-Mills theory
Maximally super-symmetric Yang-Mills theory (\({\cal N}=4\) sYM) is an excellent testing ground for many aspects of four-dimensional non-abelian gauge theory. It has countless times served as a laboratory to explore perturbation theory to perturbative orders simply not accessible in realistic theories like QCD. One particular observation is that there is an interesting similarity between QCD and \({\cal N}=4\) sYM: The leading transcendental part of the perturbative expansion of certain quantities agrees among the two theories [64; 65]. This correspondence even holds true for certain form factors of operators of the stress tensor multiplet [110; 111; 112; 113]. In particular, the form factor of three on-shell states \(\Phi\) and the trace of two scalar fields \(\phi\),
\[{\cal F}_{2}=\int d^{d}x\langle\Phi_{1}\Phi_{2}\Phi_{3}|\phi^{I}(x)\phi^{I}(x )|0\rangle, \tag{13}\]
corresponds to the amplitude of a Higgs boson decaying to three gluons in QCD. This form factor has been of great interest to the community [114; 115; 116; 117; 118; 119; 120] and was recently computed to staggering eight-loop accuracy in the planar limit of the theory [121].
Similar to the QCD case discussed above, the soft limit of these form factors can be used to extract the soft current in \({\cal N}=4\) sYM theory. To achieve this, we start by using the integrand for the form factor determined in ref. [63] at two- and three-loop order. We then apply our integrand expansion technology and compute the first term in the maximally soft limit of this form factor. We obtain a pure function (i.e. of maximal transcendental weight) for both the two- and three-loop result. We then compare our result with the maximally soft
limit of the decay form factor of a Higgs to three gluons in QCD. Indeed, we find that these two results agree to all orders in the dimensional regulator for the leading transcendental contribution. Consequently, we determine that the single-soft emission current in \(\mathcal{N}=4\) sYM theory is identical to the leading transcendental part of the QCD results quoted above. This validates the principle of maximal transcendentality for the quantity computed here.
We find that the maximally soft limit of the \(\mathcal{F}_{2}\) form factor at three-loop order relative to its Born contribution can be cast in the following form.
\[\mathcal{F}_{2}^{(3)}/\mathcal{F}_{2}^{(0)}=\frac{a_{S}^{3}\pi^{3}i}{2^{11} \times 3^{6}\times 5\times\epsilon^{6}}\left(\frac{(-2p_{2}p_{4})(-2p_{3}p_{4})}{(-2p_{2}p _{3})\mu^{2}}\right)^{-3\epsilon}\left[C_{A}^{3}F^{(3),\mathrm{P}}+\frac{C_{4 }^{AA}}{d_{A}T_{A}}F^{(3),\mathrm{NP}}\right]. \tag{103}\]
In the equation above, we defined two uniform transcendental functions that we present here as integer-linear combinations of our canonical soft master integrals defined in section 3. We would like to emphasize that the solution of the canonical soft master integrals we provide is valid only up to transcendental weight 8. In contrast, the expressions below are correct to arbitrary order of the Laurent expansion in \(\epsilon\).
\[F^{(3),\mathrm{P}} = -3996M_{c}^{50}+4032M_{c}^{49}+1350M_{c}^{48}-8640M_{c}^{47}-12960M_{c}^{46}+3807M_{c}^{45}\] \[+6156M_{c}^{43}+720M_{c}^{41}+702M_{c}^{40}+5400M_{c}^{39}-3240M_{c}^{37}-1125M_{c}^{36}-6570M_{c}^{35}\] \[+360M_{c}^{33}-12960M_{c}^{31}-540M_{c}^{28}-2376M_{c}^{26}+1890M_{c}^{25}+5184M_{c}^{24}\] \[+9720M_{c}^{23}+16200M_{c}^{21}-560M_{c}^{20}+8076M_{c}^{19}-120M_{c}^{18}-116640M_{c}^{17}\] \[-1944M_{c}^{16}-4860M_{c}^{15}-180M_{c}^{13}+103680M_{c}^{12}-936M_{c}^{11}+16200M_{c}^{10}\] \[-19440M_{c}^{9}-378M_{c}^{8}-7344M_{c}^{7}-3240M_{c}^{6}+6480M_{c}^{5}+864M_{c}^{4}-15552M_{c}^{1}.\]
\[F^{(3),\mathrm{NP}} = 95904M_{c}^{50}-96768M_{c}^{49}-32400M_{c}^{48}-103680M_{c}^{47} -91368M_{c}^{45}-147744M_{c}^{43} \tag{104}\] \[-16848M_{c}^{40}-129600M_{c}^{39}+77760M_{c}^{37}+27000M_{c}^{36} +157680M_{c}^{35}+4320M_{c}^{33}\] \[+12960M_{c}^{28}+57024M_{c}^{26}-45360M_{c}^{25}+62208M_{c}^{24}- 233280M_{c}^{23}+67392M_{c}^{21}\] \[+13440M_{c}^{20}+75744M_{c}^{19}+2880M_{c}^{18}+46656M_{c}^{16}+4 320M_{c}^{13}+22464M_{c}^{11}\] \[+77760M_{c}^{10}+466560M_{c}^{9}+9072M_{c}^{8}+176256M_{c}^{7}+77 760M_{c}^{6}-155520M_{c}^{5}\] \[+10368M_{c}^{4}.\]
Above we introduced
\[d_{A}=(n_{c}^{2}-1),\hskip 28.452756ptT_{A}=n_{c}. \tag{105}\]
## 7 Single-real threshold contribution to the Higgs boson and Drell-Yan production cross sections at N\({}^{4}\)LO
The inclusive gluon fusion Higgs boson production cross section and the Drell-Yan production cross section of an electron-positron pair are some of the most important LHC observables. Currently, their predictions are known through N\({}^{3}\)LO in perturbative QCD [122; 123; 124; 125]. Going beyond the current state of the art is a formidable challenge and we present here a first contribution towards this step.
The LHC cross sections for the production of a virtual photon or a Higgs boson in gluon fusion in the infinite top quark mass limit are described by the following factorization formula.
\[\sigma_{B}=\tau\hat{\sigma}_{0}^{B}C_{B}^{2}\sum_{ij}f_{i}(\tau)\circ_{\tau}\eta _{ij}^{B}(\tau)\circ_{\tau}f_{j}(\tau),\hskip 28.452756ptB\in\{H,\gamma^{*}\}. \tag{115}\]
Above, the \(f_{i}\) are parton distribution functions (PDFs), \(\hat{\sigma}_{0}^{B}\) represents the partonic Born cross section, and we define the ratio \(\tau=Q^{2}/S\), where \(Q\) is the virtuality of the virtual photon or the mass of the Higgs boson, and \(S\) is the squared hadronic center-of-mass energy. The PDFs are convoluted with the partonic coefficient functions using standard Mellin convolutions indicated by the symbol \(\circ\). The partonic coefficient functions \(\eta_{ij}^{B}\) are given by
\[\eta_{ij}^{B}(z)=\frac{\mathcal{N}_{ij}}{2Q^{2}\hat{\sigma}_{0}^{B}}\sum_{m=0} ^{\infty}\int d\Phi_{B+m}\mathcal{M}_{ij\to B+m}. \tag{116}\]
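For orientation, the Mellin convolutions indicated by \(\circ_{\tau}\) above are of the standard form \((f\circ g)(\tau)=\int_{\tau}^{1}\frac{dx}{x}\,f(x)\,g(\tau/x)\). A minimal numerical sketch (with toy functions in place of actual PDFs and partonic coefficient functions) is:

```python
from scipy.integrate import quad

def mellin_convolution(f, g, tau):
    """Standard Mellin convolution (f o g)(tau) = int_tau^1 dx/x f(x) g(tau/x)."""
    value, _ = quad(lambda x: f(x) * g(tau / x) / x, tau, 1.0)
    return value

# Toy check with f(x) = x and g(x) = 1: the convolution equals 1 - tau.
print(mellin_convolution(lambda x: x, lambda x: 1.0, 0.3))   # ~ 0.7
```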
The normalization factor \(\mathcal{N}_{ij}\) depends on the initial state and is given by
\[\mathcal{N}_{gg}=\frac{1}{4(n_{c}^{2}-1)^{2}(1-\epsilon)^{2}},\hskip 28.452756pt \mathcal{N}_{q\bar{q}}=\frac{1}{4n_{c}^{2}}, \tag{117}\]
where \(g\), \(q\) and \(\bar{q}\) represent a gluon, quark and anti-quark respectively, and \(n_{c}\) denotes the number of colors. The coefficient \(C_{B}\) is simply unity for the production cross section of a virtual photon and equal to the Wilson coefficient [78; 79; 80; 81] for the effective field theory describing the interactions of a Higgs boson with gluons in the limit of infinitely large top quark mass [74; 75; 76; 77]. The color and spin summed squared matrix element is given by \(\mathcal{M}_{ij\to B+m}\). This squared matrix element describes the production of the desired boson \(B\) and \(m\) final state partons in the collision of initial state partons \(i\) and \(j\). In this article, we focus in particular on the contribution for one final state gluon (i.e. \(m=1\)). We refer to this contribution as the single real emission (R) contribution to the inclusive cross section. The corresponding partonic coefficient function is consequently given by
\[\eta_{ij}^{B,R}(z)=\frac{\mathcal{N}_{ij}}{2Q^{2}\hat{\sigma}_{0}^{B}}\int d \Phi_{B+1}\mathcal{M}_{ij\to B+1}. \tag{118}\]
We focus on the limit in which the energy of the final state parton vanishes. This limit is referred to as the production threshold, as all energy of the colliding partons is used to produce the final state boson. To parametrize this limit, we introduce the following variables
\[\bar{z}=1-z,\hskip 28.452756ptz=\frac{Q^{2}}{s}. \tag{119}\]
The threshold (or soft) limit is given by \(\bar{z}\to 0\). We can now exploit the factorization of scattering amplitudes introduced in eq. (1) to compute the threshold limit of the single-real emission partonic coefficient function.
\[\eta_{ij}^{B,R,\text{thr.}}(z)=\lim_{\bar{z}\to 0}\eta_{ij}^{B,R}(z)=\frac{ \mathcal{N}_{ij}}{2Q^{2}\hat{\sigma}_{0}^{B}}\int d\Phi_{B+1}\sum_{\text{Spin, Color}}\Big{|}\mathbf{J}(p_{g})\mathcal{A}_{p_{i}p_{j}\to B}\Big{|}^{2}. \tag{120}\]
The result for this part of the partonic coefficient function can be expanded in the strong coupling constant (eq. (4)).
\[\eta^{B,R,{\rm thr.}}_{ij}(z)=\sum_{o=0}^{\infty}a_{S}^{o}\eta^{B,R,{\rm thr.},(o )}_{ij}(z). \tag{110}\]
The above single-real emission contribution to the partonic coefficient function computed through N\({}^{4}\)LO in perturbative QCD represents a major result of this article. To obtain this result through three loops in QCD, we make use of our newly derived results for the soft current (eq. (2)) and apply it to the purely virtual amplitudes of the scattering process in question. These virtual amplitudes are currently available in the literature to four-loop order [103; 126; 127; 128; 129; 130; 105; 131], even beyond what is required here. To arrive at the desired result for the partonic coefficient function through N\({}^{4}\)LO, we first perform an analytic continuation of the soft current in eq. (3) into the production region. Next, we apply the current to the purely virtual amplitudes to obtain the threshold limit of the desired scattering amplitude. Then, we interfere the soft scattering amplitude with its complex conjugate and finally perform the integration over the single emission phase space \(d\Phi_{B+1}\).
We express our results as Laurent expansions in the dimensional regulator \(\epsilon\) and write threshold singularities in terms of standard Dirac delta functions and plus distributions. Their action on a test function \(f(\bar{z})\) is given by
\[f(0)=\int_{0}^{1}d\bar{z}\delta(\bar{z})f(\bar{z}),\hskip 28.452756pt\int_{0}^{1} d\bar{z}\left[\frac{\log^{n}\bar{z}}{\bar{z}}\right]_{+}f(\bar{z})=\int_{0}^{1}d \bar{z}\frac{\log^{n}\bar{z}}{\bar{z}}(f(\bar{z})-f(0)). \tag{111}\]
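As a minimal numerical illustration of eq. (111), the action of \([\log^{n}\bar{z}/\bar{z}]_{+}\) on a smooth test function can be evaluated by subtracting \(f(0)\) inside the integrand; the test function below is an arbitrary placeholder.

```python
import numpy as np
from scipy.integrate import quad

def plus_distribution(f, n):
    """int_0^1 dzb [log^n(zb)/zb]_+ f(zb) = int_0^1 dzb log^n(zb)/zb (f(zb) - f(0))."""
    integrand = lambda zb: np.log(zb) ** n / zb * (f(zb) - f(0.0))
    value, _ = quad(integrand, 0.0, 1.0, limit=200)
    return value

f_test = lambda zb: np.exp(-zb)   # arbitrary smooth test function
for n in range(4):
    print(n, plus_distribution(f_test, n))
```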
In order for our results to be usable for the computation of the N\({}^{4}\)LO production cross section we truncate the Laurent expansion in \(\epsilon\) at \({\cal O}(\epsilon^{8-2n})\) at N\({}^{n}\)LO in QCD perturbation theory for \(n\in\{1,2,3,4\}\). Note that first approximate results for the full threshold limit of the N\({}^{4}\)LO production cross section already appeared in refs. [132; 85]. We confirm previous computations through N\({}^{3}\)LO in the literature [47; 61; 133; 134; 135; 136; 137]. We present our results in terms of computer readable files in association with the arXiv submission of this article.
## 8 Conclusion
In this article, we computed N\({}^{3}\)LO corrections to the single-soft emission current applied to amplitudes with two color-charged partons. Our result is a significant contribution to our understanding of soft factorization at N\({}^{3}\)LO in perturbative QFT and the understanding of infrared singularities at N\({}^{4}\)LO.
We achieved our results by performing a systematic expansion of three-loop scattering amplitudes involving one color-neutral boson and three partons around the limit of one gluon becoming soft. To facilitate this expansion, we developed a new method for the systematic expansion of Feynman graphs around soft limits. We emphasize the generality of this technique and apply it to obtain our results for the soft emission current as a first example.
We performed the expansion of scattering matrix elements in QCD and in maximally supersymmetric Yang-Mills theory. We observe that the results from the two different
gauge theories agree at the highest transcendentality, in accord with previous conjectures. Furthermore, we use our new results to determine the contributions to the threshold approximation of the Higgs boson and Drell-Yan production cross sections at the LHC at N\({}^{4}\)LO in perturbative QCD. To facilitate the use of our results, we make them available in terms of computer readable files associated with the arXiv submission of this article.
**Note:** During the completion of this manuscript a separate calculation of the three-loop single-soft emission current became public on the arXiv in ref. [138]. We have found complete agreement among our main results for coefficients \(K_{2}^{(3)}\) (eq. (11)), \(K_{4A}^{(3)}\) (eq. (13)) and \(K_{4F}^{(3)}\) (eq. (15)). It is worth noting that the methods employed in ref. [138] and in this article are substantially different, and obtaining matching results thus provides a robust cross-validation.
###### Acknowledgments.
We thank Lance Dixon, Einan Gardi and Stephen Jones for useful discussions. FH and YM are supported by the UKRI FLF grant "Forest Formulas for the LHC" (Mr/S03479x/1) and the STFC Consolidated Grant "Particle Physics at the Higgs Centre". BM and AS are supported by the United States Department of Energy, Contract DE-AC02-76SF00515. YM would like to thank the Galileo Galilei Institute for Theoretical Physics for the hospitality and the INFN for partial support, during the completion of this work.
|
2309.16751 | Secondary Whistler and Ion-cyclotron Instabilities driven by Mirror
Modes in Galaxy Clusters | Electron cyclotron waves (whistlers), are commonly observed in plasmas near
Earth and the solar wind. In the presence of nonlinear mirror modes, bursts of
whistlers, usually called lion roars, have been observed within low magnetic
field regions associated to these modes. In the intracluster medium (ICM) of
galaxy clusters, the excitation of the mirror instability is expected, but it
is not yet clear whether electron and ion cyclotron waves can also be present
under conditions where gas pressure dominates over magnetic pressure (high
$\beta$). In this work, we perform fully kinetic particle-in-cell (PIC)
simulations of a plasma subject to a continuous amplification of the mean
magnetic field $\textbf{B}(t)$ to study the nonlinear stages of the mirror
instability and the ensuing excitation of whistler and ion cyclotron (IC) waves
under ICM conditions. Once mirror modes reach nonlinear amplitudes, both
whistler and IC waves start to emerge simultaneously, with sub-dominant
amplitudes, propagating in low-$\textbf{B}$ regions, and quasi-parallel to
$\textbf{B}(t)$. We show that the underlying source of excitation is the
pressure anisotropy of electrons and ions trapped in mirror modes with
loss-cone type distributions. We also observe that IC waves play an essential
role in regulating the ion pressure anisotropy at nonlinear stages. We argue
that whistler and IC waves are a concomitant feature at late stages of the
mirror instability even at high-$\beta$, and therefore expected to be present
in astrophysical environments like the ICM. We discuss the implications of our
results for collisionless heating and dissipation of turbulence in the ICM. | Francisco Ley, Ellen G. Zweibel, Drake Miller, Mario Riquelme | 2023-09-28T18:00:00Z | http://arxiv.org/abs/2309.16751v1 | # Secondary Whistler and Ion-cyclotron Instabilities driven by Mirror Modes in Galaxy Clusters
###### Abstract
Electron cyclotron waves (whistlers) are commonly observed in plasmas near Earth and in the solar wind. In the presence of nonlinear mirror modes, bursts of whistlers, usually called lion roars, have been observed within low magnetic field regions associated with these modes. In the intracluster medium (ICM) of galaxy clusters, the excitation of the mirror instability is expected, but it is not yet clear whether electron and ion cyclotron waves can also be present under conditions where gas pressure dominates over magnetic pressure (high \(\beta\)). In this work, we perform fully kinetic particle-in-cell (PIC) simulations of a plasma subject to a continuous amplification of the mean magnetic field \(\textbf{B}(t)\) to study the nonlinear stages of the mirror instability and the ensuing excitation of whistler and ion cyclotron (IC) waves under ICM conditions. Once mirror modes reach nonlinear amplitudes, both whistler and IC waves start to emerge simultaneously, with sub-dominant amplitudes, propagating in low-\(\textbf{B}\) regions, and quasi-parallel to \(\textbf{B}(t)\). We show that the underlying source of excitation is the pressure anisotropy of electrons and ions trapped in mirror modes with loss-cone type distributions. We also observe that IC waves play an essential role in regulating the ion pressure anisotropy at nonlinear stages. We argue that whistler and IC waves are a concomitant feature at late stages of the mirror instability even at high \(\beta\), and are therefore expected to be present in astrophysical environments like the ICM. We discuss the implications of our results for collisionless heating and dissipation of turbulence in the ICM.
Plasma astrophysics(1261) -- Intracluster medium(858) -- High energy astrophysics(739) -- Extragalactic magnetic fields(507)
## 1 Introduction
Several classes of astrophysical plasmas display fully developed turbulent states and a weak collisionality, in the sense that the particles' mean free path is several orders of magnitude larger than the typical radius at which they gyrate around the ambient magnetic field. These two characteristics alone can make the transport properties and global evolution of the astrophysical environment in question difficult to model and dependent on the local dynamics at the particles' scales. Therefore, a detailed study of the behavior of these plasmas at the kinetic level becomes a necessity.
That is the case of the intracluster medium of galaxy clusters (ICM). The ICM is a hot, magnetized (Bonafede, A. et al. (2010)), weakly collisional and turbulent (Schuecker, P. et al. (2004); Zhuravleva et al. (2014); Hitomi Collaboration et al. (2016)) gas in the plasma state where the thermal pressure greatly exceeds the magnetic pressure (\(\beta\equiv 8\pi P/B^{2}\sim 10-100\), \(P\) is the isotropic thermal pressure and \(B\) the magnetic field strength). In these conditions, departures from thermodynamic equilibrium, such as pressure anisotropies, are easy to achieve. For example, slow compression of the magnetic field increases particle kinetic energy perpendicular to the magnetic field such that the magnetic moment (or, the magnetic flux through the particle gyro-orbit) remains constant, leading to an excess of perpendicular pressure \(P_{\perp}\) over parallel pressure \(P_{\parallel}\). However, pressure anisotropy cannot grow unchecked. Pressure anisotropies can easily destabilize microinstabilities such as mirror, firehose, ion-cyclotron and whistler (Schekochihin et al. (2005); Schekochihin & Cowley (2006)). The back reaction of these instabilities on the particles can maintain pressure anisotropy near its marginally unstable value, and are thought to play an important role in several aspects of ICM transport and heating (Kunz et al. (2011); Berlok et al. (2021); Drake et al. (2021); Perrone & Latter (2022a,b); Ley et al. (2023); Tran et al. (2023)).
In a similar vein, the solar wind and some regions of the Earth's magnetosheath and magnetosphere host plasmas that are also collisionless and turbulent. Even though the plasma \(\beta\) is lower there than in the ICM (\(\beta_{i}\sim 1-10\), \(\beta_{e}\sim 1\)), there are clear similarities. In particular, the plasma is also pressure anisotropic, and the same microinstabilities mentioned above are found to be present, usually in their fully developed, nonlinear stage (Bale et al. (2009)). Particularly important to this work is the presence of the mirror instability (Chandrasekhar et al. (1958); Rudakov & Sagdeev (1961); Hasegawa (1969); Southwood & Kivelson (1993); Kivelson & Southwood (1996); Pokhotelov et al. (2002, 2004)) and its interplay with the whistler and (potentially) ion-cyclotron instabilities (Gary (1992); Gary & Wang (1996)). An example of this interplay has been observed in these space plasmas and termed whistler lion roars.
Whistler lion roars are short bursts of right-hand polarized waves with frequencies below the electron cyclotron frequency (\(\omega_{c,e}\)), commonly observed in the Earth's magnetosheath and magnetosphere (Smith et al. (1969); Tsurutani et al. (1982); Baumjohann et al. (1999); Breuillard et al. (2018); Giagkiozis et al. (2018); Kitamura et al. (2020); Zhang et al. (2021)), and are therefore identified as whistler waves. They have also been observed in Saturn's magnetosheath (Pisa et al. (2018)) and in the solar wind. They are observed in regions of locally low magnetic field strength (magnetic troughs, or magnetic holes) associated with magnetic fluctuations. These magnetic troughs are usually identified as structures produced by mirror instability modes, which are able to trap electrons with low parallel velocity within these regions due to the aforementioned invariance of the magnetic moment (Southwood & Kivelson (1993)).
Several mechanisms have been proposed to explain the excitation of whistler lion roars. They usually invoke the pressure anisotropy \(P_{\perp,e}>P_{\parallel,e}\) that electrons generate while trapped inside the magnetic troughs (\(P_{\perp,e}\) and \(P_{\parallel,e}\) are, respectively, the electron pressure perpendicular and parallel with respect to the local magnetic field **B**). Other mechanisms have also been proposed involving counter-propagating electron beams inside these regions, and butterfly distributions in pitch-angle (Zhang et al. (2021); Jiang et al. (2022)). As the waves propagate out from the magnetic troughs, they are thought to interact with electrons, regulating the number of trapped electrons inside magnetic troughs and also the global anisotropy of electrons in the magnetosheath. This way, there would be a causal connection between an ion-scale mirror instability and an electron-scale whistler instability at nonlinear stages, providing valuable insight into the interaction of mirror modes with electrons.
The question arises as to whether a similar interplay can be expected in the ICM. Such behavior would imply a more complex scenario in which several microinstabilities would be causally connected and coexisting with each other, and several channels of turbulent energy dissipation would open, leading to a much richer dynamics.
Mirror instability and its consequences have been extensively studied using particle-in-cell (PIC) simulations of moderately and high-\(\beta\) plasmas, both hybrid (Kunz et al. (2014); Melville et al. (2016); Arzamasskiy et al. (2023)) and fully kinetic (Sironi & Narayan (2015); Riquelme et al. (2015, 2016); Ley et al. (2023)), up to nonlinear stages. Consistent with early theoretical works (Southwood & Kivelson (1993); Kivelson & Southwood (1996)), it has been demonstrated that mirror modes are efficient in trapping ions inside regions of low magnetic field strength during their secular growth (Kunz et al. (2014)). When mirror modes reach amplitudes of order \(\delta B/B\sim 1\), they reach a saturated stage and the ions eventually undergo scattering, allowing them to escape. This trapping process is similar for electrons, and it has been shown to have important consequences in the electron viscosity and thermal conduction of the plasma (Riquelme et al. (2016); Roberg-Clark et al. (2016, 2018)). Interestingly, Riquelme et al. (2016) reported the observation of whistler waves in the nonlinear, saturated stages of mirror modes in their simulations, along with ion-cyclotron (IC) waves, although they did not pinpoint the cause of the excitation.
In this work, we use PIC simulations to investigate the nonlinear stages of the mirror instability at moderate and high \(\beta\), focusing on the abovementioned excitation of whistler and IC waves. We observe that, indeed, both right-hand and left-hand polarized, quasi-parallel-propagating waves are excited at the end of the mirror's secular growth and during its saturated stage, and we provide evidence that their excitation mechanism is associated with the pressure anisotropy of electrons and ions within the magnetic troughs of mirror modes. The right- and left-handed circular polarizations of these waves lead to their identification as electron-cyclotron (i.e. whistler) and ion-cyclotron (IC) waves, respectively. We also provide some additional discussion about their nature. We describe the interaction of these waves with electrons and ions, and their effect on the regulation of the pressure anisotropy at late stages.
This paper is organized as follows. Section SS2 describes our simulation setup and the runs we perform. Section SS3 shows our simulation results starting from the excitation of the mirror instability, an early whistler burst and then the late excitation of the electron and ion cyclotron waves at nonlinear stages of the mirror instability. We also detail the mechanism by which these cyclotron waves are excited during the saturated stage of mirror modes, by tracking ions and electrons throughout the simulations. We also describe the subsequent interaction of these waves with the ions and electrons at late stages. In section SS4 we discuss the dependence of our results on the mass ratio used in our simulations and show that they are fairly insensitive to it. In section SS5 we present results of simulations at different initial ion plasma beta, and show these cyclotron waves are also present at lower and higher betas as well. Finally, we discuss the implication of our work in the context of galaxy clusters and present our conclusions in section SS6.
## 2 Simulation Setup
We perform fully kinetic, 2.5D particle-in-cell (PIC) simulations using TRISTAN-MP (Buneman (1993); Spitkovsky (2005)), in which we continuously shear a collisionless, magnetized plasma composed of ions and electrons (Riquelme et al. (2012)). The magnetic field is initially spatially uniform and starts pointing along the \(x\)-axis. A shear velocity field is imposed with \(\textbf{v}=-sx\hat{y}\) (red arrows in fig. 1), where \(x\) is the distance along the \(x\)-axis and \(s\) is a constant shear rate. We solve the PIC system of equations using shearing coordinates, as implemented in Riquelme et al. (2012) (The suitability of this approach to studying ion Larmor scale phenomena is also discussed in Riquelme et al. (2015)). The conservation of magnetic flux implies that the \(y\)-component of the magnetic field **B** evolves as \(dB_{y}/dt=-sB_{0}\), whereas \(dB_{x}/dt=0\) and \(dB_{z}/dt=0\). The action of the shear then
continuously amplifies the magnetic field strength such that its magnitude evolves as \(B(t)=B_{0}\sqrt{1+s^{2}t^{2}}\).
In our simulations, ions and electrons are initialized with Maxwell-Juttner distributions (the relativistic generalization of the Maxwell-Boltzmann distribution, Juttner (1911)) with equal initial temperatures \(T_{i}^{\text{init}}=T_{e}^{\text{init}}\), and \(k_{B}T_{i}^{\text{init}}/m_{i}c^{2}\) between \(0.01\) and \(0.02\). The physical parameters of our simulations are the initial temperature of ions and electrons (\(T_{i}^{\text{init}}=T_{e}^{\text{init}}\)), the initial ion plasma beta, \(\beta_{i}^{\text{init}}\), the mass ratio between ions and electrons \(m_{i}/m_{e}\), and the ratio between the initial ion cyclotron frequency and the shear frequency, \(\omega_{c,i}^{\text{init}}/s\), that we call the "scale-separation ratio". The numerical parameters in our simulations are the number of macroparticles per cell, \(N_{\text{ppc}}\), the plasma skin depth in terms of grid point spacing, \(c/\sqrt{\omega_{p,e}^{2}+\omega_{p,i}^{2}}/\Delta x\), and the domain size in terms of the initial ion Larmor radius, \(L/R_{L,i}^{\text{init}}\), where \(R_{L,i}^{\text{init}}=v_{\text{th},i}/\omega_{c,i}^{\text{init}}\) and \(v_{\text{th},i}^{2}=k_{B}T_{i}/m_{i}\). These physical and numerical parameters are listed in Table 1. We fix \(c/\sqrt{\omega_{p,e}^{2}+\omega_{p,i}^{2}}/\Delta x=3.5\) in the simulations presented in Table 1.
In the bulk of the paper we discuss a representative, fiducial simulation with \(m_{i}/m_{e}=8\), \(\beta_{i}^{\text{init}}=20\) (thus \(\beta^{\text{init}}=\beta_{i}^{\text{init}}+\beta_{e}^{\text{init}}=40\)) and \(\omega_{c,i}^{\text{init}}/s=800\) (simulation b20m8w800 in Table 1, highlighted in boldface). We vary the above parameters in a series of simulations, all listed in Table 1. Importantly, given the available computational capabilities, performing a simulation with realistic mass ratio \(m_{i}/m_{e}=1836\) becomes prohibitively expensive. Therefore, a range of values of the ion-to-electron mass ratio is presented in order to ensure that our results do not strongly depend on this parameter. The effects of varying these parameters are discussed in SSS4 & 5.
In the absence of a scattering mechanism and/or collisions, the ion and electron magnetic moments \(\mu_{j}\equiv p_{\perp,j}^{2}/(2m_{j}B)\) and longitudinal action \(\mathcal{J}_{j}\equiv\oint p_{j,\parallel}d\ell\) are adiabatic invariants (\(p_{\perp,j}\) and \(p_{\parallel,j}\) are the components of the momentum of a particle of species \(j\) perpendicular and parallel to the local magnetic field, respectively, and \(j=i,e\)), and therefore are conserved as the system evolves, provided that the variation of **B** is sufficiently slow compared to the particle cyclotron frequencies; in our case, \(s\ll\omega_{c,j}\), where \(\omega_{c,j}=eB/m_{j}c\) is the cyclotron frequency of particles of species \(j\), \(c\) is the speed of light, and \(e\) is the magnitude of the electric charge.
The continuous amplification of the magnetic field **B** implies that the particles' adiabatic invariance drives a pressure anisotropy in the plasma such that \(P_{\perp,j}>P_{\parallel,j}\). In the very early stages of the simulation, we expect the evolution of \(P_{\perp,j}\) and \(P_{\parallel,j}\) to be dictated by the double-adiabatic scalings (Chew et al. (1956)). Soon after this stage, however, the pressure anisotropy acts as a free energy source in the plasma and is able to excite several kinetic microinstabilities after surpassing their excitation thresholds, which are proportional to \(\beta^{-\alpha}\), \((\alpha\sim 0.5-1)\)(Hasegawa (1969); Gary & Lee (1994); Gary & Wang (1996)). These microinstabilities break the adiabatic invariants and act upon the pressure anisotropy to regulate the anisotropy growth in the nonlinear stages.
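To make this driving explicit, the short sketch below (a simplified illustration, not part of the PIC code) evolves \(B(t)=B_{0}\sqrt{1+s^{2}t^{2}}\) and applies the double-adiabatic scalings at constant density (\(P_{\perp}\propto B\), \(P_{\parallel}\propto B^{-2}\), appropriate for the incompressible shear assumed here) to estimate when the anisotropy would first exceed the approximate mirror threshold \(1/\beta_{\parallel}\) in the absence of instabilities.

```python
import numpy as np

# Illustrative fiducial-like parameter (assumption for this sketch)
beta_par_init = 20.0                      # initial ion parallel beta
s_t = np.linspace(0.0, 2.0, 201)          # time in units of 1/s

B = np.sqrt(1.0 + s_t ** 2)               # B(t)/B_0 under the imposed shear
P_perp = B                                 # CGL at constant density: P_perp grows as B
P_par = B ** -2                            # CGL at constant density: P_par falls as B^-2

anisotropy = P_perp / P_par - 1.0          # Delta P / P_par
beta_par = beta_par_init * P_par / B ** 2  # beta_par evolves with P_par / B^2
mirror_threshold = 1.0 / beta_par          # approximate mirror threshold

crossing = s_t[np.argmax(anisotropy > mirror_threshold)]
print(f"CGL anisotropy would cross the mirror threshold at t*s ~ {crossing:.2f}")
```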
In our simulations, and given our initial physical parameters (namely, \(\beta_{i}^{\text{init}}\equiv 8\pi P_{i}^{\text{init}}/B^{2\text{init}}=20\)), we expect the dominant instability to be the mirror instability. Mirror modes are purely growing (i.e. zero real frequency), with the fastest growing modes propagating highly obliquely with respect to the mean magnetic field. Their most unstable wavenumbers satisfy \(k_{\perp}R_{L,i}\sim 1\), where \(R_{L,i}\) is the ion Larmor radius. This instability presents Landau resonances with particles of very small parallel momentum, \(p_{\parallel}\approx 0\), that become trapped in between mirror modes, and contribute to regulating the pressure anisotropy.
In addition to the mirror instability, we also observe wave activity that we associate with the ion-cyclotron (Gary (1992)) and whistler (Gary & Wang (1996)) instabilities at ion and electron scales, respectively, during the late stages of our simulations. Ion cyclotron (IC) modes are left circularly polarized and have real frequency below the ion-cyclotron frequency \(\omega_{c,i}\), with modes of maximum growth rate propagating parallel to the mean magnetic field \(\mathbf{B}\).
\begin{table}
\begin{tabular}{l c c c c c c} \hline Runs & \(\beta_{i}^{\text{init}}\) & \(m_{i}/m_{e}\) & \(\omega_{c,i}^{init}/s\) & \(\frac{k_{B}T}{m_{i}c^{2}}\) & \(N_{\text{ppc}}\) & \(L/R_{L,i}^{\text{init}}\) \\ \hline \hline
**b20m8w800** & **20** & **8** & **800** & **0.02** & **600** & **54** \\ b20m32w800 & 20 & 32 & 800 & 0.01 & 300 & 50 \\ b20m64w800 & 20 & 64 & 800 & 0.01 & 200 & 40 \\ b40m8w800 & 40 & 8 & 800 & 0.02 & 300 & 49 \\ b2m8w800 & 2 & 8 & 800 & 0.02 & 300 & 68 \\ \hline \end{tabular}
\end{table}
Table 1: Simulation List: The physical parameters of the simulations are: the initial ion plasma beta \(\beta\equiv 8\pi P_{i}^{\text{init}}/B^{2}\), where \(P_{i}^{\text{init}}\) is the initial ion pressure, the mass ratio between ions and electrons \(m_{i}/m_{e}\), and the magnetization \(\omega_{c,i}/s\). The numerical parameters are the number of particles per cell \(N_{\text{ppc}}\) and the domain size in terms of the initial ion Larmor radius \(L/R_{L,i}^{\text{init}}\). Our fiducial simulation is highlighted in bold.
Figure 1: The evolution of the simulation domain. Panel \(a\): Initially, the box is straight, the magnetic field is initialized pointing in the \(\hat{x}\) direction and a shear velocity field \(\textbf{v}=-sx\hat{y}\) is imposed in the y–direction (red arrows). Panel \(b\): The velocity field shears the box continuously throughout the simulation, amplifying the magnetic field and changing its direction in the process due to magnetic flux conservation.
Similarly, whistler modes are right circularly polarized and have real frequency below the electron cyclotron frequency \(\omega_{c,e}\), with modes of maximum growth rate also propagating parallel to \(\mathbf{B}\). As we will see, this wave activity is associated with the ion and electron trapping processes that mirror modes generate.
## 3 Results
Figures 2 and 3 summarize the evolution of magnetic field fluctuations and particle pressure anisotropy over time.
Figure 2 shows the fluctuations in the magnetic field \(\delta\mathbf{B}\equiv\mathbf{B}-\langle\mathbf{B}\rangle\) (where \(\langle\cdot\rangle\) denotes a volume average over the entire simulation domain) in its three different components at two different times: \(t\cdot s=0.4\) (first row, panels \(a\),\(b\) and \(c\)) and at \(t\cdot s=1.4\) (second row, panels \(d\), \(e\) and \(f\)). The black arrows in panels \(a\)-\(f\) denote the direction of the mean magnetic field \(\langle\mathbf{B}\rangle\) at those particular times. The components of \(\delta\mathbf{B}\) are defined as parallel with respect to the main field \(\langle\mathbf{B}\rangle\) (\(\delta B_{\parallel}\), panels \(b\) and \(e\)), perpendicular to \(\langle\mathbf{B}\rangle\) in the plane of the simulation (\(\delta B_{\perp,xy}\), panels \(a\) and \(d\)) and perpendicular to \(\langle\mathbf{B}\rangle\) in the direction out of the simulation plane (\(\delta B_{z}\), panels \(c\) and \(f\)). Additionally, figure \(2g\) shows the evolution of the energy in each of the three components of \(\delta\mathbf{B}\), normalized by \(B(t)^{2}\); \(\delta B_{\parallel}^{2}\) (blue line), \(\delta B_{\perp,xy}^{2}\) (red line), and \(\delta B_{z}^{2}\) (green line).
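For reference, the decomposition of \(\delta\mathbf{B}\) used in fig. 2 can be reproduced with a few lines of array algebra. The sketch below assumes 2D snapshot arrays Bx, By, Bz (placeholders here) and a mean field lying in the \(x\)-\(y\) plane, as in our setup; the in-plane perpendicular direction follows one possible sign convention.

```python
import numpy as np

def decompose_fluctuations(Bx, By, Bz):
    """Split dB = B - <B> into the component parallel to <B>, the perpendicular
    component in the x-y plane, and the out-of-plane (z) component.
    Assumes <B> lies in the x-y plane."""
    bx, by = Bx.mean(), By.mean()              # mean (in-plane) field
    norm = np.hypot(bx, by)
    ex, ey = bx / norm, by / norm              # unit vector along <B>
    dBx, dBy, dBz = Bx - bx, By - by, Bz - Bz.mean()

    dB_par = dBx * ex + dBy * ey               # parallel to <B>
    dB_perp_xy = -dBx * ey + dBy * ex          # perpendicular to <B>, in plane
    return dB_par, dB_perp_xy, dBz

# Placeholder fields on a 64x64 grid standing in for a simulation snapshot
rng = np.random.default_rng(0)
Bx = rng.normal(1.0, 0.1, (64, 64))
By = rng.normal(0.5, 0.1, (64, 64))
Bz = rng.normal(0.0, 0.1, (64, 64))
dB_par, dB_perp_xy, dB_z = decompose_fluctuations(Bx, By, Bz)
print(dB_par.std(), dB_perp_xy.std(), dB_z.std())
```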
Figure \(3a\) shows the evolution of the ion pressure anisotropy \(\Delta P_{i}\equiv P_{\perp,i}-P_{\parallel,i}\) for run b20m8w800, and the dashed gray line shows the approximate instability threshold for the mirror instability (Hasegawa (1969); Hellinger (2007)). We can see that the ion anisotropy surpasses the mirror threshold very early in the simulation, and reaches its maximum value at \(t\cdot s\approx 0.5\) (we will call this stage the anisotropy overshoot hereafter). We will show that this is consistent with the beginning of the secular growth of mirror modes (Kunz et al. (2014), Riquelme et al. (2016)). Figure \(3b\) shows the same for the electron pressure anisotropy, which we will show relaxes by efficient scattering.
### Mirror Instability Evolution
Since mirror modes are highly oblique, their evolution is well represented by the time trace of \(\delta B_{\parallel}^{2}\) shown in fig. \(2g\). We identify both a linear, exponentially growing phase until \(t\cdot s\approx 0.45\), and a subsequent nonlinear, slower growing secular phase, consistent with the different evolutionary phases of the ion and electron pressure anisotropies described above. Besides the break in the mirror mode's evolution at \(t\cdot s\approx 0.45\), a second break in the secular growth occurs around \(t\cdot s=0.6\) followed by a shallower slope of growth. We will show that this break coincides with the excitation of both whistler and IC waves in \(\delta B_{\perp,xy}^{2}\) and \(\delta B_{z}^{2}\), implying that whistler and IC waves, albeit smaller in amplitude, modulate the evolution of mirror modes during nonlinear stages.
#### 3.1.1 Linear, exponentially growing mirror phase
After an early CGL phase of the pressure anisotropy \(\Delta P_{j}\) (\(j=i,e\), see fig. 3), fig. \(2g\) shows the excitation of the mirror instability starting at \(t\cdot s\approx 0.35\), mainly in the parallel component of the magnetic fluctuations, \(\delta B_{\parallel}\) (blue line), consistent with theoretical expectations (Southwood & Kivelson (1993); Pokhotelov et al. (2004)). Figure \(2g\) also shows that \(\delta B_{\parallel}\) grows first and it has the largest amplitude throughout the entire simulation, meaning that the mirror instability is indeed the dominant instability.
Figure \(2b\) (i.e. \(\delta B_{\parallel}^{2}\)) shows the linear, exponentially growing phase of mirror modes at \(t\cdot s=0.4\), where small filamentary structures of high local magnetic field amplitude start to emerge and slowly grow, in between wider regions of low local magnetic field amplitude. The obliqueness of the modes is readily apparent, as well as the fact that the mirror-generated magnetic fluctuations lie mainly in the (**k**,**B**) plane (they can be seen in \(\delta B_{\perp,xy}^{2}\) too, but not in \(\delta B_{z}^{2}\), as expected from linear theory (Pokhotelov et al. (2004))). The oblique nature of mirror modes can also be seen in fig. \(4a\), where we show the power spectrum in space of \(\delta B_{\parallel}\) at \(t\cdot s=0.4\). The solid and dashed lines represent the directions parallel and perpendicular to the mean magnetic field \(\langle\mathbf{B}\rangle\), respectively. Therefore, we can see that at \(t\cdot s=0.4\), the power is mostly concentrated between wavevectors \(0.44\lesssim kR_{L,i}^{\mathrm{init}}\lesssim 1.35\) and angles of \(52^{\circ}\lesssim\theta_{k}\lesssim 77^{\circ}\), where \(\theta_{k}\equiv\cos^{-1}(\mathbf{k}\cdot\langle\mathbf{B}\rangle/kB)\) is the angle between mirror modes' wavevector and the mean magnetic field \(\langle\mathbf{B}\rangle\).
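A spatial power spectrum like that of fig. 4, together with the propagation angle \(\theta_{k}\) of each mode, can be obtained with a 2D FFT; the sketch below is a minimal version using a placeholder field and an arbitrary grid spacing.

```python
import numpy as np

def spatial_spectrum(field, dx, mean_b_hat):
    """2D power spectrum |field(kx,ky)|^2 and angle of each k with <B> (degrees)."""
    ny, nx = field.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    kx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx)) * 2 * np.pi
    ky = np.fft.fftshift(np.fft.fftfreq(ny, d=dx)) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    kmag = np.hypot(KX, KY)
    kmag[kmag == 0] = np.inf                      # avoid division by zero at k = 0
    cos_theta = (KX * mean_b_hat[0] + KY * mean_b_hat[1]) / kmag
    theta_k = np.degrees(np.arccos(np.clip(np.abs(cos_theta), 0.0, 1.0)))
    return power, KX, KY, theta_k

# Placeholder example: random field, unit grid spacing, <B> along x
power, KX, KY, theta_k = spatial_spectrum(np.random.rand(64, 64), 1.0, (1.0, 0.0))
```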
It should be emphasized that the ion-cyclotron wave activity only starts at \(t\cdot s=0.6\), and not before. There is no sign of an early excitation of the ion-cyclotron instability competing with the mirror instability for the available free energy in \(\Delta P_{i}\). Instead, at earlier stages, only the mirror instability is excited, consistent with our initial conditions of high-beta (\(\beta_{i}^{\mathrm{init}}=20\)), where the mirror instability is expected to dominate (e.g. Riquelme et al. (2015)).
The absence of ion-cyclotron waves early in the simulation (\(0<t\cdot s<0.6\)) is clearly seen in fig. \(5a\), where we show the power spectrum in time and space of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp,xy}(\omega,k_{\parallel})\) at early stages: \(0.3<t\cdot s<0.5\). This particular combination of the two perpendicular components of \(\delta\mathbf{B}\) allows us to disentangle the parallel-propagating waves (with respect to the main magnetic field \(\langle\mathbf{B}\rangle\), e.g. ion-cyclotron and whistlers), and also their left-handed and right-handed circular polarizations (Ley et al. (2019); Tran et al. (2023)). In this case, the left-hand circularly polarized wave activity is shown for \(\omega>0\), whereas right-hand circularly polarized wave activity is shown for \(\omega<0\). We readily see that, apart from the \(\omega\approx 0\) power consistent with mirror modes appearing in \(\delta B_{\perp,xy}\), there is no left-handed polarized wave activity throughout \(0.3<t\cdot s<0.5\), only right-handed polarized waves, which corresponds to an early excitation of the whistler instability, as we will see in section 3.2.
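For completeness, this polarization-resolved diagnostic can be sketched as follows: Fourier transforming \(\delta B_{z}+i\delta B_{\perp,xy}\) in time places power at signed frequencies, and the sign separates the two circular polarizations (which sign corresponds to which handedness depends on the adopted convention; in fig. 5\(a\) left-handed power appears at \(\omega>0\)). The time series below is a synthetic placeholder, not simulation output.

```python
import numpy as np

def polarization_spectrum(dB_z, dB_perp, dt):
    """Power of dB_z + i*dB_perp versus signed frequency; the sign of omega
    separates the two circular polarizations."""
    signal = dB_z + 1j * dB_perp
    spec = np.fft.fftshift(np.fft.fft(signal))
    omega = np.fft.fftshift(np.fft.fftfreq(len(signal), d=dt)) * 2 * np.pi
    return omega, np.abs(spec) ** 2

# Synthetic placeholder: a circularly polarized wave at omega0 plus noise
dt, omega0 = 0.01, 2.0
t = np.arange(0.0, 100.0, dt)
dB_z = np.cos(omega0 * t) + 0.05 * np.random.randn(t.size)
dB_perp = np.sin(omega0 * t) + 0.05 * np.random.randn(t.size)
omega, power = polarization_spectrum(dB_z, dB_perp, dt)
print("peak at omega =", omega[np.argmax(power)])   # lands at omega > 0 here
```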
#### 3.1.2 Nonlinear, secular mirror phase
At \(t\cdot s\approx 0.45\), we can clearly see the beginning of the secular growth of the mirror instability, where the modes reach nonlinear amplitudes, and keep growing but at a slower rate. This evolution is consistent with previous works (Kunz et al. (2014); Riquelme et al. (2016)).
Figure 2: **First row:** The different components of magnetic fluctuations \(\delta\mathbf{B}=\mathbf{B}-\langle\mathbf{B}\rangle\) for run b20m8w800 in the simulation domain at \(t\cdot s=0.4\): \(\delta B_{\perp}\) (Panel \(a\)) is the component perpendicular to the main field \(\langle\mathbf{B}\rangle\) in the \(x\)–\(y\) plane of the simulation, \(\delta B_{\parallel}\) (panel \(b\)) is the component parallel to \(\langle\mathbf{B}\rangle\) and \(\delta B_{z}\) (panel \(c\)) is the component perpendicular to \(\langle\mathbf{B}\rangle\) in the direction out of the plane of the simulation. **Second row:** Panels \(d\), \(e\) and \(f\) show the same as panels \(a\), \(b\) and \(c\), but at \(t\cdot s=1.4\). **Third row:** The evolution of the energy in the three components of the magnetic field fluctuations \(\delta\mathbf{B}\) normalized to \(B(t)^{2}\), \(\delta B_{\parallel}^{2}\) (blue line), \(\delta B_{\perp,xy}^{2}\) (red line) and \(\delta B_{z}^{2}\) (green line). The dashed gray lines indicate the time at which the fluctuations in the first and second row are shown. An animation is available in the online version.
Interestingly, the mirror secular growth is interrupted at \(t\cdot s\approx 0.6\), and the slope of \(\delta B_{\parallel}^{2}\) breaks. This is also approximately where the ion pressure anisotropy experiences its fastest decline (fig. 3). Mirror modes continue to grow, but at a much slower rate. This is consistent with the saturation of energy in the subdominant components \(\delta B_{\perp,xy}^{2}\) and \(\delta B_{z}^{2}\) (solid red and green lines in fig. 2\(g\), respectively), which also present a distinct pattern of oscillations. This activity is clear evidence of a new burst of waves with components mainly in the directions perpendicular to \(\langle\mathbf{B}\rangle\), and we will see that they are consistent with both electron cyclotron waves (whistlers) and ion cyclotron waves excited by electron and ion populations, respectively, that become trapped within mirror modes (see sec. 3.3).
Figure 2\(e\) shows a late, nonlinear stage of the mirror instability, at \(t\cdot s=1.4\). At this time, the regions of high magnetic field of mirror modes (e.g. red filamentary structures seen in fig. 2\(b\)) have grown significantly and merged with neighboring structures to form wider and sharper regions of high local amplitudes (\(\delta B_{\parallel}/B\sim 0.9\)), whose sizes are comparable to regions of low magnetic field. At this stage, most of the power is concentrated in wavevectors \(0.2\lesssim kR_{L,i}^{\rm init}\lesssim 1.1\), and angles \(57^{\circ}\lesssim\theta_{k}\lesssim 85^{\circ}\) (see fig. 4\(b\)).
After reaching its overshoot, the ion anisotropy starts to decrease towards marginal stability. However, this decrease stops around \(t\cdot s\approx 0.65\) at \(\Delta P_{i}/P_{\parallel,i}\approx 0.18\), well above the approximate mirror threshold (dashed gray line, (Hasegawa (1969); Hellinger (2007))). The anisotropy then reaches a marginal stability level that is above the mirror threshold, similar to some previous works using both hybrid and fully kinetic simulations (Sironi & Narayan (2015); Melville et al. (2016); Ley et al. (2023)).
In order to better characterize the evolution of \(\Delta P_{i}\), we fit a relation \(\Delta P_{i}/P_{\parallel,i}=A_{i}\beta_{\parallel,i}^{\alpha_{i}}\) over \(0.7\leq t\cdot s\leq 2\) (in our simulations, the shear motion continuously amplifies \(B\), therefore \(\beta_{\parallel,i}\) also evolves). As shown in fig. 3\(a\), our best-fit parameters are \(A_{i}=0.544\pm 0.003\) and \(\alpha_{i}=-0.445\pm 0.003\). The obtained exponent is consistent with the marginal stability threshold given by the ion-cyclotron instability at lower \(\beta_{i}\) (Gary & Lee (1994)). Indeed, the threshold for the IC instability, \(\Delta P_{i}/P_{\parallel,i}=0.53\beta_{\parallel,i}^{-0.4}\), is plotted as a dotted-dashed orange line in fig. 3\(a\) for \(\gamma_{IC}/\omega_{c,i}=10^{-2}\) (Gary & Lee (1994)), and we can clearly see the similarity with our best-fit threshold, even at this higher value of initial \(\beta_{\parallel,i}^{\rm init}\). This observation was also reported in Sironi & Narayan (2015), and we will see that, indeed, we do observe ion-cyclotron waves as part of the saturated phase of the mirror instability that starts at \(t\cdot s=0.6\). The presence of ion and electron cyclotron waves coexisting with mirror modes at late, nonlinear stages of the mirror instability has been reported in previous works (Riquelme et al. (2016); Sironi & Narayan (2015); Ahmadi et al. (2018)). In SS3.3, we argue that a natural explanation of the source of these cyclotron waves is the pressure anisotropy of ions trapped within nonlinear mirror modes.
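The quoted best-fit values can be reproduced with an ordinary least-squares fit in log-log space; the sketch below uses placeholder arrays in place of the measured \(\Delta P_{i}/P_{\parallel,i}\) and \(\beta_{\parallel,i}\) time series.

```python
import numpy as np

def fit_threshold(beta_par, anisotropy):
    """Fit anisotropy = A * beta_par**alpha by linear least squares in log space."""
    slope, intercept = np.polyfit(np.log(beta_par), np.log(anisotropy), 1)
    return np.exp(intercept), slope          # A, alpha

# Placeholder data mimicking a beta^-0.45 scaling with small scatter
beta_par = np.linspace(5.0, 20.0, 50)
anisotropy = 0.54 * beta_par ** -0.45 * (1 + 0.02 * np.random.randn(50))
A, alpha = fit_threshold(beta_par, anisotropy)
print(f"A = {A:.3f}, alpha = {alpha:.3f}")
```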
### First Whistler Burst - \(t\cdot s\approx 0.4\)
Figure 3: Panel \(a\): The evolution of the ion pressure anisotropy \(\Delta P_{i}/P_{\parallel,i}\) for run b20m8w800 is shown as a solid green line. The dashed green line shows the double-adiabatic evolution of \(\Delta P_{i}/P_{\parallel,i}\) (Chew et al. (1956)). The dashed gray line shows the approximate threshold for the mirror instability: \(1/\beta_{\parallel,i}\) (Hasegawa (1969)). The dotted-dashed orange line shows the threshold for the IC instability from Gary & Lee (1994) for \(\gamma_{IC}/\omega_{c,i}=10^{-2}\) (\(\gamma_{IC}\) is the IC growth rate). The red dashed line shows the best-fit to \(\Delta P_{i}/P_{\parallel,i}=A_{i}\beta_{\parallel,i}^{\alpha_{i}}\) from \(t\cdot s=0.7\) to \(t\cdot s=2.0\), with \(A_{i}=0.544\pm 0.003\) and \(\alpha_{i}=-0.445\pm 0.003\). Panel \(b\): The evolution of the electron pressure anisotropy \(\Delta P_{e}/P_{\parallel,e}\) is shown as a solid orange line. The dashed orange line shows the double-adiabatic evolution of \(\Delta P_{e}/P_{\parallel,e}\). The dashed blue line shows the best-fit to \(\Delta P_{e}/P_{\parallel,e}=A_{e}\beta_{\parallel,e}^{\alpha_{e}}\) from \(t\cdot s=0.7\) to \(t\cdot s=2.0\), with \(A_{e}=0.036\pm 0.0002\) and \(\alpha_{e}=0.341\pm 0.003\). The dashed gray line shows the linear threshold for the anisotropic whistler instability from Gary & Wang (1996) for growth rate \(\gamma_{W}/\omega_{c,e}=0.01\) (\(\gamma_{W}\) is the whistler growth rate).
Figure 3\(b\) shows the evolution of the electron pressure anisotropy \(\Delta P_{e}\equiv P_{\perp,e}-P_{\parallel,e}\) for run b20m8w800. Initially, the electrons develop their own pressure anisotropy alongside the ions and for the same reasons. The anisotropy follows the double-adiabatic (CGL) scaling (dashed orange line) until \(t\cdot s\approx 0.4\), when it has already reached a value significantly larger than the theoretical threshold for the growth of whistler modes, marked by the gray dashed line (Gary & Wang (1996)). Around this time, the whistler instability starts to grow, as seen in the time trace of \(\delta B_{z}^{2}\) in fig. 2\(g\) (a rough proxy for whistler waves) and from the absence of left-handed IC waves in fig. 5\(a\).
Figure 4: Panel \(a\): Power spectrum in space of \(\delta B_{\parallel}(k_{x},k_{y})\) at \(t\cdot s=0.4\). The wavenumbers \(k_{x},k_{y}\) are normalized by the initial Larmor radius of the ions, \(R_{L,i}^{\rm ini}\). The solid and dashed black lines represent the direction parallel and perpendicular to the main magnetic field at that time, respectively. Panel \(b\): Power spectrum in space of \(\delta B_{\parallel}(k_{x},k_{y})\) at \(t\cdot s=1.4\). Note that the scale of colorbars in panel \(a\) and \(b\) are different.
Figure 5: Panel \(a\): The power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\parallel,xy}(\omega,k_{ \parallel})\) in the entire simulation domain and between \(0.3<t\cdot s<0.5\). The frequency is normalized by the initial electron cyclotron frequency \(\omega_{c,e}\), and the wavevector is normalized by the plasma frequency \(\omega_{p,e}\) over the speed of light \(c\). The solid black line shows the linear dispersion relation \(\omega_{r}(k)\) for the whistler instability according to our linear dispersion solver, whereas the dashed black line shows its growth rate \(\gamma\). Panel \(b\): The power spectrum in space of \(\delta B_{z}(k_{x},k_{y})\) at \(t\cdot s=0.4\). The wavenumbers \(k_{x},k_{y}\) are normalized to the initial Larmor radius of the electrons, \(R_{L,e}^{\rm ini}\). The solid and dashed black lines represent the direction parallel and perpendicular to the main magnetic field at that time.
At \(t\cdot s\approx 0.45\) the whistler modes saturate and enter a regime of quasi-steady amplitude, which lasts until \(t\cdot s\approx 0.53\). During this \(t\cdot s\approx 0.4-0.53\) period, \(\Delta P_{e}\) is rapidly drawn down by frequent scattering, reaching a more slowly decreasing regime between \(t\cdot s\approx 0.53\) and \(0.6\). The drawdown of the electron anisotropy happens at a time when the ion anisotropy is still growing. This lasts until mirror modes reach sufficiently high amplitudes to start trapping the electrons (\(t\cdot s=0.6\)).
The presence of whistler modes at \(t\cdot s=0.4\) can be seen mainly in the perpendicular components of \(\delta\)**B**, namely, \(\delta B_{\perp,xy}\) and \(\delta B_{z}\), figures 2\(a\) and 2\(c\), respectively. They propagate quasi-parallel to the main magnetic field **B** in a fairly homogeneous way inside the simulation domain. This quasi-parallel propagation can also be seen in fig. 5\(b\), where we show the power spectrum in space of \(\delta B_{z}(k_{x},k_{y})\) at \(t\cdot s=0.4\) for run b20m8w800, and the solid and dashed black lines indicate the directions parallel and perpendicular to the main magnetic field \(\langle\textbf{B}\rangle\) at \(t\cdot s=0.4\). The power of \(\delta B_{z}(k_{x},k_{y})\) is concentrated at parallel propagation and wavevectors \(0.6<kR_{L,e}^{\text{init}}<1\).
We show the whistler wave frequencies in the power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp,xy}(\omega,k_{\parallel})\) in the interval \(0.3<t\cdot s<0.5\) in fig. 5\(a\). We can see that the power is localized in the region \(\omega<0\), i.e. right-handed circularly polarized waves, consistent with the whistler polarization, and within frequencies \(0.02<\omega/\omega_{c,e}<0.05\). As mentioned above, no IC activity is present during this time period.
We also calculated the theoretical dispersion relation of the anisotropic whistler instability using a linear dispersion solver that assumes a bi-Maxwellian distribution of electrons (Tran et al. (2023)), with the initial parameters and the values of \(T_{\perp,e},T_{\parallel,e}\) taken directly from the simulations. The dispersion relation \(\omega(k)\) is shown as a solid black line in fig. 5\(a\), whereas the instability growth rate is shown as a dashed black line. We can see that the power in right-hand circularly polarized waves is consistent with the whistler dispersion relation.
This way, the early evolution of the electrons is determined by an early burst of whistler modes associated with the initial electron pressure anisotropy growth. We will see that, once electrons start to become trapped in between mirror modes at \(t\cdot s\approx 0.6\), another burst of whistler activity happens, this time associated with the trapping process within mirror modes during their secular and saturated phases.
### Whistler and Ion-cyclotron Excitations \(-t\cdot s\approx 0.6\)
At the end of its secular growth, when mirror modes have reached sufficiently high amplitudes, we simultaneously observe right-hand and left-hand circularly polarized wave activity, which we identify as whistler and ion-cyclotron waves, respectively. We will see below (SS3.3) that these whistler and ion-cyclotron waves propagate mainly in regions of locally low magnetic field (magnetic troughs). The source of this wave activity is identified as the pressure anisotropy of the ion and electron populations, mainly due to particles trapped inside the magnetic troughs. The whistlers and ion cyclotron waves then pitch-angle scatter both the trapped and untrapped particles, contributing to the regulation of the global anisotropy.
Figure 6 shows different spectral properties of the late burst of waves excited from \(t\cdot s\approx 0.6\) onwards. Figure 6\(a\) shows the power spectrum in time of \(\delta B_{z}(\omega)+i\delta B_{\perp,xy}(\omega)\) between \(0.5<t\cdot s<1.1\), so we can see both left-hand (solid blue line) and right-hand (solid orange line) circular polarizations. The power spectrum peaks at low frequencies, consistent with the nature of the dominant mirror modes (mainly appearing in \(\delta B_{\perp,xy}\)). Additionally, we can clearly see a secondary peak at around \(\omega\sim 0.2\omega_{c,i}\), with a spread that goes from \(\omega\sim 0.1\omega_{c,i}\) to \(\omega\sim 0.3\omega_{c,i}\), in both left and right hand circular polarizations. This secondary peak is the characteristic signature of the late burst of wave activity. This peak resembles observations of whistler lion roars in the Earth's magnetosheath (see e.g. figs. 1 and 2 of Giagkiozis et al. (2018) and fig. 3 of Zhang et al. (2021) for right-hand polarized waves).
Figure 6\(b\) shows the spectrogram of \(\delta B_{z}(\omega)+i\delta B_{\perp,xy}(\omega)\) in frequency and time, over \(0.4<t\cdot s<1.3\), with positive frequencies representing left-hand circularly polarized waves, and negative frequencies denoting right-hand circularly polarized waves. Here we can also see the early burst of whistler waves starting at \(t\cdot s\approx 0.4\) and peaking at \(t\cdot s\approx 0.45\) (see section SS3.2), followed by the burst of both left-hand and right-hand circularly polarized waves starting at \(t\cdot s\approx 0.53\) and peaking at \(t\cdot s\approx 0.65\). This coincides with the rise in amplitude of \(\delta B_{z}^{2}\) and \(\delta B_{\perp,xy}^{2}\) (see fig. 2\(g\)), and the waves are continuously maintained throughout the simulation at around the same frequencies.
Finally, figure 6\(c\) shows the power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp,xy}(\omega,k_{\parallel})\) in time and space, at \(0.5<t\cdot s<1.1\). Frequencies and wavenumbers are normalized by \(\omega_{c,i}\) and \(\omega_{p,i}/c\), respectively. Here we can also see the power at low frequencies consistent with the dominance of mirror modes appearing in \(\delta B_{\perp,xy}\). The burst of left and right hand circularly polarized waves can be seen concentrated around frequencies \(\omega\approx 0.2\omega_{c,i}\) and \(\omega\approx-0.15\omega_{c,i}\), respectively. Their range in wavenumbers is \(0.2\lesssim ck_{\parallel}/\omega_{p,i}\lesssim 0.5\). Overall, the power spectra of both left and right hand polarized waves are very similar to those of ion-cyclotron and electron cyclotron whistlers, and we will identify these waves as such from now on. In the next section, we will confirm that the population of particles that excites these waves have anisotropic distributions that are IC and whistler unstable.
The morphology of the IC and whistler waves can also be seen in figures 2\(d\) and 2\(f\). The short-wavelength, wavepacket-like structures are identified with whistler modes, which propagate mainly through the low magnetic field strength regions of mirror modes, as we can see from \(\delta B_{\perp,xy}\) (blue shaded regions in fig. 2\(d\)). The IC modes, on the other hand, are identified as the longer-wavelength, extended modes that can be seen in \(\delta B_{z}\). The IC modes seem to propagate through the entire simulation box, given their ion-scale wavelength, whereas whistler modes clearly propagate within mirrors'
magnetic troughs. This also resembles magnetosheath's observations of whistler waves within magnetic troughs (e.g. Kitamura et al. (2020)).
The peak frequencies observed in figure 6 for both ion-cyclotron and whistler waves can be understood in terms of their dispersion relations. At high-\(\beta\) and \(kR_{L,e}\sim 1\), and for quasi-parallel propagation, the dispersion relation for whistler waves can be written as (Stix (1992); Drake et al. (2021))
\[\omega_{W}=\omega_{c,e}k_{W}^{2}d_{e}^{2}=\omega_{c,i}k_{W}^{2}d_{i}^{2}, \tag{1}\]
where \(d_{e}=c/\omega_{p,e}\) and \(d_{i}=c/\omega_{p,i}\) are the electron and ion skin depths, respectively. Knowing that \(d_{i}^{2}=R_{L,i}^{2}/\beta_{i}\), we can also write
\[\omega_{W}=\omega_{c,i}k_{W}^{2}R_{L,i}^{2}/\beta_{i}. \tag{2}\]
Similarly, at high-\(\beta\) and \(kR_{L,i}\sim 1\), and for quasi-parallel propagation, the ion-cyclotron wave dispersion relation is approximately (Six (1992))
\[\omega_{\rm IC}=\omega_{c,i}k_{\rm IC}d_{i}, \tag{3}\]
and we can also write
\[\omega_{\rm IC}=\omega_{c,i}k_{\rm IC}R_{L,i}/\sqrt{\beta_{i}}. \tag{4}\]
Figure 6: Panel a: The power spectrum of \(\delta B_{z}(\omega)+i\delta B_{\perp,xy}(\omega)\) as a function of frequency. The frequencies are normalized by the initial ion-cyclotron frequency. The power spectrum of left-handed circularly polarized waves (\(\omega>0\)) is shown as a solid blue line, whereas the power spectrum corresponding to right-handed circularly polarized waves (\(\omega<0\)) is shown as an orange line folded into positive frequencies. Panel b: Spectrogram of \(\delta B_{z}(\omega)+i\delta B_{\perp,xy}(\omega)\) in frequency and time, at \(0.4<t\cdot s<1.3\). The frequency is normalized by the initial ion-cyclotron frequency. Positive and negatives frequencies corresponds to left-hand and right-hand circularly polarized waves, respectively. Panel c: The power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp}(\omega,k_{\parallel})\) at \(0.5<t\cdot s<1.1\). Frequencies are normalized by the initial ion gyrofrequency, and wavenumbers are normalized by the initial ion skin depth. Here also, positive and negative frequencies show left-hand and right-hand polarized waves, respectively.
Figure 7: The power spectrum in space of \(\delta B_{\perp,xy}(k_{x},k_{y})\) at \(t\cdot s=0.9\). The wavenumbers \(k_{x},k_{y}\) are normalized by the initial ion Larmor radius \(R_{L,i}^{\rm ini}\). The solid and dashed white lines represent, respectively, the direction parallel and perpendicular to the main magnetic field at that time.
We can estimate \(k_{W}\), \(k_{\text{IC}}\) by looking at the power spectrum of any of the perpendicular components of the magnetic field fluctuations. Figure 7 shows the power spectrum of \(\delta B_{\perp,xy}(k_{x},k_{y})\) at \(t\cdot s=0.9\), where the solid and dashed white lines denote the direction parallel and perpendicular to the mean magnetic field \(\mathbf{B}\) at that time, respectively. Apart from the power in the perpendicular direction corresponding to the mirror modes, in the power parallel to \(\mathbf{B}\) (i.e. along the solid black line in fig. 7) we can distinguish large wavenumbers centered at \((k_{y}R_{L,i}^{\text{init}},k_{x}R_{L,i}^{\text{init}})\approx(0.75,-1.5)\) (and also at \((-1.5,0.75)\)), corresponding to whistlers, and also smaller wavenumbers centered at \((k_{x}R_{L,i}^{\text{init}}\), \(k_{y}R_{L,i}^{\text{init}})\approx(0.5,0.7)\), corresponding to ion-cyclotron waves.
The large wavenumber extent in \(k_{x},k_{y}\) observed in fig. 7 gives us an approximate range of wavenumbers \(1.5\lesssim k_{W}R_{L,i}^{\text{init}}\lesssim 3.2\) for whistlers, implying frequencies \(0.1\lesssim\omega_{W}/\omega_{c,i}^{\text{init}}\lesssim 0.5\) (as \(\beta_{i}^{\text{init}}=20\)), consistent with the frequencies observed in the negative half of fig. 6\(c\), corresponding to right-hand polarized waves. Similarly, the small wavenumber extent in \(k_{x},k_{y}\) gives us a range of wavenumbers \(0.4\lesssim k_{\rm IC}R_{L,i}^{\text{init}}\lesssim 1.1\), implying frequencies \(0.1\lesssim\omega_{IC}/\omega_{c,i}^{\text{init}}\lesssim 0.25\), also consistent with the frequencies in the positive half of fig. 6\(c\), corresponding to left-hand polarized waves.
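This consistency check amounts to inserting the measured wavenumber ranges into eqs. (2) and (4); a minimal sketch, with the wavenumber ranges quoted above and \(\beta_{i}^{\text{init}}=20\):

```python
import numpy as np

beta_i = 20.0

# Wavenumber ranges (in units of 1/R_L,i^init) read off the spectrum in fig. 7
k_whistler = np.array([1.5, 3.2])
k_ic = np.array([0.4, 1.1])

omega_whistler = k_whistler ** 2 / beta_i   # eq. (2): omega_W / omega_ci = k^2 R^2 / beta_i
omega_ic = k_ic / np.sqrt(beta_i)           # eq. (4): omega_IC / omega_ci = k R / sqrt(beta_i)

print("whistler: omega/omega_ci in", omega_whistler)   # roughly [0.11, 0.51]
print("IC:       omega/omega_ci in", omega_ic)          # roughly [0.09, 0.25]
```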
### 2D Particle Distributions
The specific time at which ion and electron cyclotron wave activity saturates, which coincides with the end of mirror instability's secular growth (\(t\cdot s\approx 0.6\)), and the propagation of whistler waves within regions of low-magnetic field strength, give a hint towards uncovering the mechanism by which the whistler and IC waves are excited.
As a first step, we explore the evolution of the pressure anisotropy of ions and electrons at the time at which the IC and whistler waves are excited. At this time, mirror modes have achieved high amplitudes, and created sharp regions of high and low magnetic field strength, making the plasma spatially inhomogeneous. This implies that, in general, the plasma \(\beta\) of ions and electrons would not be the same at different locations in the simulation domain, making the anisotropy thresholds for the growth of the modes different in different regions. For this reason, a more appropriate method would be to measure the 2D distribution of pressure anisotropy, \(\beta_{\parallel}\) and \(\delta B_{\parallel}/B\) in the simulation domain.
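In practice, the distributions shown in fig. 8 are two-dimensional histograms over the simulation domain; the sketch below builds one from placeholder per-cell arrays standing in for \(\beta_{\parallel,i}\) and \(P_{\perp,i}/P_{\parallel,i}\).

```python
import numpy as np

def anisotropy_beta_histogram(beta_par, p_ratio, n_bins=100):
    """2D histogram of P_perp/P_par versus beta_par, with logarithmic bins."""
    beta_edges = np.logspace(np.log10(beta_par.min()), np.log10(beta_par.max()), n_bins)
    ratio_edges = np.logspace(np.log10(p_ratio.min()), np.log10(p_ratio.max()), n_bins)
    hist, _, _ = np.histogram2d(beta_par.ravel(), p_ratio.ravel(),
                                bins=[beta_edges, ratio_edges])
    return hist, beta_edges, ratio_edges

# Placeholder per-cell values standing in for a simulation snapshot
rng = np.random.default_rng(1)
beta_par = rng.lognormal(np.log(10.0), 0.4, size=(256, 256))
p_ratio = rng.lognormal(np.log(1.2), 0.1, size=(256, 256))
hist, beta_edges, ratio_edges = anisotropy_beta_histogram(beta_par, p_ratio)
```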
Figure 8 shows the distribution of ion and electron pressure anisotropy as a function of ion \(\beta_{\parallel,i}\) (panels \(a\), \(b\), \(c\)) and electron \(\beta_{\parallel,e}\) (panels \(g\), \(h\), \(i\)), respectively, and the distribution of \(\delta B_{\parallel}/B\) versus \(\beta_{\parallel,i}\) (panels \(d\), \(e\), \(f\)) and electron \(\beta_{\parallel,e}\) (panels \(j\), \(k\), \(l\)), respectively. These distributions are shown at three different times: beginning of the simulation (\(t\cdot s\approx 0\), left column); end of mirror's secular growth and beginning of ion and electron cyclotron wave activity (\(t\cdot s=0.6\), middle column), and a late stage well into the saturated regime of mirror instability (\(t\cdot s=1.4\), right column). In the top row of fig. 8 (i.e. panels \(a\), \(b\), and \(c\)), the dashed gray line corresponds to the approximate mirror instability threshold \(1/\beta_{\parallel,i}\)(Hasegawa (1969)), the dashed-dotted orange line corresponds to the theoretical IC threshold \(0.53/\beta_{\parallel,i}^{0.4}\) from Gary & Lee (1994) for \(\gamma_{IC}/\omega_{c,i}=10^{-2}\), and the solid black line is the best-fit to the global ion anisotropy derived in section 3.1 (see fig. 3\(a\)). In the third row of fig. 8 (panels \(g\), \(h\), \(i\)), the dotted-dashed black line shows the whistler instability threshold \(0.36/\beta_{\parallel,e}^{0.55}\) from Gary & Wang (1996), for \(\gamma_{W}/\omega_{c,e}=10^{-2}\).
Starting with the ions, we can see that, from a stable, isotropic distribution at the very beginning of the simulation (fig. 8\(a\)), the ions become anisotropic enough to surpass both the mirror threshold and the theoretical IC threshold from Gary & Lee (1994), as well as our best-fit instability threshold, as shown in fig. 8\(b\). At this point (\(t\cdot s=0.6\)), we start to observe the excitation of ion-cyclotron waves that seem to interact with the ions and start driving them towards a marginally stable state. This can be seen in fig. 8\(c\), where the distribution becomes bimodal, with one population of ions under both the IC threshold and our best-fit threshold (centered at \(\beta_{\parallel,i}\sim 5\) and \(P_{\perp,i}/P_{\parallel,i}\sim 1.2\)), meaning that they are driven towards marginal stability with respect to the IC threshold. Interestingly, there exists another ion population that is still unstable (centered at \(\beta_{\parallel,i}\sim 18\) and \(P_{\perp,i}/P_{\parallel,i}\sim 1.4\)), therefore IC waves can continue being excited even at these late stages. This could explain the sustained amplitude observed in \(\delta B_{z}^{2}\) and \(\delta B_{\perp,xy}^{2}\) in figure 2\(g\). Therefore, we can see that the unstable population has a higher \(\beta_{\parallel,i}\), and the marginally stable population moves to lower \(\beta_{\parallel,i}\).
For a similar value of \(P_{\parallel,i}\), the difference in the values of \(\beta_{\parallel,i}\) between the unstable and marginally stable populations should imply a difference in the local magnetic field strength (recall \(\beta_{\parallel,i}=8\pi P_{\parallel,i}/B^{2}\)). This gives us a hint on the location of the unstable and marginally stable populations in the domain, as mirror modes generate distinct regions of low and high magnetic field strength.
As we can see in figs. 8\(d\), 8\(e\), and 8\(f\), the ions also separate into two populations now in \(\delta B_{\parallel}/B\). Starting from zero magnetic field fluctuations at the beginning (\(t\cdot s\approx 0\), fig. 8\(d\)), we see how \(\delta B_{\parallel}/B\) starts to grow at \(t\cdot s=0.6\) (fig. 8\(e\)), until we clearly see the bimodal distribution at \(t\cdot s=1.4\), separating the two ion populations: the high-\(\beta_{\parallel,i}\) population located in regions of \(\delta B_{\parallel}/B<0\) (i.e. low-\(B\) strength), and the low-\(\beta_{\parallel,i}\) population located in regions of \(\delta B_{\parallel}/B>0\) (i.e. high-\(B\) strength).
We can therefore conclude that, after mirror modes develop and the IC waves are excited (\(t\cdot s\gtrsim 0.6\)), the ions separate into two populations: one of low-\(\beta_{\parallel,i}\), located mainly in high-\(B\) strength regions, and marginally stable to IC waves, and a second population with high-\(\beta_{\parallel,i}\), located in low-\(B\) strength regions, and still unstable to IC waves. This suggests that the IC waves are excited by the unstable ion populations in regions of low magnetic field strength, and then interact with the ions in such a way that the ions move to regions of high-\(B\) strength and low \(\beta_{\parallel,i}\). In sections 3.5 and 3.6 we will see that the population of ions that contribute most to the
Figure 8: Top row: The distribution of ion \(P_{\perp,i}/P_{\parallel,i}\) versus \(\beta_{\parallel,i}\) in the simulation domain at different times: \(t\cdot s=0.01\) (left column), \(t\cdot s=0.6\) (middle column), and \(t\cdot s=1.4\) (right column). The dashed gray line represents the approximate mirror instability threshold \(1/\beta_{\parallel,i}\) (Hasegawa (1969)), the dotted-dashed orange line represents the IC instability threshold from Gary & Lee (1994) for \(\gamma_{IC}/\omega_{c,i}=10^{-2}\) (\(\gamma_{IC}\) is the IC instability growth rate), and the solid black line represents our best-fit threshold from section 3.1 (see fig. 3a). Second row: The distribution of \(\delta B_{\parallel}/B\) versus \(\beta_{\parallel,i}\) for the same three times as in the top row. Third row: The distribution of electron \(P_{\perp,e}/P_{\parallel,e}\) versus \(\beta_{\parallel,e}\) in the simulation domain at the same three times as in the top row. The dotted-dashed black line represents the whistler instability threshold from Gary & Wang (1996). Fourth row: The distribution of \(\delta B_{\parallel}/B\) versus electron \(\beta_{\parallel,e}\) for the same three times as in the top row. An animated version of this plot is available in the online version.
anisotropy that destabilizes the IC waves are the ones that become trapped within mirror troughs.
In the case of the electrons, we can see a similar evolution. From a stable, isotropic distribution at \(t\cdot s\approx 0\) (fig. 8\(g\)), we can see how part of it now becomes whistler unstable at \(t\cdot s=0.6\) (fig. 8\(h\)), after which the excited whistler waves interact with the electrons, again driving part of the distribution gradually towards marginal stability, also generating a bimodal distribution similar to that of the ions. At \(t\cdot s=1.4\) (fig. 8\(i\)), we can see that the electron population with low \(\beta_{\parallel,e}\) (centered at \(\beta_{\parallel,e}\sim 5\) and \(P_{\perp,e}/P_{\parallel,e}\sim 1\)) is marginally stable with respect to the whistler threshold, whereas the electron population with high \(\beta_{\parallel,e}\) (centered at \(\beta_{\parallel,e}\sim 18\) and \(P_{\perp,e}/P_{\parallel,e}\sim 1.2\)) is still unstable with respect to the whistler threshold. This also implies that whistler waves can still be excited at late stages in the simulation.
Analogously, the electrons also separate into two populations with respect to \(\delta B_{\parallel}/B\). Similarly to ions, we also see that the population with high-\(\beta_{\parallel,e}\) is located in regions of \(\delta B_{\parallel}/B<0\) (low \(B\) strength), whereas the low-\(\beta_{\parallel,e}\) population is located in regions of \(\delta B_{\parallel}/B>0\) (high \(B\) strength). In this sense, we also conclude that in the case of electrons, the unstable population is located mainly in regions of low-\(B\) strength and high-\(\beta_{\parallel,e}\), where whistler waves are being excited, and the marginally stable population is located mainly in regions of high-\(B\) field and low-\(\beta_{\parallel,e}\). This also suggests that whistler waves interact with electrons so they move to regions of high-\(B\) strength. We will also see in sections 3.5 and 3.6 that the electrons that contribute the most to the pressure anisotropy that destabilizes whistler waves are the ones that become trapped within mirror modes.
### Physical Mechanism of Secondary IC/Whistler Excitation: Trapped and Passing Particles
In this section, we study the evolution of the ions and electrons that become trapped within mirror modes as part of the mirror instability's interaction with the particles. We characterize the pressure anisotropy and distribution functions of these populations at the moment of trapping, and provide evidence that they are able to destabilize parallel propagating modes that ultimately allow them to escape the mirrors and regulate the overall anisotropy.
As part of their evolution, and after reaching secular growth, mirror modes start to trap particles of low parallel momentum \(p_{\parallel,j}\) (\(j=i,e\)) in regions of low local magnetic field strength. The trapped particles bounce between these regions and conserve their magnetic moment in the process (Southwood & Kivelson (1993); Kunz et al. (2014)). In order to investigate the relation between this trapping process and the excitation of these late IC and whistler waves, we select and track a population of ions and electrons throughout the evolution of the simulation, and study the trapped and passing (i.e. untrapped) subpopulations separately.
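As a minimal illustration of this trapping condition (a sketch assuming non-relativistic particles and conservation of the magnetic moment, not the tracking criterion defined below):

```python
import numpy as np

def is_mirror_trapped(v_par, v_perp, B_loc, B_max):
    # A particle in a magnetic trough of strength B_loc mirrors before reaching
    # the surrounding field maxima B_max (and thus stays trapped) if
    # v_perp**2 / B_loc > (v_par**2 + v_perp**2) / B_max,
    # i.e. if its pitch angle lies outside the loss cone sin^2(theta) = B_loc/B_max.
    return (v_perp**2 / B_loc) > ((v_par**2 + v_perp**2) / B_max)

# Example: with a ~20% field contrast, a low-p_parallel particle is trapped.
print(is_mirror_trapped(v_par=0.05, v_perp=0.2, B_loc=0.8, B_max=1.2))  # True
```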
We select and track two populations of ions and two populations of electrons having relatively small and large parallel momentum at a particular time in the simulation. This way, we make sure that we can capture particles that eventually become trapped and others that remain passing. In our fiducial simulation b20m8w800, the two populations of ions that we track have parallel momentum \(-0.12<p_{\parallel,i}/m_{i}c<0.12\) and \(0.3395<p_{\parallel,i}/m_{i}c<0.3405\) at \(t\cdot s=0.4\). Similarly, the two populations of electrons have \(-0.2<p_{\parallel,e}/m_{e}c<0.2\) and \(0.4599<p_{\parallel,e}/m_{e}c<0.4601\) at \(t\cdot s=0.4\).
In order to study the behavior of the tracked particles when the IC and whistler activity starts, we ask how many particles become trapped and how many become passing during the interval of time at which this activity happens, which we denote by \(\Delta\tau_{LR}\). To answer this, we look at fig. 2\(g\) and define \(\Delta\tau_{LR}\) as the interval of time \(0.52<t\cdot s<0.62\), which covers the exponential growth that \(\delta B_{z}^{2}\) and \(\delta B_{\perp,xy}^{2}\) undergo before saturating. This interval of time also covers the majority of the secular growth of mirror modes (see \(\delta B_{\parallel}^{2}\)).
Having this time interval well defined, we now must define the criterion by which we consider a particle to be trapped or passing during \(\Delta\tau_{LR}\), and for this we look at the evolution of its parallel momentum. Similarly to Ley et al. (2023), we define a particle as trapped during \(\Delta\tau_{LR}\) if the median of its parallel momentum over \(\Delta\tau_{LR}\) is smaller than one standard deviation over \(\Delta\tau_{LR}\). We then define a particle as passing if the median of its parallel momentum over \(\Delta\tau_{LR}\) is greater than or equal to one standard deviation over \(\Delta\tau_{LR}\). This is a statement of small variation of \(p_{\parallel,j}\) over \(\Delta\tau_{LR}\), which in turn is a proxy for an oscillatory
Figure 9: Panel a: Evolution of the parallel momentum of an individual trapped ion (blue line) and passing ion (red line) for our fiducial simulation b20m8w800. Panel b: Evolution of the parallel momentum of a trapped electron (blue line) and passing electron (red line) for run b20m8w800. The dashed vertical gray lines in each panel indicate the time interval \(\Delta\tau_{LR}\).
behavior of \(p_{\parallel,j}\), characteristic of a particle bouncing between mirror points. We confirm that this simple criterion gives excellent results separating trapped from passing particles.
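A minimal sketch of this classification (not the exact implementation; `p_par` is an assumed array of shape `(n_particles, n_times)` holding the tracked parallel momenta restricted to \(\Delta\tau_{LR}\), and the median is compared in absolute value):

```python
import numpy as np

def classify_trapped_passing(p_par):
    # Trapped: |median(p_par)| over Delta-tau_LR smaller than its standard
    # deviation over the same interval (small net drift relative to the
    # oscillation); passing otherwise.
    med = np.abs(np.median(p_par, axis=1))
    std = np.std(p_par, axis=1)
    trapped = med < std
    return trapped, ~trapped
```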
Figure 9 shows the evolution of the parallel momentum of a trapped and a passing ion (panel \(a\)) and a trapped and a passing electron (panel \(b\)), where the dashed vertical gray lines indicate \(\Delta\tau_{LR}\). We can see the oscillation pattern in the evolution of the parallel momentum of the trapped ion during \(\Delta\tau_{LR}\) and until \(t\cdot s\approx 0.7\), when it escapes. The parallel momentum of the passing ion evolves without major changes as the ion streams through the simulation box. This behavior is consistent with previous works using hybrid and fully kinetic simulations (Kunz et al. (2014); Riquelme et al. (2016)).
In figure 9\(b\) we can also see the oscillating pattern of the parallel momentum of the trapped electron, indicating bouncing inside mirror modes, which ends at \(t\cdot s\approx 1.1\), when it escapes. The parallel momentum of the passing electron does not vary significantly during \(\Delta\tau_{LR}\), confirming that it was streaming along field lines at least during that interval.
It is worth noting, however, what happens after \(\Delta\tau_{LR}\). Our criterion for identifying particles as trapped and passing was only within \(\Delta\tau_{LR}\), and after that period of time particles can continue evolving into the saturated stage of mirror modes, where they can escape, be trapped again or continue streaming unperturbed. Indeed, by looking at its parallel momentum, we can see that after escaping and streaming for a while, the trapped ion shown in figure 9\(a\) gets trapped again at \(t\cdot s\approx 1.1\), bounces inside a mirror mode and escapes again at \(t\cdot s\approx 1.4\). Similarly, we can also see that the trapped electron shown in figure 9\(b\) gets trapped again at \(t\cdot s\approx 1.2\) and seems to stay trapped until the end of the simulation. Interestingly, the passing electron also gets trapped at around \(t\cdot s\approx 0.7\), by looking at its parallel momentum, and then escapes again at \(t\cdot s\approx 1.2\). Therefore, in a statistical sense, we can consider the particles as trapped and passing only over the particular period of time \(\Delta\tau_{LR}\) that we chose, after which they can continue evolving and turn into passing or trapped again, as long as the mirror saturation persists in the simulation.
### Physical Mechanism of Secondary IC/Whistler Excitation: Distribution Functions
In this section, we look at the evolution of the pressure anisotropy and distribution functions of trapped and passing ions and electrons defined according to the criterion described in section 3.5. We see that during \(\Delta\tau_{LR}\), both trapped ions and trapped electrons contribute most of the pressure anisotropy necessary to destabilize IC and whistler modes. We show that these IC and whistler waves interact in a quasilinear fashion with ions and electrons, respectively, and quickly regulate their pressure anisotropy such that their distributions evolve to a more isotropic state.
Figure 10\(a\) shows the evolution of the pressure anisotropy of trapped and passing ions. We can see that the anisotropy of trapped ions initially follows a double-adiabatic (CGL, dotted blue line) evolution until \(t\cdot s\approx 0.5\) (i.e. just
Figure 10: Panel a: Evolution of the pressure anisotropy of ions identified as trapped (blue line) and passing (red line). The dashed green line indicates the best-fit threshold to \(\Delta P_{\parallel,i}/P_{\parallel,i}\) shown in fig. 3\(a\), and the dotted blue-gray and red lines show the corresponding double-adiabatic (CGL) evolution of trapped and passing ions, respectively. Panel b: Evolution of the pressure anisotropy of trapped (blue line) and passing (red line) electrons. The dotted blue and red lines show the corresponding CGL evolution of trapped and passing electrons, respectively.
starting \(\Delta\tau_{LR}\)), when the mirror modes start to trap them. We can readily see that during \(\Delta\tau_{LR}\), the trapped ions develop a significant anisotropy, peaking at around \(t\cdot s\approx 0.55\). The anisotropy is quickly regulated and converges to the best-fit threshold that we derived in section 3.1 and show in figure 3\(a\). Similarly, the pressure anisotropy of passing ions evolves in a relatively unperturbed fashion following CGL evolution (dotted red line) through the majority of \(\Delta\tau_{LR}\), until \(t\cdot s\approx 0.6\), where it passes from negative values (consistent with passing ions having preferentially large parallel momentum) to a positive but more isotropic state consistent with the best-fit threshold from fig. 3\(a\).
The behavior of the pressure anisotropy of trapped and passing particles can be understood as follows. Mirror modes interact resonantly with ions and electrons according to the resonance condition \(\omega_{M}-k_{\parallel,M}v_{\parallel}=0\), where \(\omega_{M}\) and \(k_{\parallel,M}\) are the frequency and parallel wavenumber of mirror modes, respectively, and \(v_{\parallel}\) is the parallel velocity of the particle. The very low frequency of mirror modes, \(\omega_{M}\sim 0\), implies that the resonant particles are the ones having very low \(v_{\parallel}\) (\(v_{\parallel}<\gamma_{M}/k_{\parallel,M}\), where \(\gamma_{M}\) is the mirror growth rate, Southwood and Kivelson (1993); Pokhotelov et al. (2002)). These are the particles that become trapped within mirror modes (Kivelson and Southwood (1996)). Consequently, all trapped particles have very low parallel velocity and, as a whole, they should naturally have a pressure anisotropy \(P_{\perp,j}>P_{\parallel,j}\) (\(j=i,e\)). Similarly, all passing particles have large \(v_{\parallel}\), and therefore they have a pressure anisotropy \(P_{\parallel,j}>P_{\perp,j}\). In this sense, fig. 10 is consistent with the trapping argument described in Kivelson and Southwood (1996) (see their fig. 1).
The fact that both trapped and passing ions evolve into the average level of ion anisotropy shown in fig. 3\(a\) shows that their trapped or passing condition corresponds to a transient state that passes after a time comparable to \(\Delta\tau_{LR}\). Also, notice that the anisotropy of the two populations (and for the whole population for that matter) is significant enough to drive IC waves unstable (see section 3.3), and therefore this can provide evidence for the source of the IC waves that we see. If this is the case, their interaction with ions is the source of the quick regulation of the anisotropy that we see in fig. 10\(a\). Interestingly, under this scenario, the regulation of the pressure anisotropy of passing ions, which happens at the same time as that of the trapped ions, should also be due to the interaction with these IC waves, meaning that the IC waves interact with both populations of trapped and passing ions simultaneously, and therefore regulate the global ion anisotropy. We confirm that this is the case by looking at the evolution of the distribution functions of trapped and passing ions.
In the case of electrons, we observe a similar evolution in figure 10\(b\). Initially, both trapped and passing electrons detach from their respective CGL evolution (dotted blue and red lines, respectively), and develop a significant anisotropy \(\Delta P_{e}>0\), that peaks at \(t\cdot s\approx 0.4\). We also see that trapped electrons detach from their CGL evolution much earlier than passing electrons. This evolution then leads to the early burst of whistler waves, which also quickly regulates and drives anisotropies of both trapped and passing electrons towards a more isotropic state (see section 3.2). As expected, the anisotropy of trapped electrons is higher than the one of the passing electrons. After this process, and during \(\Delta\tau_{LR}\), the anisotropy of trapped electrons increases again, while that of passing electrons continues to decrease. This way, we see that trapped electrons build up a pressure anisotropy \(\Delta P_{e}>0\) that is also quickly regulated after \(\Delta\tau_{LR}\), converging to an anisotropy level similar to the one of the general electron populations. The anisotropy \(\Delta P_{e}<0\) of the passing electrons also gets regulated towards a similar anisotropy level during the same time. This evolution of trapped electrons also suggests that they become anisotropic enough to destabilize whistler waves, and therefore could be the source of the whistler activity observed at \(t\cdot s>0.6\). We provide evidence of this by showing the evolution of the distribution function of electrons.
Figure 11 shows the distribution functions of trapped and passing ions and electrons at three different times \(t\cdot s=0.57\), \(t\cdot s=0.61\), and \(t\cdot s=0.75\), spanning \(\Delta\tau_{LR}\) and also part of mirror's saturated stage. In the following we describe the evolution of each population:
The distribution of trapped ions (figs. 11\(a\), 11\(b\), and 11\(c\)) shows a clear loss-cone like form at \(t\cdot s=0.57\) (all outside the loss-cone), meaning that all trapped ions are effectively trapped in mirror troughs. At this time, trapped ions have reached their maximum pressure anisotropy according to figure 10\(a\).
Once IC waves are excited, they interact with both trapped and passing ions via pitch-angle scattering in a quasilinear fashion (Kennel and Engelmann (1966)). This diffusion process happens along paths of constant particle energy in the frame moving with the waves (see e.g. Squire et al. (2022)):
\[v_{\perp,j}^{2}+(v_{\parallel,j}-\omega/k_{\parallel})^{2}=\text{const.} \tag{5}\]
We plot these contours in solid white lines in each plot of figure 11 as \(v_{\perp,j}^{2}+(v_{\parallel,j}-\omega/k_{\parallel})^{2}\approx v_{\perp,j} ^{2}+v_{\parallel,j}^{2}=\text{const.}\), as in a high-\(\beta\) scenario, the phase velocity of an IC wave offers a small correction of order \(v_{A}/v_{th,i}=\sqrt{1/\beta}\). Additionally, the IC waves in our simulations are destabilized in both parallel and anti-parallel directions to \(\mathbf{B}\). We can see that the relaxation of the distribution function of trapped ions by the quasi-linear interaction with IC waves agrees very well with these paths, by looking at \(t\cdot s=0.61\) and \(t\cdot s=0.75\).
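A minimal sketch of these diffusion contours (assuming the high-\(\beta\) approximation \(\omega/k_{\parallel}\approx 0\) used in fig. 11; `v_ph` can instead be set to a finite wave phase velocity):

```python
import numpy as np

def energy_in_wave_frame(v_par, v_perp, v_ph=0.0):
    # Constant-energy contours of eq. (5) in the frame moving with the wave.
    return v_perp**2 + (v_par - v_ph)**2

v_par, v_perp = np.meshgrid(np.linspace(-0.5, 0.5, 201), np.linspace(0.0, 0.5, 101))
levels = energy_in_wave_frame(v_par, v_perp, v_ph=0.0)  # circles centered at v_par = v_ph
```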
The distribution of passing ions (figs. 11\(d\), 11\(e\), and 11\(f\)) shows, on the one hand, a concentration of ions at low perpendicular velocities and relatively large parallel velocities, and it looks fairly symmetric in \(v_{\parallel}\). This is consistent with having untrapped ions mainly streaming along the mean magnetic field in both directions. On the other hand, the population with large parallel velocity is also visible at \(v_{\parallel}/c\approx 0.3\) (see section 3.5). Interestingly, the passing ions also interact quasilinearly with IC waves, and this is particularly evident in their evolution. Indeed, we can clearly see how the large parallel velocity population of passing ions evolves along the contours of constant particle energy with
Figure 11: The distribution function \(f(v_{\parallel,j},v_{\perp,j})\) of trapped and passing ions and electrons at three different times: \(t\cdot s=0.57\) (first column), \(t\cdot s=0.61\) (second column), and \(t\cdot s=0.75\) (third column). The distribution function \(f_{\text{\rm trapped}}(v_{\parallel,i},v_{\perp,i})\) of the trapped ions is shown in the first row, \(f_{\text{\rm passing}}(v_{\parallel,i},v_{\perp,i})\) for the passing ions is shown in the second row, \(f_{\text{\rm trapped}}(v_{\parallel,e},v_{\perp,e})\) for the trapped electrons is shown in the third row, and \(f_{\text{\rm passing}}(v_{\parallel,e},v_{\perp,e})\) for the passing electrons is shown in the fourth row. In all the plots, the solid white curves denote contours of constant particle energy in the frame moving with the waves: \(v_{\perp,j}^{2}+(v_{\parallel,j}-\omega/k_{\parallel})^{2}\approx v_{\perp,j}^{2}+v_{\parallel,j}^{2}=\text{const.}\) (\(j=i,e\)). An animation is available.
excellent agreement at \(t\cdot s=0.61\) and \(t\cdot s=0.75\). We can understand the evolution of this population by looking at the gyroresonance condition
\[\omega-k_{\parallel}v_{\parallel,i}=\pm\omega_{c,i}. \tag{6}\]
If we look at the peak power at positive frequencies in the power spectrum shown in fig. 6\(c\), we can estimate the frequency and wavenumber at which most of the power of IC waves resides: \(\omega/\omega_{c,i}^{\text{init}}\approx 0.2\), and \(ck_{\parallel}/\omega_{p,i}^{\text{init}}\approx\pm 0.15\). From eq. (6) we can estimate then the parallel velocity of the ions interacting gyroresonantly with these IC waves:
\[\frac{v_{\parallel,i}}{c}=\frac{\omega/\omega_{c,i}^{\text{init}}\mp 1}{(ck_{ \parallel}/\omega_{p,i}^{\text{init}})(m_{i}c^{2}/k_{B}T_{i}^{\text{init}})^{1 /2}(\beta_{i}^{\text{init}}/2)^{1/2}}, \tag{7}\]
which gives \(v_{\parallel,i}/c\approx 0.36\) and \(v_{\parallel,i}/c\approx-0.24\), which fall in the range of the large parallel velocity population. A similar quasilinear evolution also occurs for the population with smaller parallel velocity.
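As a consistency check of this estimate, eq. (7) can be evaluated directly from the spectral peak quoted above. The initial ion temperature is not restated in this section, so \(k_{B}T_{i}^{\text{init}}/m_{i}c^{2}=0.02\) below is an assumed, illustrative value; with it the two resonant velocities quoted in the text are recovered:

```python
import numpy as np

def v_res_over_c(omega_norm, k_norm, beta_i, kT_over_mc2, sign):
    # Eq. (7); sign = +1/-1 is the sign of omega_ci in the resonance condition (6).
    return (omega_norm - sign) / (k_norm * np.sqrt(1.0 / kT_over_mc2) * np.sqrt(beta_i / 2.0))

for sign in (-1, +1):
    # omega/omega_ci ~ 0.2 and c*k_par/omega_pi ~ 0.15 (peak of the IC power in fig. 6c)
    print(v_res_over_c(0.2, 0.15, beta_i=20.0, kT_over_mc2=0.02, sign=sign))
    # ~ +0.36 (sign = -1) and ~ -0.24 (sign = +1), as quoted above
```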
The population of trapped electrons (figs. 11\(g\), 11\(h\), and 11\(i\)) shows a very similar evolution to that of trapped ions; the loss-cone like distribution is also apparent. The evolution of this distribution is also consistent with a quasilinear interaction now between the electron and whistler waves, driving the distribution towards isotropy along paths of constant particle energy, as can be seen at later times in figure 11.
Finally, the population of passing electrons (figs 11\(j\), 11\(k\), and 11\(l\)) also shows a very similar evolution to that of the ions. The populated loss-cone shape of the distribution is also apparent, and we can see the quasilinear evolution of the distribution function along constant particle energy contours at later times.
This way, we have provided evidence for the source of both IC and whistler waves observed in our simulations. Once ions and electrons get trapped in regions of low magnetic field strength of mirror modes, they become significantly anisotropic with a loss-cone like distribution, which is able to destabilize parallel-propagating IC and whistler waves, respectively. These waves then interact with both populations of trapped and passing particles in a quasilinear fashion, driving both populations of trapped and passing ions and electrons towards a more isotropic state. Consequently, this mechanism can contribute to regulating the global anisotropy of ions and electrons, and can thus be a pathway for particle escape and consequent saturation of mirror modes (Kunz et al. (2014)).
## 4 Mass-Ratio Dependence
In this section, we compare simulations with different mass ratios, \(m_{i}/m_{e}=8\), \(m_{i}/m_{e}=32\), and \(m_{i}/m_{e}=64\), but with the same initial conditions for ions, as shown for runs b20m8w800, b20m32w800, and b20m64w800 in Table 1, although with somewhat different temperatures. We see that IC and whistler waves' signatures do appear in all three simulations, and thus they do not seem to present a strong dependence on mass ratio.
Figure 12 shows the evolution of \(\delta B_{\parallel}^{2}\) (panel \(a\)) and \(\delta B_{z}^{2}\) (panel \(b\)) for the three runs with mass ratios: \(m_{i}/m_{e}=8,32\), and \(64\) (runs b20m8w800, b20m32w800, and b20m64w800 in table 1). We can see a very consistent evolution of \(\delta B_{\parallel}^{2}\) in all three runs, meaning that \(m_{i}/m_{e}\) does not play a significant role in the early evolution and saturation of the mirror instability. Similarly, \(\delta B_{z}^{2}\) shows the same features in all three runs, especially during mirrors' secular growth and saturated stages (\(t\cdot s\approx 0.5\) onwards). The early peak in \(\delta B_{z}^{2}\) at \(t\cdot s\approx 0.4\) corresponding to the early whistler burst is also seen in the three runs, but more prominently in the simulation with \(m_{i}/m_{e}=8\). This is possibly due to an enhancement of this wave activity by the ions, which are able to weakly feel the presence of whistlers, as the mass separation is not very large. This effect disappears as the mass ratio increases, and the early whistlers only affect the electrons. More importantly, for \(t\cdot s>0.5\), all three runs show a very similar evolution of \(\delta B_{z}^{2}\).
Figure 13 shows the evolution of the pressure anisotropy of ions (panel \(a\)) and electrons (panel \(b\)) for the same three runs. In the case of the ions, we can see an overall evolution that is very consistent in all three runs, both in early and late stages. We can see a smaller anisotropy overshoot for the simulation with \(m_{i}/m_{e}=8\) at \(t\cdot s\approx 0.4\), coincident with the enhancement seen in \(\delta B_{z}^{2}\), during the early whistler burst, suggesting that ions can weakly interact with the whistlers at this mass ratio, and consequently their anisotropy does not reach the
Figure 12: Panel a: The energy in the parallel component of the magnetic field fluctuations \(\delta\mathbf{B}\), for three simulations with different mass ratios: \(m_{i}/m_{e}=8\) (run b20m8w800, blue line), \(m_{i}/m_{e}=32\) (run b20m32w800, orange line), and \(m_{i}/m_{e}=64\) (run b20m64w800, green line). Panel b: same as in panel a but for the perpendicular component of \(\delta\mathbf{B}\) out of the plane of the simulation in the same runs.
same overshoot as the rest of the runs. Notwithstanding the foregoing, we can see how all three runs display a very similar pressure anisotropy evolution afterwards, which is also well described by the best-fit threshold \(\Delta P_{i}\propto\beta_{i}^{-0.45}\) shown in fig. 3.
In the case of the electron pressure anisotropy \(\Delta P_{e}\), we can also see a similar evolution overall in fig. 13\(b\). The overshoot at \(t\cdot s\approx 0.4\) is larger for decreasing mass ratios, possibly due to the fact that the whistler amplitude required for efficient scattering decreases as \(m_{i}/m_{e}\) increases, as explained above. This means that, after \(\Delta P_{e}/P_{\parallel,e}\) has surpassed the threshold for efficient growth of the whistler modes, the simulations with larger \(m_{i}/m_{e}\) take shorter times to reach the necessary whistler amplitude to efficiently scatter the electrons. This implies that the overshoot decreases for higher mass ratios. During late stages, we can see a very similar evolution of \(\Delta P_{e}\) in all three runs, which is even more evident for \(m_{i}/m_{e}=32\) and \(m_{i}/m_{e}=64\) (orange and green curves in fig. 13\(b\)), which essentially lie on top of each other.
Finally, figure 14 shows the power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp,xy}(\omega,k_{\parallel})\) for the simulation with \(m_{i}/m_{e}=32\) (fig. 14\(a\)) and with \(m_{i}/m_{e}=64\) (fig. 14\(b\)). Here we also see a very similar power distribution at both mass ratios, showing both left-hand and right-hand polarized waves (positive and negative frequencies, respectively). The peak power is also observed at the same frequencies and wavenumbers as in fig. 6 for both polarizations.
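For reference, the polarization-resolved spectra of figs. 14 and 15 can be reproduced schematically as follows (a minimal sketch, not the authors' analysis code; `db_z` and `db_perp` are assumed arrays of shape `(n_times, n_cells)` sampled along \(\mathbf{B}\) over \(0.5<t\cdot s<0.7\), and the mapping of the frequency sign to the polarization depends on the sign conventions of the transform):

```python
import numpy as np

def polarized_power(db_z, db_perp, dt, dx):
    # Fourier transform of the complex combination dB_z + i*dB_perp,xy in time and
    # along the mean field; the sign of omega separates the two circular polarizations.
    sig = db_z + 1j * db_perp
    spec = np.fft.fftshift(np.fft.fft2(sig))
    power = np.abs(spec) ** 2
    omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(sig.shape[0], d=dt))
    k_par = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(sig.shape[1], d=dx))
    return omega, k_par, power
```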
This way, we can see that the linear and nonlinear evolution of the mirror instability and the late IC and whistler evolution are well captured in our simulations and do not strongly depend on the mass ratio.
## 5 Dependence on initial plasma \(\beta\)
We tested whether the IC and whistler waves' activity is present in simulations with \(\beta_{i}^{\text{init}}=2\) (i.e, total \(\beta^{\text{init}}=4\)), and \(\beta_{i}^{\text{init}}=40\) (i.e. total \(\beta^{\text{init}}=80\)), and compare them with our fiducial simulation at \(\beta_{i}^{\text{init}}=20\). We confirm that the mirror instability can develop in all simulations, and both IC and whistler waves do appear at nonlinear stages.
The power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp,xy}(\omega,k_{\parallel})\) is shown in figure 15, and we can see that it is similar among the three \(\beta_{i}\) cases. In all three cases we see the power concentrated at \(\omega\sim 0\) corresponding to mirror modes. In addition, we also see a concentration of power in right and left polarized waves, so both IC and whistler waves are also present, although their peak frequency changes. For the \(\beta_{i}^{\text{init}}=2\) case we see that the peak frequency is at \(\omega/\omega_{c,i}^{\text{init}}\approx 0.5\), whereas in the \(\beta_{i}^{\text{init}}=40\) case it shifts to smaller values, \(\omega/\omega_{c,i}^{\text{init}}\approx 0.1\). This shift in peak frequency can also be explained by the IC and whistler dispersion relations analogous to our discussion in section 3.3.
Figure 16 compares the evolution of \(\delta B_{\parallel}^{2}\) (i.e., mainly the development of the mirror instability) for the three runs with different initial \(\beta^{\text{init}}\) (the other physical parameters are the same, see table 1). In all three cases we can see an exponential phase followed by the secular and saturated stages characteristic of the mirror instability, which develops earlier for higher initial \(\beta^{\text{init}}\), consistent with the smaller anisotropy threshold for the growth of the mirror instability at larger beta. The amplitude of \(\delta B_{\parallel}^{2}\) at the saturated stage is comparable for both \(\beta^{\text{init}}=20\) and \(\beta^{\text{init}}=40\) runs, and is smaller for the \(\beta^{\text{init}}=2\) run, as also seen in previous works (e.g. Riquelme et al. (2015)).
Indeed, when we look at the evolution of \(\delta B_{z}^{2}\), we can see that for both \(\beta^{\text{init}}=20\) and \(\beta^{\text{init}}=40\) runs, the evolution is similar: both display an early whistler burst at \(t\cdot s\approx 0.4\), and
Figure 14: The power spectrum of \(\delta B_{z}(\omega,k_{\parallel})+i\delta B_{\perp}(\omega,k_{\parallel})\) at \(0.5<t\cdot s<0.7\) for \(m_{i}/m_{e}=32\) (run b20m32w800, left panel) and \(m_{i}/m_{e}=64\) (run b20m64w800, right panel). Positive and negative frequencies show the power in left-hand and right-hand polarized waves, respectively.
Figure 13: Panel a: Evolution of the ion pressure anisotropy for three simulations with different mass ratios: \(m_{i}/m_{e}=8\) (run b20m8w800, blue line), \(m_{i}/m_{e}=32\) (run b20m32w800, orange line), and \(m_{i}/m_{e}=64\) (run b20m64w800, green line). The dashed red line indicates the best-fit threshold shown in figure 3\(a\), \(\Delta P_{i}/P_{\parallel,i}\propto\beta_{\parallel,i}^{-0.45}\). Panel b: same as in panel a but for the electron pressure anisotropy in the same runs.
an IC/whistler excitation stage (\(t\cdot s\approx 0.5\) onwards) at almost the same amplitude. In the case of the \(\beta^{\text{init}}=2\) run, we can see that the first exponential growth in \(\delta B_{z}^{2}\) at \(t\cdot s\approx 0.6\) is consistent with an IC burst (see e.g. Ley et al. (2019)), after which we see the typical oscillation pattern that the excitation of late IC and whistler waves produces, from \(t\cdot s\approx 0.8\) onwards, saturating at an amplitude similar to that of the rest of the runs, and displaying a very high-frequency oscillation.
In figure 17, we compare the evolution of the ion and electron pressure anisotropy plotted as a function of their parallel plasma \(\beta_{i}\) for the three simulations with different initial \(\beta_{i}\) (in all our simulations the mean magnetic field strength is continuously increasing, so the particles' \(\beta_{i}\) decreases over time and the simulations evolve towards the left in fig. 17).
In the case of the ions (fig. 17\(a\)), we can see a similar overshoot and subsequent regulation, but the overshoot occurs at a lower anisotropy value for increasing \(\beta_{i}\). This is consistent with the inverse \(\beta_{i}\) dependence of the mirror instability threshold: mirror modes are excited earlier at higher \(\beta_{i}\), and therefore have relatively more time to regulate the anisotropy before it reaches a higher overshoot. Interestingly, the saturated stage of the ion pressure anisotropy is consistent with the theoretical IC threshold from Gary & Lee (1994): \(\Delta P_{i}/P_{\parallel,i}=0.53\beta_{\parallel,i}^{-0.40}\) for \(\gamma_{IC}/\omega_{c,i}=10^{-2}\) (see fig. 3\(a\)) in all three runs, suggesting a universality in the threshold that \(\Delta P_{i}/P_{\parallel,i}\) follows, as a consequence of the excitation of IC waves during mirrors' saturated stage. (In the case of the \(\beta_{i}^{\text{init}}=40\) run, however, it is less clear whether it can follow the above-mentioned threshold at late stages, given the short duration of this run.)
In the case of electrons (fig. 17\(b\)), we can also see that the overshoot is reached at lower values of the pressure anisotropy \(\Delta P_{e}/P_{\parallel,e}\) for increasing initial beta, consistent with an inverse-\(\beta_{i}\) dependence, now of the whistler instability anisotropy threshold. It is interesting to note that after the anisotropy overshoot, and during these late stages, the electron pressure anisotropy tends to be significantly smaller than the expectation from the threshold for the whistler instability in the higher initial \(\beta_{i}\) runs (\(\beta_{i}^{\text{init}}=20\) and \(\beta_{i}^{\text{init}}=40\)), despite the generation of pressure anisotropy that the continuous amplification of the magnetic field produces as a consequence of the shear motion in the simulation. Notice, however, that in low magnetic field regions the electron pressure anisotropy is larger than the whistler threshold
Figure 16: Panel \(a\) : Evolution of \(\delta B_{\parallel}^{2}\) for three simulations with different initial ion beta: \(\beta_{i}^{\text{init}}=2\) (solid red line, run b2m8w800), \(\beta_{i}^{\text{init}}=20\) (solid black line, run b20m8w800), and \(\beta_{i}^{\text{init}}=40\) (solid blue line, run b40m8w800). Panel \(b\): Evolution of \(\delta B_{z}^{2}\) for the same three simulations shown in panel \(a\).
and, therefore, enough to excite whistlers (fig. 8). This shows the key role played by mirror-generated magnetic troughs in creating the conditions to excite whistlers despite the fact that, globally, the pressure anisotropy may not be enough to make these waves unstable. On the other hand, in the \(\beta_{i}^{\text{init}}=2\) run, \(\Delta P_{e}/P_{\parallel,e}\) continues to weakly grow because of the continuous \(B\) amplification, and it does so following a marginal stability state well described by the threshold of the whistler instability \(\Delta P_{e}/P_{\parallel,e}\propto\beta_{\parallel,e}^{-0.55}\) (Gary & Wang (1996)), consistent with previous works at lower \(\beta_{\parallel,e}\) (Ahmadi et al. (2018)).
The persistence of the late IC and whistler activity at different initial plasma \(\beta_{i}\) suggests that this phenomenon is a natural consequence of the excitation of the mirror instability. In other words, in a weakly collisional plasma with an initial plasma \(\beta_{i}\) sufficiently high to effectively excite the mirror instability, the excitation of IC and whistler waves at its late, saturated stages seems to be ubiquitous.
## 6 Summary and Discussion
In summary, we have performed fully kinetic PIC simulations of a collisionless plasma subject to a continuous amplification of the background magnetic field to study the nonlinear stages of the mirror instability and the ensuing excitation of secondary ion-cyclotron (IC) and whistler instabilities, in conditions where plasma pressure dominates over magnetic pressure (high-\(\beta\)). After mirror modes reach high-amplitudes and are able to trap ions and electrons within regions of low-**B**, we observe the excitation of sub-dominant left-hand polarized IC and right-hand polarized whistler waves that persist throughout the rest of the simulation, well into the nonlinear stages of the mirror instability (see section 3.3). The whistler waves in our simulations seem to be consistent with the observations of whistler lion roars in the Earth's magnetosheath.
By tracking ions and electrons through the simulation, we studied the excitation mechanism of both IC and whistler waves. We characterized the population of tracked particles as trapped and passing (i.e. untrapped) within mirror modes, and followed the evolution of their distribution functions. We observed that the trapped population of both ions and electrons becomes highly anisotropic while trapped inside mirror modes, contributing most of the anisotropy that allows the plasma to become unstable to IC and whistler waves, respectively. On the other hand, the passing ions and electrons developed distributions concentrated at small perpendicular and large parallel velocities, fairly symmetric with respect to \(v_{\parallel}\), with a clear absence of particles at small parallel velocities (see section 3.6).
Once IC and whistler waves are excited, they interact with both trapped and passing populations of ions and electrons, respectively, via gyroresonant pitch-angle scattering. As a result of this interaction, both trapped ions and electrons reduce their anisotropy and escape from magnetic troughs of mirror modes, following the prediction of quasilinear theory. The passing ion and electron populations evolve in a similar manner (see fig. 11). Interestingly, this process is observed to regulate the global anisotropy of ions and electrons in the simulation, driving the ion pressure anisotropy towards the IC instability threshold (Gary & Lee (1994)), and the electron pressure anisotropy towards a global anisotropy much smaller than expected from the theoretical whistler threshold. Given this low electron pressure anisotropy, the whistler excitation can be explained by the fact that, within mirror-generated magnetic troughs, the pressure anisotropy is locally larger than the whistler threshold (fig. 8\(i\)). Thus, we interpret the whistler-driven regulation of electron pressure anisotropy as a local phenomenon, mainly produced by trapped electrons within non-linear mirror structures.
The excitation of the secondary IC and whistler waves is maintained as long as mirror modes are present and growing, and this was also observed in simulations with lower and higher initial plasma \(\beta\). This way, IC and whistler waves could be a concomitant feature of the nonlinear evolution of the
Figure 17: Panel \(a\): Ion Anisotropy, \(\Delta P_{i}/P_{\parallel,i}\) as a function of parallel ion beta, \(\beta_{\parallel,i}\) (with respect to the main magnetic field **B**) for three different simulations with different initial ion beta: \(\beta_{i}^{\text{init}}=2\) (solid red line, run b2m8w800), \(\beta_{i}^{\text{init}}=20\) (solid black line, run b20m8w800), and \(\beta_{i}^{\text{init}}=40\) (solid blue line, run b40m8w800). The dotted-dashed orange line shows the IC threshold \(\Delta P_{i}/P_{\parallel,i}=0.53/\beta_{\parallel,i}^{0.4}\) from Gary & Lee (1994) for \(\gamma_{IC}/\omega_{c,i}=10^{-2}\). Panel \(b\): Electron anisotropy \(\Delta P_{e}/P_{\parallel,e}\) as a function of parallel electron beta, \(\beta_{\parallel,e}\) for the same three simulations shown in panel \(a\). The dashed gray line in this case shows the threshold for the whistler instability, \(\Delta P_{e}/P_{\parallel,e}=0.36\beta_{\parallel,e}^{-0.55}\) for growth rate \(\gamma=0.01\omega_{c,e}\), from Gary & Wang (1996).
mirror instability, and provide an interesting physical connection between ion-scale instabilities and electron-scale physics.
In this work, we did not vary the scale-separation ratio \(\omega_{c,i}/s\). In an environment like the ICM, turbulent eddies could drive the plasma locally through shear motions at kinetic scales with a wide range of frequencies \(s\), and we typically expect larger kinetic energy at low frequencies (i.e., higher \(\omega_{c,i}/s\)). For larger values of \(\omega_{c,i}/s\), previous works have shown that mirror modes can develop comparatively earlier in the simulations, therefore having relatively more time to saturate, and reaching similar amplitudes (Kunz et al. (2014); Melville et al. (2016); Riquelme et al. (2016); Ley et al. (2023)). In this sense, we would expect a similar late excitation of IC and whistler waves once mirror modes have reached a saturated stage.
The excitation of IC and whistler waves at saturated stages of the mirror instability modulates its nonlinear evolution, and therefore could affect transport processes in the ICM in which mirror modes come into play.
Particularly important is the pressure anisotropy regulation in the context of collisionless heating and dissipation via magnetic pumping in the ICM (Kunz et al. (2011); Ley et al. (2023)). The marginal stability level that the ion pressure anisotropy reaches at the saturated stage, \(\Delta P_{i}\propto\beta_{\parallel,i}^{-0.45}\) (see fig. 3\(a\), as also pointed out by Sironi & Narayan (2015)), is larger than the usual mirror threshold \(1/\beta_{\parallel,i}\) by a factor \(\sim\beta_{\parallel,i}^{0.55}\), which directly translates into an excess heating of the same order. Indeed, given that \(\beta\) is estimated to be \(\beta\sim 10-100\), and that the heating rate is directly proportional to the pressure anisotropy, this could imply a heating rate several times larger than predicted from the mirror threshold, enhancing the efficiency of the mechanism by draining more energy from the turbulent motions that drive the pumping.
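As a rough illustration using only the quoted range, \(\beta_{\parallel,i}^{0.55}\approx 3.5\) for \(\beta_{\parallel,i}=10\) and \(\beta_{\parallel,i}^{0.55}\approx 13\) for \(\beta_{\parallel,i}=100\), so a marginal anisotropy level \(\propto\beta_{\parallel,i}^{-0.45}\) instead of \(1/\beta_{\parallel,i}\) corresponds to a factor of a few up to roughly an order of magnitude more anisotropy, and hence heating, at fixed \(\beta_{\parallel,i}\).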
The structures of high and low magnetic field that mirror modes produce in the saturated stage seem to be persistent in time, and their energy \(\delta B_{\parallel}^{2}\) does not decrease as long as the amplification of the mean magnetic field \(B\) is maintained (see fig. 2\(g\)). Even when this amplification is halted or reversed, the decay timescales of mirror modes are large compared to the typical ion gyroperiod (Melville et al. (2016); Ley et al. (2023)). This implies that the trapping process of ions and electrons also persists, along with the excitation of secondary IC and whistler waves. This source of whistler waves can have interesting implications in the context of ICM thermal conduction models like whistler-regulated MHD (Drake et al. (2021)), as they can dominate the electron scattering in the presence of mirror modes.
This source of whistler waves associated with mirror modes can also contribute to the suppression of the effective heat conductivity in the plasma even in the absence of heat fluxes (Komarov et al. (2016); Riquelme et al. (2016); Roberg-Clark et al. (2016, 2018)), and this can have consequences for larger-scale instabilities such as the Magneto-thermal instability (MTI, Balbus (2000); Berlok et al. (2021); Perrone & Latter (2022a,b)).
Future work aimed towards 3D fully kinetic PIC simulations would be required to have a full understanding of the consequences of the mirror instability and secondary IC/whistler excitation in these high-\(\beta\) plasmas.
We thank Aaron Tran for providing the dispersion solver used in this work, and we thank Lorenzo Sironi, Jonathan Squire and Alexander Schekochihin for useful comments and discussion. F.L. acknowledges support from NSF Grant PHY-2010189. M.R. acknowledges support from ANID Fondecyt Regular grant No. 119167. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant No. ACI-1548562. This work used the XSEDE supercomputer Stampede2 at the Texas Advanced Computer Center (TACC) through allocation TG-AST190019 (Towns et al. (2014)). This research was performed using the compute resources and assistance of the UW-Madison Center For High Throughput Computing (CHTC) in the Department of Computer Sciences. This research was partially supported by the supercomputing infrastructure of the NLHPC (ECM-02).
|
2309.16989 | Extension realizing affine datum: low-dimensional cohomology | For arbitrary varieties of universal algebras, we develop the theory around
the first and second-cohomology groups characterizing extensions realizing
affine datum. Restricted to varieties with a weak-difference term, extensions
realizing affine datum are exactly extensions with abelian kernels. This
recovers many classic examples of extensions with abelian coefficients since
varieties with a weak-difference term give a far-reaching generalization of
algebras like groups with multiple operators; indeed, any variety of algebras
whose congruences form modular lattices. We introduce a notion of action and
its model relation with a set of equations. In varieties with a difference
term, central extensions are characterized by a property of their actions.
Restricting further to a subclass of varieties with a difference term which
still includes groups with multiple operators, we recover a special case of the
representation of extensions with abelian kernels. | Alexander Wires | 2023-09-29T05:21:23Z | http://arxiv.org/abs/2309.16989v2 | # Extensions realizing affine datum : Low-dimensional cohomology
###### Abstract.
For arbitrary varieties of universal algebras, we develop the theory around the first and second-cohomology groups characterizing extensions realizing affine datum. Restricted to varieties with a weak-difference term, extensions realizing affine datum are exactly extensions with abelian kernels. This recovers many classic examples of extensions with abelian coefficients since varieties with a weak-difference term give a far-reaching generalization of algebras like groups or modules expanded by multilinear operations; indeed, any variety of algebras whose congruences form modular lattices. We introduce a notion of action and its model relation with a set of equations. In varieties with a difference term, central extensions are characterized by a property of their actions.
## 1. Introduction
Let \(\operatorname{Eqv}A\) denote the set of equivalence relations on the set \(A\).
**Definition 1.1**.: The algebra \(A\) is an _extension_ of the equivalence relation \(\alpha\in\operatorname{Eqv}A\) by the algebra \(Q\) if there is a surjective homomorphism \(\pi:A\to Q\) such that \(\ker\pi=\alpha\).
Here we will be interested in the following general problem.
**Problem 1.2**.: (The Extension Problem) Given the algebra \(Q\) in the signature \(\tau\) and an equivalence relation \(\alpha\in\operatorname{Eqv}A\), classify the extensions of \(\alpha\) by \(Q\).
As stated in the problem, the task is to understand the different interpretations of \(\tau\) on the set \(A\) such that \(\alpha\) becomes a congruence \(\alpha\in\operatorname{Con}A\) on the algebra \(A\) and \(A/\alpha\approx Q\). Note that we did not assume any additional structure on the equivalence relation \(\alpha\) - this is almost certainly too general to begin with and we will require that \(\alpha\) be presented with additional information of some sort; in particular, a partial structure related to abelianess in a given commutator theory. For universal algebras, a commutator can be a complicated object and there are several available. The term-condition commutator (or TC-commutator) [6, Ch 2.5] and rectangular commutator [6, Ch 5] would be natural choices to develop a cohomology theory since for large classes of varieties their abelian algebras are related to modules and semimodules, respectively. It would be interesting for future study to consider central extensions or extensions with abelian kernels in more general commutators, since it is possible to axiomatize commutator theories even in non-regular categories such as topological spaces and ordered sets [15].
In the present manuscript, we consider the deconstruction/reconstruction of extensions of universal algebras realizing affine datum. Instead of fixing a class of algebras with a particular equational theory, we elect to follow the approach of the Riemann integral: that is, with abelian congruences in varieties with a weak-difference term as a motivating example, we isolate properties which will serve as a definition for a particular type of affine datum and then develop the standard \(1^{\text{st}}\) and \(2^{\text{nd}}\)-cohomology constructions for this abstract definition of datum. Then one would prove that the extensions with abelian kernels in a particular class of algebras under consideration are captured by this notion; analogously, functions with countably many jump discontinuities are Riemann integrable, etc. One benefit of this approach is that
the equational theory (or varietal membership) of extensions is included in the \(2^{\mathrm{nd}}\)-cohomology group as a parameter which affords a Galois correspondence between the lattice of subvarieties and the subgroup lattice in cohomology. Formalizing this approach leads us to consider the \(2\)-cocycles parametrizing extensions as a particular type of multisorted structure together with their equations. Future work might explore the general model-theoretic constructions in this situation, but at the moment it suggests to us that, in the same way that third cohomology for groups describes the realization of outer actions by possible extensions, higher cohomologies of universal algebras should describe structures interpreting special multisorted expansions of the language of \(2\)-cocycles together with the equations they satisfy.
The genesis for the present work and its sequel Wires [13] is the curious result [3, Prop 7.1] and the following corollary [3, Cor 7.2] which provides a characterization of \(2\)-step nilpotent algebras in congruence modular varieties. This is extended in Theorem 2.7 and Proposition 2.9 below to varieties with a difference term. While not stated explicitly, [3, Prop 7.1] is essentially the decomposition of a central extension in a manner similar to that found in group theory where a factor set is added to the operations of the direct product of the datum groups. The analogy can be developed further where the unital ring, which comes from the derived module structure of abelian algebras in varieties with a difference term, encodes the action terms which correspond in the case of groups to the trivial action, or \(\mathds{Z}\)-module structure, on abelian kernels in central extensions. Some classical results concerning central extensions of groups are generalized in [13] to the broader class of varieties with a difference term. The current manuscript is a generalization to arbitrary varieties of the original motivating observations.
The approach follows the constructions and concrete manipulations of functions in what is sometimes referred to as the Schreier cohomology of group extensions found in Schreier [9] and Dedecker [2]. The development is satisfyingly replete with the standard toolkit of \(2\)-cocycles, \(2\)-coboundaries, actions, derivations and stabilizing automorphisms which reduce to previous definitions upon specialization for many familiar classes of algebras. We provide a summary of the key points of the development:
* The notion of an action compatible with a set of equations (Definition 3.2).
* The notion of \(2\)-cocycles as interpretations of a multisorted signature compatible with a set of equations (Definition 3.14).
* Definition of affine datum (Definition 3.9).
* Reconstruction of an algebra from affine datum and \(2\)-cocycle compatible with a variety \(\mathcal{U}\) (Theorem 3.21).
* For an abelian congruence \(\alpha\in\operatorname{Con}A\) such that \(\mathcal{V}(A)\) has a weak-difference term, decomposition of the algebra into the quotient \(A/\alpha\), a partial structure \(A^{\alpha,\tau}\), and a \(2\)-cocycle \(T\) and homomorphic action \(A/\alpha*A^{\alpha,\tau}\) derived from \(A^{\alpha,\tau}\), both compatible with any parameter variety \(\mathcal{U}\geq\mathcal{V}(A)\) (Theorem 3.19).
* The characterization of semidirect products realizing affine datum (Proposition 3.22).
* Definition of \(2\)-coboundaries associated to affine datum (Definition 3.24). We shall see that \(2\)-coboundaries are compatible with any variety containing the datum (Lemma 3.26).
* An equivalence on \(2\)-cocycles (and so extensions) derived from \(2\)-coboundaries which is finer than isomorphism (Theorem 3.31).
* The abelian \(2^{\mathrm{nd}}\)-cohomology groups \(H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) for affine datum defined from \(2\)-cocycles modulo the \(2\)-coboundaries. The cohomology group accepts as an additional parameter a variety \(\mathcal{U}\) which restricts the class of extensions to membership in \(\mathcal{U}\) (Theorem 3.29).
* The \(2^{\mathrm{nd}}\)-cohomology groups define a Galois connection between the lattice of varieties containing fixed datum and the subgroup lattice of the abelian group generated by \(2\)-cocycles of the datum (Proposition 3.30).
* The automorphisms stabilizing an extension form an abelian group isomorphic to the \(1\)-cocycles (derivations) of the datum (Theorem 3.34).
* In varieties with a difference term, a characterization of central extensions by affine datum with trivial actions (Proposition 2.4 and Proposition 3.37). In such varieties, central extensions of the datum are characterized by the \(2^{\mathrm{nd}}\)-cohomology group \(H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) restricted to trivial actions (Theorem 3.40).
* In varieties with a weak-difference term, abelian extensions are classified by a subgroup of \(2^{\mathrm{nd}}\)-cohomology which generalizes the classic group-theoretic case which characterizes abelian extensions by symmetric \(2\)-cocycles (Corollary 3.45).
In the sequel manuscript [13], we consider central extensions in varieties with a difference term. Such varieties include groups, quasigroups, relative Rota-Baxter groups and braces but also many familiar examples of abelian groups expanded by multilinear operations such as rings, Liebniz algebras, diassociative algebras, Rota-Baxter algebras and conformal algebras. In this very general setting, we establish a low-dimensional Hochschild-Serre exact sequence for central extensions and prove theorems of Schur-type relating relative commutators of free presentations and \(2^{\mathrm{nd}}\)-cohomology of regular datum, covers and perfect algebras. This generalizes analogous results for many previously established cases, including groups, which can then be derived by specialization to those particular varieties. Highlights include:
* A low-dimensional Hochschild-Serre exact sequence (or inflation/restriction sequence) for a central extension with an additional affine datum in varieties with a difference term.
* Characterizing injectivity and surjectivity of the transgression map and its relation to lifting homomorphisms through central extensions of regular datum.
* Generalization of Schur's formula relating commutators of presentations with second-cohomology to any variety with a difference term which has an idempotent element (see Schur [10] and Karpilovski [4]). By specialization this recovers the classical result for groups and more recent work on algebras of Loday-type.
* Discussion of covers and the relationship between cohomology and perfect algebras (see Milnor [8]).
Consult Bergman [1] for fundamentals of universal algebras. For the basic definition and general properties of the term-condition commutator, Kearnes and Kiss [6, Ch 2.5] is useful, and for the theory of the commutator in congruence modular varieties consult Freese and McKenzie [3] and McKenzie and Snow [7]. The properties of the term-condition commutator in varieties with a difference term are developed in Kearnes [5]. Any special definitions are introduced in the text as needed.
## 2. Varieties with a difference term
Fix a variety \(\mathcal{V}\) in the signature \(\tau\), an algebra \(A\in\mathcal{V}\) and \(\alpha,\beta\in\operatorname{Con}A\). A _section_ of a surjective map \(\pi:A\to Q\) is a map \(l:Q\to A\) such that \(\pi\circ l=\operatorname{id}_{Q}\). An \(\alpha\)-_trace_ is a map \(r:A\to A\) such that \(r=l\circ\pi\) for a section \(l\) of the canonical map \(\pi:A\to A/\alpha\); equivalently, \((r(x),x)\in\alpha\) and \(|r(x/\alpha)|=1\) for all \(x\in A\). Let us recall the definition and some properties of the congruence \(\Delta_{\alpha\beta}\).
* \(M(\alpha,\beta)\) is the subalgebra of \(A^{4}\) generated by \(\left\{\begin{bmatrix}x&x\\ y&y\end{bmatrix},\begin{bmatrix}u&v\\ u&v\end{bmatrix}:(x,y)\in\alpha,(u,v)\in\beta\right\}\).
* \(A(\alpha)=\{(x,y)\in A\times A:(x,y)\in\alpha\}\) is the congruence \(\alpha\) as a subalgebra of \(A\times A\).
* \(\Delta_{\alpha\beta}=\operatorname{Cg}^{A(\alpha)}\left(\left\{\left\langle \begin{bmatrix}u\\ u\end{bmatrix},\begin{bmatrix}v\\ v\end{bmatrix}\right\rangle:(u,v)\in\beta\right\}\right)=\operatorname{Tr}M( \alpha,\beta)\in\operatorname{Con}A(\alpha)\) where \(\operatorname{Tr}\) denotes the transitive closure of a binary relation.
* The diagonal homomorphism is the map \(\delta:A\to A(\alpha)/\Delta_{\alpha\alpha}\) given by \(\delta(u)=\begin{bmatrix}u\\ u\end{bmatrix}/\Delta_{\alpha\alpha}\).
* We have the following inclusions \[\Delta_{\alpha\beta}\vee\eta_{0}=p_{0}^{-1}(\beta)\qquad\text{and}\qquad\Delta_{ \alpha\beta}\vee\eta_{1}=p_{1}^{-1}(\beta)\] for the projections \(p_{i}:A(\alpha)\to A\) with \(\eta_{i}=\ker p_{i}\).
In the case that \(\mathcal{V}\) is a congruence modular variety, \(\Delta_{\alpha\beta}\) has stronger properties.
* We have the following description of the commutator ([3, Thm 4.9]): \[(x,y)\in[\alpha,\beta]\qquad\text{iff}\qquad(\exists u\in A)\ \ \begin{bmatrix}u\\ u\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}x\\ y\end{bmatrix}\qquad\text{iff}\qquad(\exists v\in A)\ \ \begin{bmatrix}v\\ x\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}v\\ y\end{bmatrix}.\]
* If \(\alpha\leq\beta\) and \([\alpha,\beta]=0\), then we have the following description of the congruence ([3, Thm 5.5, Prop 5.7]): \[\begin{bmatrix}x\\ y\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}u\\ v\end{bmatrix}\qquad\text{iff}\qquad x\,\alpha\,y\,\beta\,u\text{ and }v=m(y,x,u).\]
* We have the following inclusions ([7, Lem 8.6]): \[\Delta_{\alpha\beta}\wedge\eta_{0}\leq p_{1}^{-1}([\alpha,\beta])\qquad\qquad \Delta_{\alpha\beta}\wedge\eta_{1}\leq p_{0}^{-1}([\alpha,\beta])\qquad\qquad[ p_{0}^{-1}(\beta),p_{1}^{-1}(\beta)]\leq\Delta_{\alpha\beta}.\]
If \(\alpha\) is an abelian congruence, then we can easily observe some analogues in varieties with a difference term or weak-difference term. Given an equivalence relation \(\alpha\) on a set \(A\), there is an equivalence relation \(\hat{\alpha}\) on the set \(A(\alpha)\subseteq A\times A\) defined by \(\begin{bmatrix}x\\ y\end{bmatrix}\,\hat{\alpha}\,\begin{bmatrix}u\\ v\end{bmatrix}\Leftrightarrow x\;\alpha\;y\;\alpha\;u\;\alpha\;v\).
**Lemma 2.1**.: Let \(\mathcal{V}\) be a variety, \(A\in\mathcal{V}\) and \(\alpha,\beta\in\operatorname{Con}A\).
1. If \(\mathcal{V}\) has a difference term, then \[\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ d\end{bmatrix}\quad\Longrightarrow\quad b\;[\alpha,\beta]\;d.\]
2. If \(\mathcal{V}\) has a weak-difference term and \(\alpha\) is abelian, then \[(\exists a\in A)\ \begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ d\end{bmatrix}\quad\Longleftrightarrow\quad b\;[\beta,\alpha]\;d.\]
3. If \(\mathcal{V}\) has a weak-difference term and \(\alpha\) is abelian, then \(\Delta_{\alpha\alpha}=\Delta_{\alpha\gamma}\wedge\hat{\alpha}\) for any \(\alpha\leq\gamma\leq(0:\alpha)\).
4. If \(\mathcal{V}\) has a difference term and \(\alpha\) is abelian, the following are equivalent: 1. \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ d\end{bmatrix}\); 2. \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}\quad\text{and}\quad d\;[\alpha,\beta]\;m(b,a,c)\); 3. \(c\;\beta\;a\;\alpha\;b\quad\text{and}\quad d\;[\alpha,\beta]\;m(b,a,c)\).
Proof.: (1) Note \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ d\end{bmatrix}\) implies \((b,d)\in\alpha\wedge\beta\), and so \(d\;[\alpha,\beta]\;m(d,b,b)\) since \(m\) is a difference term. Applying the difference term to the sequence
\[\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ d\end{bmatrix}\,\ \begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ b\end{bmatrix}\,\ \begin{bmatrix}b\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}b\\ b\end{bmatrix}\ \ \Longrightarrow\ \ \begin{bmatrix}b\\ b\end{bmatrix}=\begin{bmatrix}m(a,a,b)\\ m(b,b,b)\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}m(a,a,b)\\ m(d,b,b)\end{bmatrix}=\begin{bmatrix}b\\ m(d,b,b)\end{bmatrix}.\]
Since \(\Delta_{\alpha\beta}=\operatorname{Tr}M(\alpha,\beta)\), we see that \((b,m(d,b,b))\in[\beta,\alpha]=[\alpha,\beta]\). Then \(b\;[\alpha,\beta]\;m(d,b,b)\;[\alpha,\beta]\;d\).
(2) For necessity, this is the same calculation as part (1) except \(m\) is the weak-difference term. We use \(m(a,a,b)=b\) because \((a,b)\in\alpha\) is abelian, and \((b,d)\in\alpha\wedge\beta\leq\alpha\) implies \(d=m(d,b,b)\), as well. For sufficiency, use the recursive generation of the TC-commutator by \(M(\alpha,\beta)\) matrices starting from the equality relation. The thing to note is that the elements in the matrices in \(M(\alpha,\beta)\) which witness any
inclusion \((a,b)\in[\alpha,\beta]\) will all be contained in a single \(\alpha\)-class on which the weak-difference term behaves as a Mal'cev term.
(3) Note \(\Delta_{\alpha\alpha}\leq\Delta_{\alpha\gamma}\wedge\hat{\alpha}\). Conversely, suppose \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\gamma}\wedge\hat{\alpha}\begin{bmatrix}c\\ d\end{bmatrix}\). Then the coordinates are all contained in a single \(\alpha\)-class. Apply the weak-difference term to the generators
\[\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}a\\ b\end{bmatrix}\,\ \begin{bmatrix}a\\ a\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}a\\ a\end{bmatrix}\,\ \begin{bmatrix}a\\ a\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}c\\ c\end{bmatrix}\quad\Rightarrow\quad\begin{bmatrix}a\\ b\end{bmatrix}=\begin{bmatrix}m(a,a,a)\\ m(b,a,a)\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}m(a,a,c)\\ m(b,a,c)\end{bmatrix}=\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}\]
using that \(\alpha\) is abelian. Then \(\Delta_{\alpha\alpha}\leq\Delta_{\alpha\gamma}\) implies \(\begin{bmatrix}c\\ d\end{bmatrix}\Delta_{\alpha\gamma}\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\gamma}\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}\), and part (2) yields \((d,m(b,a,c))\in[\gamma,\alpha]=0\).
(4) Assuming (a), we have \((a,b)\in\alpha\) and \((a,c)\in\beta\). Apply the difference term to the sequence
\[\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ b\end{bmatrix}\,\ \begin{bmatrix}a\\ a\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ a\end{bmatrix}\,\ \begin{bmatrix}a\\ a\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ c\end{bmatrix}\quad\Rightarrow\quad\begin{bmatrix}a\\ b\end{bmatrix}=\begin{bmatrix}m(a,a,a)\\ m(b,a,a)\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}m(a,a,c)\\ m(b,a,c)\end{bmatrix}=\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}.\]
Then applying part (1) to \(\begin{bmatrix}c\\ d\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}\) produces \((d,m(b,a,c))\in[\alpha,\beta]\). From (b), (c) follows immediately.
Now assume (c). By (2) above, there is \(x\in A\) such that \(\begin{bmatrix}x\\ d\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}x\\ m(b,a,c)\end{bmatrix}\). We also have \(m(b,a,c)\ \alpha\)\(m(a,a,c)=c\) and \(d\ \alpha\wedge\beta\ m(b,a,c)\ \beta\ m(b,a,a)=b\). The condition \(c\ \beta\ a\ \alpha\ b\) produces from the generators
\[\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ b\end{bmatrix}\,\ \begin{bmatrix}a\\ a\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}a\\ a\end{bmatrix}\,\ \begin{bmatrix}a\\ a\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ c\end{bmatrix}\quad\Rightarrow\quad\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}.\]
We then apply the difference term to the sequence
\[\begin{bmatrix}x\\ d\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}x\\ d\end{bmatrix}\,\ \begin{bmatrix}x\\ d\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}x\\ m(b,a,c)\end{bmatrix}\,\ \begin{bmatrix}b\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}m(b,a,c)\\ m(b,a,c)\end{bmatrix}\quad\Rightarrow\quad\begin{bmatrix}b\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}m(b,a,c)\\ d\end{bmatrix}.\]
We then use these last two relations and apply the difference term to derive
\[\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ m(b,a,c)\end{bmatrix}\,\ \begin{bmatrix}b\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}m(b,a,c)\\ m(b,a,c)\end{bmatrix}\,\ \begin{bmatrix}b\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}m(b,a,c)\\ d\end{bmatrix}\quad\Rightarrow\quad\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}c\\ d\end{bmatrix}.\]
**Lemma 2.2**.: Let \(\mathcal{V}\) be a variety, \(A\in\mathcal{V}\) and \(\alpha,\beta,\sigma\in\operatorname{Con}A\). Let \(r:A\to A\) be a \(\sigma\)-trace.
1. If \(\mathcal{V}\) has a difference term, then 1. \(A(\alpha)/\Delta_{\alpha 1}\) is abelian; 2. the set map \(\ \psi:A/[\alpha,\beta]\longrightarrow A(\alpha)/\Delta_{\alpha\beta}\,\times\,A/\sigma\) which is defined by \[\psi(x/[\alpha,\beta])=(\left\langle r(x),x\right\rangle/\Delta_{\alpha\beta}, x/\sigma)\] is injective.
2. If \(\mathcal{V}\) is congruence modular, then 1. for all \((a,b)\in\alpha\) and \(u\in A\), \(\begin{bmatrix}u\\ u\end{bmatrix}/\Delta_{\alpha 1}=\begin{bmatrix}a\\ d(a,b,b)\end{bmatrix}/\Delta_{\alpha 1}\) where \(d\) is the difference term for \(\mathcal{V}\); 2. \(A/[\alpha,1](\alpha/[\alpha,1])/\Delta_{\alpha/[\alpha,1]1}\approx A(\alpha)/ \Delta_{\alpha 1}\);
Proof.: (1a) Since \(\mathcal{V}\) has a difference term, the TC-commutator is join-additive in both coordinates [5]; therefore,
\[[1_{A(\alpha)},1_{A(\alpha)}]=[p_{0}^{-1}(1_{A}),p_{1}^{-1}(1_{A})] =[\eta_{1}\vee\Delta_{\alpha 1},\eta_{0}\vee\Delta_{\alpha 1}]\] \[=[\eta_{1},\eta_{0}]\vee[\eta_{1},\Delta_{\alpha 1}]\vee[\Delta_{ \alpha 1},\eta_{0}]\vee[\Delta_{\alpha 1},\Delta_{\alpha 1}]\] \[=[\eta_{1},\Delta_{\alpha 1}]\vee[\Delta_{\alpha 1},\eta_{0}]\vee[ \Delta_{\alpha 1},\Delta_{\alpha 1}]\] \[\leq\Delta_{\alpha 1}.\]
(1b) Since \(r:A\to A\) is a \(\sigma\)-trace, we have \(r(x)\ \sigma\ x\) and \(r(x)=r(y)\) iff \((x,y)\in\sigma\). Suppose \(\psi(x/[\alpha,\beta])=\psi(y/[\alpha,\beta])\). Then \(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\beta}=\begin{bmatrix}r(y)\\ y\end{bmatrix}/\Delta_{\alpha\beta}\) and \(x/\sigma=y/\sigma\). So we have \((x,y)\in\sigma\) and \(\begin{bmatrix}r(x)\\ x\end{bmatrix}\Delta_{\alpha\beta}\begin{bmatrix}r(y)\\ y\end{bmatrix}\). Then \(r(x)=r(y)\) implies \((x,y)\in[\alpha,\beta]\) by Lemma 2.1(1), and so \(x/[\alpha,\beta]=y/[\alpha,\beta]\).
(2a) For \((a,b)\in\alpha\), \(a\ [\alpha,\alpha]\ d(a,b,b)\) since \(d\) is a difference term. Since \([\alpha,\alpha]\leq[\alpha,1]\), by the remarks before Lemma 2.1 there is \(v\in A\) such that \(\begin{bmatrix}a\\ d(a,b,b)\end{bmatrix}\Delta_{\alpha 1}\begin{bmatrix}v\\ v\end{bmatrix}\Delta_{\alpha 1}\begin{bmatrix}u\\ u\end{bmatrix}\) using the generators of \(\Delta_{\alpha 1}\) for the second step.
(2b) Define \(\phi:A(\alpha)\to A/[\alpha,1](\alpha/[\alpha,1])/\Delta_{\alpha/[\alpha,1]1}\) by \(\phi\left(\begin{bmatrix}a\\ b\end{bmatrix}\right):=\begin{bmatrix}a/[\alpha,1]\\ b/[\alpha,1]\end{bmatrix}/\Delta_{\alpha/[\alpha,1]1}\). It is easy to see that \(\phi\) is a surjective homomorphism. We now calculate the kernel. We have \(\Delta_{\alpha 1}\subseteq\ker\phi\) since \(\phi\) identifies the generators of \(\Delta_{\alpha 1}\). Now assume \(\left(\begin{bmatrix}a\\ b\end{bmatrix},\begin{bmatrix}c\\ e\end{bmatrix}\right)\in\ker\phi\). Then \((a,b),(c,e)\in\alpha\) and
\[\begin{bmatrix}a/[\alpha,1]\\ b/[\alpha,1]\end{bmatrix}\Delta_{\alpha/[\alpha,1]1}\begin{bmatrix}c/[\alpha,1 ]\\ e/[\alpha,1]\end{bmatrix}.\]
Since \(\alpha/[\alpha,1]\) is central in \(A/[\alpha,1]\), we have \(d(b,a,c)/[\alpha,1]=e/[\alpha,1]\) and so \((e,d(b,a,c))\in[\alpha,1]\). Then by congruence modularity, \(\begin{bmatrix}u\\ u\end{bmatrix}\Delta_{\alpha 1}\begin{bmatrix}e\\ d(b,a,c)\end{bmatrix}\) for some \(u\in A\). Define \(x+_{0}y:=d(x,0,y)\) where \(0:=\begin{bmatrix}u\\ u\end{bmatrix}/\Delta_{\alpha 1}\) is the \(\Delta_{\alpha 1}\)-class containing the diagonal. Since \(A(\alpha)/\Delta_{\alpha 1}\) is abelian by (1a), \(x+_{0}y\) is the operation of an abelian group in which \(0=\begin{bmatrix}e\\ d(b,a,c)\end{bmatrix}/\Delta_{\alpha 1}\) is the identity.
Now observe
\[\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}e\\ c\end{bmatrix}/\Delta_{\alpha 1}=\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}a\\ a\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}e\\ c\end{bmatrix}/\Delta_{\alpha 1} =\begin{bmatrix}d(a,a,e)\\ d(b,a,c)\end{bmatrix}/\Delta_{\alpha 1}\] \[=\begin{bmatrix}e\\ d(b,a,c)\end{bmatrix}/\Delta_{\alpha 1}=0\]
implies \(\begin{bmatrix}e\\ c\end{bmatrix}/\Delta_{\alpha 1}=-\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha 1}\). Then
\[\begin{bmatrix}c\\ e\end{bmatrix}/\Delta_{\alpha 1}-\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha 1}=\begin{bmatrix}c\\ e\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}e\\ c\end{bmatrix}/\Delta_{\alpha 1} =\begin{bmatrix}d(c,c,e)\\ d(e,c,c)\end{bmatrix}/\Delta_{\alpha 1}\] \[=\begin{bmatrix}e\\ d(e,c,c)\end{bmatrix}/\Delta_{\alpha 1}=0.\]
This yields \(\ker\phi\subseteq\Delta_{\alpha 1}\); together with the inclusion \(\Delta_{\alpha 1}\subseteq\ker\phi\) established above, \(\ker\phi=\Delta_{\alpha 1}\) and \(\phi\) induces the desired isomorphism.
**Remark 2.3**.: There is freedom in the choice of \(\sigma\) in Lemma 2.2(1b). Injectivity is immediate if we choose \(\sigma=0_{A}\) since the second coordinates will always be distinct. If we take \(\sigma=1_{A}\), then \(A/[\alpha,\beta]\) embeds into \(A(\alpha)/\Delta_{\alpha\beta}\). The question becomes: for which choices of \(\sigma\) is \(\psi\) surjective and a homomorphism?
Suppose we have two algebras \(B\) and \(Q\) in the same signature \(\tau\) and a binary operation on \(B\) denoted by \(x+y\). Suppose further that for every operation symbol \(f\in\tau\) we have an operation \(T_{f}:Q^{\operatorname{ar}f}\to B\) we shall call the transfer of \(f\). We define a new algebra \(B\otimes^{T}Q\) over the universe of the direct product \(B\times Q\) where each operation symbol \(f\in\tau\) is interpreted by the rule
\[F_{f}\left((b_{1},q_{1}),\ldots,(b_{n},q_{n})\right):=\left(f(b_{1},\ldots,b_{n })+T_{f}(q_{1},\ldots,q_{n}),f(q_{1},\ldots,q_{n})\right).\]
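For instance, when \(B\) is an abelian group written additively and the signature is that of groups, the multiplication of \(B\otimes^{T}Q\) takes the form
\[F_{\cdot}\left((b_{1},q_{1}),(b_{2},q_{2})\right)=\left(b_{1}+b_{2}+T_{\cdot}(q_{1},q_{2}),\ q_{1}q_{2}\right),\]
the familiar twisting of a direct product by a factor set; Lemma 2.10(4) below makes the comparison with the classical construction \(K\rtimes_{0,f}Q\) precise.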
In order to prove Theorem 2.7, we will first establish a special case in the next proposition which extends [3, Prop 7.1] by considering an abelian congruence instead of the center. We observe the proof is almost the same.
**Proposition 2.4**.: Let \(\mathcal{V}\) be a variety with a difference term and \(A\in\mathcal{V}\). If \(\alpha\in\operatorname{Con}A\) is abelian, then
\[A/[\alpha,1_{A}]\approx A(\alpha)/\Delta_{\alpha 1}\otimes^{T}A/\alpha.\]
Proof.: We argue by two claims. That \(A(\alpha)/\Delta_{\alpha 1}\) is abelian is Lemma 2.2(1). Fix an \(\alpha\)-trace \(r:A\to A\). Define the map \(\psi:A/[\alpha,1]\to A(\alpha)/\Delta_{\alpha 1}\otimes^{T}A/\alpha\) by
\[\psi(x/[\alpha,1]):=\left(\genfrac{[}{]}{0.0pt}{}{r(x)}{x}\big{/}\Delta_{ \alpha 1}\,\ x/\alpha\right)\]
**Claim**: \(\psi\) is bijective.
Proof.: Injectivity of \(\psi\) is Lemma 2.2(1b) where we take \(\beta=1_{A}\) and \(\sigma=\alpha\). To show surjectivity, take \(\left(\genfrac{[}{]}{0.0pt}{}{a}{b}\big{/}\Delta_{\alpha 1}\,\ x/\alpha \right)\in A(\alpha)/\Delta_{\alpha 1}\times A/\alpha\). Let \(d\) be a difference term for \(\mathcal{V}\). Then applying the difference term to the sequence
\[\genfrac{[}{]}{0.0pt}{}{a}{b}\Delta_{\alpha 1}\genfrac{[}{]}{0.0pt}{}{a}{b}\,\ \genfrac{[}{]}{0.0pt}{}{a}{a}\Delta_{\alpha 1} \genfrac{[}{]}{0.0pt}{}{a}{a}\,\ \genfrac{[}{]}{0.0pt}{}{a}{a}\Delta_{\alpha 1} \genfrac{[}{]}{0.0pt}{}{r(x)}{r(x)}\]
produces
\[\genfrac{[}{]}{0.0pt}{}{a}{b}=\genfrac{[}{]}{0.0pt}{}{d(a,a,a)}{d(b,a,a)}\Delta _{\alpha 1}\genfrac{[}{]}{0.0pt}{}{d(a,a,r(x))}{d(b,a,r(x))}=\genfrac{[}{]}{0.0pt}{}{r( x)}{d(b,a,r(x))}\,. \tag{1}\]
Then \((a,b)\in\alpha\) implies \(r(a)=r(b)\) and so \(d(b,a,r(x))\)\(\alpha\)\(d(r(b),r(a),r(x))=r(x)\); thus, \(r(d(b,a,r(x)))=r(r(x))=r(x)\) and so \(d(b,a,r(x))/\alpha=x/\alpha\). Altogether we have
\[\psi\left(d(b,a,r(x))/[\alpha,1]\right)=\left(\genfrac{[}{]}{0.0pt}{}{r(x)}{d( b,a,r(x))}\,/\Delta_{\alpha 1}\,\ d(b,a,r(x))/\alpha\right)=\left(\genfrac{[}{]}{0.0pt}{}{a}{b}\big{/}\Delta_{ \alpha 1}\,\ x/\alpha\right).\]
We now define the transfer functions. For any basic operation \(f\) with \(\operatorname{ar}f=n\), define \(T_{f}:(A/\alpha)^{n}\to A(\alpha)/\Delta_{\alpha 1}\) by
\[T_{f}(x_{1}/\alpha,\ldots,x_{n}/\alpha):=\begin{bmatrix}r(f(x_{1},\ldots,x_{n}))\\ f(r(x_{1}),\ldots,r(x_{n}))\end{bmatrix}/\Delta_{\alpha 1}.\]
For the binary operation on \(A(\alpha)/\Delta_{\alpha 1}\), we take \(x+_{0}y:=d(x,0,y)\) where \(0:=\genfrac{[}{]}{0.0pt}{}{u}{u}\big{/}\Delta_{\alpha 1}\) is the \(\Delta_{\alpha 1}\)-class containing the diagonal. Since \(A(\alpha)/\Delta_{\alpha 1}\) is abelian by Lemma 2.2(1a), \(x+_{0}y\) is the operation of an abelian group in which \(0\) is the identity. Since \(\alpha\) is an abelian congruence, the difference term evaluates as a Mal'cev operation on \(\alpha\)-classes.
**Claim**: \(\psi\) is a homomorphism.
Proof.: Take \(\bar{x}=(x_{1},\ldots,x_{n})\in A^{n}\) and write \(r(\bar{x})=(r(x_{1}),\ldots,r(x_{n}))\). We calculate
\[F_{f}(\psi(\bar{x})) =\left(\begin{bmatrix}f(r(\bar{x}))\\ f(\bar{x})\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}r(f(\bar{x}))\\ f(r(\bar{x}))\end{bmatrix}/\Delta_{\alpha 1}\,\ f(\bar{x})/\alpha\right)\] \[=\left(\begin{bmatrix}f(r(\bar{x}))\\ f(\bar{x})\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}f(r(\bar{x}))\\ f(r(\bar{x}))\end{bmatrix}/\Delta_{\alpha 1}+\begin{bmatrix}r(f(\bar{x}))\\ f(r(\bar{x}))\end{bmatrix}/\Delta_{\alpha 1}\,\ f(\bar{x})/\alpha\right)\] \[=\left(\begin{bmatrix}d\left(f(r(\bar{x})),f(r(\bar{x})),r(f(\bar {x}))\right)\\ d\left(f(\bar{x}),f(r(\bar{x})),f(r(\bar{x}))\right)\end{bmatrix}\,\ f(\bar{x})/\alpha\right)\] \[=\left(\begin{bmatrix}r(f(\bar{x}))\\ f(\bar{x})\end{bmatrix}\,\ f(\bar{x})/\alpha\right)\] \[=\psi(f(\bar{x}))\]
because \(f(r(\bar{x}))\ \alpha\ f(\bar{x})\ \Rightarrow\ d\left(f(\bar{x}),f(r(\bar{x})),f(r(\bar{x})) \right)=f(\bar{x})\).
The proposition is established.
**Remark 2.5**.: The proof of Proposition 2.4 also shows the following: Let \(\mathcal{V}\) be a variety with a difference term and \(A\in\mathcal{V}\). If \(\alpha,\beta\in\operatorname{Con}A\) such that \(\alpha\leq\beta\) with \(\alpha\) abelian, then
\[A/[\alpha,\beta]\approx\operatorname{Sg}\left(\left\{\left(\begin{bmatrix}a \\ b\end{bmatrix}/\Delta_{\alpha\beta},c/\alpha\right):(a,c)\in\beta\right\} \right)\leq A(\alpha)/\Delta_{\alpha 1}\otimes^{T}A/\alpha.\]
We can recover [3, Prop 7.1] from the next corollary by taking \(\alpha\) to be the center and the variety to be congruence modular.
**Corollary 2.6**.: (see [3, Prop 7.1]) Let \(\mathcal{V}\) be a variety with a difference term, \(A\in\mathcal{V}\) and \(\alpha\in\operatorname{Con}A\) central. Then
\[A\approx A(\alpha)/\Delta_{\alpha 1}\otimes^{T}A/\alpha.\]
Projection onto the second factor \(p_{2}:A(\alpha)/\Delta_{\alpha 1}\otimes^{T}A/\alpha\to A/\alpha\) is a surjective homomorphism. If \(\psi\) effects the isomorphism in Corollary 2.6, then \(\ker p_{2}=\psi(\alpha)\).
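As a small illustration of Corollary 2.6, take \(A=\mathds{Z}_{4}\) and let \(\alpha\) be the central congruence associated with the subgroup \(\{0,2\}\). Then \(A(\alpha)/\Delta_{\alpha 1}\approx\mathds{Z}_{2}\) and \(A/\alpha\approx\mathds{Z}_{2}\), and for the lifting \(l(0)=0\), \(l(1)=1\) the transfer of the addition corresponds, under the identification of \(\{0,2\}\) with \(\mathds{Z}_{2}\), to the carry map
\[T_{+}(x,y)=\begin{cases}1&\text{if }x=y=1\\ 0&\text{otherwise},\end{cases}\]
so \(\mathds{Z}_{4}\approx\mathds{Z}_{2}\otimes^{T}\mathds{Z}_{2}\) recovers the classical description of \(\mathds{Z}_{4}\) as a central extension of \(\mathds{Z}_{2}\) by \(\mathds{Z}_{2}\) with a nontrivial \(2\)-cocycle.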
**Theorem 2.7**.: Let \(\mathcal{V}\) be a congruence modular variety, \(A\in\mathcal{V}\) and \(\alpha\in\operatorname{Con}A\). Then
\[A/[\alpha,1_{A}]\approx A(\alpha)/\Delta_{\alpha 1}\otimes^{T}A/\alpha.\]
Proof.: Apply Corollary 2.6 to the algebra \(A/[\alpha,1]\) and the central congruence \(\alpha/[\alpha,1]\) to conclude
\[A/[\alpha,1]\approx A/[\alpha,1](\alpha/[\alpha,1])/\Delta_{\alpha/[\alpha,1] 1}\otimes^{T}(A/[\alpha,1])/(\alpha/[\alpha,1]).\]
For the second factor, the \(2^{\text{nd}}\)-isomorphism theorem yields \((A/[\alpha,1])/(\alpha/[\alpha,1])\approx A/\alpha\). For the first factor, Lemma 2.2(2b) gives the isomorphism \(A/[\alpha,1](\alpha/[\alpha,1])/\Delta_{\alpha/[\alpha,1]1}\approx A(\alpha)/ \Delta_{\alpha 1}\).
To finish, we make the following observation. Suppose \(\psi:B\to C\) is an isomorphism and \(B\otimes^{T}Q\) is defined using the binary polynomial \(x+y:=t(x,y,b_{1},\ldots,b_{k})\) for some term \(t(x_{1},\ldots,x_{k+2})\). Then \(B\otimes^{T}Q\approx C\otimes^{T^{\prime}}Q\) where \(C\otimes^{T^{\prime}}Q\) is defined using the binary polynomial \(x\oplus y:=t(x,y,\psi(b_{1}),\ldots,\psi(b_{k}))\) and transfer \(T^{\prime}_{f}:=\psi\circ T_{f}:Q^{\operatorname{ar}f}\to C\) for each fundamental operation \(f\).
The following proposition is an extension of [3, Cor 7.2] to nilpotent algebras. We need to understand the evaluation of a term \(t\) in the algebra \(B\otimes^{T}Q\). Since the Mal'cev term is compatible with the operations
of the abelian algebra \(B\), by recursive evaluation along the composition tree of the term \(t\), we see that the interpretation of \(t\) in \(B\otimes^{T}Q\) is given by
\[F_{t} \left(\langle\vec{a},\vec{x}\rangle\right)\] \[=\left\langle t^{B}(\vec{a})+\sum f^{B}\left(T_{g_{1}}\big{(}h_{11 }^{Q}(\vec{y}_{11}),\ldots,h_{1n_{1}}^{Q}(\vec{y}_{1n_{1}})\big{)},\ldots,T_{g_ {m}}\big{(}h_{m1}^{Q}(\vec{y}_{m1}),\ldots,h_{mn_{m}}^{Q}(\vec{y}_{mn_{m}}) \big{)}\right),t^{Q}(\vec{x})\right\rangle\] \[=\left\langle t^{B}(\vec{a})+s(\vec{x}),t^{Q}(\vec{x})\right\rangle\]
where the sum is taken over all \(f,g_{i},h_{ij}\) such that \(f\) and \(h_{ij}\) are subterms of \(t\), \(g_{i}\in\tau\) are operation symbols or variables and from the composition tree of \(t\) we have \(t=f\left(g_{1}(h_{11},\ldots,h_{1n_{1}}),\ldots,g_{m}(h_{m1},\ldots,h_{mn_{m }})\right)\). The coordinates of the tuples \(\vec{y}_{ij}\) all belong to \(\vec{x}\). In the last line, we have written \(s(\vec{x})\) for the above sum and we note that it is an operation that depends only on the tuples \(\vec{x}\in Q^{\operatorname{ar}t}\). This suffices for the calculation in Lemma 2.8.
Define \([\alpha]_{1}:=[\alpha,\alpha]\) and recursively, \([\alpha]_{n+1}:=[\alpha,[\alpha]_{n}]\) for any \(\alpha\in\operatorname{Con}A\), \(A\in\mathcal{V}\). A congruence \(\alpha\) is \(n\)-step nilpotent if \([\alpha]_{n}=0\).
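For groups this recovers the usual notion: \([1_{G}]_{n}\) is the congruence associated with the term \(\gamma_{n+1}(G)\) of the lower central series, so a group is \(n\)-step nilpotent in this sense exactly when its nilpotency class is at most \(n\).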
**Lemma 2.8**.: Let \(B\) and \(Q\) be algebras in the same signature \(\tau\) and suppose \(B\) abelian with a Mal'cev term m(x,y,z). If the binary term \(m(x,0,y):=x+y\) for some \(0\in B\) is used to define \(\otimes^{T}\), then \([1,\ker q]=0\) where \(q:B\otimes^{T}Q\to Q\) is projection.
Proof.: Since \(B\) is abelian, the \(m(x,0,y)\) defines an abelian group operation. We verify the centrality condition. Fix a term \(F_{t}(\bar{x},\bar{y})\) and tuples \(\bar{a}=(\bar{a}^{1},\bar{a}^{2}),\bar{b}=(\bar{b}^{1},\bar{b}^{2}),(\bar{c}^ {1},\bar{c}^{2})=\bar{c}\ker q\ \bar{d}=(\bar{d}^{1},\bar{d}^{2})\) such that \(F_{t}(\bar{a},\bar{c})=F_{t}(\bar{a},\bar{d})\). This means
\[\left\langle t^{B}(\bar{a}^{1},\bar{c}^{1})+s(\bar{a}^{2},\bar{c}^{2}),t^{Q}( \bar{a}^{2},\bar{c}^{2})\right\rangle=\left\langle t^{B}(\bar{a}^{1},\bar{d}^ {1})+s(\bar{a}^{2},\bar{d}^{2}),t^{Q}(\bar{a}^{2},\bar{d}^{2})\right\rangle.\]
Since \(\bar{c}\ker q\ \bar{d}\), we have \(\bar{c}^{2}=\bar{d}^{2}\). This yields the equalities
\[s(\bar{a}^{2},\bar{c}^{2})=s(\bar{a}^{2},\bar{d}^{2})\qquad s(\bar{b}^{2},\bar {c}^{2})=s(\bar{b}^{2},\bar{d}^{2})\qquad t^{B}(\bar{b}^{2},\bar{c}^{2})=t^{B }(\bar{b}^{2},\bar{d}^{2}).\]
Then from \(t^{B}(\bar{a}^{1},\bar{c}^{1})+s(\bar{a}^{2},\bar{c}^{2})=t^{B}(\bar{a}^{1}, \bar{d}^{1})+s(\bar{a}^{2},\bar{d}^{2})\) we conclude \(t^{B}(\bar{a}^{1},\bar{c}^{1})=t^{B}(\bar{a}^{1},\bar{d}^{1})\). Since \(B\) is abelian, we then have \(t^{B}(\bar{b}^{1},\bar{c}^{1})=t^{B}(\bar{b}^{1},\bar{d}^{1})\). Together with the above equalities we conclude \(F_{t}(\bar{b},\bar{c})=F_{t}(\bar{b},\bar{d})\).
**Proposition 2.9**.: ( see [3, Cor 7.2]) Let \(\mathcal{V}\) be a variety with a difference term. An algebra \(A\in\mathcal{V}\) is n-step nilpotent if and only if it can be represented as a right-associated product
\[A\approx Q_{n}\otimes^{T_{n-1}}Q_{n-1}\otimes^{T_{n-2}}\cdots\otimes^{T_{1}}Q _{1}\]
for abelian algebras \(Q_{1},\ldots,Q_{n}\in\mathcal{V}\).
Proof.: Assume \(A\) is n-step nilpotent. Then \(0=[1]_{n}=[1,[1]_{n-1}]<[1,[1]_{n-2}]\). Set \(Q_{1}=A/[1,1]\) and for \(2\leq k\leq n\), define \(Q_{k}=A([1]_{k-1})/\Delta_{[1]_{k-1}1}\), which are abelian by Lemma 2.2(1a). By Theorem 2.7, we have
\[A/[1]_{k}\approx Q_{k}\otimes^{T}A/[1]_{k-1}\]
for \(2\leq k\leq n\). The associated product now follows since \(A/[1,1]\) is abelian.
Now, suppose we have
\[A\approx Q_{n}\otimes^{T_{n-1}}Q_{n-1}\otimes^{T_{n-2}}\cdots\otimes^{T_{1}}Q _{1}\]
a right-associated product for abelian algebras \(Q_{1},\ldots,Q_{n}\in\mathcal{V}\). Set \(B_{k}=Q_{k}\otimes^{T_{k-1}}\cdots\otimes^{T_{1}}Q_{1}\) for the right-associated product. Note we have surjective homomorphisms \(q_{k+1}:B_{k+1}\to B_{k}\) given by right-projections. This implies each \(B_{k}\in\mathcal{V}\) for \(1\leq k\leq n\). The argument is by induction on \(k\). Assume \(B_{k}\) is
\(k\)-step nilpotent and let \(\alpha=\ker q_{k+1}\). By recursive application of the homomorphism property in varieties with a difference term,
\[\left(\left[1_{B_{k+1}}\right]_{k}\vee\alpha\right)/\alpha =\left[\left(1_{B_{k+1}}\vee\alpha\right)/\alpha,\left(\left[1_{B_ {k+1}}\right]_{k-1}\vee\alpha\right)/\alpha\right]\] \[=\left[1_{B_{k}},\left[\left(1_{B_{k+1}}\vee\alpha\right)/\alpha, \left(\left[1_{B_{k+1}}\right]_{k-2}\vee\alpha\right)/\alpha\right]\right]\] \[=\left[1_{B_{k}},\left[1_{B_{k}},\left[\left(1_{B_{k+1}}\vee\alpha \right)/\alpha,\left(\left[1_{B_{k+1}}\right]_{k-3}\vee\alpha\right)/\alpha \right]\right]\right]\] \[\vdots\] \[=[1_{B_{k}}]_{k}=0;\]
thus, \(\alpha\geq[1]_{k}\). The hypothesis of Lemma 2.8 is satisfied by Lemma 2.2(1a) and so \(0=[1,\alpha]\geq[1,[1]_{k}]=[1]_{k+1}\); therefore, \(B_{k+1}\) is \((k+1)\)-step nilpotent. The argument is concluded since \(A\approx B_{n}\).
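For instance, the dihedral group \(D_{4}\) of order \(8\) is \(2\)-step nilpotent: its commutator subgroup coincides with its center and is isomorphic to \(\mathds{Z}_{2}\), so the construction in the proof represents it as \(D_{4}\approx\mathds{Z}_{2}\otimes^{T}(\mathds{Z}_{2}\times\mathds{Z}_{2})\), where the transfer \(T\) records the failure of a lifting of \(D_{4}/[D_{4},D_{4}]\) to be a homomorphism.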
It is instructive to consider the previous development in the case of groups; in particular, the \(\otimes^{T}\) construction recovers the reconstruction of central extensions where the transfer corresponds to the addition of a 2-cocycle. For a normal subgroup \(K\triangleleft G\), let \(\alpha_{K}=\{(x,y)\in G^{2}:xy^{-1}\in K\}\) denote the corresponding congruence. Recall, given a homomorphism \(\phi:Q\to\operatorname{Aut}K\) and group 2-cocycle \(f:Q\times Q\to K\) the group \(K\rtimes_{\phi,f}Q\) is defined over the set \(K\times Q\) with operation
\[(a,x)\cdot(b,y)=(a\cdot\phi(x)(b)\cdot f(x,y),x\cdot y).\]
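When \(K\) is abelian, associativity of this multiplication is equivalent to the familiar \(2\)-cocycle identity
\[\phi(x)\left(f(y,z)\right)\cdot f(x,yz)=f(x,y)\cdot f(xy,z),\]
which is the form that reappears below in the group specialization of equation (3) of Example 3.15.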
**Lemma 2.10**.: Let \(G\) be a group and \(K,H\triangleleft G\) with \(K\leq H\).
1. \(G(\alpha_{K})/\Delta_{\alpha_{K}\alpha_{H}}\approx K/[K,H]\rtimes_{\phi}G/H\) for a homomorphism \(\phi:G/H\to\operatorname{Aut}K/[K,H]\).
2. \(G(\alpha_{K})/\Delta_{\alpha_{K}1}\approx K/[K,G]\).
3. \(G/[K,G]\approx K/[K,G]\ \otimes^{T}G/K\) for some transfer \(T\).
4. For a central extension \(\pi\colon G\to Q\) with \(K=\ker\pi\), the transfers \(T_{\sigma}:Q^{\operatorname{ar}(\sigma)}\to G(\alpha_{K})/\Delta_{\alpha_{K}1}\) and their image under the isomorphism from (2) are given by * \(T_{\times}(x,y)=\begin{bmatrix}l(xy)\\ l(x)l(y)\end{bmatrix}/\Delta_{\alpha_{K}1}\longmapsto l(x)l(y)l(xy)^{-1}\) * \(T_{-1}(x)=\begin{bmatrix}l(x^{-1})\\ l(x)^{-1}\end{bmatrix}/\Delta_{\alpha_{K}1}\longmapsto l(x)^{-1}l(x^{-1})^{-1}\) * \(T_{1}(x)=\begin{bmatrix}0\\ 0\end{bmatrix}/\Delta_{\alpha_{K}1}\longmapsto 0\) ; therefore, \(G\approx K\otimes^{T}Q\approx K\rtimes_{0,f}Q\) for the 2-cocycle \(f(x,y)=l(x)l(y)l(xy)^{-1}\).
Proof.: (1) Let \(l:G/H\to G\) be a lifting associated to \(\alpha_{H}\). Define \(\phi:G/H\to\operatorname{Aut}K/[K,H]\) by \(\phi(xH)(y[K,H]):=l(xH)yl(xH)^{-1}[K,H]=y^{l(xH)}[K,H]\). Define \(\psi:K\rtimes_{\phi}G/H\to G(\alpha_{K})/\Delta_{\alpha_{K}\alpha_{H}}\) by \(\psi(k,q):=\begin{bmatrix}l(q)\\ kl(q)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\). From the generators of \(\Delta_{\alpha_{K}\alpha_{H}}\), we see that \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha_{K}\alpha_{H}}\begin{bmatrix}c\\ d\end{bmatrix}\) implies \((a,b),(c,d)\in\alpha_{K}\) and \((a,c),(b,d)\in\alpha_{H}\). Note \(\psi(k_{1},q_{1})=\psi(k_{2},q_{2})\Rightarrow\begin{bmatrix}l(q_{1})\\ k_{1}l(q_{1})\end{bmatrix}\Delta_{\alpha_{K}\alpha_{H}}\begin{bmatrix}l(q_{2})\\ k_{2}l(q_{2})\end{bmatrix}\Rightarrow(l(q_{1}),l(q_{2}))\in\alpha_{H}\Rightarrow q_{1}=q_{2}\) since \(l\) is a lifting for \(\pi:G\to G/H\). Then we must have \((k_{1},k_{2})\in\alpha_{[K,H]}\); thus, \(\ker\psi\leq\alpha_{[K,H]}\times 0_{G/H}\). Conversely, for \((k_{1},k_{2})\in\alpha_{[K,H]}\) and \(q\in G/H\), we have \(\begin{bmatrix}l(q)\\ k_{1}l(q)\end{bmatrix}\Delta_{\alpha_{K}\alpha_{H}}\begin{bmatrix}l(q)\\ k_{2}l(q)\end{bmatrix}\) which shows \(\alpha_{[K,H]}\times 0_{G/H}\leq\ker\psi\).
For surjectivity, take \(\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\in G(\alpha_{K})/\Delta_{\alpha_{K}\alpha_{H}}\). Then \(a\) and \(b\) are in the same coset of \(K\leq H\) and so there is \(q\in G/H\), \(h_{1},h_{2}\in H\) such that \(\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}=\begin{bmatrix}h_{1}l(q)\\ h_{2}l(q)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\). Since \((a,b)\in\alpha_{K}\), we
have \(h_{2}h_{1}^{-1}\in K\) and so can write \(h_{2}=kh_{1}\) for some \(k\in K\). Note \((l(q),h_{1}^{-1}l(q))\in\alpha_{H}\) which implies \(\begin{bmatrix}l(q)\\ l(q)\end{bmatrix}\Delta_{\alpha_{K}\alpha_{H}}\begin{bmatrix}h_{1}^{-1}l(q)\\ h_{1}^{-1}l(q)\end{bmatrix}\) as a generator of the congruence. Then
\[\begin{bmatrix}a\\ b\end{bmatrix}=\begin{bmatrix}h_{1}l(q)\\ h_{2}l(q)\end{bmatrix}=\begin{bmatrix}h_{1}\\ h_{2}\end{bmatrix}\cdot\begin{bmatrix}l(q)\\ l(q)\end{bmatrix}\Delta_{\alpha_{K}\alpha_{H}}\begin{bmatrix}h_{1}\\ h_{2}\end{bmatrix}\cdot\begin{bmatrix}h_{1}^{-1}l(q)\\ h_{1}^{-1}l(q)\end{bmatrix}=\begin{bmatrix}l(q)\\ h_{2}h_{1}^{-1}l(q)\end{bmatrix}=\begin{bmatrix}l(q)\\ kl(q)\end{bmatrix}.\]
Then \(\psi((k,q))=\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\).
We check \(\psi\) is a homomorphism:
\[\psi\left(k_{1},q_{1}\right)\cdot\psi(k_{2},q_{2}) =\begin{bmatrix}l(q_{1})\\ k_{1}l(q_{1})\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\cdot\begin{bmatrix}l(q_{2})\\ k_{2}l(q_{2})\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\] \[=\begin{bmatrix}l(q_{1})l(q_{2})\\ k_{1}l(q_{1})k_{2}l(q_{2})\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\] \[=\begin{bmatrix}f(q_{1},q_{2})l(q_{1}q_{2})\\ k_{1}k_{2}^{l(q_{1})}f(q_{1},q_{2})l(q_{1}q_{2})\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\] \[=\begin{bmatrix}l(q_{1}q_{2})\\ k_{1}k_{2}^{l(q_{1})}l(q_{1}q_{2})\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{H}}\] \[=\psi\left(k_{1}k_{2}^{l(q_{1})},q_{1}q_{2}\right)=\psi\left((k_{1},q_{1})\cdot(k_{2},q_{2})\right).\]
(2) This follows from (1) since \(G(\alpha_{K})/\Delta_{\alpha_{K}1}\approx K/[K,G]\rtimes_{\phi}G/G\approx K/[K,G]\).
(3) This is just Proposition 2.4.
(4) Use (1) and (2) and the fact that \(K\) is central.
**Remark 2.11**.: From Lemma 2.10(1), for \(K\) abelian we see that \(G(\alpha_{K})/\Delta_{\alpha_{K}\alpha_{K}}\) is isomorphic to the semidirect product induced by the extension \(\pi:G\to G/K\). The inverse to the map \(\psi\) is given by \(\sigma:\begin{bmatrix}x\\ y\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\longmapsto(yx^{-1},\pi(x))\).
At this point, it is possible to use Corollary 2.6 and the group example in Lemma 2.10 to define analogues of the machinery for abelian \(2^{\mathrm{nd}}\)-cohomology groups in varieties with a difference term. We will observe how to derive the central extension theory as a specialization to those varieties from the more general extension theory of affine datum developed in the next section.
## 3. Deconstruction/reconstruction with affine datum
We start by defining the notion of action. For \(0<n\in\mathds{N}\), write \([n]=\{1,\ldots,n\}\) for the initial segment of positive integers and \([n]^{*}=\{s\subseteq\{1,\ldots,n\}:0<|s|<n\}\) for the non-empty proper subsets of \([n]\). Given sets \(Q\) and \(A\) and \(s\in[n]^{*}\), we define the set of coordinates \([Q,A]^{s}=\{\vec{x}\in(Q\cup A)^{n}:\vec{x}(i)\in A\text{ for }i\in s,\ \vec{x}(i)\in Q\text{ for }i\notin s\}\).
**Definition 3.1**.: Let \(Q,A\) be sets and \(f\) an n-ary operation on \(Q\) with \(n\geq 2\). A _pairing_ of \(Q\) on \(A\) with respect to \(f\) is a choice of subsets \(\sigma(f)\subseteq[n]^{*}\) and a sequence of functions \(a(f,s):[Q,A]^{s}\to A\) with \(s\in\sigma(f)\).
Let \(Q\) be an algebra in the signature \(\tau\). For a set \(A\), an _action_\(Q*A\) is a sequence of pairings \(\{a(f,s):f\in\tau,\operatorname{ar}f\geq 2,s\in\sigma(f)\subseteq[\operatorname{ar }f]^{*}\}\).
In the following definition, we demonstrate how an action can be associated to a variety (or equational theory). Given a fixed action \(Q*A\), for any term \(t\) in the signature \(\tau\), there are no operations on \(A\) available with which to define an interpretation of \(t\) as an operation \(t^{A}:A^{\operatorname{ar}t}\to A\); however, via the pairings
\(a(f,s)\) associated to each \(f\in\tau\) with \(\operatorname{ar}f\geq 2\), it is sometimes possible to define tuples \(\bar{c}\in(Q\cup A)^{\operatorname{ar}t}\) on which to define a value \(t(\bar{c})\in(Q\cup A)\) using compositions of the available operations on \(Q\) and pairings. If we consider the composition tree of the term with variables on the leaves, the idea is to choose values in \(Q\cup A\) for the variables so that when propagated through the nodes of the tree yields a meaningful evaluation using either the function symbols \(f\in\tau\) for the algebra \(Q\) or the pairing functions \(a(f,s)\).
**Definition 3.2**.: Let \(Q\) be an algebra in the signature \(\tau\) and \(A\) a set. Let \(Q*A\) be an action. For each term \(f\), we will define a subset of sequences \(\bar{a}\in C_{f}\subseteq(Q\cup A)^{\operatorname{ar}f}\)_compatible_ with \(f\) and an operation \(f^{*}:C_{f}\to Q\cup A\). If \(f=p_{i}\) is the n-ary i-th projection, then \(C_{f}:=(Q\cup A)^{n}\) and \(f^{*}:=p_{i}\). If \(f\in\tau\) with \(\operatorname{ar}f\leq 1\), then define \(C_{f}:=Q^{\operatorname{ar}f}\) and \(f^{*}(\bar{a}):=f^{Q}(\bar{a})\). If \(f\in\tau\) with \(\operatorname{ar}f\geq 2\), then define \(C_{f}:=Q^{\operatorname{ar}f}\cup\bigcup_{s\in\sigma(f)}[Q,A]^{s}\) and
\[f^{*}(\bar{a}):=\begin{cases}f^{Q}(\bar{a})&,\bar{a}\in Q^{\operatorname{ar}f} \\ a(f,s)(\bar{a})&,\bar{a}\in[Q,A]^{s}\end{cases}\]
for \(\bar{a}\in C_{f}\). Let \(f(x_{1},\ldots,x_{m})=h(g_{1}(x_{11},\ldots,x_{1n_{1}}),\ldots,g_{m}(x_{m1}, \ldots,x_{mn_{m}}))\) where \(h\in\tau\), \(g_{i}\) are terms or projections and \(C_{g_{i}}\) are the compatible sequences for \(g_{i}\). Then take \(\bar{a}\in C_{f}\) provided \(\bar{a}_{i}=(a_{i1},\ldots,a_{in_{i}})\in C_{g_{i}}\) for \(i=1,\ldots,m\) and \((g_{1}^{*}(\bar{a}_{1}),\ldots,g_{m}^{*}(\bar{a}_{m}))\in C_{h}\). Then for \(\bar{a}\in C_{f}\), define \(f^{*}(\bar{a}):=h^{Q}(g_{1}^{*}(\bar{a}_{1}),\ldots,g_{m}^{*}(\bar{a}_{m}))\) in the case \((g_{1}^{*}(\bar{a}_{1}),\ldots,g_{m}^{*}(\bar{a}_{m}))\in Q^{m}\) and \(f^{*}(\bar{a}):=a(h,s)(g_{1}^{*}(\bar{a}_{1}),\ldots,g_{m}^{*}(\bar{a}_{m}))\) in the case \((g_{1}^{*}(\bar{a}_{1}),\ldots,g_{m}^{*}(\bar{a}_{m}))\in[Q,A]^{s}\) for some \(s\in[\operatorname{ar}h]^{*}\).
Let \(\operatorname{var}t\) be the set of variables of the term \(t\). For an equation \(f(\vec{x})=g(\vec{y})\), a pair \((\vec{a},\vec{b})\in C_{f}\times C_{g}\) is _appropriate_ if there is an assignment \(\epsilon:\operatorname{var}f\cup\operatorname{var}g\to Q\cup A\) such that \((\epsilon(\vec{x}),\epsilon(\vec{y}))=(\vec{a},\vec{b})\) and \(f^{*}(\vec{a})\in A\Leftrightarrow g^{*}(\vec{b})\in A\).
Let \(\Sigma\) be a set of equations in the signature \(\tau\). An action \(Q*A\) is _compatible_ with \(\Sigma\) if for any equation \(f(\vec{x})\approx g(\bar{y})\in\Sigma\) and appropriate pair \((\vec{a},\vec{b})\in C_{f}\times C_{g}\) we have \(f^{*}(\bar{a})=g^{*}(\bar{b})\). We write \(Q*A\vDash_{*}\Sigma\) if the action \(Q*A\) is compatible with \(\Sigma\).
If \(\mathcal{V}\) is a variety in the same signature \(\tau\), then an action \(Q*A\) is _compatible_ with \(\mathcal{V}\) if \(Q*A\vDash_{*}\operatorname{Id}\mathcal{V}\).
We emphasize that for an action \(Q*A\) with an algebra \(Q\) in the signature \(\tau\), the compatible sequences for a term are determined according to the composition tree in the fundamental operations given by \(\tau\). Note from Definition 3.2, \(Q^{\operatorname{ar}f}\subseteq C_{f}\) for a term \(f\) and so an action \(Q*A\) compatible with \(\mathcal{V}\) implies \(Q\in\mathcal{V}\). An action \(Q*A\) is _full_ if \(\sigma(f)=[\operatorname{ar}f]^{*}\) for each \(f\in\tau\) with \(\operatorname{ar}f\geq 2\).
**Example 3.3**.: Let \(G\in\mathcal{G}\) the variety of groups and \(X\) a set. Asserting an action \(G*X\) is compatible with \(\mathcal{G}\) recovers the classic definition of group-action. For the binary group operation \(f\), we write different notation for the actions by \(a(f,1)(x,g)=x\circ g\) and \(a(f,2)(g,x)=g\cdot x\). For the terms \(f(x,f(y,z))\) and \(f(f(x,y),z)\), the compatible sequences can be seen to be
\[C_{f(f(x,y),z)}=C_{f(x,f(y,z))}=\{(g_{1},g_{2},g_{3}),(g_{1},g_{2},x),(x,g_{1},g_{2}),(g_{1},x,g_{2}):g_{1},g_{2},g_{3}\in G,x\in X\}.\]
Then enforcing compatibility with associativity we calculate
\[g_{1}\cdot(g_{2}\cdot x)=a(f,2)(g_{1},a(f,2)(g_{2},x)) =f(x,f(y,z))^{*}(g_{1},g_{2},x)\] \[=f(f(x,y),z)^{*}(g_{1},g_{2},x)=a(f,2)(f^{G}(g_{1},g_{2}),x)=(g_{1} g_{2})\cdot x.\]
Similarly, the consequences of compatibility of a full action with the associative equation of \(\mathcal{G}\) are
1. \((g_{1}g_{2})g_{3}=g_{1}(g_{2}g_{3})\) (associativity in the group \(G\in\mathcal{G}\))
2. \(g_{1}\cdot(g_{2}\cdot x)=(g_{1}g_{2})\cdot x\) (left-action of \(G\) on \(X\))
3. \((x\circ g_{1})\circ g_{2}=x\circ(g_{1}g_{2})\) (right-action of \(G\) on \(X\))
4. \((g_{1}\cdot x)\circ g_{2}=g_{1}\cdot(x\circ g_{2})\)
By choosing an action with \(\sigma(f)=\{2\}\) we would recover only consequences (1) and (2) above, or an action with \(\sigma(f)=\{1\}\) would recover only (1) and (3) above.
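Taken together, consequences (2)-(4) say precisely that \(X\) carries a left \(G\)-action and a right \(G\)-action which commute with one another, i.e., that \(X\) is a \(G\)-\(G\)-biset.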
**Example 3.4**.: It is possible for an action to be vacuously compatible with a set of equations in the following sense. Take \(\tau=\{f\}\) a single binary symbol and \(\Sigma=\{f(x,y)=f(y,x)\}\). For an action \(Q*A\) such that \(\sigma(f)=\{2\}\), we have \(C_{f(x,y)}\cap C_{f(y,x)}=Q^{2}\). If \(Q\vDash f(x,y)=f(y,x)\), then the action is compatible with \(\Sigma\) independently of how \(a(f,2):Q\times A\to A\) is defined.
In the case of an action \(Q*A\) on a partial-algebra \(A\), there is a modification in the definition of a compatible sequence to reflect the fact that we have partial access to operations on the universe \(A\). For \(f\in\tau\), the corresponding partial-operation on \(A\) may only be defined on a subset \(\operatorname{dom}f\subseteq A^{\operatorname{ar}f}\); consequently, we define \(C_{f}:=\operatorname{dom}f\cup Q^{\operatorname{ar}f}\cup\bigcup_{s\in\sigma( f)}[Q,A]^{s}\) and
\[f^{*}(\bar{a}):=\begin{cases}f^{Q}(\bar{a})&,\bar{a}\in Q^{\operatorname{ar} f}\\ a(f,s)(\bar{a})&,\bar{a}\in[Q,A]^{s}\\ f^{A}(\bar{a})&,\bar{a}\in\operatorname{dom}f\end{cases}\]
for \(\bar{a}\in C_{f}\). The rest of Definition 3.2 follows in a similar manner with appropriate allowances made for the partial-operations over \(A\).
The notion of action is explicitly concerned with the operations of the signature which are binary or of higher arity. The nullary and unary operations will be encoded into a new structure comprising datum which will play the role of the normal subgroup associated to the kernel of the extension. The information in the new structure will have to compensate for the fact that congruences in general are not determined by a privileged class as is the case of normal subgroups and kernels of group homomorphisms.
**Definition 3.5**.: Fix signature \(\tau\) and ternary term symbol \(m\). Define \(A^{\alpha,\tau}=\left\langle A,\alpha,\{f^{\Delta}:f\in\tau\}\right\rangle\) where
* \(A=\left\langle A,m\right\rangle\) is an algebra in the single operation symbol \(m\);
* \(\alpha\in\operatorname{Con}A\);
* \(\{f^{\Delta}:f\in\tau\}\) is a sequence of operations \(f^{\Delta}:A(\alpha)/\Delta_{\alpha\alpha}\times\delta(A)^{\operatorname{ar} f-1}\to A(\alpha)/\Delta_{\alpha\alpha}\) where \(\delta:A\to A(\alpha)/\Delta_{\alpha\alpha}\) is the diagonal map.
In order to make the treatment of the operations in reconstructed extensions more uniform, the partial operations \(f^{\Delta}\) in the above definition are taken over the full signature \(\tau\); however, we shall see that we are really concerned with only the nullary and unary operations of \(\tau\). For higher arity operation symbols \(f\in\tau\), we shall declare \(f^{\Delta}\) to agree with action terms \(a(f,1)\) and for unary and nullary operation symbols we shall take \(f^{\Delta}\) to be independent of all but the first coordinate. This will be made formal in the definition of datum (Definition 3.9).
For a surjective map \(\pi:A\to Q\) with \(\alpha=\ker\pi\), there is a surjective map \(\rho:A(\alpha)\to Q\) with \(\hat{\alpha}=\ker\rho\). Note that a lifting \(l:Q\to A\) for \(\pi\) has the property that \((x,l(q))\in\alpha\Leftrightarrow\rho\left(\begin{bmatrix}l(q)\\ x\end{bmatrix}\right)=q\). This differs from the notion in group theory in that \(l\) is no longer taken as a right-inverse for \(\rho\) as a set map, but that \(\rho\circ(\delta\circ l)=\operatorname{id}\). The \(\alpha\)-trace \(r:A\to A\) associated to the lifting is defined by \(r=l\circ\pi\).
In analogy with the group case, an important distinction arises when an action \(Q*K\) operates by automorphisms; that is, there is an induced homomorphism \(\gamma:Q\to\operatorname{Aut}K\) such that \(q*k=\gamma(q)(k)\) for all \(q\in Q,k\in K\). This is what Definition 3.6 addresses.
**Definition 3.6**.: Fix \(A^{\alpha,\tau}\), a ternary term \(m\), an algebra \(Q\) in the signature \(\tau\) and surjective map \(\rho:A(\alpha)\to Q\) with \(\hat{\alpha}=\ker\rho\). For any \(u\in A\), define the polynomial operation \(x+_{u}y:=m\left(x,\delta(u),y\right)\). An action \(Q*A(\alpha)/\Delta_{\alpha\alpha}\) is _homomorphic_ with respect to \(m\) if the following holds:
* for all \(f\in\tau\) with \(\operatorname{ar}f\geq 2\) and \(s\in\sigma(f)\), all \(\vec{u}\in A^{s}\) and all \(\vec{a},\vec{b}\in\left(A(\alpha)/\Delta_{\alpha\alpha}\right)^{s}\) such that \(a_{i}\ \hat{\alpha}/\Delta_{\alpha\alpha}\ \delta(u_{i})\ \hat{\alpha}/\Delta_{\alpha\alpha}\ b_{i}\) for \(i\in s\),
* if we let \(\vec{x},\vec{y},\vec{z}\in[Q,A(\alpha)/\Delta_{\alpha\alpha}]^{s}\) such that \(x_{i}=y_{i}=z_{i}\in Q\) for \(i\not\in s\) and \(x_{i}=a_{i}\), \(y_{i}=b_{i}\) and \(z_{i}=a_{i}+_{u_{i}}b_{i}\) for \(i\in s\),
then we have
\[a(f,s)(\vec{z})=a(f,s)(\vec{x})+_{v}a(f,s)(\vec{y})\]
where \(v=l(f^{A/\alpha}(q_{1},\ldots,q_{i},u_{1},\ldots,u_{k}))\) for any lifting \(l:Q\to A\) associated to \(\rho\).
An action \(Q*A\) is _unary_ if \(\sigma(f)\subseteq[\operatorname{ar}f]^{*}\) consists of exactly all the singleton subsets. In the case of unary actions, the homomorphic property is more directly written as
\[a(f,i)(q_{1},\ldots,a+_{u}b,\ldots,q_{n})=a(f,i)(q_{1},\ldots,a,\ldots,q_{n})+ _{v}a(f,i)(q_{1},\ldots,b,\ldots,q_{n})\]
for \(\operatorname{ar}f=n\) and \(v=l(f^{A/\alpha}(q_{1},\ldots,u,\ldots,q_{n}))\).
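Informally, in the group setting of Lemma 2.10, where an element \(q\in Q\) acts on the abelian kernel by conjugation, the displayed condition becomes \(q*(k_{1}k_{2})=(q*k_{1})(q*k_{2})\); that is, each element of \(Q\) acts by an endomorphism of the kernel.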
**Definition 3.7**.: The structure \(A^{\alpha,\tau}=\left\langle A,\alpha,\{f^{\Delta}:f\in\tau\}\right\rangle\) is _homomorphic_ with respect to \(m\) if for each \(f\in\tau\) and \(a\)\(\hat{\alpha}/\Delta_{\alpha\alpha}\)\(\delta(u)\)\(\hat{\alpha}/\Delta_{\alpha\alpha}\)\(b\), \(\vec{x}\in A^{\operatorname{ar}f-1}\) we have
\[f^{\Delta}\left(a+_{u}b,\delta(\vec{x})\right)=f^{\Delta}\left(a,\delta(\vec{ x})\right)+_{v}f^{\Delta}\left(b,\delta(\vec{x})\right)\]
where \(v=l(f^{A/\alpha}(u,\vec{x}))\) for any lifting \(l:A/\alpha\to A\) associated to \(\rho:A(\alpha)\to Q\).
**Remark 3.8**.: Note Definitions 3.6 and 3.7 do not depend on the lifting. If \(l,l^{\prime}:Q\to A\) are liftings associated to \(\rho\), then \((l(q),l^{\prime}(q))\in\alpha\) for all \(q\in Q\) which implies \(\delta(l(q))=\delta(l^{\prime}(q))\) since \(\left(\begin{bmatrix}l(q)\\ l(q)\end{bmatrix},\begin{bmatrix}l^{\prime}(q)\\ l^{\prime}(q)\end{bmatrix}\right)\) is a generator of \(\Delta_{\alpha\alpha}\).
When referring to homomorphic actions, the operation \(m\) will consistently be taken to be the same operation given by \(A^{\alpha,\tau}\). A ternary operation \(m:A^{3}\to A\) is a _ternary abelian group operation_ if it is a Mal'cev operation which is _self-commuting_; that is,
\[A \vDash m(x,y,y)=x\wedge m(y,y,x)=x\] \[A \vDash m(m(x_{1},x_{2},x_{3}),m(y_{1},y_{2},y_{3}),m(z_{1},z_{2}, z_{3}))=m(m(x_{1},y_{1},z_{1}),m(x_{2},y_{2},z_{2}),m(x_{3},y_{3},z_{3})).\]
A ternary abelian group operation \(m\) may also be referred to as an _affine operation_ since for any choice of \(a\in A\), the definitions \(x+y:=m(x,a,y)\) and \(-x:=m(a,x,a)\) yield an abelian group \(\left\langle A,+,-,a\right\rangle\) in which \(m(x,y,z)=x-y+z\)[1, Thm 7.34]. We now arrive at the definition of datum and what it means for an extension to realize the datum.
**Definition 3.9**.: Fix a signature \(\tau\). A triple \((Q,A^{\alpha,\tau},*)\) is _datum_ if the following holds:
1. \(A^{\alpha,\tau}\) is homomorphic;
2. \(Q\) is an algebra in the signature \(\tau\);
3. The ternary operation symbol \(m\) referenced in \(A^{\alpha,\tau}\) also has an interpretation in \(Q\) which is idempotent and there is a surjective homomorphism \(\rho:A(\alpha)\to\left\langle Q,m\right\rangle\) such that \(\hat{\alpha}=\ker\rho\);
4. The action \(Q*A(\alpha)/\Delta_{\alpha\alpha}\) is homomorphic with the following property: for any \(f\in\tau\) with \(n=\operatorname{ar}f\geq 2\) and \(\rho\left(\begin{bmatrix}y_{i}\\ x_{i}\end{bmatrix}\right)=q_{i}\) for \(i=1,\ldots,n\), we have \[f^{\Delta}\left(\begin{bmatrix}y_{1}\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(x_{2}),\ldots,\delta(x_{n}) \right)\,\hat{\alpha}/\Delta_{\alpha\alpha}\;a(f,s)\left(\vec{z}\right)\] for all \(s\in\sigma(f)\) and \(\vec{z}\in[Q,A(\alpha)/\Delta_{\alpha\alpha}]^{s}\) such that \(z_{i}=q_{i}\) for \(i\not\in s\) and \(z_{i}=\begin{bmatrix}y_{i}\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha}\) for \(i\in s\).
We say \((Q,A^{\alpha,\tau},*)\) is _affine datum_ if in addition
1. \(\alpha\in\operatorname{Con}\left\langle A,m\right\rangle\) is an abelian congruence and \(m\) is a ternary abelian group operation when restricted on each \(\alpha\)-class;
2. the action is unary and \[f^{\Delta}\left(\begin{bmatrix}y_{1}\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(x_{2}),\ldots,\delta(x_{n})\right)=a(f,1)\left(\begin{bmatrix}y_{1}\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},q_{2},\ldots,q_{n}\right)\] for any \(f\in\tau\) with \(n=\operatorname{ar}f\geq 2\).
**Remark 3.10**.: The notion of datum requires comment. In the case of groups, from the datum \((\mathbf{K},Q,\phi)\) one can always construct two extensions of \(K\) by \(Q\): the direct product \(p_{2}:K\times Q\to Q\) with \(\alpha_{K}=\ker p_{2}\) and the semidirect product \(\pi:K\rtimes_{\phi}Q\to Q\) with \(\alpha_{K}=\ker\pi\). In this way, we can treat the direct product and semidirect product as already explicitly provided by the datum; as a consequence, the datum provides a universe on which to define the algebras for possible extensions. The partial structure \(A^{\alpha,\tau}\) includes the underlying set \(A\) from which we can calculate the set \(A(\alpha)/\Delta_{\alpha\alpha}\) which will serve as the universe of the reconstructed extensions.
It is not required that there is an interpretation of the signature \(\tau\) on the set \(A\) - only that there is a ternary operation \(m\) which is defined on \(A\) so that \(A(\alpha)/\Delta_{\alpha\alpha}\) is calculated explicitly from the congruence \(\alpha\) of the algebra \(\langle A,m\rangle\). The membership in the congruence \(\Delta_{\alpha\alpha}\) is then only determined by \(m\). The idea is to preserve this particular aspect from the example where \(\alpha\) is an abelian congruence of an algebra \(A\in\mathcal{V}\) in a variety with weak-difference term; in this case, the universe \(A(\alpha)/\Delta_{\alpha\alpha}\) can be reconstructed from the resulting datum by the rule \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}c\\ d\end{bmatrix}\Leftrightarrow d=m(b,a,c)\) since \(\Delta_{\alpha\alpha}\leq\hat{\alpha}\). This is data which resides in \(A^{\alpha,\tau}\).
**Definition 3.11**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. An algebra \(B\)_realizes_ the datum if there is an extension \(\pi:B\to Q\) with \(\beta=\ker\pi\), a bijection \(i:B(\beta)/\Delta_{\beta\beta}\to A(\alpha)/\Delta_{\alpha\alpha}\) and a lifting \(l:Q\to B\) such that for all \(f\in\tau\):
* if \(n=\operatorname{ar}f\geq 2\), \(q_{1},\ldots,q_{n}\in Q\) and \(x_{1},\ldots,x_{n}\in B(\beta)/\Delta_{\beta\beta}\), then \[a(f,i)\left(q_{1},\ldots,i\circ x_{i},\ldots,q_{n}\right)=i\circ f^{B(\beta)/ \Delta_{\beta\beta}}\left(\delta\circ l(q_{1}),\ldots,x_{i},\ldots,\delta \circ l(q_{n})\right);\]
* if \(\operatorname{ar}f\leq 1\) and \(x\in A(\alpha)/\Delta_{\alpha\alpha}\), then \(f^{\Delta}(i(x))=i\circ f^{B(\beta)/\Delta_{\beta\beta}}(x)\).
For special choices of the operations in \(T\), the following algebras will serve as our reconstruction of extensions which realize affine datum. This will be shown in Theorem 3.21.
**Definition 3.12**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\). Let \(T=\{T_{f}:f\in\tau\}\) a sequence of functions such that \(T_{f}:(Q)^{\operatorname{ar}f}\to A(\alpha)/\Delta_{\alpha\alpha}\). Fix a lifting \(l:Q\to A\) associated to \(\rho:A(\alpha)\to Q\). Define an algebra \(A_{T}(Q,A^{\alpha,\tau},*)\) on the universe \(A(\alpha)/\Delta_{\alpha\alpha}\) of the datum with operations
\[F_{f}\left(\begin{bmatrix}a_{1}\\ b_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\ldots,\begin{bmatrix}a_{n}\\ b_{n}\end{bmatrix}/\Delta_{\alpha\alpha}\right):=f^{\Delta}\left(\begin{bmatrix} a_{1}\\ b_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(b_{2}),\ldots,\delta(b_{n})\right)\] \[+_{u}\ \sum_{i=2}^{n}a(f,i)\left(q_{1},\ldots,q_{i},\begin{bmatrix}a_{i+ 1}\\ b_{i+1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(b_{i+2}),\ldots,\delta(b_{n})\right)\] \[+_{u}\ \ T_{f}(q_{1},\ldots,q_{n}) (f\in\tau).\]
a left-associated composition where \(u=l(f^{Q}(q_{1},\ldots,q_{n}))\) and \(\rho\left(\begin{bmatrix}a_{i}\\ b_{i}\end{bmatrix}\right)=q_{i}\).
Definition 3.2, Definition 3.13 and Definition 3.14 will constitute our scheme which, given membership \(Q\in\mathcal{U}\) in a variety, will guarantee the extension \(A_{T}(Q,A^{\alpha,\tau},*)\in\mathcal{U}\):
\[\text{identities in }\mathcal{U} \Rightarrow\ \text{expressions in }\{T_{f}:f\in\tau\}\text{ and action terms }Q*A(\alpha)/\Delta_{\alpha\alpha}\] \[\Rightarrow\ \text{identities satisfied by }A_{T}(Q,A^{\alpha,\tau},*).\]
Given a term \(t(\bar{x})\) in the signature \(\tau\), we will be interested in its interpretation in the algebra \(A_{T}(Q,A^{\alpha,\tau},*)\). By using the homomorphism property of the action terms and compatibility of the ternary operation \(m\) over the blocks of \(\hat{\alpha}/\Delta_{\alpha\alpha}\), we can distribute the sum in Definition 3.12 at every point in the composition tree of \(t^{A_{T}(Q,A^{\alpha,\tau},*)}\). The end result is a large sum in mixed action terms and functions in \(T\) indexed by operation symbols of \(\tau\). The next definition will describe a way to separate the terms in the sum which use some operations from \(T\).
**Definition 3.13**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\) and \(T=\{T_{f}:f\in\tau\}\) a sequence of functions \(T_{f}:(Q)^{\operatorname{ar}f}\to A(\alpha)/\Delta_{\alpha\alpha}\). Let \(m\) be the ternary operation on \(A\) included in the datum and \(x+_{u}y=m(x,\delta(u),y)\) the binary operation defined on \(A(\alpha)/\Delta_{\alpha\alpha}\) for \(u\in A\).
For each term \(t\) in the signature \(\tau\), we inductively define along the composition tree of the term a set \(E_{t}\) of operations of the form \(\nu:Q^{\operatorname{ar}t}\to A(\alpha)/\Delta_{\alpha\alpha}\). Then we define an operation \(t^{\partial,T}:Q^{\operatorname{ar}t}\to A(\alpha)/\Delta_{\alpha\alpha}\) given by
\[t^{\partial,T}(\vec{q})=\sum_{\nu\in E_{t}}\nu(\vec{q}) \tag{2}\]
where the sum is the left-associated composition over \(+_{u}\) where \(u=l\left(t^{Q}(\vec{q})\right)\) for any lifting \(l:Q\to A\) associated to \(\rho:A(\alpha)\to Q\). By Remark 3.8, the definition does not depend on the choice of the lifting \(l\).
If \(t=f\in\tau\), define \(E_{t}=\{T_{f}\}\). Suppose \(t(\vec{y})=f(\sigma(\vec{x}))\) is derived from a fundamental operation \(f\in\tau\) by identifying or permuting variables according to the surjective map \(\sigma:\operatorname{var}f\to\operatorname{var}t\). Define \(E_{t}=\{T_{f}\circ\sigma\}\) where \(T_{f}\circ\sigma:Q^{\operatorname{ar}t}\to A(\alpha)/\Delta_{\alpha\alpha}\) denotes the corresponding operation evaluated according to \(\sigma\); that is, for each evaluation \(\epsilon:\operatorname{var}t\to Q\) there is an evaluation \(\epsilon^{\prime}:\operatorname{var}f\to Q\) such that \(\epsilon^{\prime}(\vec{x})=\epsilon\circ\sigma(\vec{x})\) and \((T_{f}\circ\sigma)(\epsilon(\vec{y}))=T_{f}(\epsilon^{\prime}(\vec{x}))\).
Suppose \(t(y_{1},\ldots,y_{n})=f(g_{1}(\bar{z}_{1}),\ldots,g_{m}(\bar{z}_{m}))\) where \(f\in\tau\) with \(\operatorname{ar}f=m\) and each \(g_{i}(\bar{z}_{i})\) is a term or variable. The set \(E_{t}\) will consist of three different types of operations. For the first type, we take
\[\nu(\vec{q}):=T_{f}\left(g_{1}^{Q}(\bar{q}_{1}),\ldots,g_{m}^{Q}(\bar{q}_{m})\right)\]
where \(\bar{q}_{i}\in Q^{\operatorname{ar}g_{i}}\) is the restriction of \(\vec{q}\) to the variables of \(g_{i}\). For the second type, it may be that \(g_{1}\) is not a variable. We take operations of the form
\[\nu(\vec{q}):=f^{\Delta}\left(\mu(\bar{q}_{1}),\delta\circ l(g_{2}^{Q}(\bar{q }_{2})),\ldots,\delta\circ l(g_{m}^{Q}(\bar{q}_{m}))\right).\]
where \(\mu\in E_{g_{1}}\) and \(\vec{q}_{i}\) is the restriction of \(\vec{q}\) to the variables of \(g_{i}\). For the third type, for any \(2\leq k\leq m\) such that \(g_{k}\) is not a variable, we take operations of the form
\[\nu(\vec{q}):=a(f,k)\left(g_{1}^{Q}(\bar{q}_{1}),\ldots,g_{k-1}^{Q}(\bar{q}_{k-1}),\mu(\vec{q}_{k}),g_{k+1}^{Q}(\bar{q}_{k+1}),\ldots,g_{m}^{Q}(\bar{q}_{m})\right)\]
where \(\mu\in E_{g_{k}}\) and \(\vec{q}_{i}\) is the restriction of \(\vec{q}\) to the variables of \(g_{i}\).
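For illustration, take a single binary operation symbol \(f\) and the term \(t(x,y,z)=f(f(x,y),z)\). Here \(g_{1}=f(x,y)\) is not a variable and \(g_{2}=z\) is a variable, so \(E_{t}\) consists of the first-type operation \(T_{f}(f^{Q}(q_{1},q_{2}),q_{3})\) and the second-type operation \(f^{\Delta}\left(T_{f}(q_{1},q_{2}),\delta\circ l(q_{3})\right)\); consequently,
\[t^{\partial,T}(q_{1},q_{2},q_{3})=f^{\Delta}\left(T_{f}(q_{1},q_{2}),\delta\circ l(q_{3})\right)+_{u}T_{f}(f^{Q}(q_{1},q_{2}),q_{3})\]
with \(u=l\left(f^{Q}(f^{Q}(q_{1},q_{2}),q_{3})\right)\). This is the left-hand side of equation (3) in Example 3.15 below; the analogous computation for \(f(x,f(y,z))\) produces its right-hand side.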
**Definition 3.14**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum and \(\Sigma\) a set of equations in the same signature \(\tau\). A sequence of operations \(T=\{T_{f}:f\in\tau\}\) where \(T_{f}:Q^{\operatorname{ar}f}\to A(\alpha)/\Delta_{\alpha\alpha}\) is a _2-cocycle compatible with \(\Sigma\)_ if
1. for all \(f\in\tau\), \(T_{f}(q_{1},\ldots,q_{\operatorname{ar}f})\ \hat{\alpha}/\Delta_{\alpha\alpha}\)\(f^{\Delta}\left(\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha\alpha},\delta\circ l(q_{2}),\ldots,\delta \circ l(q_{\operatorname{ar}f})\right)\) where \(\rho\left(\begin{bmatrix}a\\ b\end{bmatrix}\right)=q_{1}\);
2. for all \(t(\vec{x})=g(\vec{y})\in\Sigma\) and evaluations \(\epsilon:\operatorname{var}t\cup\operatorname{var}g\to Q\) we have \[t^{\partial,T}(\epsilon(\vec{x}))=g^{\partial,T}(\epsilon(\vec{y})).\]
It follows immediately from the definition that if \(T\) is a 2-cocycle compatible with \(\operatorname{Id}\mathcal{V}\), then \(T\) is a 2-cocycle compatible with \(\operatorname{Id}\mathcal{U}\) for any variety \(\mathcal{U}\geq\mathcal{V}\).
**Example 3.15**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\) and \(l:Q\to A\) a lifting associated to \(\rho:A(\alpha)\to Q\). For a single binary operation symbol \(f\), enforcing compatibility with associativity yields
\[f^{\Delta}\left(T_{f}(q_{1},q_{2}),\delta\circ l(q_{3})\right)+_{u}T_{f}(f^{Q}( q_{1},q_{2}),q_{3})=a(f,1)\left(q_{1},T_{f}(q_{2},q_{3})\right)+_{v}T_{f}(q_{1},f^{Q}( q_{2},q_{3})) \tag{3}\]
where \(u=l\left(f(f(q_{1},q_{2}),q_{3})\right)\) and \(v=l\left(f(q_{1},f(q_{2},q_{3}))\right)\).
In the case of an abelian normal subgroup \(K\triangleleft G\) with \(Q=G/K\), this specializes to the classic 2-cocycle identity in the following manner. By Lemma 2.10, there is the isomorphism \(\psi:G(\alpha_{K})/\Delta_{\alpha_{K}\alpha_{K}}\ni\begin{bmatrix}b\\ a\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\longmapsto\left\langle ab^{-1}, \pi(b)\right\rangle\in K\rtimes_{\phi}Q\). If we fix a lifting \(l:Q\to G\) for the canonical surjection \(\pi:G\to Q\), then this means each \(\Delta_{\alpha_{K}\alpha_{K}}\)-class is uniquely represented as \(\begin{bmatrix}l(x)\\ kl(x)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\) for some \(k\in K\), \(x\in Q\). For the binary operation we set \(T(x,y):=\begin{bmatrix}l(xy)\\ l(x)l(y)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\) and define action terms by
\[a(\cdot,1)\left(\begin{bmatrix}l(y)\\ kl(y)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}},x\right):=\begin{bmatrix}l(y)\\ kl(y)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\cdot\delta\circ l(x)=\begin{bmatrix}l(y)l(x)\\ kl(y)l(x)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\longmapsto\left\langle k,l(x)l(y)\right\rangle\] \[a(\cdot,2)\left(x,\begin{bmatrix}l(y)\\ kl(y)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\right):=\delta\circ l(x)\cdot\begin{bmatrix}l(y)\\ kl(y)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}=\begin{bmatrix}l(x)l(y)\\ l(x)kl(y)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\longmapsto\left\langle x*k,l(x)l(y)\right\rangle\]
where \(x*k=l(x)kl(x)^{-1}\) is the conjugation action. Recall that the group 2-cocycle is defined by \(l(x)l(y)=f(x,y)l(xy)\).
Then using \(v=l(xyz)=u\) and \(m(x,y,z)=xy^{-1}z\) we apply \(\psi\) to calculate
\[T(x,y)\cdot\delta(l(z))+_{u}T(xy,z) =\left(\begin{bmatrix}l(xy)\\ l(x)l(y)\end{bmatrix}\begin{bmatrix}l(z)\\ l(z)\end{bmatrix}\right)/\Delta_{\alpha_{K}\alpha_{K}}+_{u}\begin{bmatrix}l(xyz) \\ l(xy)l(z)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\] \[=m\left(\begin{bmatrix}l(xy)l(z)\\ l(x)l(y)l(z)\end{bmatrix},\begin{bmatrix}l(xyz)\\ l(xyz)\end{bmatrix},\begin{bmatrix}l(xyz)\\ l(xy)l(z)\end{bmatrix}\right)/\Delta_{\alpha_{K}\alpha_{K}}\] \[=m\left(\begin{bmatrix}l(xy)l(z)\\ f(x,y)l(xy)l(z)\end{bmatrix},\begin{bmatrix}l(xyz)\\ l(xyz)\end{bmatrix},\begin{bmatrix}l(xyz)\\ f(xy,z)l(xyz)\end{bmatrix}\right)/\Delta_{\alpha_{K}\alpha_{K}}\] \[\longmapsto m\left(\left\langle f(x,y),l(xyz)\right\rangle,\left\langle 0,l(xyz)\right\rangle,\left\langle f(xy,z),l(xyz)\right\rangle\right)\] \[=\left\langle f(x,y),l(xyz)\right\rangle\cdot\left\langle 0,l(xyz) \right\rangle^{-1}\cdot\left\langle f(xy,z),l(xyz)\right\rangle\] \[=\left\langle f(x,y)+f(xy,z),l(xyz)\right\rangle\]
and
\[a(\cdot,1)(x,T(y,z))+_{v}T(x,yz) =\begin{bmatrix}l(x)l(yz)\\ l(x)l(y)l(z)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}+_{v}\begin{bmatrix}l(xyz) \\ l(x)l(yz)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\] \[=m\left(\begin{bmatrix}l(x)l(yz)\\ l(x)f(y,z)l(yz)\end{bmatrix},\begin{bmatrix}l(xyz)\\ l(xyz)\end{bmatrix},\begin{bmatrix}l(xyz)\\ f(x,yz)l(xyz)\end{bmatrix}\right)/\Delta_{\alpha_{K}\alpha_{K}}\] \[\longmapsto m\left(\left\langle x*f(y,z),l(xyz)\right\rangle, \left\langle 0,l(xyz)\right\rangle,\left\langle f(x,yz),l(xyz)\right\rangle\right)\] \[=\left\langle x*f(y,z),l(xyz)\right\rangle\cdot\left\langle 0,l(xyz) \right\rangle^{-1}\cdot\left\langle f(x,yz),l(xyz)\right\rangle\] \[=\left\langle x*f(y,z)+f(x,yz),l(xyz)\right\rangle.\]
The equality in Eq.(3) and the above calculations yield
\[f(x,y)+f(xy,z)=x*f(y,z)+f(x,yz)\]
which is the group theoretic 2-cocycle identity.
We would like to consider the interpretation of a term in an algebra \(A_{T}(Q,A^{\alpha,\tau},*)\) defined from affine datum. Inductively along the composition tree of the term, we can use the homomorphism property of the action and datum operations to distribute across the operations in Definition 3.12 for each fundamental symbol of the signature. While a term interpreted in the algebra \(A_{T}(Q,A^{\alpha,\tau},*)\) with domain \(A(\alpha)/\Delta_{\alpha\alpha}\) may have identified variables in different operation symbols of its composition tree, the interpretations of the operation symbols in Definition 3.12 depend on all the coordinates and have domains over \(Q\cup\delta(A)\cup A(\alpha)/\Delta_{\alpha\alpha}\). So while a repeated variable \(x\) in the term has a fixed evaluation \(x\mapsto\left[\begin{matrix}a\\ b\end{matrix}\right]/\Delta_{\alpha\alpha}\), the interpretation of the term is expanded into sums of operations in which \(x\) has evaluations among the values \(\left\{\rho\left(\left[\begin{matrix}a\\ b\end{matrix}\right]\right),\delta\circ l\circ\rho\left(\left[\begin{matrix}a\\ b\end{matrix}\right]\right),\left[\begin{matrix}a\\ b\end{matrix}\right]/\Delta_{\alpha\alpha}\right\}\) depending on which coordinates the variable \(x\) appears in.
The next definitions relate the two different domains. The first step is, given a term, to produce a corresponding term with the same composition tree but with no repeated variables.
**Definition 3.16**.: Let \(f(\vec{x})\) be a term in the signature \(\tau\). Let \(f^{\sigma}\) be a term in the signature \(\tau\) and variables \(X_{\omega}=\{x_{0},x_{1},\ldots\}\) which has the same composition tree as \(f\) except the leaves have no repeated variables; in addition, reading from left-to-right the variables of \(f^{\sigma}\) are an initial segment of \(X_{\omega}\). There is a surjective map \(\sigma:\operatorname{var}f^{\sigma}\to\operatorname{var}f\) such that \(f^{\sigma}(\sigma(\operatorname{var}f^{\sigma}))=f(\vec{x})\).
Note for any evaluation \(\epsilon:\operatorname{var}f\to A\), there is a corresponding evaluation \(\epsilon^{\sigma}:\operatorname{var}f^{\sigma}\to A\) such that \(\epsilon\circ\sigma=\epsilon^{\sigma}\).
Fix \(\rho:A(\alpha)\to Q\) associated with affine datum \((Q,A^{\alpha,\tau},*)\) and a lifting \(l:Q\to A\). Let \(t(\vec{x})\) be a term in the same signature with \(n=|\operatorname{var}t^{\sigma}|\) and \(\epsilon:\operatorname{var}t\to A(\alpha)/\Delta_{\alpha\alpha}\) an evaluation. An evaluation \(\mu:\operatorname{var}t^{\sigma}\to Q\cup\delta(A)\cup A(\alpha)/\Delta_{\alpha\alpha}\) is _consistent_ with \(\epsilon(\vec{x})\) if \(\mu(x_{i})\in\{\rho\circ\epsilon^{\sigma}(x_{i}),\delta\circ l\circ\rho\circ\epsilon^{\sigma}(x_{i}),\epsilon^{\sigma}(x_{i})\}\) for each \(x_{i}\in\operatorname{var}t^{\sigma}\). Define \(L(t,\epsilon(\vec{x}))=\{\mu(\operatorname{var}t^{\sigma})\in C_{t^{\sigma}}:\mu\ \text{is consistent with}\ \epsilon(\vec{x})\}\). This is the set of evaluations which will allow us to describe equations in semidirect products realizing affine datum.
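For instance, if \(f\) is binary and \(t(x)=f(x,x)\), then \(t^{\sigma}=f(x_{0},x_{1})\) with \(\sigma(x_{0})=\sigma(x_{1})=x\); for an evaluation \(\epsilon(x)=\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\), an evaluation \(\mu\) of \(\operatorname{var}t^{\sigma}\) is consistent with \(\epsilon(x)\) when each of \(\mu(x_{0}),\mu(x_{1})\) is one of \(\rho\left(\begin{bmatrix}a\\ b\end{bmatrix}\right)\), \(\delta\circ l\circ\rho\left(\begin{bmatrix}a\\ b\end{bmatrix}\right)\) or \(\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\), subject to membership in \(C_{t^{\sigma}}\).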
**Definition 3.17**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum and \(\Sigma\) a set of identities in the same signature. The action in the datum is _weakly compatible_ with \(\Sigma\) if for all \(f=g\in\Sigma\),
\[\sum_{\mu\in L(f,\epsilon(\vec{x}))}(f^{\sigma})^{*}(\mu(\operatorname{var}f^ {\sigma}))=\sum_{\mu\in L(g,\epsilon(\vec{x}))}(g^{\sigma})^{*}(\mu( \operatorname{var}g^{\sigma})).\]
The action is _weakly compatible_ with a variety \(\mathcal{V}\) if it is weakly compatible with \(\operatorname{Id}\mathcal{V}\).
**Lemma 3.18**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\) and \(t(\vec{x})\) a term. For any evaluation \(\epsilon(\vec{x})=\left(\left[\begin{matrix}a_{1}\\ b_{1}\end{matrix}\right]/\Delta_{\alpha\alpha},\ldots,\left[\begin{matrix}a_{n} \\ b_{n}\end{matrix}\right]/\Delta_{\alpha\alpha}\right)\), the interpretation of the term \(t\) in the algebra \(A_{T}(Q,A^{\alpha,\tau},*)\) is represented by
\[F_{t}\left(\left[\begin{matrix}a_{1}\\ b_{1}\end{matrix}\right]/\Delta_{\alpha\alpha},\ldots,\left[\begin{matrix}a_{n} \\ b_{n}\end{matrix}\right]/\Delta_{\alpha\alpha}\right)=\sum_{\mu\in L(t,\epsilon( \vec{x}))}(t^{\sigma})^{*}(\mu(\operatorname{var}t^{\sigma}))\ +_{u}\ t^{\partial,T}\left(\rho\circ\epsilon(\vec{x})\right). \tag{4}\]
where \(u=l\left(t^{\partial}(\rho\circ\epsilon(\vec{x}))\right)\) for any lifting \(l\) associated to the datum.
Proof.: By induction on the composition tree of \(t\). This is precisely what Definition 3.2, Definition 3.13 and Definition 3.16 accomplish.
The next theorem guarantees abelian congruences in varieties with a weak-difference term are sources of affine datum by decomposing an extension with an abelian kernel into appropriate datum. It will be convenient to work in terms of \(\alpha\)-traces rather than liftings associated to datum.
**Theorem 3.19**.: Let \(\mathcal{V}\) be a variety with a weak-difference term in the signature \(\tau\). Let \(A\in\mathcal{V}\) and surjective \(\pi:A\to Q\) with \(\alpha=\ker\pi\in\operatorname{Con}A\) abelian. Then there exists homomorphic \(A^{\alpha,\tau}\), a \(2\)-cocycle
\(T=\{T_{f}:f\in\tau\}\) compatible with \(\operatorname{Id}\mathcal{V}\) and homomorphic action \(Q*A(\alpha)/\Delta_{\alpha\alpha}\) compatible with \(\operatorname{Id}\mathcal{V}\) constituting affine datum such that \(A\approx A_{T}(Q,A^{\alpha,\tau},*)\).
Proof.: We have \(Q\approx A/\alpha\). Fix a lifting \(l:Q\to A\) and associated \(\alpha\)-trace \(r:A\to A\) so that \(r=l\circ\pi\) and \(\pi\circ l=\operatorname{id}\). We also see that \(\rho:A(\alpha)\to Q\) defined by \(\rho:\begin{bmatrix}a\\ b\end{bmatrix}\mapsto\pi(a)\) is a surjective homomorphism such that \(\hat{\alpha}=\ker\rho\).
Define \(\phi:A\to A_{T}(Q,A^{\alpha,\tau},*)\) by \(\phi(x)=\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\). Let \(m\) be a weak-difference term of \(\mathcal{V}\). Since \(\alpha\) is abelian, \(m\) is affine on \(\alpha\)-blocks. Note \(\Delta_{\alpha\alpha}\leq\hat{\alpha}\). By Lemma 2.1(2), we see that \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}c\\ d\end{bmatrix}\Leftrightarrow d=m(b,a,c)\); therefore, the universe of the algebra \(A(\alpha)/\Delta_{\alpha\alpha}\) is uniquely reconstructed from \(m\). It also follows that \(\begin{bmatrix}a\\ b\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}a\\ d\end{bmatrix}\Rightarrow b=d\). These facts guarantee that \(\phi\) is bijective.
For each \(f\in\tau\), define \(T_{f}:(Q)^{\operatorname{ar}f}\to A(\alpha)/\Delta_{\alpha\alpha}\) by
\[T_{f}(y_{1},\ldots,y_{n}):=\begin{bmatrix}r(f(x_{1},\ldots,x_{n}))\\ f(r(x_{1}),\ldots,r(x_{n}))\end{bmatrix}/\Delta_{\alpha\alpha} \tag{5}\]
for any choice of \(x_{i}\in A\) such that \(\pi(x_{i})=y_{i}\in Q\). Since \(r(x)=r(y)\) if and only if \((x,y)\in\alpha\), \(T_{f}\) is well-defined; in this way, we can also treat \(T\) as a function with domain \(A\) which depends only on the \(\alpha\)-classes. For each \(f\in\tau\) and \(1\leq i\leq n=\operatorname{ar}f\), define the action \(Q*A(\alpha)/\Delta_{\alpha\alpha}\) according to the rule
\[a(f,i)(q_{1},\ldots,q_{i-1},x,q_{i+1},\ldots,q_{n}):=f\left(\delta\circ l(q_{ 1}),\ldots,\delta\circ l(q_{i-1}),x,\delta\circ l(q_{i+1}),\ldots,\delta \circ l(q_{n})\right). \tag{6}\]
for all \(x\in A(\alpha)/\Delta_{\alpha\alpha}\), \(q_{1},\ldots,q_{n}\in Q\) and define
\[f^{\Delta}(x,\delta(a_{2}),\ldots,\delta(a_{n})):=f(x,\delta(a_{2}),\ldots,\delta(a_{n})) \tag{7}\]
for all \(x\in A(\alpha)/\Delta_{\alpha\alpha}\), \(a_{2},\ldots,a_{n}\in A\). It follows that the action and \(A^{\alpha,\tau}\) are homomorphic since \(\alpha\) is abelian and \(m\) is Mal'cev on \(\alpha\)-blocks. The definitions also show that (AD2) from Definition 3.9 is satisfied.
The fact that \(\phi\) is a homomorphism is a result of the following expansion: if we first set \(u_{i}=f(r(x_{1}),\ldots,r(x_{i}),x_{i+1},\ldots,x_{n})\) we have
\[\begin{split}\begin{bmatrix}r(f(x_{1},\ldots,x_{n}))\\ f(x_{1},\ldots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}&=\begin{bmatrix}m\big(f(r(x_{1}),\ldots,r(x_{n})),f(r(x_{1}),\ldots,r(x_{n})),r(f(x_{1},\ldots,x_{n}))\big)\\ m\big(f(x_{1},\ldots,x_{n}),f(r(x_{1}),\ldots,r(x_{n})),f(r(x_{1}),\ldots,r(x_{n}))\big)\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}f(r(x_{1}),\ldots,r(x_{n}))\\ f(x_{1},\ldots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}+_{u_{n}}\begin{bmatrix}r(f(x_{1},\ldots,x_{n}))\\ f(r(x_{1}),\ldots,r(x_{n}))\end{bmatrix}/\Delta_{\alpha\alpha}\\ &=\begin{bmatrix}m\big(f(r(x_{1}),x_{2},\ldots,x_{n}),f(r(x_{1}),x_{2},\ldots,x_{n}),f(r(x_{1}),\ldots,r(x_{n}))\big)\\ m\big(f(x_{1},\ldots,x_{n}),f(r(x_{1}),x_{2},\ldots,x_{n}),f(r(x_{1}),x_{2},\ldots,x_{n})\big)\end{bmatrix}/\Delta_{\alpha\alpha}\\ &\quad+_{u_{n}}T_{f}(\pi(x_{1}),\ldots,\pi(x_{n}))\\ &=\begin{bmatrix}f(r(x_{1}),x_{2},\ldots,x_{n})\\ f(x_{1},x_{2},\ldots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}+_{u_{n}}\begin{bmatrix}f(r(x_{1}),r(x_{2}),\ldots,r(x_{n}))\\ f(r(x_{1}),x_{2},\ldots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}\\ &\quad+_{u_{n}}T_{f}(\pi(x_{1}),\ldots,\pi(x_{n}))\\ &=f\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(x_{2}),\ldots,\delta(x_{n})\right)\end{split}\]
\[+_{u_{1}}\;a(f,1)\left(\pi(x_{1}),\left[\begin{matrix}r(x_{2})\\ x_{2}\end{matrix}\right]/\Delta_{\alpha\alpha},\delta(x_{3}),\ldots,\delta(x_{n} )\right)\] \[+_{u_{2}}\left[\begin{matrix}f(r(x_{1}),r(x_{2}),r(x_{3}),\ldots,r( x_{n}))\\ f(r(x_{1}),r(x_{2}),x_{3},\ldots,x_{n})\end{matrix}\right]/\Delta_{\alpha\alpha} +_{u_{n}}\;T_{f}(\pi(x_{1}),\ldots,\pi(x_{n}))\] \[\vdots\] \[=f^{\Delta}\left(\left[\begin{matrix}r(x_{1})\\ x_{1}\end{matrix}\right]/\Delta_{\alpha\alpha},\delta(x_{2}),\ldots,\delta(x_{n })\right)+_{u_{n}}\] \[\sum_{i=2}^{n}a(f,i)\left(\pi(x_{1}),\ldots,\pi(x_{i}),\left[ \begin{matrix}r(x_{i})\\ x_{i}\end{matrix}\right]/\Delta_{\alpha\alpha},\delta(x_{i+2}),\ldots,\delta(x _{n})\right)\] \[+_{u_{n}}\;T_{f}(\pi(x_{1}),\ldots,\pi(x_{n}))\] \[=F_{f}\left(\left[\begin{matrix}r(x_{1})\\ x_{1}\end{matrix}\right],\ldots,\left[\begin{matrix}r(x_{n})\\ x_{n}\end{matrix}\right]\right)\]
since \(\delta(u_{i})=\delta(u_{j})=\delta(l(f(\pi(x_{1}),\ldots,\pi(x_{n}))))\) for \(i\neq j\).
We now show both the action and 2-cocycle \(T\) are compatible with \(\operatorname{Id}\mathcal{V}\). Note that in the expansion for \(f\in\tau\) previously calculated,
\[F_{f}\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\ldots,\begin{bmatrix}r(x_{n})\\ x_{n}\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}r(f(x_{1},\ldots,x_{n}))\\ f(x_{1},\ldots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha} \tag{8}\] \[=\begin{bmatrix}f(r(x_{1}),\ldots,r(x_{n}))\\ f(x_{1},\ldots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}+_{u}\;T_{f}(\pi(x_{1}),\ldots,\pi(x_{n})) \tag{9}\] \[=f\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix},\ldots,\begin{bmatrix}r(x_{n})\\ x_{n}\end{bmatrix}\right)/\Delta_{\alpha\alpha}+_{u}\;T_{f}(\pi(x_{1}),\ldots,\pi(x_{n})) \tag{10}\]
In the last line, Eq.(10), the \(\Delta_{\alpha\alpha}\)-class of the first term is expanded using the action. This is a reflection of the fact that in the algebra \(A(\alpha)/\Delta_{\alpha\alpha}\) the action represents a "twisting" of the product structure.
Take \(t(\bar{x})=g(\bar{y})\in\operatorname{Id}\mathcal{V}\). Fix an assignment \(\epsilon:\operatorname{var}\;t\cup\operatorname{var}\;g\to A\). By the isomorphism \(\phi\) we have \(t^{A_{T}(Q,A^{\alpha,\tau},*)}(\phi\circ\epsilon(\bar{x}))=g^{A_{T}(Q,A^{\alpha,\tau},*)}(\phi\circ\epsilon(\bar{y}))\) since \(A\in\mathcal{V}\). We can use Eq.(8), Eq.(10) and the homomorphism property of the action to recursively expand the interpretation of the term \(t\) in the algebra \(A_{T}(Q,A^{\alpha,\tau},*)\) in order to write
\[t^{A_{T}(Q,A^{\alpha,\tau},*)}(\phi\circ\epsilon(\bar{x}))=t^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\phi\circ\epsilon(\bar{x})\right)\;+_{u}\;t^{\partial,T}(\pi\circ\epsilon(\bar{x})) \tag{11}\] \[=t^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\begin{bmatrix}r(a_{1})\\ a_{1}\end{bmatrix},\ldots,\begin{bmatrix}r(a_{n})\\ a_{n}\end{bmatrix}\right)/\Delta_{\alpha\alpha}\;+_{u}\;t^{\partial,T}(\pi\circ\epsilon(\vec{x})) \tag{12}\]
where \(u=l(t^{Q}(\pi\circ\epsilon(\vec{x})))\) and \(\left(\left[\begin{matrix}r(a_{1})\\ a_{1}\end{matrix}\right]/\Delta_{\alpha\alpha},\ldots,\left[\begin{matrix}r(a_ {n})\\ a_{n}\end{matrix}\right]/\Delta_{\alpha\alpha}\right)=\phi\circ\epsilon(\vec{x})\). The second term in Eq.(12) incorporates all the appearances of the transfers \(T_{f}\). By comparison with Lemma 3.18, we see that
\[t^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\left[\begin{matrix}r(a_{1})\\ a_{1}\end{matrix}\right],\ldots,\left[\begin{matrix}r(a_{n})\\ a_{n}\end{matrix}\right]\right)/\Delta_{\alpha\alpha}=\sum_{\mu\in L(t, \epsilon(\vec{x}))}(t^{\sigma})^{*}(\mu(\operatorname{var}t^{\sigma})).\]
A similar calculation produces
\[g^{A_{T}(Q,A^{\alpha,\tau},*)}(\phi\circ\epsilon(\bar{y}))=g^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\begin{bmatrix}r(b_{1})\\ b_{1}\end{bmatrix},\ldots,\begin{bmatrix}r(b_{m})\\ b_{m}\end{bmatrix}\right)/\Delta_{\alpha\alpha}\;+_{v}\;g^{\partial,T}(\pi\circ\epsilon(\vec{y}))\] \[=\sum_{\nu\in L(g,\epsilon(\vec{x}))}(g^{\sigma})^{*}(\nu(\operatorname{var}g^{\sigma}))\;+_{v}\;g^{\partial,T}(\pi\circ\epsilon(\vec{y})).\]
where \(v=l(g^{Q}(\pi\circ\epsilon(\bar{y})))\) and \(\left(\begin{bmatrix}r(b_{1})\\ b_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\ldots,\begin{bmatrix}r(b_{m})\\ b_{m}\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\phi\circ\epsilon(\vec{y})\).
Since \(A(\alpha)\in\mathcal{V}\), we conclude that
\[\sum_{\mu\in L(t,\epsilon(\vec{x}))}(t^{\sigma})^{*}(\mu(\operatorname{var}t^{\sigma}))=t^{\sigma}\left(\begin{bmatrix}r(a_{1})\\ a_{1}\end{bmatrix},\ldots,\begin{bmatrix}r(a_{n})\\ a_{n}\end{bmatrix}\right)/\Delta_{\alpha\alpha}=g^{\sigma^{\prime}}\left(\begin{bmatrix}r(b_{1})\\ b_{1}\end{bmatrix},\ldots,\begin{bmatrix}r(b_{m})\\ b_{m}\end{bmatrix}\right)/\Delta_{\alpha\alpha} \tag{13}\] \[=\sum_{\nu\in L(g,\epsilon(\vec{x}))}(g^{\sigma})^{*}(\nu(\operatorname{var}g^{\sigma})). \tag{14}\]
This shows the action is compatible with \(\operatorname{Id}\mathcal{V}\). Since \(Q\in\mathcal{V}\), we have \(t^{Q}(\pi\circ\epsilon(\bar{x}))=g^{Q}(\pi\circ\epsilon(\bar{y}))\) which implies \(u=v\). Using this we then have
\[t^{\partial,T}(\pi\circ\epsilon(\vec{x}))=t^{A_{T}(Q,A^{\alpha,\tau},*)}(\phi\circ\epsilon(\vec{x}))\;-_{u}\;t^{A(\alpha)/\Delta_{\alpha\alpha}}(\phi\circ\epsilon(\vec{x}))\] \[=g^{A_{T}(Q,A^{\alpha,\tau},*)}(\phi\circ\epsilon(\vec{y}))\;-_{v}\;g^{A(\alpha)/\Delta_{\alpha\alpha}}(\phi\circ\epsilon(\vec{y}))=g^{\partial,T}(\pi\circ\epsilon(\vec{y}))\]
which shows that \(T\) is a \(2\)-cocycle compatible with \(\operatorname{Id}\mathcal{V}\) by Definition 3.14.
**Remark 3.20**.: If \(l\) is a lifting associated to \(\rho:A(\alpha)\to Q\) from affine datum, then it is useful to observe
\[\delta(l(f^{Q}(\bar{x})))=f^{A(\alpha)/\Delta_{\alpha\alpha}}(\delta(\bar{x}))\]
which follows from the property \((a,l(q))\in\alpha\Longleftrightarrow\rho\left(\begin{bmatrix}a\\ l(q)\end{bmatrix}\right)=q\).
The next theorem is complementary to Theorem 3.19 in that it starts with affine datum and membership \(Q\in\mathcal{U}\) for a variety with compatible equational theory and reconstructs an extension in \(\mathcal{U}\) which determines the given datum.
**Theorem 3.21**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\). Assume the action is weakly compatible with \(\operatorname{Id}\mathcal{U}\) and \(T=\{T_{f}:f\in\tau\}\) is a \(2\)-cocycle compatible with \(\operatorname{Id}\mathcal{U}\). Then there is an extension \(\pi:A_{T}(Q,A^{\alpha,\tau},*)\to Q\) which realizes the datum with \(A_{T}(Q,A^{\alpha,\tau},*)\in\mathcal{U}\).
Proof.: Since the action is weakly compatible with \(\operatorname{Id}\mathcal{U}\), we have \(Q\in\mathcal{U}\). Define \(\pi:A_{T}(Q,A^{\alpha,\tau},*)\to Q\) by \(\pi\left(\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\right):=\rho\left(\begin{bmatrix}a\\ b\end{bmatrix}\right)\). By Definition 3.14.(C1) and Definition 3.9.(D3)-(D4) we see that \(\pi\) is a surjective homomorphism and \(\ker\pi=\hat{\alpha}/\Delta_{\alpha\alpha}\). Fix a lifting \(l:Q\to A\) for \(\rho\) and attendant \(\alpha\)-trace \(r:A\to A\).
We show \(A_{T}(Q,A^{\alpha,\tau},*)\in\mathcal{U}\). Take \(t(\bar{x})=g(\bar{y})\in\operatorname{Id}\ \mathcal{U}\). Let \(\epsilon:\operatorname{var}t\cup\operatorname{var}g\to A(\alpha)/\Delta_{\alpha\alpha}\) be an assignment. If we set \(u=l(t^{Q}(\pi\circ\epsilon(\bar{x})))\) and \(v=l(g^{Q}(\pi\circ\epsilon(\bar{y})))\), then \(Q\in\mathcal{U}\) implies \(u=v\). Then because the action and \(T\) are both separately \(\mathcal{U}\)-compatible, by Lemma 3.18 we have
\[t^{A_{T}(Q,A^{\alpha,\tau},*)}(\epsilon(\bar{x})) =\sum_{\mu\in L(t,\epsilon(\bar{x}))}(t^{\sigma})^{*}(\mu( \operatorname{var}t^{\sigma}))\ \,+_{u}\ t^{\partial,T}\left(\pi\circ\epsilon(\bar{x})\right)\] \[=\sum_{\nu\in L(g,\epsilon(\bar{x}))}(g^{\sigma})^{*}(\nu( \operatorname{var}g^{\sigma}))\ \,+_{u}\ g^{\partial,T}\left(\pi\circ\epsilon(\bar{y})\right)=g^{A_{T}(Q,A^{ \alpha,\tau},*)}(\epsilon(\bar{y})).\]
This shows the algebra \(A_{T}(Q,A^{\alpha,\tau},*)\) satisfies \(\operatorname{Id}\ \mathcal{U}\) and so \(A_{T}(Q,A^{\alpha,\tau},*)\in\mathcal{U}\).
We will put an isomorphic algebraic structure on the set \(A\). Define the bijection \(\phi:A\to A_{T}(Q,A^{\alpha,\tau},*)\) by \(\phi(a):=\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\) and a new algebra \(\check{A}=\left\langle A,\{\check{f}:f\in\tau\}\right\rangle\) with operations \(\check{f}(a_{1},\ldots,a_{n}):=\phi^{-1}\left(F_{f}\left(\phi(a_{1}),\ldots,\phi(a_{n})\right)\right)\) for \(f\in\tau\). It is immediate that \(\check{A}\approx A_{T}(Q,A^{\alpha,\tau},*)\) and \(\ker(\pi\circ\phi)=\alpha\). We
now show \(A_{T}(Q,A^{\alpha,\tau},*)\) realizes the datum. In order to verify Definition 3.11, we take \(1\leq i\leq n=\operatorname{ar}f\) and evaluate
\[\check{f}\left(\delta(r(x_{1})),\ldots,\delta(r(x_{i-1})),\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(r(x_{i+1})),\ldots,\delta(r(x_{n}))\right) \tag{15}\] \[=\begin{bmatrix}\check{f}(r(x_{1}),\ldots,r(x_{i-1}),r(x_{i}),r(x_{i+1}),\ldots,r(x_{n}))\\ \check{f}(r(x_{1}),\ldots,r(x_{i-1}),x_{i},r(x_{i+1}),\ldots,r(x_{n}))\end{bmatrix}/\Delta_{\alpha\alpha} \tag{16}\] \[=\begin{bmatrix}\phi^{-1}\left(F_{f}\left(\phi(r(x_{1})),\ldots,\phi(r(x_{i-1})),\phi(r(x_{i})),\phi(r(x_{i+1})),\ldots,\phi(r(x_{n}))\right)\right)\\ \phi^{-1}\left(F_{f}\left(\phi(r(x_{1})),\ldots,\phi(r(x_{i-1})),\phi(x_{i}),\phi(r(x_{i+1})),\ldots,\phi(r(x_{n}))\right)\right)\end{bmatrix}/\Delta_{\alpha\alpha}. \tag{17}\]
We will evaluate the operations \(F_{f}\) on the above tuples. Let us write \(u=l(f^{Q}(q_{1},\ldots,q_{n}))\) where \(q_{i}=\pi\left(\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\) and note \(q_{i}=\pi\left(\delta(r(x_{i}))\right)=\pi\left(\delta(x_{i})\right)\). First, for any \(1\leq j<n=\operatorname{ar}f\), by the homomorphism property of the action we always have
\[a(f,j)\left(q_{1},\ldots,q_{j-1},\delta(x_{j}),q_{j+1},\ldots,q_ {n}\right)=a(f,j)\left(q_{1},\ldots,q_{j-1},\delta(x_{j}),q_{j+1},\ldots,q_{n}\right)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad+_{u}\,\,\,a(f,j)\left(q_{1},\ldots,q_{j-1},\delta(x_{j}),q_{j+1},\ldots, q_{n}\right)\]
which implies \(a(f,j)(q_{1},\ldots,q_{j},\delta(x_{j+1}),\ldots,\delta(x_{n}))=\delta(u)\); similarly, \(f^{\Delta}\left(\delta(x_{1}),\delta(x_{2}),\ldots,\delta(x_{n})\right)=\delta(u)\). Second, note by Definition 3.9.D3 it follows that we can write
\[a(f,j)\left(q_{1},\ldots,q_{j-1},\begin{bmatrix}r(x_{j})\\ x_{j}\end{bmatrix}/\Delta_{\alpha\alpha},q_{j+1},\ldots,q_{n}\right)=\begin{bmatrix} u\\ a_{j}\end{bmatrix}/\Delta_{\alpha\alpha}\]
for some \(a_{j}\in A\); similarly, \(T_{f}(q_{1},\ldots,q_{n})=\begin{bmatrix}u\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\) for some \(b\in A\). Then if we recall \(r\circ r=r\) for an \(\alpha\)-trace, we have
\[F_{f}\left(\phi(r(x_{1})),\ldots,\phi(r(x_{i-1})),\phi(x_{i}),\phi(r(x_{i+1})),\ldots,\phi(r(x_{n}))\right)\] \[\quad=f^{\Delta}\left(\begin{bmatrix}r(x_{1})\\ r(x_{1})\end{bmatrix}/\Delta_{\alpha\alpha},\delta(r(x_{2})),\ldots,\delta(r(x_{i})),\delta(x_{i+1}),\ldots,\delta(x_{n})\right)\] \[\qquad\qquad+_{u}\;\sum_{j=2}^{i-1}a(f,j)\left(q_{1},\ldots,q_{j-1},\delta(r(x_{j})),q_{j+1},\ldots,q_{i},\ldots,q_{n}\right)\] \[\qquad\qquad+_{u}\;a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)\] \[\qquad\qquad+_{u}\sum_{j=i+1}^{n}a(f,j)\left(q_{1},\ldots,q_{j-1},\delta(r(x_{j})),q_{j+1},\ldots,q_{n}\right)\;+_{u}\;T_{f}(q_{1},\ldots,q_{n})\] \[\qquad\qquad=a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)\;+_{u}\;T_{f}(q_{1},\ldots,q_{n})\] \[\qquad\qquad=\begin{bmatrix}u\\ a_{i}\;\hat{+}_{u}\;b\end{bmatrix}/\Delta_{\alpha\alpha}\]
where we have written \(x\ \hat{+}_{u}\ y=m(x,u,y)\) for the induced sum on \(A\). Note \(\hat{\alpha}/\Delta_{\alpha\alpha}\) abelian implies the sum \(a_{i+2}\ \hat{+}_{u}\cdots\hat{+}_{u}\ a_{n}\ \hat{+}_{u}\ b\) is unique up to association. In the same manner we have,
\[F_{f}\left(\phi(r(x_{1})),\ldots,\phi(r(x_{i})),\phi(x_{i+1}),\phi(x_{i+2}), \ldots,\phi(x_{n})\right)=T_{f}(q_{1},\ldots,q_{n})=\begin{bmatrix}u\\ b\end{bmatrix}/\Delta_{\alpha\alpha}.\]
Putting the above together with Eq.(17) we see that
\[\check{f}\left(\delta(r(x_{1})),\ldots,\delta(r(x_{i-1})),\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(r(x_{i+1})),\ldots,\delta(r(x_{n}))\right)\] \[=\begin{bmatrix}\phi^{-1}\left(\begin{bmatrix}u\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\right)\\ \phi^{-1}\left(\begin{bmatrix}u\\ a_{i}\;\hat{+}_{u}\;b\end{bmatrix}/\Delta_{\alpha\alpha}\right)\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}b\\ a_{i}\;\hat{+}_{u}\;b\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}b\\ m(a_{i},u,b)\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}u\\ a_{i}\end{bmatrix}/\Delta_{\alpha\alpha}=a(f,i)(q_{1},\ldots,q_{i-1},\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n})\]
which shows \(\check{A}\approx A_{T}(Q,A^{\alpha,\tau},*)\) realizes the datum.
We say the variety \(\mathcal{U}\)_contains_ the datum \((Q,A^{\alpha,\tau},*)\) if the action \(*\) is weakly compatible with \(\operatorname{Id}\mathcal{U}\). The following is a characterization of the internal semidirect product by retractions for algebras which realize affine datum; in particular, this holds for any algebra with an abelian congruence in a variety with a weak-difference term. Note a retraction \(r:A\to A\) is always a \(\ker r\)-trace.
**Proposition 3.22**.: Let \(A\) be an algebra which realizes affine datum \((A^{\alpha,\tau},Q,*)\). Let \(\pi:A\to A/\alpha\) be the canonical homomorphism for \(\alpha\). The following are equivalent:
1. \(A\approx A(\alpha)/\Delta_{\alpha\alpha}\);
2. there is a homomorphism \(l:A/\alpha\to A\) such that \(\pi\circ l=\operatorname{id}_{A/\alpha}\);
3. there is a retraction \(r:A\to A\) with \(\ker r=\alpha\).
Proof.: \((2)\Leftrightarrow(3)\): Suppose \(l:A/\alpha\to A\) is a homomorphism such that \(\pi\circ l=\operatorname{id}_{A/\alpha}\). Define \(r=l\circ\pi\). Then \(r\) is a homomorphism and \(r^{2}=l\circ\pi\circ l\circ\pi=l\circ\operatorname{id}_{A/\alpha}\circ\pi=l \circ\pi=r\); thus, \(r\) is a retraction.
If we assume \(r:A\to A\) is a retraction, then \(r^{2}(x)=r(x)\) implies \((x,r(x))\in\ker r=\alpha\); thus, \(r\) is an \(\alpha\)-trace. Define \(l:A/\alpha\to A\) by \(l(q)=r(x)\) for any \(x\in A\) such that \(\pi(x)=q\). Since \(r\) is an \(\alpha\)-trace, \(l\) is well-defined. Take \(q_{i}\in A/\alpha\) and \(x_{i}\in A\) such that \(\pi(x_{i})=q_{i}\) for \(i=1,\ldots,n=\operatorname{ar}f\). Then \(\pi\circ r(f(x_{1},\ldots,x_{n}))=f(\pi\circ r(x_{1}),\ldots,\pi\circ r(x_{n}))=f(\pi(x_{1}),\ldots,\pi(x_{n}))=f(q_{1},\ldots,q_{n})\). By definition we have \(l(f(q_{1},\ldots,q_{n}))=r(f(x_{1},\ldots,x_{n}))=f(r(x_{1}),\ldots,r(x_{n}))=f(l(q_{1}),\ldots,l(q_{n}))\) which shows \(l\) is a homomorphism.
\((3)\Rightarrow(1)\): Assume \(r:A\to A\) is a retraction. Notice \(r\) is also an \(\alpha\)-trace. By Theorem 3.21, we have \(A\approx A_{T}(Q,A^{\alpha,\tau},*)\) defined using an \(\alpha\)-trace \(r\) and associated lifting \(l\). By the construction, we have \(T_{f}(\pi(x_{1}),\ldots,\pi(x_{n}))=\begin{bmatrix}l(f(\pi(x_{1}),\ldots,\pi(x_{n})))\\ f(l\circ\pi(x_{1}),\ldots,l\circ\pi(x_{n}))\end{bmatrix}/\Delta_{\alpha\alpha}=\delta(f(r(x_{1}),\ldots,r(x_{n})))\) since \(l\) is a homomorphism.
Then taking \(u=l(f(\pi(x_{1}),\dots,\pi(x_{n})))=f(r(x_{1}),\dots,r(x_{n}))\) we have in \(A_{T}(Q,A^{\alpha,\tau},*)\)
\[F_{f}\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\dots,\begin{bmatrix}r(x_{n})\\ x_{n}\end{bmatrix}/\Delta_{\alpha\alpha}\right) =\begin{bmatrix}r(f(x_{1},\dots,x_{n}))\\ f(r(x_{1}),\dots,r(x_{n}))\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}f(r(x_{1}),\dots,r(x_{n}))\\ f(x_{1},\dots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}+_{u}\begin{bmatrix}r( f(x_{1},\dots,x_{n}))\\ f(r(x_{1}),\dots,r(x_{n}))\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}f(r(x_{1}),\dots,r(x_{n}))\\ f(x_{1},\dots,x_{n})\end{bmatrix}/\Delta_{\alpha\alpha}+_{u}T_{f}(\pi(x_{1}), \dots,\pi(x_{n}))\] \[=f\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix},\dots,\begin{bmatrix}r(x_{n})\\ x_{n}\end{bmatrix}\right)/\Delta_{\alpha\alpha}\]
which shows \(A\approx A_{T}(Q,A^{\alpha,\tau},*)\approx A(\alpha)/\Delta_{\alpha\alpha}\).
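For orientation, in the group setting of Example 3.15 (an abelian normal subgroup \(K\triangleleft G\) with \(Q=G/K\)), Proposition 3.22 recovers the familiar fact that the extension is an internal semidirect product \(G\approx K\rtimes_{\phi}Q\) exactly when the canonical surjection \(\pi:G\to Q\) admits a homomorphic section \(l:Q\to G\), equivalently, when there is a retraction \(r:G\to G\) whose kernel is \(\alpha_{K}\).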
The constructions above required a fixed lifting, but up to isomorphism they do not depend on the particular choice. Making different choices for the liftings leads to an equivalence on extensions realizing datum which is defined by a combinatorial condition on 2-cocycles.
**Proposition 3.23**.: Suppose \(\pi:A\to Q\) with \(\alpha=\ker\pi\) is an extension realizing affine datum. If \(T\) is a 2-cocycle defined by the lifting \(l\) and \(T^{\prime}\) is a 2-cocycle defined by the lifting \(l^{\prime}\), then there exists a map \(h:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) such that
\[T^{\prime}_{f}(\bar{x})-_{u}T_{f}(\bar{x}) =f^{\Delta}(h(x_{1}),\delta(l(x_{2})),\dots,\delta(l(x_{n})))-_{u} h(f^{Q}(x_{1},\dots,x_{n}))\] \[+_{u}\sum_{i=2}^{n}a(f,i)\,(x_{1},\dots,x_{i-1},h(x_{i}),x_{i+1},\dots,x_{n}) (f\in\tau)\]
where \(u=l(f^{Q}(x_{1},\dots,x_{n}))\).
Proof.: The action is defined as in Eq.(6) from Theorem 3.19. Define \(h(x):=\begin{bmatrix}l(x)\\ l^{\prime}(x)\end{bmatrix}/\Delta_{\alpha\alpha}\). We first show
\[T^{\prime}_{f}(\bar{x})-_{u}T_{f}(\bar{x})=f^{A(\alpha)/\Delta_{\alpha\alpha}} \left(h(\bar{x})\right)-_{u}h(f^{Q}(\bar{x})) (f\in\tau). \tag{18}\]
If \(l,l^{\prime}:Q\to A\) are liftings associated with \(\pi\) such that \(r=l\circ\pi\) and \(r^{\prime}=l^{\prime}\circ\pi\), then recall the 2-cocycles are defined by \(T^{\prime}_{f}(\bar{x})=\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\) and \(T_{f}(\bar{x})=\begin{bmatrix}l(f(\bar{x}))\\ f(l(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\). If we set \(v=f(l(x_{1}),\dots,l(x_{n}))\), then note \(\begin{bmatrix}u\\ u\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}v\\ v\end{bmatrix}\). Then we can expand
\[T^{\prime}_{f}(\bar{x})-_{u}T_{f}(\bar{x}) =\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}-_{u}\begin{bmatrix}l(f(\bar{x}))\\ f(l(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}+_{u}\begin{bmatrix}f(l(\bar{x}))\\ l(f(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=m\left(\begin{bmatrix}f(l(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix},\begin{bmatrix}f(l(\bar{x}))\\ f(l(\bar{x}))\end{bmatrix},\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ f(l(\bar{x}))\end{bmatrix}\right)/\Delta_{\alpha\alpha}+_{u}\begin{bmatrix}f(l(\bar{x}))\\ l(f(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}f(l(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}+_{v}\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ f(l(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}+_{u}\begin{bmatrix}f(l(\bar{x}))\\ l(f(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}f(l(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}+_{v}m\left(\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ f(l(\bar{x}))\end{bmatrix},\begin{bmatrix}l(f(\bar{x}))\\ l(f(\bar{x}))\end{bmatrix},\begin{bmatrix}f(l(\bar{x}))\\ l(f(\bar{x}))\end{bmatrix}\right)/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}f(l(\bar{x}))\\ f(l^{\prime}(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}+_{u}\begin{bmatrix}l^{\prime}(f(\bar{x}))\\ l(f(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}\]
\[=f^{A(\alpha)/\Delta_{\alpha\alpha}}\left(\begin{bmatrix}l(\bar{x})\\ l^{\prime}(\bar{x})\end{bmatrix}\right)/\Delta_{\alpha\alpha}-_{u}\begin{bmatrix} l(f(\bar{x}))\\ l^{\prime}(f(\bar{x}))\end{bmatrix}/\Delta_{\alpha\alpha}=f^{A(\alpha)/\Delta_{ \alpha\alpha}}(h(\bar{x}))-_{u}h(f^{Q}(\bar{x})).\]
In a similar manner, if we set \(u_{i}=f(l^{\prime}(x_{1}),\ldots,l^{\prime}(x_{i}),l(x_{i+1}),\ldots,l(x_{n}))\), then using realization we can expand
\[T^{\prime}_{f}(\bar{x})-_{u}T_{f}(\bar{x}) =f(h(\bar{x}))-_{u}h(f(\bar{x}))\] \[=f\left(\begin{bmatrix}l(x_{1})\\ l^{\prime}(x_{1})\end{bmatrix},\ldots,\begin{bmatrix}l(x_{n})\\ l^{\prime}(x_{n})\end{bmatrix}\right)/\Delta_{\alpha\alpha}-_{u}h(f(\bar{x}))\] \[=m\left(\begin{bmatrix}f(l(x_{1}),l(x_{2}),\ldots,l(x_{n}))\\ f(l^{\prime}(x_{1}),l(x_{2}),\ldots,l(x_{n}))\end{bmatrix},\begin{bmatrix}u_{ 1}\\ u_{1}\end{bmatrix},\begin{bmatrix}f(l^{\prime}(x_{1}),l(x_{2}),\ldots,l(x_{n}) )\\ f(l^{\prime}(x_{1}),l^{\prime}(x_{2}),\ldots,l^{\prime}(x_{n}))\end{bmatrix} \right)/\Delta_{\alpha\alpha}\] \[\quad-_{u}h(f(\bar{x}))\] \[=f^{\Delta}\left(h(x_{1}),\delta(l(x_{2})),\ldots,\delta(l(x_{n} ))\right)+_{u_{1}}f\left(\begin{bmatrix}l^{\prime}(x_{1})\\ l^{\prime}(x_{1})\end{bmatrix}/\Delta_{\alpha\alpha},h(x_{2}),\ldots,h(x_{n}) \right)-_{u}h(f(\bar{x}))\] \[=f^{\Delta}\left(h(x_{1}),\delta(l(x_{2})),\ldots,\delta(l(x_{n} ))\right)+_{u_{1}}f\left(\begin{bmatrix}l^{\prime}(x_{1})\\ l^{\prime}(x_{1})\end{bmatrix}/\Delta_{\alpha\alpha},h(x_{2}),\delta(l(x_{3})), \ldots,\delta(l(x_{n}))\right)\] \[\quad+_{u_{2}}f\left(\begin{bmatrix}l^{\prime}(x_{1})\\ l^{\prime}(x_{1})\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}l^{\prime}(x _{2})\\ l^{\prime}(x_{2})\end{bmatrix}/\Delta_{\alpha\alpha},h(x_{3}),\ldots,h(x_{n}) \right)-_{u}h(f(\bar{x}))\] \[=f^{\Delta}\left(h(x_{1}),\delta(l(x_{2})),\ldots,\delta(l(x_{n} ))\right)+_{u}a(f,1)(x_{1},h(x_{2}),x_{3},\ldots,x_{n})\] \[\quad+_{u}f\left(\begin{bmatrix}l^{\prime}(x_{1})\\ l^{\prime}(x_{1})\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}l^{\prime}(x _{2})\\ l^{\prime}(x_{2})\end{bmatrix}/\Delta_{\alpha\alpha},h(x_{3}),\ldots,h(x_{n}) \right)-_{u}h(f(\bar{x}))\] \[\vdots\] \[=f^{\Delta}(h(x_{1}),\delta(l(x_{2})),\ldots,\delta(l(x_{n})))-_{ u}h(f(\bar{x}))\] \[\quad+_{u}\ \sum_{i=2}^{n}a(f,i)\left(x_{1},\ldots,x_{i-1},h(x_{i}),x_{i+1}, \ldots,x_{n}\right)\]
since each \(\delta(u_{i})=\delta(u)\) and \(\delta(l(x))=\delta(l^{\prime}(x))\).
**Definition 3.24**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\). A sequence of operations \(G=\{G_{f}:f\in\tau\}\) where \(G_{f}:Q^{\operatorname{ar}f}\to A(\alpha)/\Delta_{\alpha\alpha}\) is a _2-coboundary_ of the datum if there is a function \(h:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) such that for any lifting \(l:Q\to A\) associated to the datum
1. \(h(x)\ \hat{\alpha}/\Delta_{\alpha\alpha}\ \delta\circ l(x)\);
2. for each \(f\in\tau\), \[G_{f}(x_{1},\ldots,x_{n}) =f^{\Delta}(h(x_{1}),\delta(l(x_{2})),\ldots,\delta(l(x_{n})))-_{u} h(f^{Q}(x_{1},\ldots,x_{n}))\] \[+_{u}\sum_{i=2}^{n}a(f,i)(x_{1},\ldots,x_{i-1},h(x_{i}),x_{i+1}, \ldots,x_{n})\]
where \(u=l(f(x_{1},\ldots,x_{n}))\).
The function \(h:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) referenced in the above definition is said to witness the 2-coboundary.
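To connect Definition 3.24 with the classical notion, consider again the datum of Example 3.15 and write \(h(x)=\begin{bmatrix}l(x)\\ \tilde{h}(x)l(x)\end{bmatrix}/\Delta_{\alpha_{K}\alpha_{K}}\) for a function \(\tilde{h}:Q\to K\), which is possible by (B1) (the notation \(\tilde{h}\) is introduced only for this remark). A computation parallel to the one carried out after Example 3.15 indicates that, for the group multiplication, the \(K\)-component of the 2-coboundary \(G(x,y)\) works out to
\[\tilde{h}(x)+x*\tilde{h}(y)-\tilde{h}(xy),\]
the classical group-theoretic 2-coboundary written additively in the abelian group \(K\).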
**Definition 3.25**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum and \(\mathcal{U}\) a variety in the same signature which contains the datum. The set of 2-cocycles compatible with \(\mathcal{U}\) is denoted by \(Z^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\). The set of 2-coboundaries of the datum is denoted by \(B^{2}(Q,A^{\alpha,\tau},*)\).
Notice the notation for the class of 2-coboundaries omits a subscript for the variety \(\mathcal{U}\). We shall see in the next lemma that 2-coboundaries are compatible with any variety containing the datum.
**Lemma 3.26**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum and \(\mathcal{U}\) a variety in the same signature which contains the datum. For any lifting \(l:Q\to A\) associated to the datum, the operations
\[(T_{f}+T^{\prime}_{f})(\bar{x}) :=T_{f}(\bar{x})+_{l(f(\bar{x}))}T^{\prime}_{f}(\bar{x}) (f\in\tau,\bar{x}\in Q^{\mathrm{ar}\,f})\] \[(h+h^{\prime})(x) :=h(x)+_{l(x)}h^{\prime}(x) (x\in Q)\]
makes \(Z^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) into an abelian group with subgroup \(B^{2}(Q,A^{\alpha,\tau},*)\leq Z^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\).
Proof.: From previous remarks, note the definition of the operation does not depend on the lifting. The fact that the operation defines an abelian group follows from noting that the ternary operation \(m\) of the datum is affine on the congruence blocks of \(\hat{\alpha}/\Delta_{\alpha\alpha}\). That the sum of 2-cocycles compatible with \(\operatorname{Id}\,\mathcal{U}\) is again a 2-cocycle compatible with \(\operatorname{Id}\,\mathcal{U}\) follows from the homomorphism property of the action, which distributes the action terms over the witnesses of the respective 2-cocycles. These facts also show properties \((B1)\) and \((B2)\) are preserved by the sum and so 2-coboundaries form an abelian group.
What requires a little explanation is why a 2-coboundary is a 2-cocycle compatible with \(\mathcal{U}\). Let \(G=\{G_{f}:f\in\tau\}\) be a 2-coboundary. For any choice of 2-cocycle \(\{T_{f}:f\in\tau\}\) compatible with \(\mathcal{U}\), Theorem 3.21 provides an algebra \(\mathcal{U}\ni A_{T}(Q,A^{\alpha,\tau},*)\stackrel{{\pi}}{{ \rightarrow}}Q\) on the universe \(A(\alpha)/\Delta_{\alpha\alpha}\) which realizes the datum. Note there is at least one such 2-cocycle which is provided by the datum. The condition \((B2)\) now takes the form
\[G_{f}(x_{1},\dots,x_{n})=F^{A_{T}(Q,A^{\alpha,\tau},*)}_{f}(h(x_{1}),\dots,h(x _{n}))\,-_{u}\,\,T_{f}(x_{1},\dots,x_{n})\,-_{u}\,\,h(f^{Q}(x_{1},\dots,x_{n})) \tag{19}\]
where \(u=l(f(x_{1},\dots,x_{n}))\). We have preserved the superscripts here to indicate the realization. Consider a term \(t(\bar{x})\) and assignment \(\epsilon:\mathrm{var}\,\,t\to A(\alpha)/\Delta_{\alpha\alpha}\). By induction on the composition tree of a term \(t(\bar{x})\) with Eq.(19) as the base case, we can evaluate
\[t^{\partial,G}(\pi\circ\epsilon(\bar{x}))=t^{A_{T}(Q,A^{\alpha,\tau},*)}(h \circ\pi\circ\epsilon(\bar{x}))\,-_{v}\,\,t^{\partial,T}(\pi\circ\epsilon( \bar{x}))\,-_{v}\,\,h(t^{Q}(\pi\circ\epsilon(\bar{x}))) \tag{20}\]
where \(v=l(t^{Q}(\pi\circ\epsilon(\bar{x})))\). Then using Eq.(20), it follows from the fact that \(A_{T}(Q,A^{\alpha,\tau},*)\in\mathcal{U}\), \(T\) is 2-cocycle compatible with \(\mathcal{U}\) and \(Q\in\mathcal{U}\), that \(G\) is also a 2-cocycle compatible with \(\mathcal{U}\).
**Definition 3.27**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum and \(\mathcal{U}\) a variety in the same signature which contains the datum. The _second cohomology group relative to \(\mathcal{U}\)_ is the quotient group
\[H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*):=Z^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau },*)/B^{2}(Q,A^{\alpha,\tau},*).\]
**Definition 3.28**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. Two extensions \(A\) and \(A^{\prime}\) realizing the datum are _equivalent_ if there is a 2-cocycle \(T\) associated to \(A\) and 2-cocycle \(T^{\prime}\) associated to \(A^{\prime}\) such that \(T^{\prime}-T\in B^{2}(Q,A^{\alpha,\tau},*)\).
If \(A\) realizes the datum, we write \([A]\) for the equivalence class determined by Definition 3.28. The _trivial 2-cocycle_ for datum \((Q,A^{\alpha,\tau},*)\) has operations \(T_{f}(x_{1},\dots,x_{\mathrm{ar}\,f})=\delta\circ l(f^{Q}(x_{1},\dots,x_{\mathrm{ar}\,f}))\) for \(f\in\tau\) and any lifting \(l:Q\to A\) associated to the datum. When the context is clear, we denote the trivial 2-cocycle by \(T=0\).
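For orientation, in the setting of Example 3.15 the 2-cocycle \(T\) determined by a lifting \(l\) is trivial exactly when \(l(x)l(y)=l(xy)\) for all \(x,y\in Q\), that is, when \(l\) is a group homomorphism; the corresponding group 2-cocycle is then \(f(x,y)=e\) and the extension is the semidirect product \(K\rtimes_{\phi}Q\), in accordance with Theorem 3.29 below.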
**Theorem 3.29**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum and \(\mathcal{U}\) a variety in the same signature which contains the datum. The set of equivalence classes of extensions in \(\mathcal{U}\) realizing the datum with the operation \([A_{T}(Q,A^{\alpha,\tau},*)]+[A_{T^{\prime}}(Q,A^{\alpha,\tau},*)]:=[A_{T+T^{ \prime}}(Q,A^{\alpha,\tau},*)]\) is an abelian group isomorphic with \(H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\). The zero of the abelian group is the class of the semidirect product \([A_{0}(Q,A^{\alpha,\tau},*)]\).
For affine datum \((Q,A^{\alpha,\tau},*)\), let \(\mathcal{L}(Q,A^{\alpha,\tau},*)\) be the set of varieties in the signature \(\tau\) which contain the datum ordered by inclusion. It is a complete sublattice of the lattice of varieties in the signature \(\tau\). Let \(Z^{2}(Q,A^{\alpha,\tau},*)\) denote the abelian group generated by 2-cocycles compatible with some \(\mathcal{U}\in\mathcal{L}(Q,A^{\alpha,\tau},*)\).
Since \(B^{2}(Q,A^{\alpha,\tau},*)\leq Z^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) for all \(\mathcal{U}\in\mathcal{L}(Q,A^{\alpha,\tau},*)\) by Lemma 3.26, we can define the second cohomology group of the datum
\[H^{2}(Q,A^{\alpha,\tau},*):=Z^{2}(Q,A^{\alpha,\tau},*)/B^{2}(Q,A^{\alpha,\tau},*).\]
For any algebra \(A\), \(\operatorname{Sub}A\) denotes the lattice of subalgebras ordered by inclusion.
**Proposition 3.30**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum in the signature \(\tau\). The map
\[\Psi:\mathcal{L}(Q,A^{\alpha,\tau},*)\to\operatorname{Sub}H^{2}(Q,A^{\alpha, \tau},*)\]
defined by \(\Psi(\mathcal{U}):=H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) is a meet-homomorphism which is an upper adjoint for a Galois connection \((-,\Psi)\) between \(\operatorname{Sub}H^{2}(Q,A^{\alpha,\tau},*)\) and \(\mathcal{L}(Q,A^{\alpha,\tau},*)\). The subset of varieties generated by their algebras realizing the datum is join-complete.
Proof.: It is easy to see that \(\psi\) is monotone which yields the immediate inclusions
\[\psi(\mathcal{U}_{1}\wedge\mathcal{U}_{2})\leq\psi(\mathcal{U}_{1})\wedge\psi (\mathcal{U}_{2})\qquad\text{and}\qquad\psi(\mathcal{U}_{1})\vee\psi(\mathcal{ U}_{2})\leq\psi(\mathcal{U}_{1}\vee\mathcal{U}_{2}).\]
For the reverse inclusion for the meet operation, we use the description of the operation in terms of the class operators \(\mathcal{U}_{1}\wedge\mathcal{U}_{2}=\operatorname{Mod}\left(\operatorname{ Id}\mathcal{U}_{1}\cup\operatorname{Id}\mathcal{U}_{2}\right)\). Any \([T]\in\psi(\mathcal{U}_{1})\wedge\psi(\mathcal{U}_{2})\), is both \(\mathcal{U}_{1}\)-compatible and \(\mathcal{U}_{2}\)-compatible; according to Theorem 3.21, the algebra \(A_{T}(Q,A^{\alpha,\tau},*)\in\mathcal{U}_{1}\wedge\mathcal{U}_{2}\). Then by Theorem 3.19 we have \([T]\in H^{2}_{\mathcal{U}_{1}\wedge\mathcal{U}_{2}}(Q,A^{\alpha,\tau},*)=\psi( \mathcal{U}_{1}\wedge\mathcal{U}_{2})\). We conclude \(\psi\) is a meet-homomorphism.
The lower adjoint to \(\Psi\) is the monotone map \(\theta:\operatorname{Sub}H^{2}(Q,A^{\alpha,\tau},*)\longrightarrow\mathcal{L}(Q,A^{\alpha,\tau},*)\) defined by \(\theta(E):=\bigvee_{[T]\in E}\mathcal{V}(A_{T})\). It is not too hard to see that \(\theta\circ\Psi(\mathcal{U})\leq\mathcal{U}\) and \(\Psi\circ\theta(E)\geq E\). It follows that \(\Psi\circ\theta\) is a closure operator and the closed sets are precisely the cohomology groups corresponding to varieties; that is, of the form \(\Psi(\mathcal{U})\).
The combinatorial equivalence on extensions, defined through their associated \(2\)-cocycles, can also be characterized by special isomorphisms between the extensions.
**Theorem 3.31**.: Let \((A(\alpha),Q,*)\) be affine datum in the signature \(\tau\). Let \(\pi:A\to Q\) and \(\pi^{\prime}:A^{\prime}\to Q\) be extensions realizing the datum. Then \(A\) and \(A^{\prime}\) are equivalent if and only if there is an isomorphism \(\gamma:A\to A^{\prime}\) such that
1. \(\pi^{\prime}\circ\gamma=\pi\), and
2. \(\gamma=m(\gamma\circ r,r,\operatorname{id})\) for all \(\alpha\)-traces \(r:A\to A\).
Proof.: First, assume \(A\) and \(A^{\prime}\) are equivalent extensions realizing the datum. We may take \(\mathcal{U}=\mathcal{V}(A,A^{\prime})\) and note \(Q\in\mathcal{U}\) and \(\mathcal{U}\) contains the datum since it contains algebras which realize the datum. By Theorem 3.21, we have isomorphisms \(\phi:A\to A_{T}(Q,A^{\alpha,\tau},*)\) and \(\phi^{\prime}:A^{\prime}\to A(\alpha,*,T^{\prime})\) by the explicit construction detailed in that theorem. By Definition 3.28, there is a \(2\)-coboundary defined by \(h:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) such that
\[T_{f}(\bar{x})-T_{f}^{\prime}(\bar{x}) =f^{\Delta}\left(h(x_{1}),\delta(\bar{l}(x_{2})),\ldots,\delta(\bar{l}(x_{n}))\right)-_{u}h(f^{Q}(x_{1},\ldots,x_{n}))\] \[+_{u}\sum_{i=2}^{n}a(f,i)(x_{1},\ldots,x_{i-1},h(x_{i}),x_{i+1},\ldots,x_{n})\qquad(f\in\tau,n=\operatorname{ar}f)\]
where \(u=\bar{l}(f^{Q}(x_{1},\ldots,x_{n}))\) and \(\bar{l}:Q\to A\) is a lifting for the datum. Let \(r:A\to A\) and \(r^{\prime}:A^{\prime}\to A^{\prime}\) be \(\alpha\)-traces used to define the extensions \(A\) and \(A^{\prime}\), respectively. Let \(l:Q\to A\) and \(l^{\prime}:Q\to A^{\prime}\) be the associated liftings for \(r\) and \(r^{\prime}\), respectively.
Note every element in \(A_{T}(Q,A^{\alpha,\tau},*)\) has a unique representation of the form \(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\) with \(x\in A\). Define \(\bar{\gamma}:A_{T}(Q,A^{\alpha,\tau},*)\to A^{\prime}(\alpha,*,T^{\prime})\) by \(\bar{\gamma}:\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\longmapsto\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(x)}h\circ\rho\left(\begin{bmatrix}r (x)\\ x\end{bmatrix}\right)\) and
\(\gamma=\phi^{\prime-1}\circ\bar{\gamma}\circ\phi:A\to A^{\prime}\). Since \(\phi(\ker\pi)=\phi^{\prime}(\ker\pi^{\prime})=\hat{\alpha}/\Delta_{\alpha\alpha}\), we see that
\[\pi^{\prime}\circ\phi^{\prime-1}\circ\bar{\gamma}\left(\begin{bmatrix}r(x) \\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right) =\pi^{\prime}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(x)}h\circ\rho\left(\begin{bmatrix}r (x)\\ x\end{bmatrix}\right)\right)\] \[=m\left(\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right),\rho\left(\begin{bmatrix}r(x)\\ r(x)\end{bmatrix}\right),\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right)\right)=\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right)=\pi(x)\]
by idempotence in \(Q\). This verifies \(\pi=\pi^{\prime}\circ\gamma\) and surjectivity follows readily. We check \(\bar{\gamma}\) is a homomorphism. For simplicity, set \(q_{i}=\rho\left(\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}\right)\). Calculating for \(f\in\tau\) and \(\bar{x}\in A^{\operatorname{ar}f}\),
\[\bar{\gamma}\left(F_{f}\left(\begin{bmatrix}r(\bar{x})\\ \bar{x}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right) =\bar{\gamma}\left(\begin{bmatrix}r(f(\bar{x}))\\ f(\bar{x})\end{bmatrix}/\Delta_{\alpha\alpha}\right)\] \[=\begin{bmatrix}r(f(\bar{x}))\\ f(\bar{x})\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(f(\bar{x}))}h\circ\rho \left(\begin{bmatrix}r(f(\bar{x}))\\ f(\bar{x})\end{bmatrix}\right)\] \[=\begin{bmatrix}r(f(\bar{x}))\\ f(\bar{x})\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(f(\bar{x}))}h(f^{Q}(q_{1}, \ldots,q_{n}))\] \[=f^{\Delta}\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix}\Delta_{\alpha\alpha},\delta(x_{2}),\ldots,\delta(x_{n})\right)\] \[\quad+_{r(f(\bar{x}))}\sum_{i=2}^{n}a(f,i)\left(q_{1},\ldots,q_{i- 1},\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)\] \[\quad+_{r(f(\bar{x}))}T_{f}(\bar{q})+_{r(f(\bar{x}))}h(f^{Q}(q_{1 },\ldots,q_{n}))\] \[=F_{f}\left(\begin{bmatrix}r(\bar{x})\\ \bar{x}\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{r(f(\bar{x}))}h(f^{Q}(q_{1 },\ldots,q_{n})).\]
Since \(\left(h\circ\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right),\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)\in\hat{\alpha}/\Delta_{\alpha\alpha}\), we can write each
\[h\circ\rho\left(\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}\right)=\begin{bmatrix}r(x_{i})\\ a_{i}\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}r^{\prime}(x_{i})\\ m(a_{i},r(x_{i}),r^{\prime}(x_{i}))\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix} r^{\prime}(x_{i})\\ w_{i}\end{bmatrix}/\Delta_{\alpha\alpha} \tag{21}\]
for some \(a_{i}\in A\) and \(w_{i}=m(a_{i},r(x_{i}),r^{\prime}(x_{i}))\). Then \(\bar{\gamma}\left(\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}r^{\prime}(x_{i})\\ z_{i}\end{bmatrix}/\Delta_{\alpha\alpha}\) where \(z_{i}=m(x_{i},r(x_{i}),w_{i})\). Then
\[F_{f}\left(\bar{\gamma}\left(\begin{bmatrix}r(\bar{x})\\ \bar{x}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\right) =F_{f}\left(\begin{bmatrix}r^{\prime}(x_{1})\\ z_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\ldots,\begin{bmatrix}r^{\prime}(x_{ n})\\ z_{n}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\] \[=f^{\Delta}\left(\begin{bmatrix}r^{\prime}(x_{1})\\ z_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(z_{2}),\ldots,\delta(z_{n}) \right)+_{r(f(\bar{z}))}\] \[\quad\sum_{i=2}^{n}a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix} r^{\prime}(x_{i})\\ z_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)\] \[\quad+_{r(f(\bar{z}))}T^{\prime}_{f}(q_{1},\ldots,q_{n})\] \[=f^{\Delta}\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(z_{2}),\ldots,\delta(z_{n}) \right)+_{r(f(\bar{x}))}f^{\Delta}\left(h(q_{1}),\delta(z_{2}),\ldots,\delta(z _{n})\right)\] \[\quad+_{r(f(\bar{z}))}\sum_{i=2}^{n}a(f,i)\left(q_{1},\ldots,q_{i- 1},\begin{bmatrix}r(x_{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)+_{r(f( \bar{x}))}\]
\[\sum_{i=2}^{n}a(f,i)\left(q_{1},\ldots,q_{i-1},h(q_{i}),q_{i+1}, \ldots,q_{n}\right)+_{r(f(\bar{z}))}T_{f}^{\prime}(q_{1},\ldots,q_{n})\] \[=f^{\Delta}\left(\begin{bmatrix}r(x_{1})\\ x_{1}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(z_{2}),\ldots,\delta(z_{n}) \right)+_{r(f(\bar{z}))}\] \[\sum_{i=2}^{n}a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix}r(x _{i})\\ x_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)+_{r(f(\bar {x}))}\] \[T_{f}(q_{1},\ldots,q_{n})+_{u}h(f^{Q}(q_{1},\ldots,q_{n}))\] \[=F_{f}\left(\begin{bmatrix}r(\bar{x})\\ \bar{x}\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{r(f(\bar{x}))}h(f^{Q}(q_{1 },\ldots,q_{n})).\]
Since \(\delta(z_{i})=\delta(x_{i})\) and \(\delta(r(f(\bar{x})))=\delta(r(f(\bar{z})))=\delta(u)\), we conclude that \(\bar{\gamma}\), and thus \(\gamma\), is a homomorphism.
Now assume \(\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}r(y)\\ y\end{bmatrix}/\Delta_{\alpha\alpha}\right)\in\ker\bar{\gamma}\). Note we have \(h\circ\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right)=\begin{bmatrix}r(x)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\) and \(h\circ\rho\left(\begin{bmatrix}r(y)\\ y\end{bmatrix}\right)=\begin{bmatrix}r(y)\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\) for some \(a,b\in A\). Then
\[\begin{bmatrix}r(x)\\ m(x,r(x),a)\end{bmatrix}/\Delta_{\alpha\alpha}=\bar{\gamma}\left( \begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\bar{\gamma}\left(\begin{bmatrix} r(y)\\ y\end{bmatrix}/\Delta_{\alpha\alpha}\right)=\begin{bmatrix}r(y)\\ m(y,r(y),b)\end{bmatrix}/\Delta_{\alpha\alpha}. \tag{22}\]
Then \(\pi=\pi^{\prime}\circ\gamma\) yields \(\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}r(y)\\ y\end{bmatrix}/\Delta_{\alpha\alpha}\right)\in\ker\pi=\hat{\alpha}/\Delta_{ \alpha\alpha}\) which implies \(r(x)=r(y)\), and so \(a=b\) since the 2-coboundary \(h\) only depends on \(Q\). Then Eq (22) implies \(m(x,r(x),a)=m(y,r(y),b)=m(y,r(x),a)\). Since \(m\) is affine on \(\alpha\)-blocks, we must have \(x=y\); thus, \(\bar{\gamma}\) is injective.
For the second condition on the isomorphism \(\gamma\), note \(\rho\left(\begin{bmatrix}r(x)\\ r(x)\end{bmatrix}\right)=\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right)\) implies \(h\circ\rho\left(\begin{bmatrix}r(x)\\ r(x)\end{bmatrix}\right)=h\circ\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right)=\begin{bmatrix}r(x)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\) for some \(a\in A\) with \((a,r(x))\in\alpha\). Then
\[\begin{bmatrix}r(x)\\ \gamma(r(x))\end{bmatrix}/\Delta_{\alpha\alpha} =\begin{bmatrix}r(x)\\ \phi^{\prime-1}\left(\begin{bmatrix}r(x)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\right)\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}r(x)\\ m(a,r(x),r^{\prime}(x))\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}x\\ m(m(a,r(x),r^{\prime}(x)),r(x),x)\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}x\\ m(m(x,r(x),a),r(x),r^{\prime}(x))\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}x\\ \phi^{\prime-1}\left(\begin{bmatrix}r(x)\\ m(x,r(x),a)\end{bmatrix}/\Delta_{\alpha\alpha}\right)\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}x\\ \phi^{\prime-1}\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(x)}h\circ\rho\left(\begin{bmatrix}r(x)\\ x\end{bmatrix}\right)\right)\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}x\\ \gamma(x)\end{bmatrix}/\Delta_{\alpha\alpha}.\]
Conversely, assume there is a homomorphism \(\gamma:A\to A^{\prime}\) which satisfies the conditions. The condition \(\pi=\pi^{\prime}\circ\gamma\) implies \(\gamma\circ l:Q\to A^{\prime}\) is a lifting for \(\pi^{\prime}:A^{\prime}\to Q\). Condition (2) implies that for any \(f\in\tau\) and \(\bar{x}\in A^{\operatorname{ar}f}\), if we apply the operation
\[\begin{bmatrix}f(\bar{x})\\ \gamma(f(\bar{x}))\end{bmatrix}=\begin{bmatrix}f(\bar{x})\\ f(\gamma(\bar{x}))\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}f(l(\bar{x} ))\\ f(\gamma(l(\bar{x})))\end{bmatrix}=\begin{bmatrix}f(l(\bar{x}))\\ \gamma(f(l(\bar{x})))\end{bmatrix}\]
and then also by substitution
\[\begin{bmatrix}f(\bar{x})\\ \gamma(f(\bar{x}))\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}l(f(\bar{x} ))\\ \gamma(l(f(\bar{x})))\end{bmatrix}.\]
From the above we conclude
\[\begin{bmatrix}\gamma\circ l(f(\bar{x}))\\ l\circ f(\bar{x})\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}\gamma(f(l( \bar{x})))\\ f(l(\bar{x}))\end{bmatrix},\]
and since \(\Delta_{\alpha\alpha}\leq\hat{\alpha}\) we have
\[\begin{bmatrix}\gamma\circ l(f(\bar{x}))\\ f(\gamma\circ l(\bar{x}))\end{bmatrix}=\begin{bmatrix}\gamma\circ l(f(\bar{x }))\\ \gamma(f(l(\bar{x})))\end{bmatrix}\Delta_{\alpha\alpha}\begin{bmatrix}l\circ f( \bar{x})\\ f(l(\bar{x}))\end{bmatrix}. \tag{23}\]
Now define \(T_{f}^{\gamma}(\bar{x}):=\begin{bmatrix}\gamma\circ l(f(\bar{x}))\\ f(\gamma\circ l(\bar{x}))\end{bmatrix}\) for \(f\in\tau\). It follows that \(T^{\gamma}\) is a \(2\)-cocycle for \(A^{\prime}\). By an argument similar to that of Proposition 3.23, there is \(h\) such that
\[T_{f}^{\prime}(\bar{x})-T_{f}^{\gamma}(\bar{x})=f(h(\bar{x}))-h(f(\bar{x})) \qquad\qquad\qquad(f\in\tau)\]
which can be expanded out to represent a \(2\)-coboundary. The second condition guarantees that \(T^{\gamma}=T\) and so \(A\) and \(A^{\prime}\) are equivalent.
**Definition 3.32**.: Let \((A^{\alpha,\tau},Q,*)\) be affine datum. A function \(d:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) is a \(1\)-_cocycle_ if for any \(f\in\tau,\operatorname{ar}f=n\),
\[d(f^{Q}(x_{1},\ldots,x_{n}))=f^{\Delta}(d(x_{1}),\delta(l(x_{2})),\ldots, \delta(l(x_{n})))+_{u}\sum_{i=2}^{n}a(f,i)(x_{1},\ldots,x_{i-1},d(x_{i}),x_{i+1 },\ldots,x_{n})\]
where \(u=l(f^{Q}(x_{1},\ldots,x_{n}))\) and \(l:Q\to A\) is any lifting associated to the datum. The set of \(1\)-cocycles is denoted by \(Z^{1}(A^{\alpha,\tau},Q,*)\).
We will also refer to \(1\)-cocycles as _derivations_ of the datum.
**Lemma 3.33**.: Let \((A^{\alpha,\tau},Q,*)\) be affine datum. For any lifting \(l:Q\to A\) associated to the datum, the operation \((d+d^{\prime})(x):=d(x)+_{l(x)}d^{\prime}(x)\) makes \(Z^{1}(A^{\alpha,\tau},Q,*)\) into an abelian group.
Proof.: See the first paragraph of Lemma 3.26.
Let \((A^{\alpha,\tau},Q,*)\) be affine datum. If \(\pi:A\to Q\) is an extension realizing the datum, let \(\operatorname{Stab}(\pi:A\to Q)\) denote the set of _stabilizing automorphisms_; that is, the automorphisms \(\gamma\in\operatorname{Aut}A\) which satisfy conditions (1) and (2) in Theorem 3.31. The following is our characterization of the derivations of datum by stabilizing automorphisms.
**Theorem 3.34**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. Let \(\pi:A\to Q\) be an extension realizing the datum. Then \(Z^{1}(Q,A^{\alpha,\tau},*)\approx\operatorname{Stab}(\pi:A\to Q)\).
Proof.: Let \(\phi:A\to A(\alpha,*,T)\) witness the isomorphism from Theorem 3.21. Take \(\gamma\in\operatorname{Stab}(\pi:A\to Q)\) and \(\alpha\)-trace \(r\). Since \(\pi\circ\gamma=\pi\), we have \((x,\gamma(x))\in\alpha=\ker\pi\) which implies \(r(\gamma(x))=r(x)\). Then by condition (2) in Theorem 3.31 we can write
\[\phi\circ\gamma(x)=\begin{bmatrix}r(\gamma(x))\\ \gamma(x)\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}r(x)\\ \gamma(x)\end{bmatrix}/\Delta_{\alpha\alpha} =\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(x)}\begin{bmatrix}x\\ \gamma(x)\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(x)}\begin{bmatrix}r(x)\\ \gamma(r(x))\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\phi(x)+_{r(x)}d_{\gamma}(\pi(x))\]
where we have defined \(d_{\gamma}(x):=\begin{bmatrix}l(x)\\ \gamma(l(x))\end{bmatrix}/\Delta_{\alpha\alpha}:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) for the lifting associated to \(r\). Note \(r\) and \(\gamma\circ r\) are both \(\alpha\)-traces. Then the argument in Proposition 3.23 shows for all \(f\in\tau\) and \(\bar{x}\in Q^{\operatorname{ar}f}\),
\[\delta(u)=T_{f}(\bar{x})-_{u}T_{f}(\bar{x}) =f(d_{\gamma}(x_{1}),\delta(l(x_{2})),\ldots,\delta(l(x_{n})))-_{u}d_{\gamma}(f(x_{1},\ldots,x_{n}))\] \[+_{u}\sum_{i=2}^{n}a(f,i)\left(x_{1},\ldots,x_{i-1},d_{\gamma}(x_{i}),x_{i+1},\ldots,x_{n}\right)\]
where \(u=l(f(\bar{x}))\); therefore, \(d_{\gamma}\) is a derivation. We should note \(\phi\circ\gamma(r(x))=d_{\gamma}(\pi(x))\).
We show \(\operatorname{Stab}(\pi:A\to Q)\) is closed under composition. Take \(\gamma,\gamma^{\prime}\in\operatorname{Stab}(\pi:A\to Q)\); clearly, condition (1) in Theorem 3.31 holds. We first calculate
\[\phi\circ(\gamma^{\prime}\circ\gamma)(x)=\phi(\gamma(x))+_{r(\gamma(x))}d_{ \gamma^{\prime}}(\pi(\gamma(x)))=\phi(x)+_{r(x)}d_{\gamma}(\pi(x))+_{r(x)}d_{ \gamma^{\prime}}(\pi(x)). \tag{24}\]
Since \(d_{\gamma}\) is a derivation, we have \(\pi\circ d_{\gamma}=\operatorname{id}\) which implies we can write \(d_{\gamma}(x)=\begin{bmatrix}l(x)\\ \beta(x)\end{bmatrix}/\Delta_{\alpha\alpha}\) for some function \(\beta:Q\to A\); similarly, \(d_{\gamma^{\prime}}(x)=\begin{bmatrix}l(x)\\ \beta^{\prime}(x)\end{bmatrix}/\Delta_{\alpha\alpha}\) for some function \(\beta^{\prime}:Q\to A\). Then using Eq. (24), we have
\[\begin{bmatrix}x\\ (\gamma^{\prime}\circ\gamma)(x)\end{bmatrix}/\Delta_{\alpha\alpha} =\begin{bmatrix}x\\ \phi^{-1}\left(\phi(x)+_{r(x)}d_{\gamma}(\pi(x))+_{r(x)}d_{\gamma^{\prime}}( \pi(x))\right)\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}x\\ \phi^{-1}\left(\begin{bmatrix}m(r(x),r(x),m(r(x),r(x),r(x)))\\ m(x,r(x),m(\beta^{\prime}(\pi(x)),r(x),\beta(\pi(x))))\end{bmatrix}/\Delta_{ \alpha\alpha}\right)\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}x\\ m(x,r(x),m(\beta^{\prime}(x),r(x),\beta(\pi(x))))\end{bmatrix}/\Delta_{ \alpha\alpha}\] \[=m\left(\begin{bmatrix}x\\ x\end{bmatrix},\begin{bmatrix}r(x)\\ r(x)\end{bmatrix},m\left(\begin{bmatrix}r(x)\\ \beta^{\prime}(\pi(x))\end{bmatrix},\begin{bmatrix}r(x)\\ r(x)\end{bmatrix},\begin{bmatrix}r(x)\\ \beta(\pi(x))\end{bmatrix}\right)\right)/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}r(x)\\ \beta^{\prime}(\pi(x))\end{bmatrix}/\Delta_{\alpha\alpha}+_{r(x)}\begin{bmatrix}r (x)\\ \beta(\pi(x))\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=d_{\gamma^{\prime}}(\pi(x))+_{r(x)}d_{\gamma}(\pi(x))\] \[=\phi\circ(\gamma^{\prime}\circ\gamma)(r(x))=\begin{bmatrix}r(x)\\ (\gamma^{\prime}\circ\gamma)(r(x))\end{bmatrix}/\Delta_{\alpha\alpha}.\]
since \(r(\gamma^{\prime}(\gamma(r(x))))=r(\gamma(r(x)))=r(r(x))=r(x)\). This shows \(\gamma^{\prime}\circ\gamma\) satisfies condition (2) and so \(\gamma^{\prime}\circ\gamma\in\operatorname{Stab}(\pi:A\to Q)\).
Define \(\Psi:\operatorname{Stab}(\pi:A\to Q)\to Z^{1}(Q,A^{\alpha,\tau},*)\) by \(\Psi(\gamma):=d_{\gamma}\) where \(\phi\circ\gamma(r(x))=d_{\gamma}(\pi(x))\). It is easy to see that \(\Psi\) is injective. To show surjectivity, take a derivation \(d\) and define \(\gamma_{d}\) by \(\phi\circ\gamma_{d}(x):=\phi(x)+_{r(x)}d(\pi(x))\). The proof of Theorem 3.31 shows \(\gamma_{d}\) is a stabilizing automorphism. We show it
satisfies condition (2) above. Since \(d\) is a derivation, we can write \(d(x)=\begin{bmatrix}l(x)\\ \beta(x)\end{bmatrix}/\Delta_{\alpha\alpha}\) for some function \(\beta:Q\to A\). We calculate
\[\begin{bmatrix}x\\ \gamma_{d}(x)\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}x\\ \phi^{-1}\left(\phi(x)+_{r(x)}d(\pi(x))\right)\end{bmatrix}/\Delta_{\alpha\alpha }=\begin{bmatrix}x\\ \phi^{-1}\left(\begin{bmatrix}m\big{(}r(x),r(x),r(x)\big{)}\\ m\big{(}x,r(x),\beta(\pi(x))\big{)}\end{bmatrix}/\Delta_{\alpha\alpha}\right) \end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}x\\ m(x,r(x),\beta(\pi(x)))\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=m\left(\begin{bmatrix}x\\ x\end{bmatrix},\begin{bmatrix}r(x)\\ r(x)\end{bmatrix},\begin{bmatrix}r(x)\\ \beta(\pi(x))\end{bmatrix}\right)/\Delta_{\alpha\alpha}\] \[=\begin{bmatrix}r(x)\\ \beta(\pi(x))\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}r(x)\\ \gamma_{d}(r(x))\end{bmatrix}/\Delta_{\alpha\alpha}\]
This implies \(m(\gamma_{d}(r(x)),r(x),x)=\gamma_{d}(x)\) and so \(\gamma_{d}\in\operatorname{Stab}(\pi:A\to Q)\). We see that \(\Psi(\gamma_{d})=d\) since \(\phi\circ\gamma_{d}(r(x))=\phi(r(x))+_{r(x)}d(\pi(r(x)))=d(\pi(x))\).
To finish the theorem, we show \(\Psi\) is a homomorphism. This follows from Eq. (24) since
\[\phi\circ(\gamma^{\prime}\circ\gamma)(r(x))=d_{\gamma^{\prime}}(\pi(x))+_{r( x)}d_{\gamma}(\pi(x))=(d_{\gamma^{\prime}}+d_{\gamma})(\pi(x))\]
implies \(\Psi(\gamma^{\prime}\circ\gamma)=d_{\gamma^{\prime}}+d_{\gamma}\).
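For orientation, it may help to keep in mind the classical special case of a group extension \(1\to A\to E\xrightarrow{\ \pi\ }Q\to 1\) with abelian kernel \(A\); this aside is only an informal translation of the theorem and is not used later. There, the stabilizing automorphisms of the extension are exactly the maps
\[
\gamma_{d}(x)=d(\pi(x))\,x\qquad(x\in E),
\]
where \(d:Q\to A\) is a crossed homomorphism, \(d(pq)=d(p)\cdot\big(p\cdot d(q)\big)\), with \(p\cdot a\) the conjugation action of \(Q\) on \(A\). Composing such automorphisms corresponds to adding the associated crossed homomorphisms, which mirrors the group structure appearing in Theorem 3.34.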
Suppose \(\pi:A\to Q\) with \(\alpha=\ker\pi\) is an extension realizing affine datum. By Theorem 3.19 we have the isomorphism \(\phi:A\to A_{T}(Q,A^{\alpha,\tau},*)\) given by \(a\longmapsto\begin{bmatrix}r(a)\\ a\end{bmatrix}/\Delta_{\alpha\alpha}\) for any \(\alpha\)-trace \(r:A\to A\). For the semidirect product we have the homomorphisms
\[\rho:A(\alpha)/\Delta_{\alpha\alpha}\to Q, \left[\begin{matrix}a\\ b\end{matrix}\right]/\Delta_{\alpha\alpha}\longmapsto\pi(a)\] \[\kappa:A(\alpha)/\Delta_{\alpha\alpha}\to A(\alpha)/\Delta_{ \alpha 1}, \left[\begin{matrix}a\\ b\end{matrix}\right]/\Delta_{\alpha\alpha}\longmapsto\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha 1}\]
where \(\kappa\) is the canonical homomorphism. We then take the solution in the product diagram
\[\sigma:A(\alpha)/\Delta_{\alpha\alpha}\to A(\alpha)/\Delta_{\alpha 1}\times Q,\qquad\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha\alpha}\longmapsto\left\langle\begin{bmatrix}a\\ b\end{bmatrix}/\Delta_{\alpha 1},\pi(a)\right\rangle.\]
It is easy to see that the same definition also induces a homomorphism
\[\sigma:A_{T}(Q,A^{\alpha,\tau},*)\to A(\alpha)/\Delta_{\alpha 1}\otimes^{ \kappa\circ T}Q\]
with the following properties:
1. \(\ker\kappa\circ\ker\rho=1\) and \(\ker\sigma=\ker\kappa\wedge\ker\rho\) where \(\ker\kappa=\Delta_{\alpha 1}/\Delta_{\alpha\alpha}\), \(\ker\rho=\hat{\alpha}/\Delta_{\alpha\alpha}\);
2. if \(\mathcal{V}(A)\) has a weak-difference term and \(\alpha\) is left-central, then \(0=\phi([\alpha,1])=\ker\sigma\) and \(\sigma\) is a subdirect embedding;
3. if \(\mathcal{V}(A)\) has a difference term and \(\alpha\) is central, then \(\sigma\) is surjective.
In this way Proposition 2.4 follows from Theorem 3.19 if we assume \(\mathcal{V}\) has a difference term. In the following, we recover what appears to be a folklore result.
**Lemma 3.35**.: Let \(\mathcal{V}\) be a variety with a difference term and \(A\in\mathcal{V}\). If \(\alpha\in\operatorname{Con}A\) is central, then
\[A(\alpha)/\Delta_{\alpha\alpha}\approx A(\alpha)/\Delta_{\alpha 1}\times A/\alpha.\]
Proof.: By Corollary 2.6 and Theorem 3.19, there is a \(2\)-cocycle \(T\) and action \(*\) such that \(A_{T}(Q,A^{\alpha,\tau},*)\approx A\approx A(\alpha)/\Delta_{\alpha 1} \otimes^{\kappa\circ T}A/\alpha\). If we take the trivial \(2\)-cocycle \(T=0\), then \(A(\alpha)/\Delta_{\alpha\alpha}\approx A_{0}(Q,A^{\alpha,\tau},*)\approx A( \alpha)/\Delta_{\alpha 1}\times A/\alpha\)
**Definition 3.36**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. The action is _trivial_ if the following holds: for any \(f\in\tau\) with \(\operatorname{ar}f>1\) and \(I\subseteq\{1,\ldots,\operatorname{ar}f\}\), for any \(\bar{c},\bar{c}^{\prime}\in A^{\operatorname{ar}f}\) such that \(c_{i}=c_{i}^{\prime}\) for \(i\in I\) and \(\rho(\delta(c_{i}))=q_{i},\rho(\delta(c_{i}^{\prime}))=q_{i}^{\prime}\), and for any \(\alpha\)-trace \(r:A\to A\) with associated lifting \(l\), then
\[\sum_{i\in I}a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix}r(c_{i})\\ c_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right)=\sum_{i\in I}a(f,i)\left(q_{1}^{\prime},\ldots,q_{i-1}^{\prime},\begin{bmatrix}r(c_{i})\\ c_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1}^{\prime},\ldots,q_{n}^{\prime}\right)+_{u^{\prime}}\delta(u)\]
where \(u=l\left(f^{Q}(q_{1},\ldots,q_{n})\right)\) and \(u^{\prime}=l\left(f^{Q}(q_{1}^{\prime},\ldots,q_{n}^{\prime})\right)\).
For an algebra \(A\), a congruence \(\alpha\in\operatorname{Con}A\) is _right-central_ if \([1,\alpha]=0\) and _left-central_ if \([\alpha,1]=0\).
**Proposition 3.37**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum contained in some variety with a difference term. The action is trivial if and only if every extension in a variety with a difference term realizing the datum is a central extension.
Proof.: Assume every extension in a variety with a difference term realizing the datum is a central extension. By assumption, there is at least one such extension. Take any such extension \(\pi:A\to Q\) with \(A\in\mathcal{V}\), where \(\mathcal{V}\) is a variety with a difference term. By Theorem 3.21, we have \(A\approx A_{T}(Q,A^{\alpha,\tau},*)\) for some compatible \(2\)-cocycle. Fix an \(\alpha\)-trace \(r:A\to A\) with associated lifting \(l\). Take \(f\in\tau\) with \(n=\operatorname{ar}f>1\) and choose \(I\subseteq\{1,\ldots,n\}\). Choose \(\bar{c},\bar{c}^{\prime}\in A^{n}\) such that \(c_{i}=c_{i}^{\prime}\) for \(i\in I\). Let \(\pi(c_{i})=q_{i}\) and \(\pi(c_{i}^{\prime})=q_{i}^{\prime}\) for \(i=1,\ldots,n\). Then by realization we have for \(i\in I\)
\[a(f,i)\left(q_{1},\ldots,q_{i-1},\begin{bmatrix}r(c_{i})\\ c_{i}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1},\ldots,q_{n}\right) =\begin{bmatrix}u\\ a_{i}\end{bmatrix}/\Delta_{\alpha\alpha}\] \[a(f,i)\left(q_{1}^{\prime},\ldots,q_{i-1}^{\prime},\begin{bmatrix} r(c_{i}^{\prime})\\ c_{i}^{\prime}\end{bmatrix}/\Delta_{\alpha\alpha},q_{i+1}^{\prime},\ldots,q_{n }^{\prime}\right) =\begin{bmatrix}u^{\prime}\\ b_{i}\end{bmatrix}/\Delta_{\alpha\alpha}.\]
where \(u=f(r(c_{1}),\ldots,r(c_{n}))\), \(u^{\prime}=f(r(c_{1}^{\prime}),\ldots,r(c_{n}^{\prime}))\) and \(a_{i}=f(r(c_{1}),\ldots,r(c_{i-1}),c_{i},r(c_{i+1}),\ldots,r(c_{n}))\), \(b_{i}=f(r(c_{1}^{\prime}),\ldots,r(c_{i-1}^{\prime}),c_{i}^{\prime},r(c_{i+1}^{\prime}),\ldots,r(c_{n}^{\prime}))\). Note the diagonal is a single \(\Delta_{\alpha 1}\)-class; that is, \(\begin{bmatrix}x\\ x\end{bmatrix}\Delta_{\alpha 1}\begin{bmatrix}y\\ y\end{bmatrix}\) for all \(x,y\in A\). Denote the \(\Delta_{\alpha 1}\)-class of the diagonal by \(\hat{\delta}\). Then by passing to the quotient \(A(\alpha)/\Delta_{\alpha 1}\) we have
\[\begin{bmatrix}u\\ \sum_{i\in I}^{u}a_{i}\end{bmatrix}/\Delta_{\alpha 1} =\sum_{i\in I}^{u}\begin{bmatrix}u\\ a_{i}\end{bmatrix}/\Delta_{\alpha\alpha}\] \[=\sum_{i\in I}^{u}f\left(\delta(c_{1}),\ldots,\delta(c_{i-1}), \begin{bmatrix}r(c_{i})\\ c_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(c_{i+1}),\ldots,\delta(c_{n}) \right)/(\Delta_{\alpha 1}/\Delta_{\alpha\alpha})\] \[=\sum_{i\in I}^{u}f\left(\hat{\delta},\ldots,\hat{\delta}, \begin{bmatrix}r(c_{i})\\ c_{i}\end{bmatrix},\hat{\delta},\ldots,\hat{\delta}\right)/\Delta_{\alpha 1}\] \[=\sum_{i\in I}^{u}f\left(\delta(c_{1}^{\prime}),\ldots,\delta(c_{i-1 }^{\prime}),\begin{bmatrix}r(c_{i}^{\prime})\\ c_{i}^{\prime}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(c_{i+1}^{\prime}), \ldots,\delta(c_{n}^{\prime})\right)/(\Delta_{\alpha 1}/\Delta_{\alpha\alpha})\] \[=\sum_{i\in I}^{u^{\prime}}\begin{bmatrix}u^{\prime}\\ b_{i}\end{bmatrix}/\Delta_{\alpha 1}\] \[=\begin{bmatrix}u^{\prime}\\ \sum_{i\in I}^{u^{\prime}}b_{i}\end{bmatrix}/\Delta_{\alpha 1}\]
Since \(\alpha\) is central, by Lemma 2.1(1) we conclude \(\sum_{i\in I}^{u}a_{i}=\left(\sum_{i\in I}^{u^{\prime}}b_{i}\right)+_{u^{\prime} }u\). Write \(I=\{i_{1},\ldots,i_{k}\}\). Using the difference term identity \(m(u^{\prime},u^{\prime},u)=u\) we see that
\[\sum_{i\in I}^{u}\begin{bmatrix}u\\ a_{i}\end{bmatrix}/\Delta_{\alpha\alpha}=\begin{bmatrix}u\\ a_{i_{1}}+_{u}\cdots+_{u}a_{i_{k}}\end{bmatrix}/\Delta_{\alpha\alpha} =\begin{bmatrix}u^{\prime}+_{u^{\prime}}u\\ (b_{i_{1}}+_{u^{\prime}}\cdots+_{u^{\prime}}b_{i_{k}})+_{u^{\prime}}u\end{bmatrix}/ \Delta_{\alpha\alpha}\] \[=\begin{bmatrix}u^{\prime}\\ (b_{i_{1}}+_{u^{\prime}}\cdots+_{u^{\prime}}b_{i_{k}})\end{bmatrix}/\Delta_{ \alpha\alpha}+_{u^{\prime}}\delta(u)\] \[=\sum_{i\in I}^{u^{\prime}}\begin{bmatrix}u^{\prime}\\ b_{i}\end{bmatrix}/\Delta_{\alpha\alpha}+_{u^{\prime}}\delta(u)\]
Now assume the action is trivial. Consider an extension \(A\approx A_{T}(Q,A^{\alpha,\tau},*)\) realizing the datum in a variety with a difference term. Fix an \(\alpha\)-trace \(r\) and associated lifting \(l\). We directly verify the term condition for centrality in any extension realizing the datum. Take \(f\in\tau\) with \(1\leq k<n=\operatorname{ar}f\) and \(\bar{a},\bar{b}\in A^{k}\), \(\bar{c},\bar{d}\in A^{n-k}\) such that \((c_{i},d_{i})\in\alpha\) for \(i=1,\ldots,n-k\). Assume
\[F_{f}\left(\begin{bmatrix}r(\bar{a})\\ \bar{a}\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}r(\bar{c})\\ \bar{c}\end{bmatrix}/\Delta_{\alpha\alpha}\right)=F_{f}\left(\begin{bmatrix}r (\bar{a})\\ \bar{a}\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}r(\bar{d})\\ \bar{d}\end{bmatrix}/\Delta_{\alpha\alpha}\right). \tag{25}\]
By realization, we can combine the terms in the definition of the operation \(F_{f}\) to rewrite Eq.(25) as
\[f\left(\begin{bmatrix}r(\bar{a})\\ \bar{a}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(\bar{c})\right)+_{u}f\left( \delta(r(\bar{a})),\begin{bmatrix}r(\bar{c})\\ \bar{c}\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{u}T_{f}(\pi(\bar{a}),\pi( \bar{c}))\] \[=f\left(\begin{bmatrix}r(\bar{a})\\ \bar{a}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(\bar{d})\right)+_{u}f\left( \delta(r(\bar{a})),\begin{bmatrix}r(\bar{d})\\ \bar{d}\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{u}T_{f}(\pi(\bar{a}),\pi( \bar{d}))\]
Then \((c_{i},d_{i})\in\alpha\Rightarrow\delta(c_{i})=\delta(d_{i})\) and \(\pi(c_{i})=\pi(d_{i})\). This implies that in the above expression the first terms on the left-side and right-side and the \(2\)-cocycle terms are equal. After canceling we conclude
\[f\left(\delta(r(\bar{a})),\begin{bmatrix}r(\bar{c})\\ \bar{c}\end{bmatrix}/\Delta_{\alpha\alpha}\right)=f\left(\delta(r(\bar{a})), \begin{bmatrix}r(\bar{d})\\ \bar{d}\end{bmatrix}/\Delta_{\alpha\alpha}\right). \tag{26}\]
We can write \(u=l(f(\pi(\bar{a}),\pi(\bar{c})))=l(f(\pi(\bar{a}),\pi(\bar{d})))\) and \(u^{\prime}=l(f(\pi(\bar{b}),\pi(\bar{c})))=l(f(\pi(\bar{b}),\pi(\bar{d})))\). Then we can use realization to re-expand Eq.(26) into action terms and apply triviality to yield
\[f\left(\delta(r(\bar{b})),\begin{bmatrix}r(\bar{c})\\ \bar{c}\end{bmatrix}/\Delta_{\alpha\alpha}\right)\] \[=\sum_{i=1}^{n-k}a(f,i)\left(\pi(\bar{b}),\pi(c_{1}),\ldots,\pi( c_{i-1}),\begin{bmatrix}r(c_{i})\\ c_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\pi(c_{i+1}),\ldots,\pi(c_{n-k})\right)\] \[=\sum_{i=1}^{n-k}a(f,i)\left(\pi(\bar{a}),\pi(c_{1}),\ldots,\pi( c_{i-1}),\begin{bmatrix}r(c_{i})\\ c_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\pi(c_{i+1}),\ldots,\pi(c_{n-k}) \right)+_{u}\delta(u^{\prime})\] \[=f\left(\delta(r(\bar{a})),\begin{bmatrix}r(\bar{c})\\ \bar{c}\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{u}\delta(u^{\prime})\] \[=f\left(\delta(r(\bar{a})),\begin{bmatrix}r(\bar{d})\\ \bar{d}\end{bmatrix}/\Delta_{\alpha\alpha}\right)+_{u}\delta(u^{\prime})\] \[=\sum_{i=1}^{n-k}a(f,i)\left(\pi(\bar{a}),\pi(d_{1}),\ldots,\pi( d_{i-1}),\begin{bmatrix}r(d_{i})\\ d_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\pi(d_{i+1}),\ldots,\pi(d_{n-k}) \right)+_{u}\delta(u^{\prime})\]
\[=\sum_{i=1}^{n-k}a(f,i)\left(\pi(\bar{b}),\pi(d_{1}),\ldots,\pi(d_{i-1}), \begin{bmatrix}r(d_{i})\\ d_{i}\end{bmatrix}/\Delta_{\alpha\alpha},\pi(d_{i+1}),\ldots,\pi(d_{n-k})\right)\] \[=f\left(\delta(r(\bar{b})),\begin{bmatrix}r(\bar{d})\\ \bar{d}\end{bmatrix}/\Delta_{\alpha\alpha}\right).\]
Again, \(\delta(c_{i})=\delta(d_{i})\) and \(\pi(c_{i})=\pi(d_{i})\) imply
\[f\left(\begin{bmatrix}r(\bar{b})\\ \bar{b}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(\bar{c})\right)=f\left( \begin{bmatrix}r(\bar{b})\\ \bar{b}\end{bmatrix}/\Delta_{\alpha\alpha},\delta(\bar{d})\right)\qquad\text{ and}\qquad T_{f}(\pi(\bar{b}),\pi(\bar{c}))=T_{f}(\pi(\bar{b}),\pi(\bar{d})).\]
Putting the last three systems of equations together we conclude that
\[F_{f}\left(\begin{bmatrix}r(\bar{b})\\ \bar{b}\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}r(\bar{c})\\ \bar{c}\end{bmatrix}/\Delta_{\alpha\alpha}\right)=F_{f}\left(\begin{bmatrix} r(\bar{b})\\ \bar{b}\end{bmatrix}/\Delta_{\alpha\alpha},\begin{bmatrix}r(\bar{d})\\ \bar{d}\end{bmatrix}/\Delta_{\alpha\alpha}\right).\]
which shows \([1,\alpha]=0\) in \(A\approx A_{T}(Q,A^{\alpha,\tau},*)\).
**Remark 3.38**.: The proof of necessity in Proposition 3.37 does not require the hypothesis of a Mal'cev condition in the variety generated by any extension of the datum; that is, if the action is trivial, then the kernel in every extension realizing the datum is right-central.
**Remark 3.39**.: Suppose \((Q,A^{\alpha,\tau},*)\) is affine datum with trivial action realized by \(A\). Let \(\kappa:A(\alpha)/\Delta_{\alpha\alpha}\to A(\alpha)/\Delta_{\alpha 1}\) be the canonical homomorphism. Then
\[\kappa\circ a(f,i)(y_{1},\ldots,y_{i-1},x,y_{i+1},\ldots,y_{n})\]
depends only on the i-th coordinate.
In the case of affine datum with trivial action, the second-cohomology \(H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) is an abelian group of equivalence classes of right-central extensions which realize the datum; however, to guarantee that we recover every central extension realizing the datum we must rely on Proposition 3.37 together with Theorem 3.29 in the case of varieties with a difference term.
**Theorem 3.40**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum with trivial action and \(\mathcal{U}\) a variety with a difference term containing the datum. The abelian group \(H^{2}_{\mathcal{U}}(Q,A^{\alpha,\tau},*)\) is in bijective correspondence with the set of equivalence classes of central extensions in \(\mathcal{U}\) realizing the datum.
We offer a tentative definition for a \(1^{\text{st}}\)-cohomology group associated to affine datum. The abelian group of derivations is isomorphic to the group of stabilizing automorphisms of an extension. In the case of groups, the principal derivations correspond to the stabilizing automorphisms of the semidirect product which act by conjugation. The approach to principal derivations for affine datum follows in this line and finds application in Wires [14] for a low-dimensional Hochschild-Serre sequence associated to a general extension with an additional affine action.
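To make the group-theoretic analogy concrete (an informal illustration only): for a group \(Q\) acting on an abelian group \(A\), a derivation is a crossed homomorphism \(d(pq)=d(p)+p\cdot d(q)\), and the principal ones are those of the form
\[
d_{a}(p)=p\cdot a-a\qquad(a\in A),
\]
which are exactly the derivations induced by conjugating the semidirect product \(A\rtimes Q\) by elements of \(A\); the quotient of all derivations by the principal ones is the classical first cohomology group \(H^{1}(Q,A)\).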
For any algebra \(A\) and \(\alpha\in\operatorname{Con}A\), unary functions \(f,h:A\to A\) are \(\alpha\)-twins if there is a term \(t(x,\bar{y})\) and \(\bar{c}\;\alpha\;\bar{d}\) such that \(f(x)=t(x,\bar{c})\) and \(h(x)=t(x,\bar{d})\). The set of \(\alpha\)-twins of the identity is denoted by \(\operatorname{Tw}_{\alpha}A\); that is, \(\gamma\in\operatorname{Tw}_{\alpha}A\) if there is a term \(t(x,\bar{y})\) and \(\bar{c}\;\alpha\;\bar{d}\) such that \(\gamma(x)=t(x,\bar{c})\) and \(x=t(x,\bar{d})\). We can restrict to a subset
\[\operatorname{Tw}_{\alpha,F}A=\{\gamma\in\operatorname{Tw}_{\alpha}A:(\exists x \in A),\gamma(x)=x\}\]
of those \(\alpha\)-twins of the identity which have a fixed-point. In general, \(\operatorname{Tw}_{\alpha}A\) is closed under composition and \(\operatorname{Tw}A\) is closed under conjugation by automorphisms of \(A\), but \(\operatorname{Tw}_{\alpha,F}A\) is neither. Given affine datum \((Q,A^{\alpha,\tau},*)\) we consider the set of _principal stabilizing automorphisms_
\[\operatorname{PStab}(Q,A^{\alpha,\tau},*)=\operatorname{Tw}_{\bar{\alpha}/ \Delta_{\alpha\alpha},F}A(\alpha)/\Delta_{\alpha\alpha}\cap\operatorname{Stab} (\pi:A(\alpha)/\Delta_{\alpha\alpha}\to Q).\]
**Definition 3.41**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. A map \(d:Q\to A(\alpha)/\Delta_{\alpha\alpha}\) is a _principal derivation_ if the map \(x\mapsto d(\pi(x))+_{l(x)}x\) belongs to \(\operatorname{PStab}(Q,A^{\alpha,\tau},*)\) for any lifting \(l\) of \(\pi:A(\alpha)/\Delta_{\alpha\alpha}\to Q\).
If we simplify notation and write the extension of the datum \(A\approx A(\alpha)/\Delta_{\alpha\alpha}\stackrel{{\pi}}{{\to}}Q\) isomorphic to the semidirect product witnessed by \(\phi\) from Theorem 3.21, then a principal derivation \(d\) takes the form
\[\phi(\gamma(x))=d(\pi(x))+_{r(x)}\begin{bmatrix}r(x)\\ x\end{bmatrix}/\Delta_{\alpha\alpha}\]
for some \(\gamma\in\operatorname{PStab}(Q,A^{\alpha,\tau},*)\) and any \(\alpha\)-trace \(r:A\to A\); that is, the principal derivations are precisely those derivations which correspond, under the isomorphism of Theorem 3.34, to principal stabilizing automorphisms. Denote by \(\operatorname{PDer}(Q,A^{\alpha,\tau},*)\) the subgroup of \(Z^{1}(Q,A^{\alpha,\tau},*)\) generated by the principal derivations.
**Definition 3.42**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum. The \(1^{\operatorname{st}}\)-cohomology of the datum is defined as the quotient group
\[H^{1}(Q,A^{\alpha,\tau},*):=Z^{1}(Q,A^{\alpha,\tau},*)/\operatorname{PDer}(Q, A^{\alpha,\tau},*).\]
**Lemma 3.43**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum with trivial action. Then \(H^{1}(Q,A^{\alpha,\tau},*)\approx\operatorname{Stab}(\pi:A(\alpha)/\Delta_{ \alpha\alpha}\to Q)\).
Proof.: By Remark 3.38, the kernel in the semidirect product realizing the datum is right-central. For \(\gamma\in\operatorname{PStab}(Q,A^{\alpha,\tau},*)\), we have \(\gamma(x)=t(x,\bar{c})\), \(x=t(x,\bar{d})\) for some term with \(\bar{c}\;\alpha\;\bar{d}\) and there exists \(a\in A\) such that \(a=\gamma(a)\). The matrix
\[\begin{bmatrix}a&\gamma(x)\\ a&x\end{bmatrix}=\begin{bmatrix}\gamma(a)&\gamma(x)\\ a&x\end{bmatrix}=\begin{bmatrix}t(a,\bar{c})&t(x,\bar{c})\\ t(a,\bar{d})&t(x,\bar{d})\end{bmatrix}\in M(\alpha,1)\]
implies \((\gamma(x),x)\in[1,\alpha]=0\); thus, the subgroup generated by the principal stabilizing automorphisms is trivial. Then Theorem 3.34 yields \(H^{1}(Q,A^{\alpha,\tau},*)\approx\operatorname{Stab}(\pi:A(\alpha)/\Delta_{\alpha\alpha}\to Q)\).
According to Proposition 3.37, the hypothesis of the previous lemma holds for central extensions in varieties with a difference term.
**Definition 3.44**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum with trivial action and \(Q\) an abelian algebra. For a variety \(\mathcal{V}\) in the same signature as the datum, let \(\operatorname{Ext}_{\mathcal{V}}(Q,A^{\alpha,\tau},*)\) denote the set of equivalence classes of \(\mathcal{V}\)-compatible \(2\)-cocycles which represent extensions realizing the datum that are abelian algebras.
The definition is well-defined since equivalence of extensions of datum is finer than isomorphism, and isomorphism preserves the abelian property of an algebra. The following is a far-reaching generalization of the classical observation from group theory where abelian extensions are characterized by symmetric \(2\)-cocycles; in the case of more general varieties with a weak-difference term, the abelian extensions are characterized by the \(2\)-cocycle identities (see Definition 3.14(C2)) corresponding to the axiomatization of the abelian subvariety.
**Corollary 3.45**.: Let \((Q,A^{\alpha,\tau},*)\) be affine datum with trivial action and \(Q\) an abelian algebra. Let \(\mathcal{V}\) be a variety with a weak-difference term in the same signature as the datum. Then either \(\operatorname{Ext}_{\mathcal{V}}(Q,A^{\alpha,\tau},*)\) is empty or \(\operatorname{Ext}_{\mathcal{V}}(Q,A^{\alpha,\tau},*)\leq H^{2}_{\mathcal{V}}(Q,A^{\alpha,\tau},*)\).
Proof.: Since \(\mathcal{V}\) has a weak-difference term, the abelian algebras of \(\mathcal{V}\) form a subvariety \(\mathcal{A}\). If \(\mathcal{A}\) does not contain the datum, then \(\operatorname{Ext}_{\mathcal{V}}(Q,A^{\alpha,\tau},*)\) is empty; otherwise, \(\mathcal{A}\in\mathcal{L}(Q,A^{\alpha,\tau},*)\) and the embedding follows from Proposition 3.30 after noting \(\operatorname{Ext}_{\mathcal{V}}(Q,A^{\alpha,\tau},*)=H^{2}_{\mathcal{A}}(Q,A ^{\alpha,\tau},*)\).
**Acknowledgments 3.46**.: The research in this manuscript was supported by NSF China Grant #12071374. |
2309.10320 | Inverse Formulae for $q$-analogues of Bipartite Distance Matrix | We consider two distinct $q$-analogues of the bipartite distance matrix,
namely the $q$-bipartite distance matrix and the exponential distance matrix.
We provide formulae of the inverse for these matrices, which extend the
existing results for the bipartite distance matrix. These investigations lead
us to introduce a $q$-analogue version of the bipartite Laplacian matrix. | Rakesh Jana | 2023-09-19T05:07:39Z | http://arxiv.org/abs/2309.10320v1 | # Inverse Formulae for \(q\)-analogues of Bipartite Distance Matrix
###### Abstract
We consider two distinct \(q\)-analogues of the bipartite distance matrix, namely the \(q\)-bipartite distance matrix and the exponential distance matrix. We provide formulae of the inverse for these matrices, which extend the existing results for the bipartite distance matrix. These investigations lead us to introduce a \(q\)-analogue version of the bipartite Laplacian matrix.
keywords: Tree, Distance Matrix, Bipartite Distance Matrix. MSC: [2020] 05C50, 15A15, 15A18.
## 1 Introduction
The distance matrix finds numerous applications across various fields, including chemistry, molecular biology, and telecommunications. In 1971, Graham and Pollack [11] presented a remarkable result that the determinant of the distance matrix of a tree depends solely on the number of vertices in the tree. Specifically, for a tree \(T\) with \(n\) vertices, the determinant of its distance matrix is given by \(\det D(T)=(-1)^{n-1}2^{n-2}(n-1)\). This groundbreaking discovery sparked significant interest among researchers. In the same paper [11], Graham and Pollack established a connection between the loop addressing problem in data communication systems and the count of negative eigenvalues of the distance matrix. Subsequently, in 1977, Graham, Hoffman, and Hosoya [9] demonstrated that the distance matrix \(D(G)\) of a graph \(G\) depends solely on its constituent blocks, regardless of how these blocks are connected. In 1978, Graham and Lovasz [10] provided a formula to compute the inverse of the distance matrix for trees. Bapat [1] and Bapat et al. [3] extended many results related to the distance matrix of trees to weighted trees. In 2006, Bapat, Lal, and Pati [4] introduced two types of \(q\)-analogue versions of the distance matrix: the \(q\)-distance matrix and the exponential distance matrix. These \(q\)-analogue versions generated considerable interest and were explored by various researchers (see, for example, [4; 17; 5; 13]). Let us revisit the definitions of the \(q\)-distance matrix and exponential distance matrix for a graph \(G\).
Let \(q\) be an indeterminate. For a positive integer \(k\), we define \(\{\!\!\{k\}\!\!\}:=1+q+q^{2}+\ldots+q^{k-1}\). Let us set \(\{\!\!\{0\}\!\!\}:=0\). For a connected graph \(G\) on \(n\) vertices, the \(q\)-distance matrix of \(G\) is the \(n\times n\) matrix whose \((i,j)\)th entry is \(\{\!\!\{\mathtt{dist}(i,j)\}\!\!\}\), and the exponential distance matrix of \(G\) is the \(n\times n\) matrix whose \((i,j)\)th entry is \(q^{\mathtt{dist}(i,j)}\).
For a bipartite graph, the bipartite adjacency matrix offers a compact way to store adjacency information efficiently (see [7]). Consider a labelled, connected, bipartite graph \(G\) with the vertex bipartition \((L=\{l_{1},\ldots,l_{m}\},R=\{r_{1},\ldots,r_{n}\})\). The _bipartite adjacency matrix_\(\mathbb{A}(G)\) of \(G\) is an \(m\times n\) matrix, with rows indexed by \(l_{1},\ldots,l_{m}\) and columns indexed by \(r_{1},\ldots,r_{n}\). The \((i,j)\)th entry of \(\mathbb{A}(G)\) is \(1\) if \(l_{i}\sim r_{j}\), and \(0\) otherwise. The _bipartite distance matrix_\(\mathbb{B}(G)\) of \(G\) is an \(m\times n\) matrix, with rows indexed by \(l_{1},\ldots,l_{m}\) and columns indexed by \(r_{1},\ldots,r_{n}\). The \((i,j)\)th entry of \(\mathbb{B}(G)\) is \(\mathtt{dist}(l_{i},r_{j})\).
Significant research has been conducted on the bipartite adjacency matrix of bipartite graphs with unique perfect matchings (see [8, 15, 14, 16, 19] and references therein). The bipartite distance matrix for nonsingular trees, i.e., trees with perfect matchings, was studied in [2]. It was shown that unlike the determinant of the distance matrix of a tree, which is independent of the tree's structure, the determinant of the bipartite distance matrix of a nonsingular tree depends on its structure. A combinatorial expression for the determinant of the bipartite distance matrix of a nonsingular tree is presented in [2]. Remarkably, the bipartite distance matrix of a nonsingular tree is always invertible, and its inverse is provided in [6]. During the investigation of the inverse of the bipartite distance matrix, the authors in [6] uncovered a nontrivial generalization of the usual Laplacian matrix for a tree, which they have termed the _bipartite Laplacian matrix_. (We provide the definition of the bipartite Laplacian matrix in Section 2.) Unlike the usual Laplacian matrix, the bipartite Laplacian matrix is not necessarily symmetric, but it possesses many elementary properties akin to the usual Laplacian matrix, see Theorem 2.
In [12], Jana studied two distinct \(q\)-analogue versions of the bipartite distance matrix, namely, the \(q\)-bipartite distance matrix and the exponential bipartite distance matrix. Let's revisit the definitions of these matrices for a bipartite graph \(G\).
Let \(G\) be a connected, labelled, bipartite graph with the vertex bipartition \((L=\{l_{1},\ldots,l_{m}\},R=\{r_{1},\ldots,r_{n}\})\). The \(q\)_-bipartite distance matrix_\(\mathfrak{B}(G)\) (respectively, _exponential bipartite distance matrix_\(\mathbb{E}(G)\)) of \(G\) is an \(m\times n\) matrix, with rows indexed by \(l_{1},\ldots,l_{m}\) and columns indexed by \(r_{1},\ldots,r_{n}\). The \((i,j)\)th entry of \(\mathfrak{B}(G)\) (respectively, \(\mathbb{E}(G)\)) is \(\{\!\!\{\mathtt{dist}(l_{i},r_{j})\}\!\!\}\) (respectively, \(q^{\mathtt{dist}(l_{i},r_{j})}\)).
If \(q=1\) then \(\mathfrak{B}(G)=\mathbb{B}(G)\) and \(\mathbb{E}(G)=\mathds{1}\mathds{1}^{t}\). Therefore, the \(q\)-bipartite distance matrix is a generalization of the bipartite distance matrix.
Let \(T\) be a nonsingular tree on \(2p\) vertices. It is shown in [12] that \(\det\mathbb{E}(T)=q^{p}(1-q^{2})^{p-1}\) and \(\det\mathfrak{B}(T)\) is divisible by \(q^{p-1}(1+q)^{p-1}\). Define \(\mathtt{bd}_{\mathfrak{q}}(T)\), the \(q\)-bipartite distance index of \(T\), as \(\mathtt{bd}_{\mathfrak{q}}(T):=\frac{\det\mathfrak{B}(T)}{q^{p-1}(1+q)^{p-1}}.\) Therefore, \(\det\mathfrak{B}(T)=q^{p-1}(1+q)^{p-1}\mathtt{bd}_{\mathfrak{q}}(T)\). The \(q\)-bipartite distance index of \(T\) follows an inclusion-exclusion type principle, enabling the recursive calculation of the \(q\)-bipartite distance index of any nonsingular tree. For more details, please refer to [12, Theorem 20]. Additionally, a standalone combinatorial formula for the \(q\)-bipartite distance index of a tree \(T\) is provided in [12, Theorem 23], by using the \(f\)-alternating sum technique (see [2, Section 5]).
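As a quick illustration of these formulae, the following sympy sketch (our own, not part of [12]; the path, its labelling and the use of sympy are our choices) builds \(\mathfrak{B}(T)\) and \(\mathbb{E}(T)\) for the path \(l_{1}\)-\(r_{1}\)-\(l_{2}\)-\(r_{2}\), a nonsingular tree with \(p=2\), and checks the determinant identities.

```python
# Sanity check of the determinant formulae on the path l1 - r1 - l2 - r2
# (matching edges [l1,r1] and [l2,r2], so p = 2).  sympy is assumed available.
import sympy as sp

q = sp.symbols('q')
qint = lambda k: sum(q**i for i in range(k))   # {{k}} = 1 + q + ... + q^(k-1)

# distances dist(l_i, r_j) in the path l1 - r1 - l2 - r2
dist = [[1, 3],
        [1, 1]]
p = 2

B = sp.Matrix(2, 2, lambda i, j: qint(dist[i][j]))   # q-bipartite distance matrix
E = sp.Matrix(2, 2, lambda i, j: q**dist[i][j])      # exponential bipartite distance matrix

assert sp.expand(E.det() - q**p * (1 - q**2)**(p - 1)) == 0
bdq = sp.cancel(B.det() / (q**(p - 1) * (1 + q)**(p - 1)))
print(bdq)   # prints -1: the q-bipartite distance index of this path
```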
It is evident that for a nonsingular tree \(T\), the exponential bipartite distance matrix of \(T\) is invertible when \(q\neq 0,\pm 1\). Moreover, when \(q\neq 0,-1\) and \(\mathtt{bd}_{\mathfrak{q}}(T)\neq 0\), the \(q\)-bipartite distance matrix of \(T\) is also invertible. In this article, our main focus is to provide the inverse formulas for both the exponential bipartite distance matrix and the \(q\)-bipartite distance matrix of a nonsingular tree whenever they exist.
We organize the article as follows. In Section 2, we discuss how to obtain the inverse of the bipartite distance matrix of a nonsingular tree using the bipartite Laplacian matrix. In Section 3, we define the \(q\)-analogous version of the bipartite Laplacian matrix and explore its behaviour when extending a nonsingular tree by attaching a new \(P_{2}\) at some vertex of the old tree. In Section 4, we study the inverse of the exponential bipartite distance matrix of a nonsingular tree. In Section 5, we introduce \(q\)-analogous versions of a few more vectors and finally provide the inverse formula for the \(q\)-bipartite distance matrix of a nonsingular tree.
## 2 Inverse of the bipartite distance matrix
Recently, in [6], it is shown that the inverse of the bipartite distance matrix of a nonsingular tree \(T\) is a rank one update of a Laplacian matrix, which is referred to as the bipartite Laplacian matrix. Remarkably, the usual Laplacian matrix of any tree can be seen as a special case of the bipartite Laplacian matrix. Before providing the definition of the bipartite Laplacian matrix for a nonsingular tree, let's revisit some definitions.
Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). An alternating path in \(T\) is called an _even alternating path_ if it contains an even number of matching edges, and an alternating path in \(T\) is called an _odd alternating path_ if it contains an odd number of matching edges.
**Definition**.: Let \(T\) be a nonsingular tree on \(2p\) vertices and \((L,R)\) be a standard vertex bipartition of \(T\). The _bipartite Laplacian matrix_ of \(T\), denoted by \(\mathfrak{L}(T)\) or simply by \(\mathfrak{L}\), is the \(p\times p\) matrix whose rows are indexed by \(r_{1},\ldots,r_{p}\) and the columns are indexed by \(l_{1},\ldots,l_{p}\). The \((i,j)\)th entry of \(\mathfrak{L}(T)\), denoted by \(\mathfrak{L}_{ij}\), is defined as
\[\mathfrak{L}_{ij}=\left\{\begin{array}{rl}d(r_{i})d(l_{i})-1&\text{if $i=j$;}\\ d(r_{i})d(l_{j})&\text{if $i\neq j$ and the $r_{i}$-$l_{j}$ path is an odd alternating path;}\\ -d(r_{i})d(l_{j})&\text{if $i\neq j$ and the $r_{i}$-$l_{j}$ path is an even alternating path;}\\ -1&\text{if $i\neq j$ and $r_{i}\sim l_{j}$;}\\ 0&\text{otherwise.}\end{array}\right.\]
Here we want to emphasize that in the definition of \(\mathbb{B}(T),\mathfrak{B}(T)\), and \(\mathbb{E}(T)\) we indexed rows by \(l_{1},\ldots,l_{p}\) and columns by \(r_{1},\ldots,r_{p}\). But in the definition of bipartite Laplacian matrix we indexed rows by \(r_{1},\ldots,r_{p}\) and columns by \(l_{1},\ldots,l_{p}\).
In the following result we highlight some similarities between the bipartite Laplacian matrix and the usual Laplacian matrix.
**Theorem 2**.: _[_6_, Theorem 9]_ _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Suppose \(\mathfrak{L}\) is the bipartite Laplacian matrix of \(T\). Then the following assertions hold._
1. _The row and the column sums of_ \(\mathfrak{L}\) _are zero. (A similar property also holds for the usual Laplacian matrix of any graph.)_
2. _The cofactors of any two elements of_ \(\mathfrak{L}\) _are equal to one. (In the case of the usual Laplacian matrix of a graph, the cofactors of any two elements are equal to the number of spanning trees.)_
3. _The rank of_ \(\mathfrak{L}\) _is_ \(p-1\)_. (A similar result holds for the usual Laplacian matrix of a connected graph.)_
4. _The algebraic multiplicity of_ \(0\) _as an eigenvalue of_ \(\mathfrak{L}\) _is one. (This property is also true for the usual Laplacian matrix of a connected graph.)_
_._
* _If_ \(\mathbf{u}\) _is an eigenvector of_ \(\mathfrak{L}\) _corresponding to an eigenvalue_ \(\lambda\neq 0\) _then_ \(\mathds{1}^{t}\mathbf{u}=0\)_. (A similar property holds for the usual Laplacian matrix of any graph.)_
* _If_ \(T=F\circ K_{1}\) _for some tree_ \(F\) _on_ \(p\) _vertices. Then_ \(\mathcal{L}(F)=\mathfrak{L}(T)\) _where_ \(\mathcal{L}\) _is the usual Laplacian matrix of_ \(F\)_. (This means that the usual Laplacian matrix of a tree_ \(F\) _can be seen as a bipartite Laplacian matrix of another tree_ \(T\)_.)_
* _The matrix_ \(\mathfrak{L}\) _is a symmetric matrix if and only if_ \(T\) _is a corona tree, that is, if_ \(T=F\circ K_{1}\) _for some tree_ \(F\)_. (In contrast, the usual Laplacian matrix is always symmetric.)_
It's worth noting that in their work [6], Bapat, Jana, and Pati put forward a conjecture regarding the bipartite Laplacian matrix of a nonsingular tree. They proposed that this matrix is diagonalizable, and all of its eigenvalues are nonnegative real numbers. This conjecture, coupled with the properties outlined in Theorem 2, highlights the potential significance and relevance of the bipartite Laplacian matrix in the realm of nonsingular trees.
As shown in [3], the inverse of the usual distance matrix of a tree can be viewed as a rank one update of its Laplacian matrix. The inverse of the bipartite distance matrix of a nonsingular tree follows a similar pattern: it can be seen as a rank one update of the bipartite Laplacian matrix. Before stating the result let us define some useful terminology.
Let \(v\) be a vertex in a nonsingular tree \(T\). By \(\mathcal{A}^{+}_{T,v}\) we denote the set of all even alternating paths in \(T\) that start at \(v\). Similarly, we denote the set of all odd alternating paths in \(T\) that start at \(v\) by \(\mathcal{A}^{-}_{T,v}\). By \(\mathtt{diff}_{T}(v)\), we mean the quantity \(\mathtt{diff}_{T}(v):=|\mathcal{A}^{+}_{T,v}|-|\mathcal{A}^{-}_{T,v}|\). Let us define the vector \(\mathbf{\tau}\) by \(\mathbf{\tau}(v):=1-d(v)(1+\mathtt{diff}(v))\) for each \(v\in T\).
**Theorem 3**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Let \(\mathbb{B}(T)\) and \(\mathfrak{L}(T)\) be the bipartite distance matrix and the bipartite Laplacian matrix of \(T\), respectively. Let \(\mathbf{\tau}_{r}(T)\) and \(\mathbf{\tau}_{l}(T)\) be the restriction of the vector \(\mathbf{\tau}(T)\) on \(R\) and \(L\), respectively. Then_
\[\mathbb{B}(T)^{-1}=-\frac{1}{2}\mathfrak{L}(T)+\frac{1}{\mathtt{bd}(T)}\mathbf{ \tau}_{r}(T)\mathbf{\tau}_{l}^{t}(T).\]
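The following minimal numerical sketch (ours) illustrates the structure of this formula on the path \(l_{1}\)-\(r_{1}\)-\(l_{2}\)-\(r_{2}\). The matrices \(\mathbb{B}\), \(\mathfrak{L}\) and the vectors \(\boldsymbol{\tau}_{r},\boldsymbol{\tau}_{l}\) are filled in by hand from the definitions above, assuming the convention that an alternating path begins and ends with matching edges; numpy is assumed available.

```python
import numpy as np

# bipartite distance matrix: rows l1,l2 ; columns r1,r2
B = np.array([[1., 3.],
              [1., 1.]])

# bipartite Laplacian: rows r1,r2 ; columns l1,l2
# e.g. L[0,0] = d(r1)d(l1)-1 = 2*1-1 = 1, L[0,1] = -1 since r1 ~ l2,
#      L[1,0] = -d(r2)d(l1) = -1 (the r2-l1 path is an even alternating path).
L = np.array([[ 1., -1.],
              [-1.,  1.]])

# tau(v) = 1 - d(v)(1 + diff(v)); here diff(l1) = diff(r2) = 0 and diff(r1) = diff(l2) = -1
tau_r = np.array([1., 0.])   # restriction of tau to R = {r1, r2}
tau_l = np.array([0., 1.])   # restriction of tau to L = {l1, l2}

M = np.linalg.inv(B) + 0.5 * L     # the rank-one correction predicted by the theorem
print(M)                           # [[0. 1.] [0. 0.]]
print(np.outer(tau_r, tau_l))      # [[0. 1.] [0. 0.]]: M is a scalar multiple of tau_r tau_l^t
```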
In this article, we mostly employ the proof strategy used in [6], which involves using information from the existing tree to establish the inductive case for a tree formed by attaching a \(P_{2}\) to a vertex in the old tree. Below we provide the definition of attaching a \(P_{2}\) to a vertex.
**Definition** (Attaching a \(P_{2}\) to a vertex).: (a) Consider a tree \(T\) and a vertex \(v\). Let \(\widehat{T}\) denote the tree resulting from \(T\) by adding two new vertices, \(u\) and \(w\), along with the edges \([v,u]\) and \([u,w]\). We refer to this operation as _attaching a new \(P_{2}\) at the vertex \(v\)_.
(b) For a nonsingular tree \(T\) with a vertex set of size \(2p\) and a standard vertex bipartition \((L,R)\), let \(\widehat{T}\) be the tree formed by attaching a new \(P_{2}\) to some vertex \(v\in T\). To compute its \(q\)-bipartite distance matrix, exponential distance matrix and the bipartite Laplacian matrix, we label the new vertices according to the following procedure:
i) if \(v\in L\), then we put \(u=r_{p+1}\), \(w=l_{p+1}\), and ii) if \(v\in R\), then we put \(u=l_{p+1}\), \(w=r_{p+1}\).
## 3 \(q\)-bipartite Laplacian Matrix
As our primary focus is to provide the inverse of the \(q\)-bipartite distance matrix, we begin by introducing the \(q\)-analogous version of the bipartite Laplacian matrix, which we call the \(q\)_-bipartite Laplacian matrix_. For a positive integer \(k\), we define \(k_{q}\) as \(1+(k-1)q^{2}\).
**Definition**.: Let \(T\) be a nonsingular tree on \(2p\) vertices and \((L,R)\) be a standard vertex bipartition of \(T\). The \(q\)_-bipartite Laplacian matrix_ of \(T\), denoted by \(\mathfrak{C}(T)\) or simply by \(\mathfrak{C}\), is the \(p\times p\) matrix whose rows are indexed by \(r_{1},\ldots,r_{p}\) and the columns are indexed by \(l_{1},\ldots,l_{p}\). The \((i,j)\)th entry of \(\mathfrak{C}(T)\), denoted by \(\mathfrak{C}_{ij}\), is defined as
\[\mathfrak{C}_{ij}=\left\{\begin{array}{rl}d(r_{i})_{q}d(l_{i})_{q}-q^{2}& \text{if $i=j$;}\\ d(r_{i})_{q}d(l_{j})_{q}&\text{if $i\neq j$ and the $r_{i}$-$l_{j}$ path is an odd alternating path;}\\ -d(r_{i})_{q}d(l_{j})_{q}&\text{if $i\neq j$ and the $r_{i}$-$l_{j}$ path is an even alternating path;}\\ -q^{2}&\text{if $i\neq j$ and $r_{i}\sim l_{j}$;}\\ 0&\text{otherwise.}\end{array}\right.\]
Clearly, for a nonsingular tree \(T\), if we put \(q=1\), then \(\mathfrak{C}=\mathfrak{L}\), the bipartite Laplacian matrix of \(T\). Therefore, the \(q\)-bipartite Laplacian matrix is a generalization of the usual bipartite Laplacian matrix.
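As a small illustration (our own sketch, with sympy assumed available; the path, its labelling and the alternating-path convention used for the off-diagonal entries are our choices), for the path \(l_{1}\)-\(r_{1}\)-\(l_{2}\)-\(r_{2}\) one can write \(\mathfrak{C}\) down entry by entry from the definition and confirm the reduction at \(q=1\).

```python
import sympy as sp

q = sp.symbols('q')
kq = lambda k: 1 + (k - 1) * q**2          # k_q

# rows r1,r2 ; columns l1,l2 ; degrees: d(l1) = d(r2) = 1, d(r1) = d(l2) = 2
Cq = sp.Matrix([[kq(2) * kq(1) - q**2, -q**2],                 # (r1,l1): i = j ; (r1,l2): r1 ~ l2
                [-kq(1) * kq(1),        kq(1) * kq(2) - q**2]])  # (r2,l1): even alternating path

L = sp.Matrix([[1, -1],
               [-1, 1]])                   # bipartite Laplacian of the same path
assert Cq.subs(q, 1) == L                  # the q-bipartite Laplacian reduces to L at q = 1
```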
Similar to the signed degree vector defined in [6], we now introduce a \(q\)-analogue of the signed degree vector, which relates the structure of the \(q\)-bipartite Laplacian matrix of the new tree with that of the old one.
**Definition**.: Let \(T\) be a nonsingular tree on \(2p\) vertices with the standard vertex bipartition \((L,R)\) and let \(v\) be a vertex. Then the \(q\)_-signed degree vector_ \(\boldsymbol{\mu}_{v}\) at \(v\) is defined in the following way.
1. If \(v\in L\), then for \(i=1,\ldots,p\), we define 1. \(\boldsymbol{\mu}_{v}(i)=(d_{T}(r_{i}))_{q}\) if the \(v\)-\(r_{i}\) path is an odd alternating path, 2. \(\boldsymbol{\mu}_{v}(i)=-(d_{T}(r_{i}))_{q}\) if the \(v\)-\(r_{i}\) path is an even alternating path, and 3. \(\boldsymbol{\mu}_{v}(i)=0\) if the \(v\)-\(r_{i}\) path is not an alternating path.
2. Similarly, if \(v\in R\), then for \(i=1,\ldots,p\), we define 1. \(\boldsymbol{\mu}_{v}(i)=(d_{T}(l_{i}))_{q}\) if the \(v\)-\(l_{i}\) path is an odd alternating path, 2. \(\boldsymbol{\mu}_{v}(i)=-(d_{T}(l_{i}))_{q}\) if the \(v\)-\(l_{i}\) path is an even alternating path, and 3. \(\boldsymbol{\mu}_{v}(i)=0\) if the \(v\)-\(l_{i}\) path is not an alternating path.
Clearly, if \(q=1\), then \(\boldsymbol{\mu}_{v}\) reduces to the signed degree vector at \(v\) defined in [6], for each \(v\in T\).
In the following result, we discuss how the structure of \(\mathfrak{C}\) changes after attaching a \(P_{2}\) to a vertex.
**Lemma 4**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Let \(\widehat{T}\) be the tree obtained from \(T\) by attaching a new \(P_{2}\) at \(v\). Let \(\boldsymbol{\mu}_{v}\) be the \(q\)-signed degree vector at \(v\) of \(T\)._
1. _If_ \(v=l_{k}\) _for some_ \(k\)_, then_ \(\mathfrak{C}(\widehat{T})=\begin{bmatrix}\mathfrak{C}(T)+q^{2}\underset{v}{ \boldsymbol{\mathcal{H}}}_{v}e_{k}^{t}&-\underset{v}{\boldsymbol{\mathcal{H}}} _{v}\\ -q^{2}e_{k}^{t}&1\end{bmatrix}\)_._
2. _If_ \(v=r_{k}\) _for some_ \(k\)_, then_ \(\mathfrak{C}(\widehat{T})=\begin{bmatrix}\mathfrak{C}(T)+q^{2}e_{k}\underset{v} {\boldsymbol{\mathcal{H}}}_{v}^{t}&-q^{2}e_{k}\\ -\underset{v}{\boldsymbol{\mathcal{H}}}_{v}^{t}&1\end{bmatrix}\)_._
Proof.: We only provide the proof of item (a), as item (b) can be dealt with similarly. Without loss of any generality, let us assume that \(\widehat{T}\) is obtained from \(T\) by adding a new path \([l_{k},r_{p+1},l_{p+1}]\) for some \(1\leq k\leq p\). Clearly \(\widehat{L}=L\cup\{l_{p+1}\}\) and \(\widehat{R}=R\cup\{r_{p+1}\}\) is a standard vertex bipartition of \(\widehat{T}\). Let \(\mathfrak{C}\) and \(\widehat{\mathfrak{C}}\) be the \(q\)-bipartite Laplacian matrices of \(T\) and \(\widehat{T}\), respectively. Since \([r_{p+1},l_{p+1}]\) is the only
alternating path that starts at \(r_{p+1}\), \(d_{\widehat{T}}(r_{p+1})=2\) and \([r_{p+1},l_{k}]\) is not a matching edge, it follows that all the entries of the \((p+1)\)th row of \(\widehat{\mathfrak{C}}\) are zero except \(\widehat{\mathfrak{C}}(p+1,p+1)=d_{\widehat{T}}(r_{p+1})_{q}d_{\widehat{T}}(l_{p+1})_{q}-q^{2}=1\) and \(\widehat{\mathfrak{C}}(p+1,k)=-q^{2}\). Hence \(\widehat{\mathfrak{C}}(p+1,:)=\begin{bmatrix}-q^{2}e_{k}^{t}&1\end{bmatrix}\).
Let us take \(i=1,\ldots,p\). Then \(r_{i}\nsim l_{p+1}\). Note that the \(l_{k}\)-\(r_{i}\) path is an odd alternating path if and only if the \(l_{p+1}\)-\(r_{i}\) path is an even alternating path. Similarly, the \(l_{k}\)-\(r_{i}\) path is an even alternating path if and only if the \(l_{p+1}\)-\(r_{i}\) path is an odd alternating path. Since \(d_{\widehat{T}}(l_{p+1})=1\), it follows that \(\widehat{\mathfrak{C}}(\{1,\ldots,p\},p+1)=-\boldsymbol{\mu}_{l_{k}}\), where \(\boldsymbol{\mu}_{l_{k}}\) is the \(q\)-signed degree vector of \(T\) at \(l_{k}\).
Since \(d_{T}(u)=d_{\widehat{T}}(u)\) for each \(u\in T\) other than \(l_{k}\), it follows that \(\widehat{\mathfrak{C}}(i,j)=\mathfrak{C}(i,j)\) for each \(i=1,\ldots,p\) and \(j=1,\ldots,k-1,k+1,\ldots,p\).
Finally, notice that \(d_{\widehat{T}}(l_{k})=d_{T}(l_{k})+1\). Therefore, for \(i=1,\ldots,p\), we have
\[\widehat{\mathfrak{C}}(i,k)=\left\{\begin{array}{rl}d_{T}(r_{i})_{q}(d_{T}( l_{k})_{q}+q^{2})-q^{2}&\text{if $i=k$;}\\ d_{T}(r_{i})_{q}(d_{T}(l_{k})_{q}+q^{2})&\text{if $i\neq k$ and the $r_{i}$-$l_{k}$ path is an odd alternating path;}\\ -d_{T}(r_{i})_{q}(d_{T}(l_{k})_{q}+q^{2})&\text{if $i\neq k$ and the $r_{i}$-$l_{k}$ path is an even alternating path;}\\ -q^{2}&\text{if $i\neq k$ and $r_{i}\sim l_{k}$;}\\ 0&\text{otherwise.}\end{array}\right.\]
Therefore, \(\widehat{\mathfrak{C}}(\{1,\ldots,p\},k)=\mathfrak{C}(\{1,\ldots,p\},k)+q^{2}\boldsymbol{\mu}_{l_{k}}\). This completes the proof.
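The lemma can be checked directly on a small example (our own sketch, assuming sympy and the alternating-path convention used above): attach a new \(P_{2}\) at \(v=l_{2}\) of the path \(l_{1}\)-\(r_{1}\)-\(l_{2}\)-\(r_{2}\) and compare the block formula of item (a) with the \(q\)-bipartite Laplacian of the enlarged tree computed from the definition.

```python
import sympy as sp

q = sp.symbols('q')

C_T  = sp.Matrix([[1, -q**2], [-1, 1]])   # q-bipartite Laplacian of T (rows r1,r2; cols l1,l2)
mu_v = sp.Matrix([0, 1])                  # q-signed degree vector of T at v = l2:
                                          # the l2-r1 path is not alternating, the l2-r2 path is odd
e_k  = sp.Matrix([0, 1])                  # k = 2 since v = l2

# block form predicted by item (a) of Lemma 4
top       = (C_T + q**2 * mu_v * e_k.T).row_join(-mu_v)
bottom    = (-q**2 * e_k.T).row_join(sp.Matrix([[1]]))
predicted = top.col_join(bottom)

# q-bipartite Laplacian of the enlarged tree, written down from the definition
# (rows r1,r2,r3; cols l1,l2,l3; degrees d(r1)=2, d(l2)=3, d(r3)=2, the rest 1)
direct = sp.Matrix([[ 1, -q**2,      0],
                    [-1, 1 + q**2, -1],
                    [ 0, -q**2,      1]])
assert (predicted - direct).expand() == sp.zeros(3, 3)
```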
In the next result we extend the result [6, Lemma 7] which tells that the sum of all entries in a signed degree vector is always one.
**Lemma 5**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard bipartition \((L,R)\). Let \(u\) be any vertex in \(T\) and \(\boldsymbol{\mu}_{u}\) be the \(q\)-signed degree vector at \(u\). Then \(\mathds{1}^{t}\boldsymbol{\mu}_{u}=(\mathtt{diff}(u)+1)q^{2}-\mathtt{diff}(u)\)._
Proof.: We proceed by induction on \(p\geq 1\). For \(p=1\) the result is trivial. Assume the result to be true for nonsingular trees with less than \(2p\) vertices. Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard bipartition \((L,R)\). Let \(u\in R\). (The case of \(u\in L\) can be dealt with similarly.) Let \(\boldsymbol{\mu}\) be the \(q\)-signed degree vector of \(u\) in \(T\).
Suppose \([v_{0},v_{1},\ldots,v_{k}]\) is a longest path in \(T\). As \(p>1\), we have \(k\geq 3\) and so we may assume that \(v_{0},v_{1}\neq u\). As \(T\) is nonsingular and this is a longest path, we have \(d(v_{0})=1\) and \(d(v_{1})=2\). Without loss of any generality, let us assume \(v_{0},v_{1}\in\{l_{p},r_{p}\}\). Let \(\widehat{T}=T-\{v_{0},v_{1}\}\) be the tree obtained from \(T\) by removing the vertices \(v_{0}\) and \(v_{1}\). Clearly, \(u\in\widehat{T}\). Let \(\widehat{\boldsymbol{\mu}}\) be the \(q\)-signed degree vector of \(u\) in \(\widehat{T}\). Note that \(\widehat{\boldsymbol{\mu}}\) is a vector of size \(p-1\). Clearly, \(d_{T}(v)=d_{\widehat{T}}(v)\) for each \(v\in\widehat{T}-v_{2}\) and \(d_{T}(v_{2})=d_{\widehat{T}}(v_{2})+1\). It follows that \(\boldsymbol{\mu}(i)=\widehat{\boldsymbol{\mu}}(i)\) for each \(l_{i}\in L\setminus\{v_{2}\}\).
If either \(v_{2}\in R\) or the \(u\)-\(v_{2}\) path is not an alternating path, then \(\boldsymbol{\mu}=\begin{bmatrix}\widehat{\boldsymbol{\mu}}&0\end{bmatrix}\) and the result follows by induction. Now we assume that \(v_{2}\in L\) and the \(u\)-\(v_{2}\) path is an alternating path. Then \(v_{2}\sim r_{p}\) and \(d_{T}(l_{p})=1\). Let \(v_{2}=l_{k}\) for some \(1\leq k<p\). Note that the \(u\)-\(l_{p}\) path is also an alternating path and so we have \(\widehat{\boldsymbol{\mu}}(k)=(-1)^{t}d_{\widehat{T}}(v_{2})_{q}\) for some \(t\) and \(\boldsymbol{\mu}(p)=(-1)^{t+1}\).
\[\boldsymbol{\mu}=\begin{bmatrix}\widehat{\boldsymbol{\mu}}^{t}&(-1)^{t+1} \end{bmatrix}^{t}+(-1)^{t}\begin{bmatrix}q^{2}e_{k}&0\end{bmatrix}^{t}.\]
Clearly, \(\mathtt{diff}_{T}=\mathtt{diff}_{\widehat{T}}+(-1)^{t}\). Hence, the result follows by applying induction.
**Theorem 6**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices and \(\mathfrak{C}\) be its \(q\)-bipartite Laplacian matrix. Then \(\det\mathfrak{C}=1-q^{2}\)._
_Proof._ We use induction on \(p\). The result is clearly true for \(p=1\) and \(2\). Let us assume the result to be true for nonsingular trees on \(2p\) vertices. Let \(\widehat{T}\) be a nonsingular tree on \(2p+2\) vertices. Then \(\widehat{T}\) can be viewed as obtained from some nonsingular tree \(T\) with \(2p\) vertices by attaching a new \(P_{2}\) at a vertex \(v\). Without loss of generality, let us assume that \(v=l_{k}\) for some \(1\leq k\leq p\). Let \(\boldsymbol{\mu}_{v}\) be the \(q\)-signed degree vector at \(v=l_{k}\) of \(T\). By Lemma 4, we have
\[\mathfrak{C}(\widehat{T})=\begin{bmatrix}\mathfrak{C}(T)+q^{2}\boldsymbol{\mu}_{v}\boldsymbol{e}_{k}^{t}&-\boldsymbol{\mu}_{v}\\ -q^{2}\boldsymbol{e}_{k}^{t}&1\end{bmatrix}\]
By using Schur's formula for the determinant, we get
\[\det\mathfrak{C}(\widehat{T})=\det\left(\mathfrak{C}(T)+q^{2}\boldsymbol{\mu}_{v}\boldsymbol{e}_{k}^{t}-q^{2}\boldsymbol{\mu}_{v}\boldsymbol{e}_{k}^{t}\right)=\det\mathfrak{C}(T).\]
Hence, the result follows by applying induction hypothesis.
In the following remark we discuss how the \(q\)-bipartite Laplacian matrix of a nonsingular tree can be obtained from some of its nonsingular subtrees. This plays a crucial role in proving our subsequent results.
**Remark 7**.: Consider the tree \(T\) with a matching edge \([l_{k_{1}},r_{k_{1}}]\), see Figure 1. Let the degree of \(l_{k_{1}}\) be \(s\), \(s\geq 1\). Let \(r_{k_{1}+1},r_{k_{2}+1},\ldots,r_{k_{s-1}+1}\) be some distinct vertices other than \(r_{k_{1}}\) that are adjacent to \(l_{k_{1}}\). Note that when we delete the edges \([l_{k_{1}},r_{k_{1}+1}]\), \(\ldots,[l_{k_{1}},r_{k_{s-1}+1}]\), we obtain \(s\) many smaller nonsingular trees, say \(T_{1},\ldots,T_{s}\). Assume that the vertex set of \(T_{1}\) is \(\{l_{1},\ldots,l_{k_{1}},r_{1},\ldots,r_{k_{1}}\}\), the vertex set of \(T_{2}\) is \(\{l_{k_{1}+1},\ldots,l_{k_{2}},r_{k_{1}+1},\ldots,r_{k_{2}}\}\), and so on up to the vertex set of \(T_{s}\) is \(\{l_{k_{s-1}+1},\ldots,l_{k_{s}},r_{k_{s-1}+1},\ldots,r_{k_{s}}\}\). Let us put an arrow on the edge \([l_{k_{1}},r_{k_{1}}]\) from \(r_{k_{1}}\) to \(l_{k_{1}}\). This arrow indicates that, from a vertex \(r_{i}\) in \(T_{2}\), we do not have an alternating path to a vertex in \(T_{1}\). Similarly, from a vertex \(r_{i}\) in \(T_{3}\), we do not have an alternating path to a vertex in \(T_{1},T_{2},T_{4},T_{5},\ldots,T_{s}\). Similar statements are true for vertices \(r_{i}\) in \(T_{4},\ldots,T_{s}\). Also, from a vertex \(l_{i}\) in \(T_{1}\), we only have alternating paths to vertices in \(T_{1}\) but not to a vertex in \(T_{2},\ldots,T_{s}\). Let us take \(F_{1}\) be the tree \(T_{1}\). For \(i=2,\ldots,s\), let \(F_{i}\) be the subtree of \(T\) obtained by taking \(F_{i-1}\) and \(T_{i}\) and by inserting the edge \([l_{k_{1}},r_{k_{i-1}+1}]\). Clearly \(F_{s}\) is the original tree \(T\).
a) Let \(\boldsymbol{\mu}_{l_{k_{1}}}\) be the \(q\)-signed degree vector at \(l_{k_{1}}\) of \(T_{1}\) and let \(\boldsymbol{\mu}\) be the \(q\)-signed degree vector at \(l_{k_{1}}\) of \(T\). Then \(\boldsymbol{\mu}=\begin{bmatrix}\boldsymbol{\mu}_{l_{k_{1}}}&\mathbf{0}\end{bmatrix}^{t}\).
Figure 1: Understanding \(\mathfrak{L}(T)\).
b) Let \(\mathfrak{C}(F_{i})\) be the \(q\)-bipartite Laplacian matrix of \(F_{i}\) for \(i=1,\ldots,s\) and \(\mathfrak{C}(T_{i})\) be the \(q\)-bipartite Laplacian matrix of \(T_{i}\) for \(i=1,\ldots,s\). Clearly, \(\mathfrak{C}(T_{1})=\mathfrak{C}(F_{1})\). Let \(\boldsymbol{\mu}_{r_{k_{i}+1}}\) be the \(q\)-signed degree vector at \(r_{k_{i}+1}\) of \(T_{i+1}\), for \(i=1,\ldots,s-1\). By \(\boldsymbol{E}^{ij}\) we denote the matrix of an appropriate size with \(1\) at position \((i,j)\) and zero elsewhere. Then, for \(i=2,\ldots,s\), we have
\[\mathfrak{C}(F_{i})=\left[\begin{array}{c|c|c|c}\mathfrak{C}(T_{1})+(i-1)q^{2}\boldsymbol{\mu}_{l_{k_{1}}}\boldsymbol{e}_{k_{1}}^{t}&-\boldsymbol{\mu}_{l_{k_{1}}}\boldsymbol{\mu}_{r_{k_{1}+1}}^{t}&\ldots&-\boldsymbol{\mu}_{l_{k_{1}}}\boldsymbol{\mu}_{r_{k_{i-1}+1}}^{t}\\ \hline-q^{2}\boldsymbol{E}^{1k_{1}}&\mathfrak{C}(T_{2})+q^{2}\boldsymbol{e}_{1}\boldsymbol{\mu}_{r_{k_{1}+1}}^{t}&\ldots&\boldsymbol{0}\\ \hline\vdots&\boldsymbol{0}&\ddots&\boldsymbol{0}\\ \hline-q^{2}\boldsymbol{E}^{1k_{1}}&\boldsymbol{0}&\cdots&\mathfrak{C}(T_{i})+q^{2}\boldsymbol{e}_{1}\boldsymbol{\mu}_{r_{k_{i-1}+1}}^{t}\end{array}\right]\]
In particular,
\[\mathfrak{C}(T)=\left[\begin{array}{c|c|c|c}\mathfrak{C}(T_{1})+(s-1)q^{2}\boldsymbol{\mu}_{l_{k_{1}}}\boldsymbol{e}_{k_{1}}^{t}&-\boldsymbol{\mu}_{l_{k_{1}}}\boldsymbol{\mu}_{r_{k_{1}+1}}^{t}&\ldots&-\boldsymbol{\mu}_{l_{k_{1}}}\boldsymbol{\mu}_{r_{k_{s-1}+1}}^{t}\\ \hline-q^{2}\boldsymbol{E}^{1k_{1}}&\mathfrak{C}(T_{2})+q^{2}\boldsymbol{e}_{1}\boldsymbol{\mu}_{r_{k_{1}+1}}^{t}&\ldots&\boldsymbol{0}\\ \hline\vdots&\boldsymbol{0}&\ddots&\boldsymbol{0}\\ \hline-q^{2}\boldsymbol{E}^{1k_{1}}&\boldsymbol{0}&\cdots&\mathfrak{C}(T_{s})+q^{2}\boldsymbol{e}_{1}\boldsymbol{\mu}_{r_{k_{s-1}+1}}^{t}\end{array}\right]\]
## 4 Exponential bipartite distance matrix
Let us recall the following result, which tells us that the determinant of the exponential bipartite distance matrix of a nonsingular tree is independent of the tree structure.
**Theorem 8**.: _[_12_]_ _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Then \(\det\mathbb{E}(T)=q^{p}(1-q^{2})^{p-1}\)._
By Theorem 8, we observe that \(\mathbb{E}(T)\) is invertible whenever \(q\neq 0,\pm 1\). In the following result, we present the inverse of \(\mathbb{E}(T)\) under the condition that \(q\neq 0,\pm 1\).
**Theorem 9**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Suppose \(q\neq 0,\pm 1\). Then_
\[\mathbb{E}(T)^{-1}=\frac{\mathfrak{L}(T)}{q(1-q^{2})}.\]
_Proof._ We proceed by induction on \(p\). The base case can be verified easily. Assume the result holds for \(p\). Let \(\widehat{T}\) be a nonsingular tree with \(2p+2\) vertices. We can obtain \(\widehat{T}\) from some nonsingular tree \(T\) with \(2p\) vertices by attaching a new \(P_{2}\) at a vertex \(v\). Without loss of generality, let us assume that \(v=l_{k}\) for some \(1\leq k\leq p\). Let \(\boldsymbol{\mu}\) be the \(q\)-signed degree vector at \(l_{k}\) of \(T\).
By item (a) of Lemma 4, we have
\[\mathfrak{L}(\widehat{T})=\begin{bmatrix}\mathfrak{L}(T)+q^{2}\boldsymbol{\mu}\boldsymbol{e}_{k}^{t}&-\boldsymbol{\mu}\\ -q^{2}\boldsymbol{e}_{k}^{t}&1\end{bmatrix}\]
Let \(\boldsymbol{x}\) be a vector of size \(p\) such that \(\boldsymbol{x}(i)=q^{\mathtt{dist}(l_{i},r_{p+1})}\) for each \(i=1,\cdots,p\). Then the exponential bipartite distance matrix of \(\widehat{T}\) can be written as
\[\mathbb{E}(\widehat{T})=\begin{bmatrix}\mathbb{E}(T)&\boldsymbol{x}\\ q^{2}\boldsymbol{e}_{k}^{t}\mathbb{E}(T)&q\end{bmatrix}.\]
Now note that
\[\begin{split}\mathfrak{L}(\widehat{T})\mathbb{E}(\widehat{T})&=\begin{bmatrix}\mathfrak{L}(T)+q^{2}\boldsymbol{\mu}\boldsymbol{e}_{k}^{t}&-\boldsymbol{\mu}\\ -q^{2}\boldsymbol{e}_{k}^{t}&1\end{bmatrix}\begin{bmatrix}\mathbb{E}(T)&\boldsymbol{x}\\ q^{2}\boldsymbol{e}_{k}^{t}\mathbb{E}(T)&q\end{bmatrix}\\ &=\begin{bmatrix}\mathfrak{L}(T)\mathbb{E}(T)&\mathfrak{L}(T)\boldsymbol{x}+q^{2}\boldsymbol{\mu}\,\boldsymbol{e}_{k}^{t}\boldsymbol{x}-q\boldsymbol{\mu}\\ \boldsymbol{0}&-q^{2}\boldsymbol{e}_{k}^{t}\boldsymbol{x}+q\end{bmatrix}\\ &=\begin{bmatrix}q(1-q^{2})I&\mathfrak{L}(T)\boldsymbol{x}+q(q^{2}-1)\boldsymbol{\mu}\\ \boldsymbol{0}&q(1-q^{2})\end{bmatrix}.\end{split} \tag{1}\]
The last equality follows from the induction hypothesis and from \(\boldsymbol{e}_{k}^{t}\boldsymbol{x}=\boldsymbol{x}(k)=q^{\mathtt{dist}(l_{k},r_{p+1})}=q\). Therefore, to complete the proof, we only need to show that \(\mathfrak{L}(T)\boldsymbol{x}=q(1-q^{2})\boldsymbol{\mu}\).
Let the degree of \(l_{k}\) in \(T\) be \(s\), (\(s\geq 1\)). Let \(T_{1}\) be the tree obtained from \(T\) by removing all vertices adjacent to \(l_{k}\) except the vertex \(r_{k}\). If \(s=1\) then \(T_{1}\) is the same as \(T\). Without loss of any generality, let us assume that \(T_{1}\) has the vertex set \(\{l_{1},r_{1},\ldots,l_{k-1},r_{k-1},l_{k},r_{k}\}\). Let \(\widehat{\boldsymbol{\mu}}\) be the \(q\)-signed degree vector at \(l_{k}\) of \(T_{1}\). By Remark 7, \(\boldsymbol{\mu}=\begin{bmatrix}\widehat{\boldsymbol{\mu}}&\boldsymbol{0}\end{bmatrix}^{t}\). Further note that
\[\boldsymbol{x}=\mathbb{E}(T)\boldsymbol{e}_{k}+(q^{2}-1)\begin{bmatrix}q^{\mathtt{dist}(l_{1},r_{k})}&\cdots&q^{\mathtt{dist}(l_{k-1},r_{k})}&0&0&\cdots&0\end{bmatrix}^{t}. \tag{2}\]
Let \(\boldsymbol{z}\) be a vector of size \((p-k)\) defined as follows. For \(i=1,\ldots,(p-k)\), \(\boldsymbol{z}(i)=-1\) if \(r_{k+i}\) is adjacent to \(l_{k}\) and \(\boldsymbol{z}(i)=0\) otherwise.
Hence, by Remark 7, we have
\[\begin{split}\mathfrak{L}(T)\begin{bmatrix}\mathbb{E}(T_{1})\boldsymbol{e}_{k}-q\boldsymbol{e}_{k}\\ \boldsymbol{0}\end{bmatrix}&=\begin{bmatrix}\mathfrak{L}(T_{1})+(s-1)q^{2}\widehat{\boldsymbol{\mu}}\boldsymbol{e}_{k}^{t}&*\\ q^{2}\boldsymbol{z}\boldsymbol{e}_{k}^{t}&*\end{bmatrix}\begin{bmatrix}\mathbb{E}(T_{1})\boldsymbol{e}_{k}-q\boldsymbol{e}_{k}\\ \boldsymbol{0}\end{bmatrix}\\ &=\begin{bmatrix}\mathfrak{L}(T_{1})\mathbb{E}(T_{1})\boldsymbol{e}_{k}+(s-1)q^{2}\widehat{\boldsymbol{\mu}}\,\boldsymbol{e}_{k}^{t}\mathbb{E}(T_{1})\boldsymbol{e}_{k}\\ q^{3}\boldsymbol{z}\end{bmatrix}-\begin{bmatrix}q\mathfrak{L}(T_{1})\boldsymbol{e}_{k}+(s-1)q^{3}\widehat{\boldsymbol{\mu}}\\ q^{3}\boldsymbol{z}\end{bmatrix}\\ &=\begin{bmatrix}q(1-q^{2})\boldsymbol{e}_{k}-q(\widehat{\boldsymbol{\mu}}-q^{2}\boldsymbol{e}_{k})\\ \boldsymbol{0}\end{bmatrix}\\ &=q(\boldsymbol{e}_{k}-\boldsymbol{\mu}).\end{split} \tag{3}\]
The second-to-last equality holds by using the induction hypothesis and the fact that \(\mathfrak{L}(T_{1})\boldsymbol{e}_{k}=\widehat{\boldsymbol{\mu}}-q^{2}\boldsymbol{e}_{k}\). The last equality holds by using Remark 7. Hence, by (2) and (3), it follows that
\[\mathfrak{L}(T)\boldsymbol{x}=q(1-q^{2})\boldsymbol{e}_{k}-q(1-q^{2})(\boldsymbol{e}_{k}-\boldsymbol{\mu})=q(1-q^{2})\boldsymbol{\mu}.\]
This completes the proof.
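Continuing the small illustration given after Theorem 8, for the path \(P_{4}=l_{1}r_{1}l_{2}r_{2}\) a direct inversion gives
\[\mathbb{E}(P_{4})^{-1}=\frac{1}{q^{2}(1-q^{2})}\begin{bmatrix}q&-q^{3}\\ -q&q\end{bmatrix}=\frac{1}{q(1-q^{2})}\begin{bmatrix}1&-q^{2}\\ -1&1\end{bmatrix},\]
so that, granting Theorem 9, the \(q\)-bipartite Laplacian matrix of \(P_{4}\) must equal \(\begin{bmatrix}1&-q^{2}\\ -1&1\end{bmatrix}\); this may serve as a sanity check against the definition of \(\mathfrak{L}\).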
## 5 \(q\)-bipartite distance matrix
In this section, we first recall some terminology from [12] and then use it to present a formula for the inverse of the \(q\)-bipartite distance matrix of a nonsingular tree.
Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). The vector \({}^{q}\boldsymbol{\tau}(T)\), or simply \({}^{q}\boldsymbol{\tau}\), of size \(2p\) is defined by \({}^{q}\boldsymbol{\tau}(v):=\big{(}1-d_{T}(v)\big{)}\big{(}1+\mathtt{diff}_{T}(v)\big{)}q^{2}-\mathtt{diff}_{T}(v)\) for each \(v\) in \(T\). The entries of \({}^{q}\boldsymbol{\tau}\) are ordered according to \(l_{1},\ldots,l_{p},r_{1},\ldots,r_{p}\). Clearly, for \(q=1\), \({}^{q}\boldsymbol{\tau}\) is the vector \(\boldsymbol{\tau}\), as defined in [2].
By \({}^{q}\boldsymbol{\tau}_{r}(T)\), or simply by \({}^{q}\boldsymbol{\tau}_{r}\), we mean the restriction of \({}^{q}\boldsymbol{\tau}(T)\) to \(R\). Similarly, by \({}^{q}\boldsymbol{\tau}_{l}(T)\), or simply by \({}^{q}\boldsymbol{\tau}_{l}\), we mean the restriction of \({}^{q}\boldsymbol{\tau}(T)\) to \(L\).
The next result relates the \({}^{q}\boldsymbol{\tau}_{r}\) vector of the new tree with that of the old one when a new \(P_{2}\) is attached at a vertex. The first item was proved in [12, Lemma 13] and the proof of the second item is routine.
**Lemma 10**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Let \(\widehat{T}\) be the tree obtained from \(T\) by attaching a new \(P_{2}\) at \(v\)._
_(a) If \(v=r_{k}\) for some \(k\), then \({}^{q}\boldsymbol{\tau}_{r}(\widehat{T})=\begin{bmatrix}{}^{q}\boldsymbol{\tau}_{r}(T)\\ 0\end{bmatrix}-(1+\mathtt{diff}_{T}(v))\begin{bmatrix}\boldsymbol{e}_{k}\\ -1\end{bmatrix}\)._
_(b) If \(v=l_{k}\) for some \(k\), then \({}^{q}\boldsymbol{\tau}_{r}(\widehat{T})=\begin{bmatrix}{}^{q}\boldsymbol{\tau}_{r}(T)\\ 1\end{bmatrix}-\begin{bmatrix}\boldsymbol{\mu}_{v}(T)\\ 0\end{bmatrix}\), where \(\boldsymbol{\mu}_{v}(T)\) is the \(q\)-signed degree vector at \(v\) of \(T\)._
In the following result, we present details regarding the row sums and column sums of the \(q\)-bipartite Laplacian matrix.
**Theorem 11**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Suppose \(\mathfrak{L}(T)\) is the \(q\)-bipartite Laplacian matrix of \(T\). Then we have_
\[\mathds{1}^{t}\mathfrak{L}(T)=(1-q^{2})\big{(}{}^{q}\boldsymbol{\tau}_{l}(T)\big{)}^{t}\qquad\text{and}\qquad\mathfrak{L}(T)\mathds{1}=(1-q^{2})\,{}^{q}\boldsymbol{\tau}_{r}(T).\]
_Proof._ We use induction on \(p\). Let \(\widehat{T}\) be a nonsingular tree on \(2p+2\) vertices. Suppose \(\widehat{T}\) is obtained from some nonsingular tree \(T\) on \(2p\) vertices by attaching a new \(P_{2}\) at some vertex \(v\). Notice that either \(v\in L\) or \(v\in R\). (We shall prove the case \(v\in L\); the other case can be dealt with similarly.)
Suppose \(v=l_{k}\) and \(l_{k}\sim r_{p+1}\). Clearly, \(l_{p+1},r_{p+1}\notin T\). Let \(\boldsymbol{\mu}\) be the \(q\)-signed degree vector at \(l_{k}\) of \(T\). By Lemma 4, we get
\[\mathfrak{L}(\widehat{T})=\begin{bmatrix}\mathfrak{L}(T)+q^{2}\boldsymbol{\mu}\boldsymbol{e}_{k}^{t}&-\boldsymbol{\mu}\\ -q^{2}\boldsymbol{e}_{k}^{t}&1\end{bmatrix}.\]
Suppose \((\boldsymbol{z}_{l}(\widehat{T}))_{i}=(1-q^{2})\,{}^{q}\boldsymbol{\tau}(\widehat{T})(l_{i})\) and \((\boldsymbol{z}_{r}(\widehat{T}))_{i}=(1-q^{2})\,{}^{q}\boldsymbol{\tau}(\widehat{T})(r_{i})\) for \(1\leq i\leq p+1\). Clearly, \((\boldsymbol{z}_{r}(\widehat{T}))_{p+1}=1-q^{2}\) and \((\boldsymbol{z}_{l}(\widehat{T}))_{p+1}=\mathtt{diff}_{\widehat{T}}(l_{p+1})(q^{2}-1)\). By Lemma 5, \(1-\mathds{1}^{t}\boldsymbol{\mu}=(1+\mathtt{diff}_{T}(l_{k}))-(1+\mathtt{diff}_{T}(l_{k}))q^{2}=\mathtt{diff}_{\widehat{T}}(l_{p+1})(q^{2}-1)\), as \(\mathtt{diff}_{\widehat{T}}(l_{p+1})=-(1+\mathtt{diff}_{T}(l_{k}))\). Therefore, it follows that
\[(\mathds{1}^{t}\mathfrak{L}(\widehat{T}))_{p+1}=1-\mathds{1}^{t}\boldsymbol{\mu}=(\boldsymbol{z}_{l}(\widehat{T}))_{p+1}\quad\text{and}\quad(\mathfrak{L}(\widehat{T})\mathds{1})_{p+1}=1-q^{2}=(\boldsymbol{z}_{r}(\widehat{T}))_{p+1}.\]
For each \(1\leq i\leq p\), we have \(d_{\widehat{T}}(r_{i})=d_{T}(r_{i})\) and \(\mathtt{diff}_{\widehat{T}}(r_{i})=\mathtt{diff}_{T}(r_{i})+\boldsymbol{x}(r _{i})\), where \(\boldsymbol{x}\) is a vector such that \(\boldsymbol{x}(r_{i})=0\) if the \(r_{i}\)-\(l_{p+1}\) path is not an alternating path, \(1\) if the \(r_{i}\)-\(l_{p+1}\) path is an even alternating path, and \(-1\) if the \(r_{i}\)-\(l_{p+1}\) path is an odd alternating path. Therefore, by applying induction hypothesis, we get
\[\begin{split}\left(\mathfrak{L}(T)\mathds{1}+(q^{2}-1)\boldsymbol{\mu}\right)_{i}&=(\mathtt{diff}_{T}(r_{i})+1)(d_{T}(r_{i})-1)q^{4}+\left[\mathtt{diff}_{T}(r_{i})(2-d_{T}(r_{i}))+(1-d_{T}(r_{i}))\right]q^{2}\\ &\quad-\mathtt{diff}_{T}(r_{i})+(q^{2}-1)(1+(d_{T}(r_{i})-1)q^{2})\boldsymbol{x}(r_{i})\\ &=(1-q^{2})\,{}^{q}\boldsymbol{\tau}(\widehat{T})(r_{i}).\end{split}\]
Further, for \(1\leq i\leq p\), note that \(\mathtt{diff}_{\widehat{T}}(l_{i})=\mathtt{diff}_{T}(l_{i})\), \(d_{\widehat{T}}(l_{i})=d_{T}(l_{i})\) for \(i\neq k\) and \(d_{\widehat{T}}(l_{k})=d_{T}(l_{k})+1\). Then, \((\boldsymbol{z}_{l}(\widehat{T}))_{i}=(\boldsymbol{z}_{l}(T))_{i}\) for each \(1\leq i\leq p\) and \(i\neq k\). Therefore, by the induction hypothesis, it follows that \((\mathds{1}^{t}\mathfrak{L}(\widehat{T}))_{i}=(\boldsymbol{z}_{l}(\widehat{T}))_{i}\) for each \(i\neq k\). For \(i=k\), by using the induction hypothesis and
Lemma 5, we have
\[\begin{split}\left(\mathds{1}^{t}\mathfrak{L}(\widehat{T})\right)_{k}&=(\boldsymbol{z}_{l}(T))_{k}+q^{2}(\mathds{1}^{t}\boldsymbol{\mu}-1)\\ &=(\boldsymbol{z}_{l}(T))_{k}+q^{2}(q^{2}-1)(1+\mathtt{diff}_{T}(l_{k}))=(1-q^{2})\,{}^{q}\boldsymbol{\tau}(\widehat{T})(l_{k}).\end{split}\]
This completes the proof.
**Theorem 12**.: _[_12_]_ _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Then the following assertions hold._
_(a) \(\mathfrak{B}(T)\,{}^{q}\boldsymbol{\tau}_{r}(T)=\mathtt{bd}_{q}(T)\mathds{1}\) and \(\big{(}{}^{q}\boldsymbol{\tau}_{l}(T)\big{)}^{t}\mathfrak{B}(T)=\mathtt{bd}_{q}(T)\mathds{1}^{t}\)._
_(b) Let \(v\) be a vertex and let \(\widehat{T}\) be the tree obtained from \(T\) by attaching a new \(P_{2}\) at \(v\). Then \(\mathtt{bd}_{q}(\widehat{T})=\mathtt{bd}_{q}(T)+(1+q)(1+\mathtt{diff}_{T}(v))\)._
In the next result we discuss a relationship between the \(q\)-bipartite distance matrix and the \(q\)-bipartite Laplacian matrix of a nonsingular tree.
**Lemma 13**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Let \(\mathfrak{B}(T)\) and \(\mathfrak{L}(T)\) be the \(q\)-bipartite distance matrix and the \(q\)-bipartite Laplacian matrix of \(T\), respectively. Let \({}^{q}\boldsymbol{\tau}_{r}(T)\) be the restriction of \({}^{q}\boldsymbol{\tau}(T)\) to \(R\). Then_
\[-\mathfrak{L}(T)\mathfrak{B}(T)+(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)\mathds{1}^{t}=q(1+q)I.\]
Proof.: We proceed by induction on \(p\). The base case can be verified easily. Assume the result holds for \(p\). Let \(\widehat{T}\) be a nonsingular tree with \(2p+2\) vertices. We can obtain \(\widehat{T}\) from some nonsingular tree \(T\) with \(2p\) vertices by attaching a new \(P_{2}\) at a vertex \(v\). Without loss of generality, let us assume that \(v=l_{k}\) for some \(1\leq k\leq p\). Let \(\boldsymbol{\mu}\) be the \(q\)-signed degree vector at \(l_{k}\) of \(T\).
By item (a) of Lemma 4, we have
\[\mathfrak{L}(\widehat{T})=\begin{bmatrix}\mathfrak{L}(T)+q^{2}\boldsymbol{\mu}\boldsymbol{e}_{k}^{t}&-\boldsymbol{\mu}\\ -q^{2}\boldsymbol{e}_{k}^{t}&1\end{bmatrix}\]
Let \(\mathbf{x}\) be a vector of size \(p\) such that \(\mathbf{x}(i)=\{\,\mathtt{dist}(l_{i},r_{p+1})\}\) for each \(i=1,\cdots,p\). Then the \(q\)-bipartite distance matrix of \(\widehat{T}\) can be written as
\[\mathfrak{B}(\widehat{T})=\begin{bmatrix}\mathfrak{B}(T)&\mathbf{x}\\ (1+q)\mathds{1}^{t}+q^{2}\mathbf{e}_{k}^{t}\mathfrak{B}(T)&1\end{bmatrix}.\]
Now note that
\[\begin{split}\mathfrak{L}(\widehat{T})\mathfrak{B}(\widehat{T})&=\begin{bmatrix}\mathfrak{L}(T)+q^{2}\boldsymbol{\mu}\boldsymbol{e}_{k}^{t}&-\boldsymbol{\mu}\\ -q^{2}\boldsymbol{e}_{k}^{t}&1\end{bmatrix}\begin{bmatrix}\mathfrak{B}(T)&\boldsymbol{x}\\ (1+q)\mathds{1}^{t}+q^{2}\boldsymbol{e}_{k}^{t}\mathfrak{B}(T)&1\end{bmatrix}\\ &=\begin{bmatrix}\mathfrak{L}(T)\mathfrak{B}(T)-(1+q)\boldsymbol{\mu}\mathds{1}^{t}&\mathfrak{L}(T)\boldsymbol{x}+(q^{2}-1)\boldsymbol{\mu}\\ (1+q)\mathds{1}^{t}&1-q^{2}\end{bmatrix}\\ &=\begin{bmatrix}(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)\mathds{1}^{t}-q(1+q)I-(1+q)\boldsymbol{\mu}\mathds{1}^{t}&\mathfrak{L}(T)\boldsymbol{x}+(q^{2}-1)\boldsymbol{\mu}\\ (1+q)\mathds{1}^{t}&1-q^{2}\end{bmatrix}\end{split} \tag{4}\]
The last equality follows from the induction hypothesis. By part (b) of Lemma 10, we get
\[\begin{split}-\mathfrak{L}(\widehat{T})\mathfrak{B}(\widehat{T})+(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(\widehat{T})\mathds{1}^{t}&=-\mathfrak{L}(\widehat{T})\mathfrak{B}(\widehat{T})+(1+q)\begin{bmatrix}\big{(}{}^{q}\boldsymbol{\tau}_{r}(T)-\boldsymbol{\mu}\big{)}\mathds{1}^{t}\\ \mathds{1}^{t}\end{bmatrix}\\ &=\begin{bmatrix}q(1+q)I&-\mathfrak{L}(T)\boldsymbol{x}+(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)-q(1+q)\boldsymbol{\mu}\\ 0&q(1+q)\end{bmatrix}\end{split}\]
Therefore, it only remains to show that \(\mathfrak{L}(T)\boldsymbol{x}=(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)-q(1+q)\boldsymbol{\mu}\).
Let the degree of \(l_{k}\) in \(T\) be \(s\), (\(s\geq 1\)). Let \(T_{1}\) be the tree obtained from \(T\) by removing all vertices adjacent to \(l_{k}\) except the vertex \(r_{k}\). If \(s=1\) then \(T_{1}\) is the same as \(T\). Without loss of any generality, let us assume that \(T_{1}\) has the vertex set \(\{l_{1},r_{1},\ldots,l_{k-1},r_{k-1},l_{k},r_{k}\}\). Further, note that
\[\boldsymbol{x}=\mathfrak{B}(T)\boldsymbol{e}_{k}+(1+q)\begin{bmatrix}q^{\mathtt{dist}(l_{1},r_{k})}&\cdots&q^{\mathtt{dist}(l_{k-1},r_{k})}&0&0&\cdots&0\end{bmatrix}^{t}. \tag{5}\]
By (3), we already have
\[\mathfrak{L}(T)\begin{bmatrix}\mathbb{E}(T_{1})\boldsymbol{e}_{k}-q\boldsymbol{e}_{k}\\ \boldsymbol{0}\end{bmatrix}=q(\boldsymbol{e}_{k}-\boldsymbol{\mu}). \tag{6}\]
By using the induction hypothesis, \(\mathfrak{L}(T)\mathfrak{B}(T)=(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)\mathds{1}^{t}-q(1+q)I\). Hence, by (5) and (6), it follows that
\[\mathfrak{L}(T)\boldsymbol{x}=(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)-q(1+q)\boldsymbol{e}_{k}+q(1+q)(\boldsymbol{e}_{k}-\boldsymbol{\mu})=(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)-q(1+q)\boldsymbol{\mu}.\]
This completes the proof.
We are now in a position to supply a formula for the inverse of the \(q\)-bipartite distance matrix of a nonsingular tree \(T\).
**Theorem 14**.: _Let \(T\) be a nonsingular tree on \(2p\) vertices with a standard vertex bipartition \((L,R)\). Suppose \(q\neq 0,-1\) and \(\mathtt{bd}_{q}(T)\neq 0\). Then_
\[\mathfrak{B}(T)^{-1}=-\frac{1}{q(1+q)}\mathfrak{L}(T)+\frac{1}{q\,\mathtt{bd}_{q}(T)}\,{}^{q}\boldsymbol{\tau}_{r}(T)\big{(}{}^{q}\boldsymbol{\tau}_{l}(T)\big{)}^{t}.\]
_Proof._ By using Theorem 12 and Lemma 13 we have
\[\left(-\frac{1}{q(1+q)}\mathfrak{L}(T)+\frac{1}{q\,\mathtt{bd}_{q}(T)}\,{}^{q}\boldsymbol{\tau}_{r}(T)\big{(}{}^{q}\boldsymbol{\tau}_{l}(T)\big{)}^{t}\right)\mathfrak{B}(T)=\frac{1}{q(1+q)}\left(-\mathfrak{L}(T)\mathfrak{B}(T)+(1+q)\,{}^{q}\boldsymbol{\tau}_{r}(T)\mathds{1}^{t}\right)=I\]
This completes the proof.
|
2309.04432 | Nonlinear Stability of Static Néel Walls in Ferromagnetic Thin Films | In this paper, the nonlinear (orbital) stability of static 180^\circ N\'eel
walls in ferromagnetic films, under the reduced wave-type dynamics for the
in-plane magnetization proposed by Capella, Melcher and Otto [CMO07], is
established. It is proved that the spectrum of the linearized operator around
the static N\'eel wall lies in the stable complex half plane with non-positive
real part. This information is used to show that small perturbations of the
static N\'eel wall converge to a translated orbit belonging to the manifold
generated by the static wall. | A. Capella, C. Melcher, L. Morales, R. G. Plaza | 2023-09-08T16:53:59Z | http://arxiv.org/abs/2309.04432v2 | # Nonlinear stability of static Neel walls in ferromagnetic thin films
###### Abstract.
In this paper, the nonlinear (orbital) stability of static \(\pi\)-shifted Neel walls in ferromagnetic films, under the reduced wave-type dynamics for the in-plane magnetization proposed by Capella, Melcher and Otto [1], is established. It is proved that the spectrum of the linearized operator around the static Neel wall lies in the stable complex half plane with non-positive real part. This information is used to show that small perturbations of the static Neel wall converge to a translated orbit belonging to the manifold generated by the static wall.
###### Contents
* 1 Introduction
* 2 Preliminaries and main result
* 3 The linearized operator around the static Neel wall's phase
* 4 Perturbation equations and spectral stability
* 5 Semigroup generation and decay
* 6 Nonlinear (orbital) stability
## 1. Introduction
In order to study the motion of magnetization vectors in ferromagnetic materials, in 1935 Landau and Lifshitz [1] introduced a model system of equations, later reformulated and re-derived by Gilbert [14, 15], which constitutes the fundamental and best accepted mathematical model that describes the magnetization in ferromagnets. Since ferromagnetic thin films exhibit a wide range of applications to the design and manufacturing of magnetic storage devices, the Landau-Lifshitz-Gilbert (LLG) model has attracted a great deal of attention from physicists and mathematicians alike in the last decades. A great variety of patterns of magnetization vectors appear in ferromagnetic films. For instance, narrow transition regions between opposite magnetization domains are called _domain walls_. Some of the most common wall types in such materials are called Neel walls, separating two opposite magnetization regions by an in-plane rotation, oriented along an axis; Bloch walls, for which the magnetization rotates about the normal of the domain wall, pointing
along the domain wall plane in a 3D system; or Walker walls, which are formed under the presence of an external magnetic field (see, e.g., Hubert and Schafer [10] for further information).
One of the main objectives of recent mathematical studies is to understand the behavior of these dynamical coherent structures developed by the magnetization of a ferromagnet. The stability under small perturbations of these microstructures is important, not only to validate the mathematical model but also to enhance the numerical simulations performed by physicists and engineers to optimize and design new ferromagnetic materials (see, e.g., [1]). Up to our knowledge, the literature on the dynamical stability theory for magnetic domain walls is scarce. The stability of one-dimensional Bloch walls has been addressed by Krukowski [11] using a spectral (linearized) calculation of energies of ground states, and by Carbou and Labbe [12], under the nanowire, one-dimensional approximation by Sanchez [13]. Takasao [14] improved this last result for Walker walls, also in one dimension and in the presence of an external magnetic field. Carbou [1] proved the stability of a Walker wall in the three-dimensional model using the energy method and under a simplifying assumption that gets rid of the non-local part of the operator. Most of these works employ energy methods to conclude stability, that is, the analyses are based on performing _a priori_ energy estimates on the equations of magnetization evolution and relying on their intrinsic structure.
This paper is devoted to studying the dynamical stability of static Neel walls. Our departure point is the one-dimensional thin film reduction of the micromagnetic energy proposed by Capella, Melcher, and Otto [15] (outlined previously in [15] for numerical purposes), which establishes an effective system for the in-plane magnetization by taking the thin film layer limit. The resulting system underlies a wave-type dynamics for the Neel wall's phase. The authors prove the existence and uniqueness of a static Neel wall's phase profile in the absence of external fields, as well as the emergence of traveling wave solutions near the static profile under the influence of a small constant external forcing. The authors also outline the stability of these structures under small one-dimensional perturbations. The present analysis constitutes a follow-up of such formulation and a full study of the nonlinear stability of the static Neel wall under small, one-dimensional perturbations of the phase itself. As far as we know, this problem has not been studied before in the literature.
One of the main technical difficulties pertains to the non-locality of the dynamical equation, even at a linear level. In contrast to previous studies, we adopt a spectral approach to the problem. Motivated by the ideas in [15], in which the linearized operator around the static phase is defined and previously studied, we profit from this information and perform a full spectral stability analysis of this operator, that includes a proof of its relative compactness with respect to an asymptotic operator. In contrast with standard techniques, which are usually applied to local differential operators with bounded coefficients and which are based on truncating such coefficients with their asymptotic limits (see, e.g., [16], Section 3.1), in this work and by necessity (because we are studying a non-local operator) we develop a novel procedure that focuses on describing totally bounded sets in terms of \(L^{2}\)-equicontinuity and uniform decay in Fourier space (see Theorem 3.23 below). This relative compactness plays a crucial role in the location of the essential spectrum of a block operator matrix that encodes the linearization of the nonlinear
wave equation for perturbations of the static wall. It is proved that both essential and point spectra are stable, that is, they belong to the stable half-plane of complex numbers with negative real part, except for the origin, which is associated with translations of the Neel wall (see Theorem 4.11). An important feature is the presence of an _spectral gap_, which is a positive distance from the eigenvalue zero to the rest of the spectrum. This allows us to establish the exponential decay of the solutions to the spectral problem when projected outside the one-dimensional vector space generated by translations of the static profile. Upon application of the well-known Gearhart-Pruss theorem [10, 11] and after the establishment of uniform resolvent estimates, we conclude that the semigroup generated by the linear block matrix operator is exponentially decaying in the appropriate subspace. This information is then used to prove nonlinear stability. For that purpose, we apply an abstract result, due originally to Sattinger [14] and adapted to a Hilbert space setting by Lattanzio _et al._[13], that establishes nonlinear stability from spectral stability by controlling the growth of nonlinear terms and profiting from the fact that the manifold generated by the wave is one-dimensional (the group of translations). We regard our contributions not only new in the context of ferromagnetic wall stability analysis, but also of methodological nature: we advocate for spectral and nonlinear analysis as a feasible and effective method in the study of this type of problems. The unpublished note by Huber [15] warrants note as the only work (as far as we know) that performs a rigorous spectral analysis of the linearized operator around a Neel wall for a layer of small (but positive) thickness, \(\epsilon>0\). Huber does not prove the spectral stability of this structure but employs the spectral information to obtain time-periodic solutions in a vicinity of it. (We note that in layers with positive thickness, the linearized operators are sectorial, in contrast with the present case of a thin-film limit.)
### Plan of the paper
This paper is structured as follows. Section 2 contains a brief description of the thin-film dynamical model in [1], recalls some of the main properties of the static Neel wall's phase, and states the main result of this paper. Section 3 is devoted to the full, rigorous study of the linearized (scalar) operator around the static Neel wall defined in [1]. In particular, it is shown that it is relatively compact to an asymptotic operator, a feature that plays a key role in the stability analysis. Section 4 establishes the spectral stability of the Neel wall's phase. The spectral problem is posed in terms of a block operator matrix and the stability of both its essential and point spectra is established. Section 5 is devoted to generating the associated semigroup and to showing the exponential decay of solutions to the linearized equations outside a one-dimensional space related to translations of the profile. The final Section 6 contains the proof of Theorem 2.3.
### Notations
Throughout this manuscript, we denote the spaces \(L^{2}(\mathbb{R},\mathbb{C}),\ H^{1}(\mathbb{R},\mathbb{C})\) and \(H^{2}(\mathbb{R},\mathbb{C})\) of complex-valued functions by \(L^{2},\ H^{1}\) and \(H^{2}\). Meanwhile, their real-valued versions are denoted by \(L^{2}(\mathbb{R}),\ H^{1}(\mathbb{R})\) and \(H^{2}(\mathbb{R})\), respectively. The set of unit vectors in \(\mathbb{R}^{n}\) is denoted by \(\mathbb{S}^{n-1}\). The operators \(\hat{\cdot}:L^{2}\to L^{2}\) and \((\cdot)^{\vee}:L^{2}\to L^{2}\) stand for the Fourier transform and its inverse, respectively. Also, \(\xi\) represents the variable in the frequency domain. In the same fashion, the half-Laplacian is defined by the relation \((-\Delta)^{1/2}u=(|\xi|\hat{u})^{\vee}\), and \(\|u\|_{\dot{H}^{1/2}}\) denotes the fractional \(H^{1/2}\)-norm of the function \(u\in L^{2}\) given by \(\|u\|_{\dot{H}^{1/2}}:=\big{\|}|\xi|^{1/2}\hat{u}\big{\|}_{L^{2}}\).
Finally, for two linear operators, say \(\mathcal{A}\) and \(\mathcal{T}\), the commutator \([\mathcal{A},\mathcal{T}]\) is given by the difference \(\mathcal{A}\mathcal{T}-\mathcal{T}\mathcal{A}\).
## 2. Preliminaries and main result
### The micromagnetic model
The Landau and Lifshitz continuum theory of ferromagnetic materials [10] is based on a magnetization field \(\mathbf{m}:\tilde{\Omega}\to\mathbb{S}^{2}\), that represents the local average magnetic moment, and a variational principle in term of the _micromagnetic energy_. In the absence of an external field, the micromagnetic energy is given by
\[\mathbb{E}(\mathbf{m})=\frac{1}{2}\Big{(}d^{2}\int_{\tilde{\Omega}}|\nabla \mathbf{m}|^{2}\,dx+\int_{\mathbb{R}^{3}}|\nabla U|^{2}+Q\int_{\tilde{\Omega}} \Phi(\mathbf{m})\,dx\Big{)}.\]
where \(d>0\) is the exchange length and \(\nabla U\) is the _stray field_ defined uniquely via the distribution equation \(\Delta U=\mathrm{div}\,(\mathbf{m}\mathbf{1}_{\tilde{\Omega}})\) (\(\mathbf{1}_{A}\) denotes the indicator function of the set \(A\)). The stray-field energy favors vanishing distributional divergence, namely, \(\nabla\cdot\mathbf{m}=0\) in \(\tilde{\Omega}\) and \(\mathbf{m}\cdot n=0\) on \(\partial\tilde{\Omega}\), where \(n\) is the outward normal to the boundary. The last integral models crystalline anisotropies via a penalty energy, for which \(\Phi\) acts as a penalty function, and it usually has the form of an even polynomial in \(\mathbf{m}\in\mathbb{S}^{2}\). The parameter \(Q>0\) measures the relative strength of anisotropy penalization against stray-field interaction.
The combination of the stray-field energy (which is a non-local term) and the non-convex saturation constraint \(|\mathbf{m}|=1\) gives rise to pattern formation among magnetic domains where the magnetization is almost constant. Thin transition layers separating the magnetic domains are known as domain walls and may form complex patterns [14].
### Neel wall in soft magnetic thin films
A thin film is an infinitely extended magnetic material \(\tilde{\Omega}=\mathbb{R}^{2}\times(0,\delta)\) where \(\delta\ll d\). In this regime, it is safe to assume that the magnetization is independent of the \(x_{3}\) variable. By assuming further that the magnetization is \(\ell\)-periodic in the \(\mathbf{e}_{2}\) direction, namely,
\[\mathbf{m}(x_{1},x_{2}+\ell)=\mathbf{m}(x_{1},x_{2})\quad\text{for any }x=(x_{1},x_{2})\in \mathbb{R}^{2},\]
and that the material has a uniaxial anisotropy in the \(e_{2}\) direction, with \(\Phi(\mathbf{m})=1-m_{2}^{2}\), we consider transition layers connecting antipodal states on the easy axis
\[\mathbf{m}:\mathbb{R}^{2}\to\mathbb{S}^{2}\quad\text{ with }\mathbf{m}(\pm \infty,x_{2})=(0,\pm 1,0)\quad\text{for any }x_{2}\in\mathbb{R}.\]
In this case, the stray field energy is approximated at leading order by
\[E_{stray}(\mathbf{m})=\frac{1}{2}\int_{0}^{\ell}\int_{\mathbb{R}}\left(\frac{ \delta}{2}\left||\nabla|^{\frac{1}{2}}\mathcal{H}(m)\right|^{2}+m_{3}^{2} \right)dx\]
where \(\mathbf{m}=(m,m_{3})\) with \(m=(m_{1},m_{2})\) and, formally, \(\mathcal{H}(m)=\nabla\Delta^{-1}\mathrm{div}\;m\) (see [1] for further details). Thus, the micromagnetic energy becomes
\[\mathbb{E}(\mathbf{m})=\frac{1}{2}\int_{0}^{\ell}\int_{\mathbb{R}}\left(d^{2}|\nabla m|^{2}+\frac{\delta}{2}\left||\nabla|^{\frac{1}{2}}\mathcal{H}(m)\right|^{2}+Q(1-m_{2}^{2})+m_{3}^{2}\right)dx.\]
Neel walls are one-dimensional transition layers observed in soft ferromagnetic thin films, that is, magnetic materials with relatively weak anisotropic energy. Here, we consider a parameter regime of soft thin films so that the anisotropy and relative thickness are balanced, more precisely
\[Q\ll 1,\quad\kappa=d/\delta\gg 1\quad\text{while }\mathcal{Q}=4\kappa^{2}Q\lesssim 1. \tag{2.1}\]
Therefore, it is feasible to introduce the small parameter \(\varepsilon=\sqrt{Q}\). By rescaling the length \(x\) by \(w=\delta/(2Q)\), and the energy by \(\delta/2\), the micromagnetic energy becomes
\[E_{\varepsilon}(\mathbf{m})=\frac{1}{2}\int_{0}^{L}\int_{\mathbb{R}}\bigg{(}\mathcal{Q}|\nabla m|^{2}+\Big{|}|\nabla|^{\frac{1}{2}}\mathcal{H}(m)\Big{|}^{2}+(1-m_{2}^{2})+\Big{(}\frac{m_{3}}{\varepsilon}\Big{)}^{2}\bigg{)}\,dx, \tag{2.2}\]
where \(L=\ell/w\) and we assumed \(\varepsilon\ll\mathcal{Q}\lesssim 1\). Assuming further that \(m=m(x_{1})\), the field \(\mathcal{H}(m)=m_{1}\mathbf{e}_{1}\) is independent of \(x_{2}\), and the reduced variational principle for the one-dimensional wall transition is
\[\begin{split} E_{\varepsilon}(\mathbf{m})=\frac{1}{2}\int_{\mathbb{R}}&\Big{(}\mathcal{Q}|\mathbf{m}^{\prime}|^{2}+||\nabla|^{\frac{1}{2}}m_{1}|^{2}+(1-m_{2}^{2})+\Big{(}\frac{m_{3}}{\varepsilon}\Big{)}^{2}\,\Big{)}dx\to\min,\\ &\mathbf{m}:\mathbb{R}\to\mathbb{S}^{2}\qquad\text{with}\ \,\,\mathbf{m}(\pm\infty)=(0,\pm 1,0),\end{split} \tag{2.3}\]
where \(\mathbf{m}^{\prime}=\frac{d\mathbf{m}}{dx_{1}}\). In [1] it is shown that for \(\varepsilon_{k}\to 0\) there exists a sequence of minimizers \(\mathbf{m}_{\varepsilon_{k}}\) of (2.3) with a subsequence that locally converges to \(\mathbf{m}=(m,0)\) and satisfies
\[\begin{split} E_{0}(m)=\frac{1}{2}\int_{\mathbb{R}}& \Big{(}\mathcal{Q}|m^{\prime}|^{2}+||\nabla|^{\frac{1}{2}}m_{1}|^{2}+m_{1}^{2 }\Big{)}dx\to\min,\\ & m:\mathbb{R}\to\mathbb{S}^{2}\qquad\text{with}\ \,\,m(\pm\infty)=(0,\pm 1).\end{split} \tag{2.4}\]
Since \(|m^{\prime}|^{2}=(m_{1}^{\prime})^{2}/(1-m_{1}^{2})\) is a strictly convex functional of \(m_{1}\), the variational principle (2.4) has a minimizer for any \(\mathcal{Q}>1\). The minimizer is called a Neel wall. We refer to the energy in (2.4) as the Neel wall energy, and for convenience, we write it as
\[E_{0}(m)=\frac{1}{2}\left(\mathcal{Q}\|m^{\prime}\|_{L^{2}}^{2}+\|m_{1}\|_{ \dot{H}^{1/2}}^{2}+\|m_{1}\|_{L^{2}}^{2}\right).\]
Since the translation operator is an \(L^{2}\)-isometry, the energy \(E_{0}(m)\) is invariant under translations in the spatial coordinate \(x\). This invariance is inherited by the phase formulation below, so that minimizers of (2.5) are unique only up to a translation.
For our analysis, we introduce the phase \(\theta:\mathbb{R}\to\mathbb{R}\) so that \(m=(\cos\theta,\sin\theta)\) and the Neel wall energy becomes
\[\begin{split}&\mathcal{E}(\theta)=\frac{1}{2}\big{(}\mathcal{Q}\| \theta^{\prime}\|_{L^{2}}^{2}+\|\cos\theta\|_{\dot{H}^{1/2}}^{2}+\|\cos\theta \|_{L^{2}}^{2}\big{)}\ \to\ \min\\ &\theta:\mathbb{R}\to(-\pi/2,\pi/2),\qquad\text{with}\ \,\,\theta(\pm\infty)=\pm\pi/2.\end{split} \tag{2.5}\]
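Indeed, this is an elementary computation: with \(m=(\cos\theta,\sin\theta)\) one has
\[|m^{\prime}|^{2}=(\sin\theta\,\partial_{x}\theta)^{2}+(\cos\theta\,\partial_{x}\theta)^{2}=(\partial_{x}\theta)^{2},\qquad m_{1}=\cos\theta,\]
so that \(E_{0}(m)=\mathcal{E}(\theta)\) term by term.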
Since we are interested in the dynamic properties of Neel walls, we refer to minimizers of (2.5) as the _static_ Neel wall phase. From now on, we assume \(\mathcal{Q}=1\) and, to avoid confusion, we write \(\partial_{x}\theta\) for \(\theta^{\prime}\) and \(\partial_{x}^{2}\theta\) for \(\theta^{\prime\prime}\).
The following proposition summarizes the basic properties of the static Neel wall phase.
**Proposition 2.1** (properties of the static Neel wall [1, 1]).: _There exists a static Neel wall solution with phase \(\overline{\theta}=\overline{\theta}(x)\), \(\overline{\theta}:\mathbb{R}\to(-\pi/2,\pi/2)\), satisfying the following:_
* (a) \(\overline{\theta}\) _is a strict minimizer of the variational problem (_2.5_), with center at the origin,_ \(\overline{\theta}(0)=0\)_, and monotone increasing,_ \(\partial_{x}\overline{\theta}>0\ \,\,\forall x\in\mathbb{R}\)_._
* (b) \(\overline{\theta}\) _is a smooth solution to_ \[\partial_{x}^{2}\theta+\sin\theta(1+(-\Delta)^{1/2})\cos\theta=0,\] (2.6) _which is the Euler-Lagrange equation for the variational problem (_2.5_)._
* (c) \(\partial_{x}\overline{\theta}\in H^{2}\)_._
* (d) _For all centered variations_ \(u\in H^{1}\) _with_ \(u(0)=0\) _there holds_ \[\operatorname{Hess}\,\mathcal{E}(\overline{\theta})\langle u,u\rangle_{L^{2}}\geq\|u\,\partial_{x}\overline{\theta}\|_{L^{2}}^{2}+\operatorname{Re}\,b[u\sin\overline{\theta},u\sin\overline{\theta}],\] (2.7) _where the bilinear form_ \(b[\cdot,\cdot]:H^{1}\times H^{1}\to\mathbb{C}\)_, defined as_ \[b[f,g]=\int_{\mathbb{R}}(1+|\xi|)\hat{f}(\xi)\hat{g}(\xi)^{*}\,d\xi,\qquad f,g\in H^{1},\] (2.8) _is equivalent to the standard inner product in_ \(H^{1/2}\)_._
Proof.: Property (a) results from combining Lemma 1 in [10] with the main results of [13] (Propositions 1 and 2). The proof of the smoothness of the Neel wall can be found in [13] (Proposition 2). Since it is a minimizer, it satisfies equation (2.6) (see Lemma 1 in [10]). This shows (b). Moreover, it is proved in [10] (Theorem 1 and Lemma 1) that \(\partial_{x}\overline{\theta}\), \(\partial_{x}^{2}\overline{\theta}\in L^{2}(\mathbb{R})\). As pointed out by the authors, from the Euler-Lagrange equation (2.6) the regularity arguments of Lemma 1 can be bootstrapped to show that \(\partial_{x}^{3}\overline{\theta}\in L^{2}(\mathbb{R})\). This shows (c). Finally, property (d) is the content of Lemma 3 in [10].
**Corollary 2.2**.: _There exists a uniform constant \(C>0\) such that_
\[\|\partial_{x}\overline{\theta}\|_{\infty},\|\partial_{x}^{2}\overline{\theta }\|_{\infty}\leq C.\]
Proof.: Follows immediately from the fact that \(\partial_{x}\overline{\theta}\in H^{2}\) (see Proposition 2.1 (c)) and Sobolev's inequality: \(\|u\|_{\infty}^{2}\leq 2\|u\|_{L^{2}}\|\partial_{x}u\|_{L^{2}}\) for all \(u\in H^{1}\).
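For completeness, we recall the short argument behind this Sobolev-type bound (valid, in fact, for every \(u\in H^{1}\)): since \(u(x)\to 0\) as \(x\to-\infty\),
\[|u(x)|^{2}=\int_{-\infty}^{x}\frac{d}{ds}|u(s)|^{2}\,ds=2\,\mathrm{Re}\int_{-\infty}^{x}\partial_{s}u(s)\,u(s)^{*}\,ds\leq 2\|\partial_{x}u\|_{L^{2}}\|u\|_{L^{2}},\]
and taking the supremum over \(x\in\mathbb{R}\) yields the stated inequality.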
### LLG dynamics
The time evolution of the magnetization distribution on a ferromagnetic body \(\widetilde{\Omega}\subset\mathbb{R}^{3}\) is governed by the Landau-Lifshitz-Gilbert (LLG) equation [14, 15, 16]:
\[\mathbf{m}_{t}+\alpha\mathbf{m}\times\mathbf{m}_{t}-\gamma\mathbf{m}\times \mathbf{H}_{\mathrm{eff}}=0, \tag{2.9}\]
where \(\mathbf{m}:\widetilde{\Omega}\times(0,\infty)\to\mathbb{S}^{2}\subset\mathbb{R}^{3}\) is the magnetization field, \(\alpha>0\) is a non-dimensional damping coefficient (Gilbert factor), and \(\gamma>0\) is the (constant) absolute value of the gyromagnetic ratio with dimensions of frequency (see, e.g., [16]). The effective field, \(\mathbf{H}_{\mathrm{eff}}=\mathbf{h}-\nabla\mathbb{E}(\mathbf{m})\), is the sum of the applied field \(\mathbf{h}\) and the negative functional gradient of the micromagnetic energy \(\mathbb{E}(\mathbf{m})\). If we consider a single magnetic spin \(\mathbf{m}=\mathbf{m}(t)\) under a constant magnetic field \(\mathbf{h}\) and neglect damping, the magnetization \(\mathbf{m}\) will precess about the applied field \(\mathbf{h}\) with a frequency given by \(\omega=\gamma|\mathbf{h}|\). When the damping is turned on, the vector \(\mathbf{m}\) will spiral down around \(\mathbf{h}\) until \(\mathbf{m}\) and \(\mathbf{h}\) become parallel. The typical relaxation time is \(1/(\alpha\omega)\).
In bulk materials, there exists a one-dimensional optimal path connecting antipodal magnetization states known as the Bloch wall. Bloch walls are such that \(m_{1}=0\) and the transition is perpendicular to the transition axis. In this case, the magnetization \(\mathbf{m}\) is divergence-free and the stray field energy vanishes. Under this condition, there exist explicit dynamic solutions in the bulk, where under an applied field \(\mathbf{h}=H\mathbf{e}_{2}\) the magnetization rotates to develop an \(m_{1}\) component. This component implies a rotation of the other magnetization components, advancing the domain wall [11, 13].
### LGG wave-type dynamic limit in thin films
Thin films are incompatible with gyrotropic wall motion due to the constraint imposed on the in-plane magnetization by the stray field. In this configuration, the competition between energetic and dynamic forces becomes singular in the thin-film limit. In [1], a suitable effective limit is considered in the appropriate regime, in which the oscillatory features of the LLG dynamics are preserved in the limit. It turns out that the effective dynamics depend on the asymptotic regime as \(\alpha\) and the relative thickness \(\delta/d\) tend to zero.
For the precise scaling and regime in [1] let \(\varepsilon=\sqrt{Q}\) and consider (2.1) when \(\varepsilon\ll\mathcal{Q}\) while \(\mathcal{Q}=(2\varepsilon d/\delta)^{2}\lesssim 1\) is small but bounded from below. That is, \(\varepsilon\sim\delta/d\) can be regarded as the relative thickness. Under these assumptions we rescale space and time by
\[x\mapsto wx\quad\text{where }w=\delta/(2\varepsilon^{2}),\qquad t\mapsto t/(\gamma\varepsilon).\]
In this scaling, the mean effective field \(\mathbf{H}_{\text{eff}}\) becomes
\[\mathbf{H}_{\text{eff}}=-\varepsilon^{2}\nabla E_{\varepsilon}(\mathbf{m}) \quad\text{ where }E_{\varepsilon}=(2/\delta)\mathbb{E}(\mathbf{m}) \tag{2.10}\]
where \(E_{\varepsilon}(\mathbf{m})\) is given by (2.2). Therefore, the LLG equation (2.9) becomes,
\[\mathbf{m}_{t}+\alpha\mathbf{m}\times\mathbf{m}_{t}+\varepsilon\mathbf{m}\times\nabla E_{\varepsilon}(\mathbf{m})=0. \tag{2.11}\]
To derive the effective equation for the in-plane magnetization it is necessary to write down \(E_{\varepsilon}(\mathbf{m})\) in terms of \(m=(m_{1},m_{2})\) and \(m_{3}\), that is,
\[E_{\varepsilon}(\mathbf{m})=E_{0}(m)+\frac{1}{2}\int_{0}^{L}\int_{\mathbb{R}} \Big{(}\mathcal{Q}|\nabla m_{3}|^{2}+\Big{(}\frac{m_{3}}{\varepsilon}\Big{)}^ {2}\,\Big{)}dx\]
where
\[E_{0}(m)=\frac{1}{2}\int_{0}^{L}\int_{\mathbb{R}}\Big{(}\mathcal{Q}|\nabla m|^ {2}+||\nabla|^{\frac{1}{2}}\mathcal{H}(m)|^{2}+(1-m_{2}^{2})\Big{)}dx. \tag{2.12}\]
Notice that for one-dimensional transition layers the energy \(E_{0}\) coincides with the reduced Neel wall energy (2.4).
In [1] it is shown that as
\[\varepsilon\to 0\quad\text{while}\quad\alpha(\varepsilon)/\varepsilon\to\nu \tag{2.13}\]
for some positive \(\nu\), while keeping \(\mathcal{Q}=1\) for every \(\varepsilon>0\), there exists a sequence of solutions \(\mathbf{m}_{\varepsilon}\) of (2.11), \(L\)-periodic in the \(x_{2}\) direction, such that the in-plane magnetization \(m_{\varepsilon}\) converges weakly (in the appropriate spaces) to \(m\in\mathbb{S}^{1}\), a weak solution of
\[[\partial_{t}^{2}m+\nu\partial_{t}m+\nabla E_{0}(m)]\perp T_{m}\mathbb{S}^{1}. \tag{2.14}\]
Because \(E_{0}(m)\) coincides with the Neel wall energy, it is clear that under the appropriate boundary conditions at infinity (e.g. (2.4)) the static Neel wall profile \(\bar{m}=(\cos\bar{\theta},\sin\bar{\theta})\) is a static solution of (2.14).
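Let us also record, at a formal level, how (2.14) yields a scalar equation for the phase in the one-dimensional setting (the computation is standard and is included only for the reader's convenience). Writing \(m=(\cos\theta,\sin\theta)\) and \(m^{\perp}:=(-\sin\theta,\cos\theta)\), so that \(T_{m}\mathbb{S}^{1}=\operatorname{span}\{m^{\perp}\}\), one has
\[\partial_{t}m=(\partial_{t}\theta)\,m^{\perp},\qquad\partial_{t}^{2}m=(\partial_{t}^{2}\theta)\,m^{\perp}-(\partial_{t}\theta)^{2}m,\qquad\langle\nabla E_{0}(m),u\,m^{\perp}\rangle_{L^{2}}=\langle\nabla\mathcal{E}(\theta),u\rangle_{L^{2}},\]
the last identity being the chain rule for \(\mathcal{E}(\theta)=E_{0}(m(\theta))\). Hence, projecting (2.14) onto \(m^{\perp}\) produces the damped wave equation for the phase written in (2.15) below.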
### Main result
The static Neel wall solution and the wave-type dynamic equation (2.14) are the starting point of the present work. We state our main result in terms of the magnetic phase \(\theta:\mathbb{R}\times(0,\infty)\to\mathbb{R}\). As function of \(\theta(x,t)\), equation (2.14) with the boundary conditions given by (2.5) becomes
\[\left\{\begin{array}{l}\partial_{t}^{2}\theta+\nu\partial_{t}\theta+\nabla \mathcal{E}(\theta)=0,\\ \theta(-\infty,t)=-\pi/2,\quad\theta(\infty,t)=\pi/2,\\ \theta(x,0)=\theta_{0}(x),\quad\partial_{t}\theta(x,0)=v_{0}(x)\end{array}\right. \tag{2.15}\]
where \((\theta_{0},v_{0})\) are some initial conditions and the energy \(\mathcal{E}(\theta)\) is as in (2.5). After these definitions we are ready to state our main result.
**Theorem 2.3** (orbital stability of the static Neel wall).: _Let \(\mathcal{J}\subset H^{1}(\mathbb{R})\times L^{2}(\mathbb{R})\) be the set of initial conditions such that the Cauchy problem (2.15) has a global solution. Then there exists \(\varepsilon>0\) sufficiently small such that if the pair \((\theta_{0},v_{0})\in\mathcal{J}\) satisfies_
\[\|\theta_{0}-\overline{\theta}\|_{H^{1}}+\|v_{0}\|_{L^{2}}<\varepsilon,\]
_then the solution to (2.15) with initial condition \((\theta(x,0),\partial_{t}\theta(x,0))=(\theta_{0},v_{0})\) satisfies for any \(t>0\),_
\[\|\theta(\cdot,t)-\overline{\theta}(\cdot+\delta)\|_{H^{1}}\leq C\exp(- \omega t),\]
_for some shift \(\delta\in\mathbb{R}\) and constants \(C,\omega>0\) that may depend on \((\theta_{0},v_{0})\) and \(\varepsilon\)._
**Remark 2.4**.: It is to be noticed that we are not proving the global existence of the solution for a given small initial perturbation. Theorem 2.3 states that any eventual initial small perturbation of the static Neel profile, if exists, must decay to a translation of it. This type of behavior is also called _orbital_ stability (or stability _in shape_), as initial perturbations decay to an element of the orbit or manifold generated by the static wave which, in this case, is the one-dimensional manifold of translations. The existence of global solutions can be studied using standard semigroup techniques and with the help of the decaying estimates performed in this work; we do not pursue such analysis here. Instead, we focus on the stability problem alone.
## 3. The linearized operator around the static Neel wall's phase
In this section, we study the linearized operator around the static Neel wall's phase. We examine its main properties and locate its resolvent and spectra. Notably, we prove that it is a relatively compact perturbation of an asymptotic operator, a property that will play a key role later on. We start by recalling some standard definitions from spectral theory which can be found in the classical literature on the subject [13, 14, 15].
**Definition 3.1**.: Let \(\mathcal{T}\) and \(\mathcal{S}\) be two linear operators from \(X\) to \(Y\), Banach spaces. It is said that \(\mathcal{S}\) is relatively bounded with respect to \(\mathcal{T}\) (or simply \(\mathcal{T}\)-bounded) if \(D(\mathcal{T})\subset D(\mathcal{S})\) and
\[\|\mathcal{S}u\|\leq a\|u\|+b\|\mathcal{T}u\|,\qquad\forall\,u\in D(\mathcal{ T}),\]
where \(a,b\) are nonnegative constants. The greatest lower bound \(b_{0}\) of all possible constants \(b\) is called the relative bound of \(\mathcal{S}\) with respect to \(\mathcal{T}\) (or simply the \(\mathcal{T}\)-bound of \(\mathcal{S}\)).
**Definition 3.2**.: Let \(\mathcal{L}\in\mathscr{C}(X,Y)\) be a linear, closed operator from \(X\) to \(Y\), Banach spaces. The resolvent \(\rho(\mathcal{L})\), the point spectrum \(\sigma_{\rm pt}(\mathcal{L})\) and the essential spectrum \(\sigma_{\rm ess}(\mathcal{L})\) are defined as:
\[\rho(\mathcal{L}) :=\{\lambda\in\mathbb{C}\,:\,\mathcal{L}-\lambda\,\,\,\text{is injective and onto, and }(\mathcal{L}-\lambda)^{-1}\,\text{is bounded}\,\},\] \[\sigma_{\rm pt}(\mathcal{L}) :=\{\lambda\in\mathbb{C}\,:\,\mathcal{L}-\lambda\,\,\,\text{is Fredholm with index zero and has a non-trivial kernel}\},\] \[\sigma_{\rm ess}(\mathcal{L}) :=\{\lambda\in\mathbb{C}\,:\,\mathcal{L}-\lambda\,\,\,\text{is either not Fredholm or has index different from zero}\}.\]
The spectrum of \(\mathcal{L}\) is the set \(\sigma(\mathcal{L}):=\sigma_{\rm ess}(\mathcal{L})\cup\sigma_{\rm pt}( \mathcal{L})\). If \(\lambda\in\sigma_{\rm pt}(\mathcal{L})\) we refer to it as an _eigenvalue_.
**Remark 3.3**.: Several definitions of essential spectrum are in use. This definition is due to Weyl [20] (see also [13, 14]), and has the advantage that the remaining spectrum, namely \(\sigma_{\mathrm{pt}}(\mathcal{L})\), is a discrete set of isolated eigenvalues. It is also to be observed that \(\rho(\mathcal{L})=\mathbb{C}\backslash\sigma(\mathcal{L})\) because the operator \(\mathcal{L}-\lambda\) is closed (cf. Kato [13], p. 167).
**Definition 3.4**.: Let \(X\) be a Banach space and assume that \(\mathcal{T}\) and \(\mathcal{T}_{0}\) are two closed and linear operators on \(X\). The operator \(\mathcal{T}\) is a relatively compact perturbation of \(\mathcal{T}_{0}\) if \((\mathcal{T}_{0}-\mathcal{T})(\lambda\mathrm{I}-\mathcal{T}_{0})^{-1}:X\to X\) is compact for some \(\lambda\in\rho\left(\mathcal{T}_{0}\right)\).
### Basic properties
It can be shown (cf. Capella _et al._[1]) that the linearization of the mapping \(\nabla\mathcal{E}(\theta)\) around the static Neel wall's phase \(\overline{\theta}=\overline{\theta}(x)\) is given by
\[\begin{cases}\mathcal{L}:L^{2}\to L^{2},\\ D(\mathcal{L})=H^{2},\\ \mathcal{L}u:=-\partial_{x}^{2}u+\mathcal{S}u-c_{\theta}u,\qquad u\in D( \mathcal{L}),\end{cases} \tag{3.1}\]
where the nonlocal operator \(\mathcal{S}\) is defined as
\[\begin{cases}\mathcal{S}:L^{2}\to L^{2},\\ D(\mathcal{S})=H^{1},\\ \mathcal{S}u:=\sin\overline{\theta}(1+(-\Delta)^{1/2})(u\sin\overline{\theta}),\quad u\in D(\mathcal{S}),\end{cases} \tag{3.2}\]
and \(c_{\theta}=c_{\theta}(x)\), \(x\in\mathbb{R}\), is the coefficient defined as
\[c_{\theta}:=\cos\overline{\theta}(1+(-\Delta)^{1/2})\cos\overline{\theta}. \tag{3.3}\]
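For the reader's convenience, let us sketch how (3.1) arises (the computation is only formal). Since the Euler-Lagrange equation (2.6) identifies \(\nabla\mathcal{E}(\theta)\) with \(-\partial_{x}^{2}\theta-\sin\theta\,(1+(-\Delta)^{1/2})\cos\theta\), differentiating at \(\overline{\theta}\) in the direction \(u\) gives
\[\frac{d}{ds}\Big{|}_{s=0}\nabla\mathcal{E}(\overline{\theta}+su)=-\partial_{x}^{2}u+\sin\overline{\theta}\,(1+(-\Delta)^{1/2})(u\sin\overline{\theta})-\big{[}\cos\overline{\theta}\,(1+(-\Delta)^{1/2})\cos\overline{\theta}\big{]}u=\mathcal{L}u,\]
which is precisely (3.1)-(3.3).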
It can be easily shown that \(c_{\theta}=c_{\theta}(x)\) is a real and uniformly bounded coefficient in \(H^{2}\) (see [1]). Notice that the non-local operator, \(1+(-\Delta)^{1/2}:L^{2}\to L^{2}\), is defined through
\[(1+(-\Delta)^{1/2})u:=((1+|\xi|)\widehat{u}(\xi))^{\vee},\]
for any \(u\) in its natural domain, \(D(\mathcal{S})=H^{1}\), and since \(D(-\partial_{x}^{2})=H^{2}\subset H^{1}\), the natural domain of definition of \(\mathcal{L}\) is \(D(\mathcal{L})=H^{2}\). Therefore, we regard \(\mathcal{L}\) as a densely defined operator in \(L^{2}\) with domain \(D(\mathcal{L})=H^{2}\). For notational convenience we denote
\[s_{\theta}(x):=\sin\overline{\theta}(x),\qquad x\in\mathbb{R},\]
which is also real, smooth and bounded for all \(x\in\mathbb{R}\).
The next lemma shows a relation between the Hilbert transform and the half-Laplacian which will be used later on. We present it without proof, inasmuch as the latter can be found in many references (see, e.g., [1, 13, 14]).
**Lemma 3.5**.: _Let \(\mathcal{H}:L^{2}\to L^{2}\) be the Hilbert transform given by_
\[u\mapsto\mathrm{P.V.}\,\frac{1}{\pi}\int_{\mathbb{R}}\frac{u(s)}{x-s}\,ds.\]
_Then, \(\mathcal{H}\) is an isometry on \(L^{2}\). Moreover, if \(u\in H^{1}\) then_
\[(-\Delta)^{1/2}u=\mathcal{H}(\partial_{x}u)=\partial_{x}\mathcal{H}u.\]
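Although the proof is omitted, the second identity is immediate on the Fourier side: with the convention \(\widehat{\mathcal{H}u}(\xi)=-i\,\mathrm{sgn}(\xi)\,\hat{u}(\xi)\) (the sign depends on the chosen Fourier normalization), for \(u\in H^{1}\),
\[\widehat{\mathcal{H}(\partial_{x}u)}(\xi)=-i\,\mathrm{sgn}(\xi)\,(i\xi)\,\hat{u}(\xi)=|\xi|\,\hat{u}(\xi)=\widehat{(-\Delta)^{1/2}u}(\xi),\]
while \(|\widehat{\mathcal{H}u}(\xi)|=|\hat{u}(\xi)|\) gives the isometry property.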
From the definition of the coefficients \(s_{\theta}\) and \(c_{\theta}\) we immediately have the following properties.
**Corollary 3.6**.: _There exists a uniform constant \(C>0\) such that_
\[\|c_{\theta}\|_{\infty},\|\partial_{x}c_{\theta}\|_{\infty} \leq C, \tag{3.4}\] \[\|s_{\theta}\|_{\infty},\|\partial_{x}s_{\theta}\|_{\infty},\| \partial_{x}^{2}s_{\theta}\|_{\infty} \leq C.\]
Proof.: Follows directly from Corollary 2.2 and the regularity of \(c_{\theta}\) and \(s_{\theta}\).
**Corollary 3.7**.: _Let \(u,v\in D(\mathcal{S})\). Then \(\left\langle\mathcal{S}u\,,v\right\rangle_{L^{2}}=b[s_{\theta}u,s_{\theta}v]\)._
Proof.: This can be easily proved by applying Plancherel's theorem. Indeed, there holds
\[\langle\mathcal{S}u,v\rangle_{L^{2}} =\int_{\mathbb{R}}s_{\theta}(x)(1+(-\Delta)^{1/2})(s_{\theta}(x)u )v^{*}\,dx\] \[=\int_{\mathbb{R}}(1+(-\Delta)^{1/2})(s_{\theta}(x)u)(s_{\theta}( x)v)^{*}\,dx\] \[=\int_{\mathbb{R}}(1+|\xi|)\widehat{(s_{\theta}u)}(\xi)\widehat{ (s_{\theta}v)}(\xi)^{*}\,d\xi\] \[=b[s_{\theta}u,s_{\theta}v],\]
as claimed.
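In particular, taking \(v=u\) and recalling (2.8) together with Plancherel's theorem, we obtain
\[\langle\mathcal{S}u,u\rangle_{L^{2}}=b[s_{\theta}u,s_{\theta}u]=\int_{\mathbb{R}}(1+|\xi|)\,|\widehat{(s_{\theta}u)}(\xi)|^{2}\,d\xi\geq\|s_{\theta}u\|_{L^{2}}^{2}\geq 0,\qquad u\in H^{1},\]
that is, \(\langle\mathcal{S}u,u\rangle_{L^{2}}\geq 0\) for every \(u\in H^{1}\).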
The following proposition summarizes the basic properties of the linearized operator \(\mathcal{L}\) and the Neel wall's phase, which have already been proved in [1].
**Proposition 3.8**.: _The operator \(\mathcal{L}\) and the static Neel wall's phase \(\overline{\theta}\) satisfy:_
* (a) \(\partial_{x}\overline{\theta}\in D(\mathcal{L})\) _with_ \(\mathcal{L}\partial_{x}\overline{\theta}=0\)_._
* (b) _For all_ \(f\in L^{2}\) _such that_ \(f\perp\partial_{x}\overline{\theta}\) _in_ \(L^{2}\) _there exists a solution_ \(u\in H^{2}\) _to the equation_ \(\mathcal{L}u=f\)_. The solution is unique up to a constant multiple of_ \(\partial_{x}\overline{\theta}\)_._
* (c) _There exists a uniform constant_ \(\Lambda_{0}>0\) _such that if_ \(u\in H^{1}\) _and_ \(\langle u,\partial_{x}\overline{\theta}\rangle_{L^{2}}=0\)_, then_ \[\langle\mathcal{L}u,u\rangle_{L^{2}}\geq\Lambda_{0}\|u\|_{L^{2}}^{2}.\] (3.5)
* (d) _Let_ \(f\in\{\partial_{x}\overline{\theta}\}^{\perp}\subset L^{2}\)_. Then the equation_ \(\mathcal{L}u=f\) _has a strong solution_ \(u\in H^{2}\)_, unique up to a constant multiple of_ \(\partial_{x}\overline{\theta}\)_. Moreover, if_ \(u\in\{\partial_{x}\overline{\theta}\}^{\perp}\)_, then_ \[\|u\|_{H^{2}}\leq C\|f\|_{L^{2}},\] (3.6) _for some_ \(C>0\)_._
Proof.: The proof follows from Lemmata 4 and 5, together with Proposition 1 in [1].
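For instance, property (a) can also be checked directly (we include the short computation for convenience): differentiating the Euler-Lagrange equation (2.6) with respect to \(x\), and using that \(\partial_{x}\) commutes with \((-\Delta)^{1/2}\),
\[0=\partial_{x}\big{[}\partial_{x}^{2}\overline{\theta}+\sin\overline{\theta}\,(1+(-\Delta)^{1/2})\cos\overline{\theta}\big{]}=\partial_{x}^{2}(\partial_{x}\overline{\theta})+c_{\theta}\,\partial_{x}\overline{\theta}-\sin\overline{\theta}\,(1+(-\Delta)^{1/2})\big{(}\sin\overline{\theta}\,\partial_{x}\overline{\theta}\big{)}=-\mathcal{L}\,\partial_{x}\overline{\theta},\]
so that \(\mathcal{L}\partial_{x}\overline{\theta}=0\); the membership \(\partial_{x}\overline{\theta}\in D(\mathcal{L})=H^{2}\) is guaranteed by Proposition 2.1 (c).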
Next, we are going to verify that the operator \(\mathcal{L}\) is self-adjoint. For that purpose, we remind the reader that it is well-known that the Laplacian, \(-\Delta=-\partial_{x}^{2}\), is essentially self-adjoint in \(L^{2}\) when defined on \(C_{0}^{\infty}(\mathbb{R})\), but it is actually self-adjoint on its maximal domain, \(D(-\partial_{x}^{2})=H^{2}\subset L^{2}\). This property can be easily verified using the Fourier transform, which unitarily diagonalizes the Laplacian (see, e.g., Kato [14], section V-5.2, pp. 299-302).
First, we have the following observation.
**Lemma 3.9**.: _The operator \(\mathcal{S}:L^{2}\to L^{2}\) is symmetric._
Proof.: We recall that \(\mathcal{S}\) is symmetric if and only if its domain is dense and \(\langle v,\mathcal{S}u\rangle_{L^{2}}=\langle\mathcal{S}v,u\rangle_{L^{2}}\) for all \(u,v\in D(\mathcal{S})\). Since \(D(\mathcal{S})=H^{1}\) is dense in \(L^{2}\) we only need to verify the latter property. But Corollary 3.7 and the hermiticity of \(b\) yield
\[\langle\mathcal{S}u\,,v\rangle_{L^{2}}=b[s_{\theta}u,s_{\theta}v]=b[s_{\theta} v,s_{\theta}u]^{*}=\langle\mathcal{S}v\,,u\rangle_{L^{2}}^{*}=\langle u\,, \mathcal{S}v\rangle_{L^{2}}\,,\]
for all \(u,v\in H^{1}\), as claimed.
We now verify that the operator \(\mathcal{L}\) is self-adjoint through the application of Kato-Rellich's theorem twice.
**Theorem 3.10**.: _The operator \(\mathcal{L}:L^{2}\to L^{2}\) with domain \(D(\mathcal{L})=H^{2}\) is self-adjoint._
Proof.: First, note that \(\mathcal{L}\) is clearly a symmetric operator, because its domain is dense in \(L^{2}\) and there holds
\[\langle\mathcal{L}u,v\rangle_{L^{2}} =\langle-\partial_{x}^{2}u,v\rangle_{L^{2}}+\langle\mathcal{S}u,v \rangle_{L^{2}}+\langle c_{\theta}u,v\rangle_{L^{2}},\] \[=\langle u,-\partial_{x}^{2}v\rangle_{L^{2}}+\langle u,\mathcal{ S}v\rangle_{L^{2}}+\langle u,c_{\theta}v\rangle_{L^{2}},\] \[=\langle u,\mathcal{L}v\rangle_{L^{2}},\]
for all \(u,v\in H^{2}\), after integration by parts, application of Lemma 3.9 and the fact that \(c_{\theta}\) is real.
Now, it is well-known that for every \(u\in H^{2}\) there holds the estimate
\[\|\partial_{x}u\|_{L^{2}}\leq k\|\partial_{x}^{2}u\|_{L^{2}}+\frac{2}{k}\|u\|_ {L^{2}}, \tag{3.7}\]
for any arbitrary \(k>0\) (see Kato [14], p. 192). Let us denote the operator,
\[\begin{cases}\widetilde{\mathcal{S}}:L^{2}\to L^{2},\\ D(\widetilde{\mathcal{S}})=H^{1},\\ \widetilde{\mathcal{S}}u:=s_{\theta}(-\Delta)^{1/2}(s_{\theta}u),\quad u\in D (\widetilde{\mathcal{S}}),\end{cases}\]
so that \(\mathcal{S}=s_{\theta}^{2}\mathrm{I}+\widetilde{\mathcal{S}}\). Following the arguments of Lemma 3.9, it is easy to verify that \(\widetilde{\mathcal{S}}\) is a symmetric operator. Moreover, from Corollary 2.2, we observe that \(s_{\theta}=\sin\overline{\theta}\) and \(\partial_{x}s_{\theta}=(\partial_{x}\overline{\theta})\cos\overline{\theta}\) are uniformly bounded functions for all \(x\in\mathbb{R}\), and there exists a constant \(C_{0}>0\) such that \(\|s_{\theta}\|_{\infty}\leq 1\) and \(\|\partial_{x}s_{\theta}\|_{\infty}\leq C_{0}\). Therefore,
\[\|\widetilde{\mathcal{S}}u\|_{L^{2}} \leq\left(\int_{\mathbb{R}}|(-\Delta)^{1/2}(s_{\theta}(x)u)|^{2} \,dx\right)^{1/2}=\left(\int_{\mathbb{R}}|\xi|^{2}|\widehat{(s_{\theta}u)}( \xi)|^{2}\,d\xi\right)^{1/2}\] \[\leq\|\partial_{x}(s_{\theta}u)\|_{L^{2}}\leq\|\partial_{x}s_{ \theta}\|_{\infty}\|u\|_{L^{2}}+\|s_{\theta}\|_{\infty}\|\partial_{x}u\|_{L^{2 }}\leq C_{0}\|u\|_{L^{2}}+\|\partial_{x}u\|_{L^{2}}.\]
Inequality (3.7) then yields
\[\|\widetilde{\mathcal{S}}u\|_{L^{2}}\leq k\|-\partial_{x}^{2}u\|_{L^{2}}+ \Big{(}C_{0}+\frac{2}{k}\Big{)}\|u\|_{L^{2}},\]
for all \(u\in H^{2}\) and any arbitrary \(k>0\). Since \(D(-\partial_{x}^{2})=H^{2}\subset D(\widetilde{\mathcal{S}})=H^{1}\) and since \(k>0\) is arbitrary, this shows that the symmetric operator \(\widetilde{\mathcal{S}}\) is relatively bounded with respect to \(-\partial_{x}^{2}\) with relative bound equal to zero. Consequently, we may apply Kato-Rellich's theorem (see Reed and Simon, vol. II [12], Theorem X.12, p. 162) to conclude that the operator \(\widetilde{\mathcal{S}}-\partial_{x}^{2}:L^{2}\to L^{2}\), with domain \(D(\widetilde{\mathcal{S}}-\partial_{x}^{2})=D(-\partial_{x}^{2})=H^{2}\), is self-adjoint.
Finally, let us write \(\mathcal{L}=-\partial_{x}^{2}+\mathcal{S}-c_{\theta}\mathrm{I}=-\partial_{x}^{2}+\widetilde{\mathcal{S}}+\beta\mathrm{I}\), where \(\beta:=s_{\theta}^{2}-c_{\theta}\) is a real, smooth and bounded coefficient. Clearly,
\[\|\beta u\|_{L^{2}}\leq\|\beta\|_{\infty}\|u\|_{L^{2}}\leq\|\beta\|_{\infty}\|u \|_{L^{2}}+k\|(\widetilde{\mathcal{S}}-\partial_{x}^{2})u\|_{L^{2}},\]
for all \(u\in H^{2}\) and for any \(k>0\). Since \(D(\widetilde{\mathcal{S}}-\partial_{x}^{2})=H^{2}\subset D(\beta\mathrm{I})=L^ {2}\), we conclude that the symmetric operator \(\beta\mathrm{I}\) is \((\widetilde{\mathcal{S}}-\partial_{x}^{2})-\)bounded with relative bound equal to zero. Upon application, once again, of Kato-Rellich's theorem we conclude that the operator \(\mathcal{L}=-\partial_{x}^{2}+\widetilde{\mathcal{S}}+\beta\mathrm{I}\), with domain \(D(\mathcal{L})=H^{2}\), is self-adjoint. The theorem is proved.
**Corollary 3.11**.: \(\mathcal{L}\) _is a closed operator._
Proof.: Since every self-adjoint operator is closed (it coincides with its adjoint, which is closed), the conclusion follows from Theorem 3.10.
### The spectrum of \(\mathcal{L}\)
Thanks to Theorem 3.10, we immediately obtain from basic properties of self-adjoint operators that the \(L^{2}\)-spectrum of \(\mathcal{L}\) is real, \(\sigma(\mathcal{L})\subset\mathbb{R}\). Moreover, from Proposition 3.8 (a) and Proposition 2.1 (c), we already know that \(\partial_{x}\overline{\theta}\) is an eigenfunction of \(\mathcal{L}\) associated to the eigenvalue \(\lambda=0\), referred to as _the translation eigenvalue_. This means that any translation of the Neel wall's phase, \(\overline{\theta}(\cdot+\delta)\), remains a Neel wall (that is, a minimizer of the variational problem (2.5), which might no longer be centered at \(x=0\), though).
Now, if we decompose \(L^{2}=\{\partial_{x}\overline{\theta}\}^{\perp}\oplus\mathrm{span}\{\partial _{x}\overline{\theta}\}\), and if we suppose that there exists \(u\in H^{2}\subset L^{2}\), \(u\neq 0\), with \(u=u_{\perp}+\alpha\partial_{x}\overline{\theta}\) with some \(\alpha\in\mathbb{C}\) such that \(\mathcal{L}u=0\), then by Proposition 3.8 (c) there holds
\[0=\langle\mathcal{L}(u_{\perp}+\alpha\partial_{x}\overline{\theta}),u_{\perp} +\alpha\partial_{x}\overline{\theta}\rangle_{L^{2}}=\langle\mathcal{L}u_{ \perp},u_{\perp}\rangle_{L^{2}}\geq\Lambda_{0}\|u_{\perp}\|_{L^{2}}^{2},\]
yielding \(u_{\perp}=0\). This implies that the geometric multiplicity of \(\lambda=0\) is equal to one. Recalling that for self-adjoint operators on Hilbert spaces the algebraic and geometric multiplicities of an eigenvalue coincide (see Kato [14], p. 273), we readily obtain the following result.
**Corollary 3.12**.: \(\lambda=0\) _is a simple eigenvalue of the operator \(\mathcal{L}:L^{2}\to L^{2}\), with eigenfunction \(\partial_{x}\overline{\theta}\in D(\mathcal{L})=H^{2}\)._
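The absence of generalized eigenfunctions can also be verified directly: if \(u\in D(\mathcal{L}^{2})\) satisfies \(\mathcal{L}^{2}u=0\), then, by self-adjointness,
\[\|\mathcal{L}u\|_{L^{2}}^{2}=\langle\mathcal{L}u,\mathcal{L}u\rangle_{L^{2}}=\langle\mathcal{L}^{2}u,u\rangle_{L^{2}}=0,\]
so that \(\mathcal{L}u=0\); hence the algebraic and geometric multiplicities of \(\lambda=0\) indeed coincide.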
We use this information to prove the following spectral bound.
**Lemma 3.13**.: _The \(L^{2}\)-spectrum of \(\mathcal{L}\) satisfies \(\sigma(\mathcal{L})\subset\{0\}\cup[\Lambda_{0},\infty)\), where \(\Lambda_{0}>0\) is the constant from Proposition 3.8 (c)._
Proof.: Corollary 3.12 implies that \(\lambda=0\in\sigma_{\mathrm{pt}}(\mathcal{L})\) and, therefore, it is an isolated simple eigenvalue. Moreover, we also know that \(\sigma(\mathcal{L})_{|L^{2}}\subset\mathbb{R}\). Hence, by the spectral decomposition theorem (see Theorem III-6.17, p. 178, in Kato [14]) we have a decomposition for \(\mathcal{L}\) according to a decomposition of the Hilbert space, \(L^{2}=X^{\prime}\oplus X^{\prime\prime}\), such that \(\sigma(\mathcal{L})_{|X^{\prime}}=\{0\}\) and \(\sigma(\mathcal{L})_{|X^{\prime\prime}}=\sigma(\mathcal{L})_{|L^{2}}\backslash\{0\}\), where \(X^{\prime}=\mathcal{P}_{0}L^{2}\), \(X^{\prime\prime}=(\mathrm{I}-\mathcal{P}_{0})L^{2}\) and \(\mathcal{P}_{0}\) is the spectral projection associated to the eigenvalue \(\lambda=0\), determined by the Dunford integral,
\[\mathcal{P}_{0}=-\frac{1}{2\pi i}\int_{\partial\Gamma}(\lambda\mathrm{I}- \mathcal{L})^{-1}\,d\lambda,\]
with \(\partial\Gamma\) being a simple, rectifiable curve such that \(\partial\Gamma\subset\rho(\mathcal{L})\) and \(\Gamma\cap\sigma(\mathcal{L})_{|L^{2}}=\{0\}\). Actually, since the eigenvalue is simple (the rank of \(\mathcal{P}_{0}\) is equal to one), we have \(X^{\prime}=\mathrm{span}\{\partial_{x}\overline{\theta}\}\subset L^{2}\) and \(X^{\prime\prime}=\{\partial_{x}\overline{\theta}\}^{\perp}\) in \(L^{2}\).
Next, we verify that \(\mathcal{L}_{|X^{\prime\prime}}\) is also self-adjoint. This restriction of \(\mathcal{L}\) is defined as
\[\begin{cases}\mathcal{L}_{|X^{\prime\prime}}:X^{\prime\prime}\to X^{\prime \prime},\\ D(\mathcal{L}_{|X^{\prime\prime}})=D(\mathcal{L})\cap X^{\prime\prime}=H^{2} \cap X^{\prime\prime},\\ \mathcal{L}_{|X^{\prime\prime}}u:=\mathcal{L}u,\quad u\in D(\mathcal{L}_{|X^{ \prime\prime}}).\end{cases}\]
Clearly, \(\mathcal{L}_{|X^{\prime\prime}}\) is symmetric because \(D(\mathcal{L}_{|X^{\prime\prime}})\) is dense in \(X^{\prime\prime}=\{\partial_{x}\overline{\theta}\}^{\perp}\) and \(\mathcal{L}\) is symmetric. Thus, we apply the basic criterion of self-adjointness: in order to show that \(\mathcal{L}_{|X^{\prime\prime}}\) is self-adjoint it suffices to show that \((\mathcal{L}_{|X^{\prime\prime}}\pm i)(D(\mathcal{L})\cap X^{\prime\prime})=X^ {\prime\prime}\) (see, e.g., Theorem VIII.3, p. 256, in Reed and Simon, vol. I [13]). But we already know that \(\mathcal{L}\pm i:D(\mathcal{L})\to L^{2}\) is surjective because \(\mathcal{L}\) is self-adjoint. Therefore, for \(v\in X^{\prime\prime}\subset L^{2}\) there exist elements \(u_{\pm}\in D(\mathcal{L})=H^{2}\) such that \((\mathcal{L}\pm i)u_{\pm}=v\). This implies that
\[(\mathcal{L}\pm i)(\mathrm{I}-\mathcal{P}_{0})u_{\pm}=(\mathcal{ L}\pm i)u_{\pm}-(\mathcal{L}\pm i)\mathcal{P}_{0}u_{\pm} =v-(\mathcal{L}\mathcal{P}_{0}u_{\pm}\pm i\mathcal{P}_{0}u_{\pm})\] \[=v-\mathcal{P}_{0}(\mathcal{L}\pm i)u_{\pm}\] \[=(\mathrm{I}-\mathcal{P}_{0})v\,\in X^{\prime\prime},\]
with \((\mathrm{I}-\mathcal{P}_{0})u_{\pm}\in X^{\prime\prime}\). That is, \((\mathcal{L}_{|X^{\prime\prime}}\pm i):D(\mathcal{L})\cap X^{\prime\prime} \to X^{\prime\prime}\) is surjective, and this proves that \(\mathcal{L}_{|X^{\prime\prime}}\) is self-adjoint.
Finally, from Rayleigh's formula for semi-bounded self-adjoint operators (cf. Kato [12], p. 278), we have the bound
\[\langle\mathcal{L}_{|X^{\prime\prime}}u,u\rangle_{L^{2}}=\langle\mathcal{L}u, u\rangle_{L^{2}}\geq\Lambda_{0}\|u\|_{L^{2}}^{2},\]
for all \(u\in D(\mathcal{L})\cap X^{\prime\prime}=H^{2}\cap\{\partial_{x}\overline{ \theta}\}_{L^{2}}^{\perp}\) (see Proposition 3.8 (c)), which implies, in turn, that \(\sigma(\mathcal{L}_{X^{\prime\prime}})\subset[\Lambda_{0},\infty)\). Kato's decomposition theorem then yields \(\sigma(\mathcal{L})_{L^{2}}\subset\{0\}\cup[\Lambda_{0},\infty)\), as claimed.
### The asymptotic operator \(\mathcal{L}_{\infty}\)
We now examine the following operator, defined by
\[\begin{cases}\mathcal{L}_{\infty}:L^{2}\to L^{2},\\ D(\mathcal{L}_{\infty})=H^{2},\\ \mathcal{L}_{\infty}u:=-\partial_{x}^{2}u+(1+(-\Delta)^{1/2})u,\quad u\in D( \mathcal{L}_{\infty}).\end{cases} \tag{3.8}\]
This operator results from (formally) taking the limit when \(x\to\pm\infty\) in the expression of \(\mathcal{L}\), recalling that \(\overline{\theta}(\pm\infty)=\pm\pi/2\). Let us define the bilinear form
\[a_{\infty}[\cdot,\cdot]:H^{1}\times H^{1}\to\mathbb{C}, \tag{3.9}\] \[a_{\infty}[u,v]:=\langle\partial_{x}u,\partial_{x}v\rangle_{L^{2} }+b[u,v],\qquad u,v\in H^{1},\]
where \(b[\cdot,\cdot]\) is the bilinear form defined in (2.8). It follows from standard facts that if \(f\in L^{2}\), then the equation
\[\mathcal{L}_{\infty}u=f, \tag{3.10}\]
is endowed with a weak formulation in the space \(H^{1}\) in terms of the bilinear form \(a_{\infty}[\cdot,\cdot]\). Indeed, we say that \(u\in H^{1}\) is a weak solution to (3.10) provided that
\[a_{\infty}[u,v]=\langle\mathcal{L}_{\infty}u,v\rangle_{L^{2}}=\langle f,v \rangle_{L^{2}},\qquad\forall\,v\in H^{1}. \tag{3.11}\]
**Lemma 3.14**.: _The bilinear form \(a_{\infty}[\cdot,\cdot]\) defines an inner product in \(H^{1}\) whose induced norm is equivalent to the standard \(H^{1}\)-norm._
Proof.: First, it is to be noticed that \(a_{\infty}[\cdot,\cdot]\) is complex symmetric, \(a_{\infty}[u,v]^{*}=a_{\infty}[v,u]\) for all \(u,v\in H^{1}\), and clearly bilinear. Use Plancherel's identity to observe that
\[b[u,u]=\int_{\mathbb{R}}(1+|\xi|)|\widehat{u}(\xi)|^{2}\,d\xi\geq\|u\|_{L^{2}}^ {2},\]
yielding \(a_{\infty}[u,u]=\|\partial_{x}u\|_{L^{2}}^{2}+b[u,u]\geq\|\partial_{x}u\|_{L^{2}}^{2}+\|u\|_{L^{2}}^{2}=\|u\|_{H^{1}}^{2}\) for all \(u\in H^{1}\). On the other hand, since \(|\xi|\leq\tfrac{1}{2}+\tfrac{1}{2}\xi^{2}\) by Young's inequality, it follows that
\[b[u,u]=\int_{\mathbb{R}}(1+|\xi|)|\widehat{u}(\xi)|^{2}\,d\xi\leq\int_{ \mathbb{R}}\big{(}\tfrac{3}{2}+\tfrac{1}{2}\xi^{2})|\widehat{u}(\xi)|^{2}\,d\xi,\]
yielding,
\[a_{\infty}[u,u]=\|\partial_{x}u\|_{L^{2}}^{2}+b[u,u]\leq\|\partial_{x}u\|_{L^ {2}}^{2}+\int_{\mathbb{R}}\big{(}\tfrac{3}{2}+\tfrac{1}{2}\xi^{2})|\widehat{u }(\xi)|^{2}\,d\xi=\tfrac{3}{2}\|u\|_{H^{1}}^{2}.\]
Finally, we notice that \(a_{\infty}[u,u]=0\) if and only if \(u=0\). This shows the result.
We now apply the previous lemma to show that (3.10) has a unique strong solution in \(H^{2}\).
**Lemma 3.15**.: _For every \(f\in L^{2}\) there exists a unique solution \(u\in H^{2}\) to (3.10). Moreover, there exists a uniform constant \(C>0\) such that_
\[\|u\|_{H^{2}}\leq C\|f\|_{L^{2}}. \tag{3.12}\]
Proof.: The bilinear form is continuous in \(H^{1}\) because
\[|a_{\infty}[u,v]|\leq|\langle\partial_{x}u,\partial_{x}v\rangle_ {L^{2}}|+|b[u,v]| \leq\|\partial_{x}u\|_{L^{2}}\|\partial_{x}v\|_{L^{2}}+\int_{ \mathbb{R}}(1+|\xi|)|\widehat{u}(\xi)||\widehat{v}(\xi)|\,d\xi\] \[\leq\|\partial_{x}u\|_{L^{2}}\|\partial_{x}v\|_{L^{2}}+\|u\|_{L^ {2}}\|v\|_{L^{2}}+\|\partial_{x}u\|_{L^{2}}\|v\|_{L^{2}}\] \[\leq C\|u\|_{H^{1}}\|v\|_{H^{1}},\]
for all \(u,v\in H^{1}\). Moreover, in Lemma 3.14 we have already verified that \(a_{\infty}[\cdot,\cdot]\) is \(H^{1}\)-elliptic. Thus, by Lax-Milgram theorem, for each \(f\in L^{2}\) there exists a unique weak solution \(u\in H^{1}\) to (3.11). This solution solves
\[\mathcal{L}_{\infty}u=-\partial_{x}^{2}u+(1+(-\Delta)^{1/2})u=f,\]
in the sense of distributions. Therefore, for any test function \(\varphi\in C_{0}^{\infty}(\mathbb{R})\) there holds
\[\langle\partial_{x}u,\partial_{x}\varphi\rangle_{L^{2}}+\langle(1+(-\Delta)^{ 1/2})u,\varphi\rangle_{L^{2}}=\langle f,\varphi\rangle_{L^{2}}.\]
By Plancherel's identity this implies that
\[\int_{\mathbb{R}}\big{[}(1+|\xi|+\xi^{2})\widehat{u}(\xi)-\widehat{f}(\xi) \big{]}\widehat{\varphi}(\xi)^{*}\,d\xi=0,\]
for all \(\varphi\in C_{0}^{\infty}(\mathbb{R})\). Hence, \((1+|\xi|+\xi^{2})\widehat{u}(\xi)=\widehat{f}(\xi)\) a.e. in \(\xi\in\mathbb{R}\). Therefore,
\[\|u\|_{H^{2}}^{2}=\int_{\mathbb{R}}(1+\xi^{2})^{2}|\widehat{u}(\xi)|^{2}\,d \xi=\int_{\mathbb{R}}\Big{(}\frac{1+\xi^{2}}{1+|\xi|+\xi^{2}}\Big{)}^{2}| \widehat{f}(\xi)|^{2}\,d\xi\leq\|f\|_{L^{2}}^{2}<\infty.\]
This yields \(u\in H^{2}\), that \(u\) is a strong solution to (3.10) as well as estimate (3.12). The lemma is proved.
**Lemma 3.16**.: _The asymptotic operator \(\mathcal{L}_{\infty}:L^{2}\to L^{2}\) with domain \(D(\mathcal{L}_{\infty})=H^{2}\) is endowed with the following properties:_
* \(\mathcal{L}_{\infty}\) _is self-adjoint._
2. \(\ker\mathcal{L}_{\infty}=\{0\}\)_._
3. \(\operatorname{ran}(\mathcal{L}_{\infty})=L^{2}\)_._
4. \(\sigma(\mathcal{L}_{\infty})_{|L^{2}}\subset[1,\infty)\)_._
5. \((\mathcal{L}_{\infty})^{-1}\) _exists and it is bounded._
Proof.: Notice that \(\mathcal{L}_{\infty}\) is clearly symmetric with dense domain. Therefore, the proof of property (a) follows a Kato-Rellich argument, very similar to that in the proof of Theorem 3.10, and we omit it. Items (b) and (c) follow directly from Lemma 3.15. Since \(\mathcal{L}_{\infty}\) is self-adjoint, its spectrum is real, \(\sigma(\mathcal{L}_{\infty})_{|L^{2}}\subset\mathbb{R}\). If \(u\in D(\mathcal{L}_{\infty})=H^{2}\), then \(\langle\mathcal{L}_{\infty}u,u\rangle_{L^{2}}=a_{\infty}[u,u]\geq\|u\|_{H^{1}}^{2}\geq\|u\|_{L^{2}}^{2}\), showing that \(\mathcal{L}_{\infty}\) is semi-bounded. By Rayleigh's spectral bound for semi-bounded self-adjoint operators in Hilbert spaces (cf. [10], p. 278) we obtain
\[\inf\sigma(\mathcal{L}_{\infty})_{|L^{2}}=\inf_{0\neq v\in D(\mathcal{L}_{ \infty})}\frac{\langle\mathcal{L}_{\infty}v,v\rangle_{L^{2}}}{\|v\|_{L^{2}}^{ 2}}=\inf_{0\neq v\in D(\mathcal{L}_{\infty})}\frac{a_{\infty}[v,v]}{\|v\|_{L^ {2}}^{2}}\geq 1.\]
This shows (d). Property (e) follows directly from (d), inasmuch as it implies that \(\lambda=0\in\rho(\mathcal{L}_{\infty})\). This completes the proof of the lemma.
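It is worth keeping in mind that, on the Fourier side, \(\mathcal{L}_{\infty}\) simply acts as multiplication by the symbol
\[\xi^{2}+1+|\xi|\geq 1,\qquad\xi\in\mathbb{R},\]
an elementary observation which underlies properties (b)-(e) above, as well as the resolvent estimates of Lemma 3.18 below.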
### Relative compactness
In this section we prove that the linearized operator around the static Neel wall's phase, \(\mathcal{L}\), is a relatively compact perturbation of \(\mathcal{L}_{\infty}\). This fundamental property will be useful later on. First we establish an elementary result.
**Lemma 3.17**.: _Let \(\mu\in\mathbb{C}\backslash[1,\infty)\) be fixed. Then the function_
\[\begin{cases}\qquad g_{\mu}:[0,\infty)\to\mathbb{R},\\ g_{\mu}(\eta):=\frac{\eta^{2}+1}{|\eta^{2}+\eta+1-\mu|},\qquad\eta\geq 0, \end{cases}\]
_is bounded above, that is, there exists a positive constant \(\widetilde{C}=\widetilde{C}(\mu)>0\) such that \(|g_{\mu}(\eta)|\leq\widetilde{C}(\mu)\) for all \(\eta\geq 0\). Moreover, if \(\operatorname{Re}\mu<0\), then the constant \(\widetilde{C}\) may be chosen independently of \(\mu\)._
Proof.: Fix \(\mu\in\mathbb{C}\backslash[1,\infty)\). Then the function \(g_{\mu}\) is continuous in \(\eta\in[0,\infty)\). Since,
\[g_{\mu}(0)=\frac{1}{|1-\mu|},\qquad\lim_{\eta\to\infty}g_{\mu}(\eta)=1,\]
then from continuity we deduce the existence of \(\widetilde{C}=\widetilde{C}(\mu)>0\) such that \(|g_{\mu}(\eta)|\leq\widetilde{C}(\mu)\) for all \(\eta\geq 0\). If \(\operatorname{Re}\mu<0\), then we have \(1-\operatorname{Re}\mu>1\) and therefore
\[|\eta^{2}+\eta+1-\mu|\geq|\operatorname{Re}\left(\eta^{2}+\eta+1-\mu\right)|= \eta^{2}+\eta+1-\operatorname{Re}\mu>\eta^{2}+1,\]
yielding \(|g_{\mu}(\eta)|\leq 1\) for all \(\eta\geq 0\).
**Lemma 3.18**.: _Fix \(\mu\in\mathbb{C}\backslash[1,\infty)\subset\rho(\mathcal{L}_{\infty})\). Then for every \(f\in L^{2}\) there exists a unique solution \(u\in H^{2}\) to the equation \((\mathcal{L}_{\infty}-\mu)u=f\). Moreover, there exists a constant \(C=C(\mu)>0\) such that_
\[\|u\|_{H^{2}}\leq C(\mu)\|f\|_{L^{2}}. \tag{3.13}\]
Proof.: From Lemma 3.16 (d), if we fix \(\mu\in\mathbb{C}\backslash[1,\infty)\), then \(\mu\in\rho(\mathcal{L}_{\infty})\) and \((\mathcal{L}_{\infty}-\mu):D(\mathcal{L}_{\infty})=H^{2}\subset L^{2}\to L^{2}\) is onto. Hence, for every \(f\in L^{2}\) there exists \(u\in H^{2}\) such
that \((\mathcal{L}_{\infty}-\mu)u=f\). This implies that \((\xi^{2}+1+|\xi|-\mu)\widehat{u}(\xi)=\widehat{f}(\xi)\). Noticing that for \(\mu\in\mathbb{C}\backslash[1,\infty)\) we have \(\xi^{2}+1+|\xi|-\mu\neq 0\) for all \(\xi\in\mathbb{R}\), we obtain the estimate
\[\|u\|_{H^{2}}^{2} =\int_{\mathbb{R}}(1+\xi^{2})^{2}|\widehat{u}(\xi)|^{2}\,d\xi=\int _{\mathbb{R}}|g_{\mu}(|\xi|)|^{2}|\widehat{f}(\xi)|^{2}\,d\xi\] \[\leq\widetilde{C}(\mu)^{2}\int_{\mathbb{R}}|\widehat{f}(\xi)|^{2 }\,d\xi=C(\mu)\|f\|_{L^{2}}^{2},\]
thanks to Lemma 3.17.
**Lemma 3.19**.: \(\mathcal{L}_{\infty}-\mathcal{L}\) _continuously maps \(H^{2}\) into \(H^{1}\)._
Proof.: Take any \(u\in H^{2}\). We then have
\[(\mathcal{L}_{\infty}-\mathcal{L})u=(1+(-\Delta)^{1/2})u-s_{\theta}(1+(- \Delta)^{1/2})(s_{\theta}u)+c_{\theta}u. \tag{3.14}\]
Apply bounds (3.4) to obtain
\[\|c_{\theta}u\|_{H^{1}}^{2} =\|c_{\theta}u\|_{L^{2}}^{2}+\|\partial_{x}(c_{\theta}u)\|_{L^{2}} ^{2}\] \[\leq\|c_{\theta}\|_{\infty}^{2}\big{(}\|u\|_{L^{2}}^{2}+\| \partial_{x}u\|_{L^{2}}^{2}\big{)}+\|\partial_{x}c_{\theta}\|_{\infty}^{2}\|u \|_{L^{2}}^{2}\leq C\|u\|_{H^{2}}^{2},\]
for some \(C>0\). Moreover,
\[\|(1+(-\Delta)^{1/2})u\|_{H^{1}}^{2} =\int_{\mathbb{R}}(1+\xi^{2})\big{|}\big{(}(1+(-\Delta)^{1/2})u \big{)}^{\wedge}(\xi)\big{|}^{2}\,d\xi\] \[\leq 2\int_{\mathbb{R}}(1+\xi^{2})^{2}|\widehat{u}(\xi)|^{2}\,d \xi=2\|u\|_{H^{2}}^{2}.\]
Apply bounds (3.4) once again to obtain
\[\|s_{\theta}(1+(-\Delta)^{1/2})(s_{\theta}u)\|_{H^{1}}^{2} \leq\|s_{\theta}(1+(-\Delta)^{1/2})(s_{\theta}u)\|_{L^{2}}^{2}+\] \[\quad+\|(\partial_{x}s_{\theta})(1+(-\Delta)^{1/2})(s_{\theta}u )\|_{L^{2}}^{2}+\] \[\quad+\|s_{\theta}\partial_{x}(1+(-\Delta)^{1/2})(s_{\theta}u)\| _{L^{2}}^{2}\] \[\leq C\|(1+(-\Delta)^{1/2})(s_{\theta}u)\|_{H^{1}}^{2}\] \[\leq 2C\int_{\mathbb{R}}(1+\xi^{2})^{2}|\widehat{(s_{\theta}u)}( \xi)|^{2}\,d\xi=2C\|s_{\theta}u\|_{H^{2}}^{2}\leq\widetilde{C}\|u\|_{H^{2}}^{2},\]
for some \(\widetilde{C}>0\). Combine these estimates to conclude that there exists a constant \(C>0\) such that
\[\|(\mathcal{L}_{\infty}-\mathcal{L})u\|_{H^{1}}\leq C\|u\|_{H^{2}}, \tag{3.15}\]
for all \(u\in D(\mathcal{L})=H^{2}\). This shows the result.
At this point, let us recall two useful theorems, one due to Pego [10] and the other due to Kolmogorov [11] and Riesz [14] (see, for example, [1] and the references therein), describing totally bounded sets in \(L^{2}\) and in \(L^{p}\), respectively.
**Theorem 3.20** (Pego [10]).: _Let \(\mathcal{F}\) be a bounded set of \(L^{2}(\mathbb{R}^{n})\) and \(\widehat{\mathcal{F}}:=\{\widehat{u}\,|\,u\in\mathcal{F}\}\). The functions in \(\mathcal{F}\) are \(L^{2}\)-equicontinuous if and only if the functions in \(\widehat{\mathcal{F}}\) decay uniformly in \(L^{2}\), and vice versa._
Proof.: See Theorem 1 in [10].
**Theorem 3.21** (Kolmogorov-Riesz [11, 14]).: _A bounded set \(\mathcal{F}\subset L^{p}(\mathbb{R}^{n})\) with \(1\leq p<\infty\) is totally bounded if and only if_
1. (\(L^{p}\)-equicontinuity) \(\lim_{h\to 0}\int_{\mathbb{R}^{n}}|u(x+h)-u(x)|^{p}\,dx=0\) uniformly for \(u\in\mathcal{F}\), and
2. (\(L^{p}\)-uniform decay) \(\lim_{R\to\infty}\int_{|x|>R}|u(x)|^{p}\,dx=0\) uniformly for \(u\in\mathcal{F}\).
Proof.: See the proof of Theorem 5 in Hanche-Olsen and Holden [1].
We now prove a result which will be helpful in the proof of Theorem 3.23 below.
**Proposition 3.22**.: _Let \(\mathcal{F}\) be a bounded set in \(H^{1}\) and let \(\phi\in H^{1}\) be a fixed function such that \(\|\partial_{x}\phi\|_{L^{\infty}}<\infty\). Then the set \(\phi\mathcal{F}:=\{\phi u\,|\,u\in\mathcal{F}\}\) is totally bounded in \(L^{2}\)._
Proof.: First, we prove that \(\lim_{|x|\to\infty}\phi(x)=0\). By density, there exists a sequence \(\{u_{n}\}_{n\in\mathbb{N}}\subset C_{0}^{\infty}(\mathbb{R})\) that converges to \(\phi\) in \(H^{1}\). Thanks to the Sobolev inequality, \(\|v\|_{L^{\infty}}^{2}\leq 2\,\|v\|_{L^{2}}\,\|\partial_{x}v\|_{L^{2}}\) for \(v\in H^{1}\), the \(H^{1}\)-convergence of \(\{u_{n}\}\) to \(\phi\) is improved to \(L^{\infty}\)-convergence, and for every \(\epsilon>0\) there exists \(N\in\mathbb{N}\) such that
\[\|\phi-u_{n}\|_{L^{\infty}}<\epsilon\quad\text{for }n>N.\]
Since each \(u_{n}\) has compact support, there exists \(R>0\) such that \(u_{n}(x)=0\) for \(|x|>R\). Hence, for \(|x|>R\),
\[|\phi(x)|\leq|\phi(x)-u_{n}(x)|\leq\|\phi-u_{n}\|_{L^{\infty}}\leq\epsilon.\]
Therefore, \(\lim_{|x|\to\infty}\phi(x)=0\). It is also easy to see that \(\phi\mathcal{F}\) is bounded in \(L^{2}\). Indeed, by hypothesis, there exists \(\widetilde{M}>0\) such that \(\sup_{u\in\mathcal{F}}\|u\|_{H^{1}}<\widetilde{M}\). Moreover, since \(\|\phi\|_{L^{\infty}}<\infty\), by the Sobolev inequality we obtain
\[\sup_{v\in\phi\mathcal{F}}\|v\|_{L^{2}}\leq\sup_{u\in\mathcal{F}}\|\phi u\|_{ L^{2}}\leq\widetilde{M}\|\phi\|_{L^{\infty}}.\]
Second, we prove that \(\phi\mathcal{F}\) is \(L^{2}\)-equicontinuous. By Sobolev imbedding theorems, we can assume that \(\phi\in C^{0}\cap L^{\infty}\). Also, by hypothesis \(\|\partial_{x}\phi\|_{L^{\infty}}<\infty\). Hence,
\[\|\phi u\|_{H^{1}}^{2}\leq 2\int_{\mathbb{R}}(\phi^{2}+(\partial_{x}\phi)^{2})(u^{2}+(\partial_{x}u)^{2})\,dx\leq 2(\|\phi\|_{L^{\infty}}^{2}+\|\partial_{x}\phi\|_{L^{\infty}}^{2})\,\|u\|_{H^{1}}^{2}<M^{2},\]
for every \(u\in\mathcal{F}\) and \(M:=\sqrt{2(\|\phi\|_{L^{\infty}}^{2}+\|\partial_{x}\phi\|_{L^{\infty}}^{2})}\,\widetilde{M}\). Thus \(\phi\mathcal{F}\) is bounded in \(H^{1}\). Then, for every \(v\in\phi\mathcal{F}\)
\[\int_{\{|\xi|>R\}}|\hat{v}(\xi)|^{2}\,d\xi\leq\frac{1}{1+R^{2}}\int_{\mathbb{R }}(1+\xi^{2})|\hat{v}(\xi)|^{2}\,d\xi=\frac{\|v\|_{H^{1}}^{2}}{1+R^{2}}\leq \frac{M^{2}}{1+R^{2}}.\]
Thus, the functions in \(\widehat{\phi\mathcal{F}}\) are \(L^{2}\)-uniformly decaying. By Theorem 3.20, the functions in \(\phi\mathcal{F}\) are \(L^{2}\)-equicontinuous.
Finally, we prove that the functions in \(\phi\mathcal{F}\) are \(L^{2}\)-uniformly decaying. Indeed, if \(v\in\phi\mathcal{F}\) then \(v=\phi u\) for some \(u\in\mathcal{F}\subset H^{1}\). This yields,
\[\int_{|x|>R}|v(x)|^{2}\,dx=\left\|\mathbf{1}_{\{|x|>R\}}(\phi u)\right\|_{L^{2 }}^{2}\leq\|\mathbf{1}_{\{|x|>R\}}\phi\|_{L^{\infty}}^{2}\,\|u\|_{L^{2}}^{2}\]
Again, since \(\|u\|_{L^{2}}\leq\widetilde{M}\) for every \(u\in\mathcal{F}\) and \(\phi(x)\to 0\) as \(|x|\to\infty\), we conclude that
\[\lim_{R\to\infty}\int_{|x|>R}|\phi u|^{2}\,dx\leq\lim_{R\to\infty}2\widetilde{ M}^{2}\|\mathbf{1}_{\{|x|>R\}}\phi\|_{L^{\infty}}^{2}=0\]
uniformly for \(u\in\mathcal{F}\). Thus, by Theorem 3.21, the set \(\phi\mathcal{F}\) is totally bounded in \(L^{2}\).
We now prove the main result of this section.
**Theorem 3.23**.: _The operator \(\mathcal{L}_{\infty}-\mathcal{L}:H^{2}\to H^{1}\) is compact in \(L^{2}\)._
Proof.: Let \(\mathcal{F}\) be a bounded subset of \(H^{2}\), namely \(\sup_{u\in\mathcal{F}}\|u\|_{H^{2}}<M\) for some \(M>0\). Then, fix \(\delta>0\) and let \(g_{\delta}\in C^{\infty}\) be an increasing and antisymmetric function such that \(g_{\delta}(x)=x/|x|\) for \(|x|\geq\delta\). With these tools at hand and assuming that \(\mathcal{T}\) stands for the operator \((1+(-\Delta)^{1/2})\), the operator \(\mathcal{L}_{\infty}-\mathcal{L}\) is easily recast, by adding and subtracting the terms \(g_{\delta}(x)\mathcal{T}(s_{\theta}u)+g_{\delta}(x)\mathcal{T}(g_{\delta}(x)u)\), as
\[(\mathcal{L}_{\infty}-\mathcal{L})u=\mathcal{Q}_{1}u+\mathcal{Q}_{2}u+ \mathcal{Q}_{3}(s_{\theta}u)+\mathcal{Q}_{4}u, \tag{3.16}\]
where
\[\mathcal{Q}_{1}u :=\mathcal{T}u-g_{\delta}\mathcal{T}(g_{\delta}u), \mathcal{Q}_{2}u :=g_{\delta}\mathcal{T}[(g_{\delta}-s_{\theta})u], \tag{3.17}\] \[\mathcal{Q}_{3}u :=[g_{\delta}-s_{\theta}]\mathcal{T}u, \mathcal{Q}_{4}u :=c_{\theta}u. \tag{3.18}\]
Since the set of compact operators between two Banach spaces is a linear manifold, we shall prove that each operator \(\mathcal{Q}_{i}:H^{2}\to H^{1}\) is compact in \(L^{2}\) by showing that the set \(\mathcal{Q}_{i}\mathcal{F}:=\{\mathcal{Q}_{i}u\,|\,u\in\mathcal{F}\}\subset H^ {1}\) is totally bounded in \(L^{2}\), for each \(1\leq i\leq 4\).
First, we analyze \(\mathcal{Q}_{4}\). Notice that \(\mathcal{Q}_{4}\mathcal{F}\) is totally bounded by Proposition 3.22 since \(\mathcal{F}\subset H^{2}\), \(c_{\theta}\) is a smooth function which belongs to \(H^{2}\) and \(\lim_{x\to\pm\infty}c_{\theta}(x)=0\) (see the beginning of proof of Proposition 3.22).
Second, we examine \(\mathcal{Q}_{3}\). Indeed, the set \(\mathcal{T}\mathcal{F}:=\{\mathcal{T}u\,|\,u\in\mathcal{F}\}\subset H^{1}\) satisfies that \(\sup_{v\in\mathcal{T}\mathcal{F}}\|v\|_{H^{1}}\leq\sqrt{2}M\) because \(\|\mathcal{T}u\|_{H^{1}}^{2}\leq 2\,\|u\|_{H^{2}}^{2}\). Then, \(\mathcal{T}\mathcal{F}\) is bounded in \(H^{1}\) and, consequently, also in \(L^{2}\). Now, observe that
\[\mathcal{Q}_{3}\mathcal{F}=\{[g_{\delta}-s_{\theta}]\mathcal{T}u\,|\,u\in \mathcal{F}\}=\{[g_{\delta}-s_{\theta}]v\,|\,v\in\mathcal{T}\mathcal{F}\}=[g_{ \delta}-s_{\theta}]\mathcal{T}\mathcal{F},\]
and that \(\lim_{x\to\pm\infty}(g_{\delta}-s_{\theta})(x)=0\) since \(\lim_{x\to\pm\infty}\overline{\theta}(x)=\pm\pi/2\). In order to apply Proposition 3.22 and to conclude that \(\mathcal{Q}_{3}\mathcal{F}\) is totally bounded we only need to show that \(g_{\delta}-s_{\theta}\in H^{1}\) and \(\partial_{x}(g_{\delta}-s_{\theta})\in L^{\infty}\). This follows by standard calculus. It is easily seen that \(|\theta/|\theta|-\sin\theta|<\cos\theta\) for every \(\theta\in(-\pi/2,0)\cup(0,\pi/2)\), and since \(\overline{\theta}\) is a strictly increasing function with \(\overline{\theta}(0)=0\), one concludes that \(x/|x|=\operatorname{sgn}\left(\overline{\theta}(x)\right)\) for every \(x\neq 0\). These two facts readily imply that \((x/|x|-s_{\theta}(x))^{2}<\cos^{2}\overline{\theta}(x)\) a.e. in \(x\in\mathbb{R}\), and \(x/|x|-s_{\theta}\in L^{2}\). Recalling that \(g_{\delta}(x)-s_{\theta}(x)=x/|x|-s_{\theta}(x)\) for \(|x|\geq\delta\) and that \(|g_{\delta}(x)-s_{\theta}(x)|<2\) for \(|x|<\delta\), one concludes that \(g_{\delta}-s_{\theta}\in L^{2}\cap L^{\infty}\). In the same fashion, and using Proposition 2.1, we can readily prove that \(\partial_{x}(g_{\delta}-s_{\theta})\in L^{2}\cap L^{\infty}\). Therefore, we conclude that \([g_{\delta}-s_{\theta}]\mathcal{T}\mathcal{F}\) is totally bounded. The compactness of the linear operator \(\mathcal{Q}_{3}\) in \(L^{2}\) then follows. Moreover, observe that \(s_{\theta}\in H^{1}\cap L^{\infty}\) and the compactness of \(\mathcal{Q}_{3}\) imply the compactness of the linear operator \(u\mapsto\mathcal{Q}_{3}(s_{\theta}u)\) involved in (3.16).
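For the reader's convenience, we verify the elementary inequality invoked above: for \(\theta\in(0,\pi/2)\),
\[\Big|\tfrac{\theta}{|\theta|}-\sin\theta\Big|=1-\sin\theta<\cos\theta\quad\Longleftrightarrow\quad 1<\sin\theta+\cos\theta=\sqrt{2}\,\sin\big(\theta+\tfrac{\pi}{4}\big),\]
and the last inequality holds because \(\theta+\pi/4\in(\pi/4,3\pi/4)\); the case \(\theta\in(-\pi/2,0)\) follows upon the substitution \(\theta\mapsto-\theta\).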
Let us now study the operator \(\mathcal{Q}_{2}\). We claim that \(\mathcal{T}^{-1}:H^{2}\to H^{3}\) is continuous. Indeed, since \((1+|\xi|)^{2}\geq 1+|\xi|^{2}\), we have
\[\|\mathcal{T}^{-1}u\|_{H^{3}}^{2}=\int_{\mathbb{R}}\frac{1+\xi^{6}}{(1+|\xi|)^ {2}}|\hat{u}(\xi)|^{2}\,d\xi\leq\int_{\mathbb{R}}(1-\xi^{2}+|\xi|^{4})|\hat{u} (\xi)|^{2}\,d\xi\leq\|u\|_{H^{2}}^{2}\,.\]
Notice that \(\mathcal{Q}_{2}=g_{\delta}(x)\mathcal{T}\mathcal{Q}_{3}\mathcal{T}^{-1}\), and since \(g_{\delta}\mathcal{T}:H^{1}\to L^{2}\) is bounded, the compactness of \(\mathcal{Q}_{2}\) is proved by showing that \(\mathcal{Q}_{3}:H^{3}\to H^{2}\) is compact in \(H^{1}\). Let \(\{u_{j}\}_{j>0}\subset H^{3}\) be a bounded sequence; then, by the second step (compactness of \(\mathcal{Q}_{3}\) in \(L^{2}\)), there exists a
subsequence \(\{u_{j_{k}}\}_{k>0}\) and \(u\in L^{2}\) such that \(\left\|u-\mathcal{Q}_{3}u_{j_{k}}\right\|_{L^{2}}\to 0\) as \(k\to\infty\). Since \(\{u_{j_{k}}\}_{k>0}\subset H^{3}\), then \(\{\mathcal{Q}_{3}u_{j_{k}}\}_{k>0}\subset H^{2}\) and
\[\begin{split}\partial_{x}(\mathcal{Q}_{3}u_{j_{k}})=\partial_{x}( [g_{\delta}-s_{\theta}]\mathcal{T}u_{j_{k}})=&\partial_{x}(g_{ \delta}-s_{\theta})\mathcal{T}u_{j_{k}}+[g_{\delta}-s_{\theta}]\mathcal{T} \partial_{x}u_{j_{k}}\\ =&\partial_{x}(g_{\delta}-s_{\theta})\mathcal{T}u_{j _{k}}+\mathcal{Q}_{3}\partial_{x}u_{j_{k}},\end{split} \tag{3.19}\]
where the first equality follows by noticing that \(\partial_{x}\) and \(\mathcal{T}\) commute,
\[\widehat{\partial_{x}\mathcal{T}u}=i\xi(1+|\xi|)\hat{u}(\xi)=(1+|\xi|)i\xi\hat{u}(\xi)=(1+|\xi|)\widehat{\partial_{x}u}=\widehat{\mathcal{T}\partial_{x}u}.\]
It is not difficult to see that \(\partial_{x}(g_{\delta}-s_{\theta})\in H^{1}\) with \(\|\partial_{x}(g_{\delta}-s_{\theta})\|_{L^{\infty}}<\infty\). Hence, by Proposition 3.22, the linear operator \(\partial_{x}(g_{\delta}-s_{\theta})\mathcal{T}:H^{3}\to H^{2}\) is compact in \(L^{2}\). Therefore, there exist two functions, \(v\) and \(w\), both in \(L^{2}\), and a subsequence denoted as \(\{u_{\ell}\}_{\ell>0}\), such that
\[\lim_{\ell\to\infty}\left\|v-\partial_{x}(g_{\delta}-s_{\theta})\mathcal{T}u_ {\ell}\right\|_{L^{2}}=0,\quad\text{and}\quad\lim_{\ell\to\infty}\left\|w- \mathcal{Q}_{3}\partial_{x}u_{\ell}\right\|_{L^{2}}=0.\]
We will prove that \(u\in H^{1}\) and \(\partial_{x}u=v+w\). The argument follows by density and letting \(\phi\in C_{0}^{\infty}\), where
\[\left\langle u\,,\partial_{x}\phi\right\rangle_{L^{2}}=\left\langle u-[g_{ \delta}-s_{\theta}]\mathcal{T}u_{\ell}\,,\partial_{x}\phi\right\rangle_{L^{2}} +\left\langle[g_{\delta}-s_{\theta}]\mathcal{T}u_{\ell}\,,\partial_{x}\phi \right\rangle_{L^{2}}.\]
Now, take the limit when \(\ell\to\infty\) and use the facts that \(\left\|u-(g_{\delta}-s_{\theta})\mathcal{T}u_{\ell}\right\|_{L^{2}}\to 0\) and that strong convergence implies weak convergence, in order to obtain
\[\left\langle u\,,\partial_{x}\phi\right\rangle_{L^{2}}=\lim_{ \ell\to\infty}\left\langle[g_{\delta}-s_{\theta}]\mathcal{T}u_{\ell}\,, \partial_{x}\phi\right\rangle_{L^{2}}= -\lim_{\ell\to\infty}\left\langle\partial_{x}([g_{\delta}-s_{ \theta}]\mathcal{T}u_{\ell})\,,\phi\right\rangle_{L^{2}}\] \[= -\lim_{\ell\to\infty}\left\langle\partial_{x}(g_{\delta}-s_{ \theta})\mathcal{T}u_{\ell}+\mathcal{Q}_{3}\partial_{x}u_{\ell}\,,\phi\right\rangle _{L^{2}}\] \[= -\left\langle v+w\,,\phi\right\rangle_{L^{2}}.\]
Whence, for every bounded sequence \(\{u_{j}\}_{j>0}\subset H^{3}\) there exists a convergent subsequence \(\{\mathcal{Q}_{3}u_{\ell}\}_{\ell>0}\) in \(H^{1}\). In other words, \(\mathcal{Q}_{3}:H^{3}\to H^{2}\) is compact in \(H^{1}\).
Finally, we study the operator \(\mathcal{Q}_{1}\). From the definition of \(\mathcal{Q}_{1}\), we have
\[\mathcal{Q}_{1}u=\mathcal{T}u-g_{\delta}(x)\mathcal{T}(g_{\delta}(x)u)=(1-g_{ \delta}^{2})\mathcal{T}u+g_{\delta}(g_{\delta}\mathcal{T}-\mathcal{T}g_{\delta })u=(1-g_{\delta}^{2})\mathcal{T}u+g_{\delta}[g_{\delta},\mathcal{T}]u.\]
Notice that \((1-g_{\delta}^{2}(x))=0\) for \(|x|\geq\delta\) and it is bounded for \(|x|<\delta\) as well as its derivative. Hence by Proposition 3.22, the operator \((1-g_{\delta}^{2})\mathcal{T}:H^{2}\to H^{1}\) is compact in \(L^{2}\). For this last term, it will be enough to prove that the commutator \([g_{\delta},\mathcal{T}]:H^{2}\to H^{1}\) is compact in \(L^{2}\), since the multiplication by \(g_{\delta}\) is a continuous operation. Indeed, we notice that the term \([g_{\delta},\mathcal{T}]\) can be written in terms of the Hilbert transform \(\mathcal{H}\) (see Lemma 3.5) as
\[[g_{\delta},\mathcal{T}]u=[g_{\delta},(-\Delta)^{1/2}]u=[g_{\delta },\mathcal{H}\circ\partial_{x}]u =g_{\delta}\mathcal{H}(\partial_{x}u)-\mathcal{H}(\partial_{x}(g_{ \delta}u))\] \[=g_{\delta}\mathcal{H}(\partial_{x}u)-\mathcal{H}(g_{\delta} \partial_{x}u)-\mathcal{H}((\partial_{x}g_{\delta})u)\] \[=[g_{\delta},\mathcal{H}](\partial_{x}u)-\mathcal{H}((\partial_{x }g_{\delta})u).\]
Observe that \((\partial_{x}g_{\delta})\mathrm{I}:H^{2}\to H^{1}\) is compact in \(L^{2}\) since the hypothesis in Proposition 3.22 is satisfied by choosing \(\phi=\partial_{x}g_{\delta}\). Also, since the Hilbert transform is continuous on \(L^{2}\), we conclude that \(\mathcal{H}\circ(\partial_{x}g_{\delta})\mathrm{I}:H^{2}\to H^{1}\) is compact on \(L^{2}\).
Thus we must prove that the linear operator \([g_{\delta},\mathcal{H}]\partial_{x}:H^{2}\to H^{1}\) is compact in \(L^{2}\). Notice that \(\{[g_{\delta},\mathcal{H}]\partial_{x}u\,|\,u\in\mathcal{F}\}\) is \(L^{2}\)-equicontinuous since this set is bounded in \(H^{1}\). This readily follows by applying the properties of the Hilbert transform to the terms in \(\left\|[g_{\delta},\mathcal{H}]\partial_{x}u\right\|_{H^{1}}^{2}\). Indeed, we have the estimates
\[\left\|[g_{\delta},\mathcal{H}]\partial_{x}u\right\|_{L^{2}}^{2}\leq 2\left\|g_{\delta}\mathcal{H}\partial_{x}u\right\|_{L^{2}}^{2}+2\left\|\mathcal{H}(g_{\delta}\partial_{x}u)\right\|_{L^{2}}^{2}\leq 4\|g_{\delta}\|_{L^{\infty}}^{2}\left\|\partial_{x}u\right\|_{L^{2}}^{2},\]
and
\[\left\|\partial_{x}([g_{\delta},\mathcal{H}]\partial_{x}u)\right\|_{L^{2}}^{2}\leq 2\left\|\partial_{x}(g_{\delta}\mathcal{H}\partial_{x}u)\right\|_{L^{2}}^{2}+2\left\|\partial_{x}(\mathcal{H}(g_{\delta}\partial_{x}u))\right\|_{L^{2}}^{2}\] \[\leq 4\left\|(\partial_{x}g_{\delta})\mathcal{H}\partial_{x}u\right\|_{L^{2}}^{2}+4\left\|g_{\delta}\mathcal{H}\partial_{x}^{2}u\right\|_{L^{2}}^{2}+2\left\|\mathcal{H}\partial_{x}(g_{\delta}\partial_{x}u)\right\|_{L^{2}}^{2}\] \[\leq 4\|\partial_{x}g_{\delta}\|_{L^{\infty}}^{2}\left\|\partial_{x}u\right\|_{L^{2}}^{2}+4\|g_{\delta}\|_{L^{\infty}}^{2}\left\|\partial_{x}^{2}u\right\|_{L^{2}}^{2}+2\left\|\partial_{x}(g_{\delta}\partial_{x}u)\right\|_{L^{2}}^{2}\] \[\leq 8\|\partial_{x}g_{\delta}\|_{L^{\infty}}^{2}\left\|\partial_{x}u\right\|_{L^{2}}^{2}+8\|g_{\delta}\|_{L^{\infty}}^{2}\left\|\partial_{x}^{2}u\right\|_{L^{2}}^{2}.\]
It remains to show that functions in the set \(\{[g_{\delta},\mathcal{H}]\partial_{x}u\,|\,u\in\mathcal{F}\}\) are \(L^{2}\)-uniformly decaying. For simplicity, let \(v\) denote \(u_{x}\). Hence, \(v\in\mathcal{F}^{\prime}:=\{u_{x}\,:\,u\in\mathcal{F}\}\), which is a bounded set in \(H^{1}\). We recall that
\[\pi[g_{\delta},\mathcal{H}]\partial_{x}u=\pi[g_{\delta},\mathcal{H }]v= \,\text{P.V. }\int_{\mathbb{R}}\frac{g_{\delta}(x)-g_{\delta}(y)}{x-y}v(y)\,dy\] \[= \lim_{\epsilon\to 0}\int_{|y-x|>\epsilon}\frac{g_{\delta}(x)-g_{ \delta}(y)}{x-y}v(y)\,dy\] \[= \lim_{\epsilon\to 0}\int_{|h|>\epsilon}\frac{g_{\delta}(x+h)-g_{ \delta}(x)}{h}v(x+h)\,dh\] \[= \lim_{\epsilon\to 0}\int_{|h|>\epsilon}\frac{1}{h}\int_{x}^{x+h}g_{ \delta}^{\prime}(t)\,dt\,v(x+h)\,dh.\]
Since we are interested in the behavior of \(\left\|\mathbf{1}_{|x|>R}[g_{\delta},\mathcal{H}]\partial_{x}u\right\|_{L^{2}}^ {2}\) for \(R\to\infty\), we assume that \(R>2\delta\) and \(\epsilon<\delta\). For \(x>R\) the integral is split as
\[\pi[g_{\delta},\mathcal{H}]v(x)= \int_{-\infty}^{-x+\delta}\frac{1}{h}\int_{x}^{x+h}g_{\delta}^{ \prime}(t)\,dt\,v(x+h)\,dh+\] \[+\lim_{\epsilon\to 0}\left[\int_{-x+\delta}^{-\epsilon}\frac{1}{h} \int_{x}^{x+h}g_{\delta}^{\prime}(t)\,dt\,v(x+h)\,dh+\int_{\epsilon}^{\infty }\frac{1}{h}\int_{x}^{x+h}g_{\delta}^{\prime}(t)\,dt\,v(x+h)\,dh\right].\]
Notice that the last two integrals are equal to zero since \(\operatorname{supp}g_{\delta}^{\prime}\subset[-\delta,\delta]\) and \(\delta<x+h\) for \(h>\delta-x\). Moreover, if \(C:=\int_{|x|\leq\delta}g_{\delta}^{\prime}(x)\,dx\), then
\[\pi[g_{\delta},\mathcal{H}]v(x)= \int_{-\infty}^{-x-\delta}\frac{1}{h}\int_{x}^{x+h}g_{\delta}^{ \prime}(t)\,dt\,v(x+h)\,dh+\int_{-x-\delta}^{-x+\delta}\frac{1}{h}\int_{x}^{x+ h}g_{\delta}^{\prime}(t)\,dt\,v(x+h)\,dh\] \[= \int_{-\infty}^{-x-\delta}\frac{1}{h}\int_{\delta}^{-\delta}g_{ \delta}^{\prime}(t)\,dt\,v(x+h)\,dh+\int_{-x-\delta}^{-x+\delta}\frac{1}{h} \int_{\delta}^{x+h}g_{\delta}^{\prime}(t)\,dt\,v(x+h)\,dh\] \[= -C\int_{-\infty}^{-x-\delta}\frac{v(x+h)}{h}\,dh+\int_{-x-\delta}^ {-x+\delta}\frac{1}{h}\int_{\delta}^{x+h}g_{\delta}^{\prime}(t)\,dt\,v(x+h)\,dh.\]
Now, we use the variable change \(y=x+h\), the fundamental theorem of calculus, and the fact that \(g_{\delta}(\delta)=1\), to obtain
\[\pi[g_{\delta},\mathcal{H}]v(x)= -C\int_{-\infty}^{-\delta}\frac{v(y)}{y-x}\,dy+\int_{-\delta}^{\delta}\frac{1}{y-x}\int_{\delta}^{y}g_{\delta}^{\prime}(t)\,dt\,v(y)\,dy\] \[= -C\int_{-\infty}^{-\delta}\frac{v(y)}{y-x}\,dy-\int_{-\delta}^{\delta}\frac{1-g_{\delta}(y)}{y-x}\,v(y)\,dy.\]
A similar analysis applies for \(x<-R\). Thus, for \(|x|>R>2\delta\) there holds
\[\pi[g_{\delta},\mathcal{H}]v(x)=\begin{cases}C{\int_{\delta}^{ \infty}\frac{v(y)}{y-x}\,dy}+{\int_{-\delta}^{\delta}\frac{g_{ \delta}(y)+1}{y-x}\,v(y)\,dy}&\text{for }x<-R,\\ -C{\int_{-\infty}^{-\delta}\frac{v(y)}{y-x}\,dy}-{\int_{- \delta}^{\delta}\frac{1-g_{\delta}(y)}{y-x}\,v(y)\,dy}&\text{for }x>R.\end{cases}\]
These expressions can be recast as,
\[\pi[g_{\delta},\mathcal{H}]v(x)=C\int_{\delta}^{\infty}\frac{v(-\text{sgn}\,( x)y)}{y+|x|}\,dy+{\int_{-\delta}^{\delta}\frac{g_{\delta}(y)-\text{sgn}\,(x)}{y-x }\,v(y)\,dy}. \tag{3.20}\]
Notice that both integrals are convergent. Indeed, since \(v=u_{x}\), then an integration by parts yields
\[{\int_{\delta}^{\infty}\frac{v(-\text{sgn}\,(x)y)}{y+|x|}\,dy} ={\int_{\delta}^{\infty}\frac{u^{\prime}(-\text{sgn}\,(x)y)}{y+|x|} \,dy}\] \[=\frac{-\text{sgn}\,(x)u(-\text{sgn}\,(x)y)}{y+|x|}\bigg{|}_{ \delta}^{\infty}+{\int_{\delta}^{\infty}\frac{-\text{sgn}\,(x)u(-\text{sgn} \,(x)y)}{(y+|x|)^{2}}\,dy}\] \[=-\,\text{sgn}\,(x)\left[-\frac{u(-\text{sgn}\,(x)\delta)}{ \delta+|x|}+{\int_{\delta}^{\infty}\frac{u(-\text{sgn}\,(x)y)}{(y+|x|)^{2}}\, dy}\right]\] \[=-\,\text{sgn}\,(x)\int_{\delta}^{\infty}\frac{u(-\text{sgn}\,( x)y)-u(-\text{sgn}\,(x)\delta)}{(y+|x|)^{2}}\,dy.\]
Since \(\mathcal{F}\) is bounded in \(H^{2}\), we have \(\|u\|_{L^{\infty}}\leq\|u\|_{H^{1}}\leq M\) for every \(u\in\mathcal{F}\), which implies that
\[\left|{\int_{\delta}^{\infty}\frac{v(-\text{sgn}\,(x)y)}{y+|x|}\,dy}\right|\leq 2 M\int_{\delta}^{\infty}\frac{1}{(y+|x|)^{2}}\,dy=\frac{2M}{\delta+|x|}.\]
This yields
\[\int_{|x|>R}\left(\int_{\delta}^{\infty}\frac{v(-\text{sgn}\,(x)y)}{y+|x|}\,dy\right)^{2}dx\leq 4M^{2}\int_{|x|>R}\frac{dx}{(\delta+|x|)^{2}}\leq\frac{8M^{2}}{\delta+R}. \tag{3.21}\]
Now, we analyze the second integral in (3.20). By Jensen's inequality and the fact that \(\|g_{\delta}\|_{L^{\infty}}\leq 1\), one gets
\[{\int_{|x|>R}\left({\int_{-\delta}^{\delta}\frac{g_{\delta}(y)- \text{sgn}\,(x)}{y-x}\,v(y)\,dy}\right)^{2}dx} \leq 2\delta\int_{|x|>R}{\int_{-\delta}^{\delta}\frac{\left(g_{ \delta}(y)-\text{sgn}\,(x)\right)^{2}}{\left(y-x\right)^{2}}\,v(y)^{2}\,dydx}\] \[\leq 8\delta\int_{|x|>R}{\int_{-\delta}^{\delta}\frac{1}{\left(y-x \right)^{2}}\,v(y)^{2}\,dydx}.\]
Since \(v\in L^{2}\) and \((y-x)^{2}\geq(|x|-\delta)^{2}\) for every \(y\in(-\delta,\delta)\), we obtain
\[{\int_{|x|>R}\left({\int_{-\delta}^{\delta}\frac{g_{\delta}(y)- \text{sgn}\,(x)}{y-x}\,v(y)\,dy}\right)^{2}dx}\leq 8\delta\left\|v\right\|_{L^{2}}^ {2}\int_{|x|>R}\frac{dx}{\left(|x|-\delta\right)^{2}}\leq\frac{16\delta M^{2} }{R-\delta}. \tag{3.22}\]
We easily see that Young's inequality implies that
\[\int_{|x|>R}([g_{\delta},\mathcal{H}]v(x))^{2}\,dx \leq\frac{2C^{2}}{\pi^{2}}\int_{|x|>R}\left(\int_{\delta}^{\infty}\frac{v(-\mathrm{sgn}\,(x)y)}{y+|x|}\,dy\right)^{2}dx\] \[+\frac{2}{\pi^{2}}\int_{|x|>R}\left(\int_{-\delta}^{\delta}\frac{g_{\delta}(y)-\mathrm{sgn}\,(x)}{y-x}\,v(y)\,dy\right)^{2}dx\] \[\leq\frac{16M^{2}(C^{2}+2\delta)}{\pi(R-\delta)}.\]
Therefore, it follows that the set \(\{[g_{\delta},\mathcal{H}]\partial_{x}u\,|\,u\in\mathcal{F}\}\) is totally bounded and the operator \([g_{\delta},\mathcal{H}]\partial_{x}:H^{2}\to H^{1}\) is compact in \(L^{2}\). Therefore \([g_{\delta},\mathcal{T}]:H^{2}\to H^{1}\) and \(\mathcal{Q}_{1}:H^{2}\to H^{1}\) both are compact in \(L^{2}\). This completes the proof.
**Theorem 3.24**.: _The operator \(\mathcal{L}\) is a relatively compact perturbation of \(\mathcal{L}_{\infty}\)._
Proof.: Let \(\mu\in\rho\,(\mathcal{L}_{\infty})\), hence \((\mu-\mathcal{L}_{\infty})^{-1}:L^{2}\to H^{2}\) is a continuous linear operator and, by Theorem 3.23, \(\mathcal{L}_{\infty}-\mathcal{L}:H^{2}\to H^{1}\) is compact in \(L^{2}\). This implies that the operator \((\mathcal{L}_{\infty}-\mathcal{L})(\mu-\mathcal{L}_{\infty})^{-1}\) is compact on \(L^{2}\) (see, e.g., [10] p. 158).
**Remark 3.25**.: An immediate consequence of this relative compactness result is that the essential spectrum of \(\mathcal{L}\) and the spectrum of \(\mathcal{L}_{\infty}\) coincide, by virtue of Weyl's essential spectrum theorem (see, e.g., [11], p. 29). Albeit we do not apply the latter to the operator \(\mathcal{L}\)_per se_, Theorem 3.24 will play a key role in the location of the essential spectrum of a block operator matrix, as we shall see below.
## 4. Perturbation equations and spectral stability
In order to establish the perturbation equations, consider a solution \(\overline{\theta}(x)+u(x,t)\) to the reduced dynamic equation (2.15). Here \(u\) is the perturbation of the static Neel wall's phase which, by the boundary conditions on the real line, must satisfy
\[u(\pm\infty,t)=0,\qquad t>0. \tag{4.1}\]
Upon substitution into (2.15), we obtain the following nonlinear equation for the perturbation,
\[\partial_{t}^{2}u+\nu\partial_{t}u+\nabla\mathcal{E}(\overline{\theta}+u)=0. \tag{4.2}\]
In view of (3.1), equation (4.2) can be recast as
\[\partial_{t}^{2}u+\nu\partial_{t}u+\mathcal{L}u+\mathcal{N}(u)=0,\]
where \(\mathcal{L}u\) is the linearization around \(\overline{\theta}\) of \(\nabla\mathcal{E}(\overline{\theta}+u)\) acting on the perturbation \(u\), and
\[\mathcal{N}(u):=\nabla\mathcal{E}(\overline{\theta}+u)-\mathcal{L}u=O(u^{2}),\]
comprises the nonlinear terms. In view of the form of the operator (3.1), we regard the perturbation equation as a nonlinear wave equation. By making the (standard) change of variables \(v=\partial_{t}u\), solving the perturbation equation (4.2) is equivalent to solving the nonlinear hyperbolic system
\[\partial_{t}\begin{pmatrix}u\\ v\end{pmatrix}=\begin{pmatrix}0&\mathrm{I}\\ -\mathcal{L}&-\nu\mathrm{I}\end{pmatrix}\begin{pmatrix}u\\ v\end{pmatrix}+\begin{pmatrix}0\\ -\mathcal{N}(u)\end{pmatrix}, \tag{4.3}\]
in an appropriate space, which will be determined later.
### The spectral problem
By linearizing equation (4.2) around the Neel wall's phase, we obtain the following equation for the perturbation,
\[\partial_{t}^{2}u+\nu\partial_{t}u+\mathcal{L}u=0, \tag{4.4}\]
which is equivalent to the following linear system in the \((u,v)\) variables,
\[\partial_{t}\begin{pmatrix}u\\ v\end{pmatrix}=\begin{pmatrix}0&\mathrm{I}\\ -\mathcal{L}&-\nu\mathrm{I}\end{pmatrix}\begin{pmatrix}u\\ v\end{pmatrix}. \tag{4.5}\]
Specialize the linearized equation (4.4) to perturbations of the form \(e^{\lambda t}u(x)\), with \(\lambda\in\mathbb{C}\) and \(u\in X\), being \(X\) a Banach space to be determined below. Substituting into (4.4), we obtain the following spectral problem
\[(\lambda^{2}+\nu\lambda)u+\mathcal{L}u=0. \tag{4.6}\]
**Remark 4.1**.: Under the substitution \(\lambda=i\zeta\), equation (4.6) can be written in terms of a _quadratic operator pencil_, \(\widetilde{\mathcal{T}}u=0\) (cf. Markus [13]), with \(\widetilde{\mathcal{T}}=\widetilde{\mathcal{T}}_{0}+\zeta\widetilde{\mathcal{T}}_{1}+\zeta^{2}\widetilde{\mathcal{T}}_{2}\), and \(\widetilde{\mathcal{T}}_{0}=\mathcal{L}\), \(\widetilde{\mathcal{T}}_{1}=i\nu\mathrm{I}\), \(\widetilde{\mathcal{T}}_{2}=-\mathrm{I}\). The transformation \(v=\lambda u\) (the spectral equivalent of the change of variables \(v=\partial_{t}u\)) defines an appropriate cartesian product of the base space which allows us to write equation (4.6) as a genuine eigenvalue problem of the form
\[\lambda\begin{pmatrix}u\\ v\end{pmatrix}=\begin{pmatrix}0&\mathrm{I}\\ -\mathcal{L}&-\nu\end{pmatrix}\begin{pmatrix}u\\ v\end{pmatrix}=:\mathcal{A}\begin{pmatrix}u\\ v\end{pmatrix}. \tag{4.7}\]
The matrix operator \(\mathcal{A}\) is often called the companion matrix to the pencil \(\widetilde{\mathcal{T}}\) (see [1, 1] for further information). Clearly, equation (4.7) is the spectral equation associated to the linear system (4.5). We shall refer to both (4.6) and (4.7) as the spectral problem making no distinction.
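Indeed, the first row of (4.7) gives \(v=\lambda u\), and substituting this relation into the second row yields
\[-\mathcal{L}u-\nu\lambda u=\lambda^{2}u,\qquad\text{that is,}\qquad(\lambda^{2}+\nu\lambda)u+\mathcal{L}u=0,\]
which is precisely (4.6).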
In the present stability analysis, we are interested in the spectral properties of the block operator,
\[\mathcal{A}:H^{1}\times L^{2}\to H^{1}\times L^{2},\]
regarded as a linear, densely defined operator in \(H^{1}\times L^{2}\) with domain \(D(\mathcal{A}):=H^{2}\times H^{1}\). In other words, we choose our energy base space as
\[X:=H^{1}\times L^{2}.\]
This choice is not only consistent with the boundary conditions (4.1) for perturbations of the Neel wall's phase but, more importantly, it relates to the appropriate energy space encoding perturbations of the energy functional defined in (2.5), which requires variations \(u\in H^{1}\). In addition, the condition \(v\in L^{2}\) implies that those perturbations have finite kinetic energy because \(v\) is the spectral equivalent to \(\partial_{t}u\). Thus, the stability analysis pertains to localized perturbations with finite energy in \(X=H^{1}\times L^{2}\). For shortness, let us introduce the notation
\[U=(u,v)\in H^{2}\times H^{1},\qquad\mathcal{A}U=(v,-\mathcal{L}u-\nu v)\in H^ {1}\times L^{2}.\]
In addition, the standard scalar product in \(H^{1}\times L^{2}\) will be denoted as
\[\langle U,F\rangle_{X}:=\langle(u,v),(f,g)\rangle_{H^{1}\times L^{2}}=\langle u,f\rangle_{H^{1}}+\langle v,g\rangle_{L^{2}},\]
for any \(U=(u,v)\) and \(F=(f,g)\) in \(X\).
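The choice of \(X\) is also consistent with the formal energy identity associated to the linearized equation (4.4): taking the \(L^{2}\)-product of (4.4) with \(\partial_{t}u\) and using the symmetry of \(\mathcal{L}\) gives, at least for sufficiently regular solutions,
\[\frac{d}{dt}\Big(\tfrac{1}{2}\|\partial_{t}u\|_{L^{2}}^{2}+\tfrac{1}{2}\langle\mathcal{L}u,u\rangle_{L^{2}}\Big)=-\nu\|\partial_{t}u\|_{L^{2}}^{2}\leq 0,\]
so that the natural energy of the linearized flow does not increase in time.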
**Remark 4.2**.: It is to be observed that this choice of the energy space conveys a slight abuse of notation. Indeed, the operator \(\mathcal{L}\) in the expression for \(\mathcal{A}\) in (4.7) refers actually to its restriction to \(H^{1}\), namely, to the operator
\[\widetilde{\mathcal{L}}:=\mathcal{L}_{|H^{1}},\qquad\widetilde{\mathcal{L}}:H^{1}\to L^{2},\] \[D(\widetilde{\mathcal{L}})=H^{2}\subset H^{1},\qquad\widetilde{\mathcal{L}}u:=\mathcal{L}u,\quad\forall\,u\in H^{2},\]
where, rigorously speaking, \(\mathcal{L}\) is the operator from \(L^{2}\) to \(L^{2}\) defined in (3.1). However, since the original properties remain (for example, its closedness and its spectral bounds, as the reader may easily verify), for simplicity we keep the notation \(\mathcal{L}:H^{1}\to L^{2}\) with the same dense domain \(D(\mathcal{L})=H^{2}\) in the definition of the operator \(\mathcal{A}\) under consideration. In the sequel, we shall remind the reader of this distinction at the steps of the proofs where it is explicitly required.
The first property of the block operator \(\mathcal{A}\) that we verify is its closedness, so that the definitions of resolvent and spectra, as well as their basic properties, apply.
**Lemma 4.3**.: _The matrix block operator \(\mathcal{A}:H^{1}\times L^{2}\to H^{1}\times L^{2}\) is closed._
Proof.: Let \(U_{j}=(u_{j},v_{j})\in D(\mathcal{A})=H^{2}\times H^{1}\), \(j\in\mathbb{N}\), be a Cauchy sequence in \(X=H^{1}\times L^{2}\) such that \(\{\mathcal{A}U_{j}\}_{j\in\mathbb{N}}\) is a Cauchy sequence in \(X\) as well. Let us denote their limits as \(U=(u,v)=\lim_{j\to\infty}U_{j}\) and \(F=(f,g)=\lim_{j\to\infty}\mathcal{A}U_{j}\), both in \(X\). This implies that
\[v_{j} \to f,\quad\text{in}\;\;H^{1},\] \[-\mathcal{L}u_{j}-\nu v_{j} \to g,\quad\text{in}\;\;L^{2},\]
as \(j\to\infty\). Since \(v_{j}\to f\) in \(H^{1}\) implies that \(v_{j}\to f\) in \(L^{2}\), we obtain \(-\mathcal{L}u_{j}\to g+\nu f\) in \(L^{2}\). Because \(\{u_{j}\}_{j\in\mathbb{N}}\) is a Cauchy sequence in \(H^{1}\) we deduce that it is also a Cauchy sequence in \(L^{2}\). By virtue of the closedness of the operator \(\mathcal{L}\) when regarded as an operator from \(L^{2}\) to \(L^{2}\) (see Corollary 3.11), we deduce that \(u_{j}\to u\) in \(L^{2}\) implies \(u\in D(\mathcal{L})=H^{2}\) and \(-\mathcal{L}u=g+\nu f\).
\[\mathcal{A}U=(v,-\mathcal{L}u-\nu v)=(f,g)=F.\]
This proves that \(\mathcal{A}\) is a closed operator.
Another important property is that the translation eigenvalue remains.
**Lemma 4.4**.: \(\lambda=0\) _is a simple eigenvalue of \(\mathcal{A}\) with eigenfunction_
\[\Theta:=(\partial_{x}\overline{\theta},0)\in D(\mathcal{A})=H^{2}\times H^{1}. \tag{4.8}\]
Proof.: Since \(\partial_{x}\overline{\theta}\in H^{2}\) (see Proposition 2.1 (c)) we clearly notice that \(\Theta\in D(\mathcal{A})\). Moreover, \(\mathcal{A}\Theta=(0,-\mathcal{L}\partial_{x}\overline{\theta})=0\). Hence \(\lambda=0\in\sigma_{\mathrm{pt}}(\mathcal{A})\) with eigenfunction \(\Theta\). To verify that it spans the whole kernel, let \(0\neq U=(u,v)\in\ker\mathcal{A}\). Since \(u\in H^{2}\subset L^{2}\), writing \(u=u_{\perp}\oplus\alpha\partial_{x}\overline{\theta}\) with \(\langle u_{\perp},\partial_{x}\overline{\theta}\rangle=0\) and some \(\alpha\in\mathbb{C}\), yields
\[0=\mathcal{A}U=\mathcal{A}(u_{\perp},v)+\alpha\mathcal{A}(\partial_{x} \overline{\theta},0)=(v,-\mathcal{L}u_{\perp}-\nu v).\]
Therefore \(v=0\) and \(\mathcal{L}u_{\perp}=0\). By Corollary 3.12, \(u_{\perp}=0\) and this shows that the geometric multiplicity is equal to one.
Finally, the algebraic multiplicity is equal to one. Otherwise, there would exist a non trivial Jordan chain \(\mathcal{A}U=\alpha\Theta\), \(\alpha\in\mathbb{C}\setminus\{0\}\) with \(U\neq 0\). This implies that
\[\mathcal{A}U=(v,-\mathcal{L}u-\nu v)=(\alpha\partial_{x}\overline{\theta},0).\]
Therefore \(v=\alpha\partial_{x}\overline{\theta}\) and \(-\mathcal{L}u=\nu\alpha\partial_{x}\overline{\theta}\). Then \(\mathcal{L}\) has a nontrivial Jordan chain which contradicts Corollary 3.12.
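Alternatively, the contradiction can be reached by a direct computation: taking the \(L^{2}\)-product of \(-\mathcal{L}u=\nu\alpha\partial_{x}\overline{\theta}\) with \(\partial_{x}\overline{\theta}\), and using that \(\mathcal{L}\) is self-adjoint with \(\mathcal{L}\partial_{x}\overline{\theta}=0\), we get
\[\nu\alpha\|\partial_{x}\overline{\theta}\|_{L^{2}}^{2}=-\langle\mathcal{L}u,\partial_{x}\overline{\theta}\rangle_{L^{2}}=-\langle u,\mathcal{L}\partial_{x}\overline{\theta}\rangle_{L^{2}}=0,\]
which is impossible because \(\nu>0\), \(\alpha\neq 0\) and \(\partial_{x}\overline{\theta}\neq 0\).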
### Point spectral stability
After these preparations, we are ready to prove that the operator \(\mathcal{A}\) is point spectrally stable.
**Lemma 4.5**.: _Let \(\lambda\in\sigma_{\mathrm{pt}}(\mathcal{A})\), \(\lambda\neq 0\). Then_
\[\mathrm{Re}\,\lambda\leq-\tfrac{1}{2}\nu+\tfrac{1}{2}\mathbf{1}_{[2\sqrt{ \Lambda_{0}},\infty)}(\nu)\sqrt{\nu^{2}-4\Lambda_{0}}<0, \tag{4.9}\]
_where \(\Lambda_{0}\) is given by Proposition 3.8 (c) and \(\mathbf{1}_{\Omega}(\cdot)\) denotes the characteristic function of any measurable set \(\Omega\subset\mathbb{R}\)._
Proof.: Suppose that \(\lambda\in\sigma_{\mathrm{pt}}(\mathcal{A})\) with \(\lambda\neq 0\). Hence, there exists \(U=(u,v)\in D(\mathcal{A})=H^{2}\times H^{1}\) such that \(\mathcal{A}U=\lambda U\). This yields \(\lambda u=v\) and \((\lambda+\nu)v+\mathcal{L}u=0\). Upon substitution, we obtain
\[\mathcal{L}u+\lambda(\lambda+\nu)u=0,\qquad u\in H^{2}=D(\mathcal{L}).\]
Therefore, \(-\lambda(\lambda+\nu)\in\sigma_{\mathrm{pt}}(\mathcal{L})\) with eigenfunction \(u\). Since \(\mathcal{L}\) is self-adjoint we obtain \(\lambda(\lambda+\nu)\in\mathbb{R}\). Since \(u\in H^{2}\subset L^{2}\) and \(v\in H^{1}\subset L^{2}\), we may decompose \(u=u_{\perp}\oplus\alpha\partial_{x}\overline{\theta}\), \(v=v_{\perp}\oplus\beta\partial_{x}\overline{\theta}\), for some \(\alpha,\beta\in\mathbb{C}\), and \(\langle u_{\perp},\partial_{x}\overline{\theta}\rangle_{L^{2}}=\langle v_{\perp},\partial_{x}\overline{\theta}\rangle_{L^{2}}=0\). Substituting, one arrives at the relations
\[\lambda u_{\perp}=v_{\perp},\quad\beta=\lambda\alpha,\] \[\mathcal{L}u_{\perp}+\lambda(\lambda+\nu)(u_{\perp}+\alpha \partial_{x}\overline{\theta})=0.\]
Take the \(L^{2}\)-product of last equation with \(u_{\perp}\). The result is
\[0=\langle\mathcal{L}u_{\perp},u_{\perp}\rangle_{L^{2}}+\lambda(\lambda+\nu) \|u_{\perp}\|_{L^{2}}^{2}+\lambda(\lambda+\nu)\langle\alpha\partial_{x} \overline{\theta},u_{\perp}\rangle_{L^{2}}\geq(\Lambda_{0}+\lambda^{2}+\lambda \nu)\|u_{\perp}\|_{L^{2}}^{2},\]
because \(\langle u_{\perp},\partial_{x}\overline{\theta}\rangle_{L^{2}}=0\) and \(\lambda(\lambda+\nu)\in\mathbb{R}\). Observe that \(u_{\perp}\neq 0\): otherwise the last of these relations gives \(\lambda(\lambda+\nu)\alpha\partial_{x}\overline{\theta}=0\) with \(\alpha\neq 0\) (if \(\alpha=0\) as well, then \(U=0\)), so that \(\lambda(\lambda+\nu)=0\) and \(\lambda=-\nu\), which clearly satisfies (4.9). Therefore, we obtain the bound
\[\lambda(\lambda+\nu)\leq-\Lambda_{0}. \tag{4.10}\]
Henceforth, we arrive at the relations
\[\mathrm{Im}\,(\lambda(\lambda+\nu)) =(\mathrm{Im}\,\lambda)(\nu+2\mathrm{Re}\,\lambda)=0, \tag{4.11a}\] \[-\Lambda_{0}\geq\mathrm{Re}\,(\lambda(\lambda+\nu)) =(\mathrm{Re}\,\lambda)^{2}-(\mathrm{Im}\,\lambda)^{2}+\nu \mathrm{Re}\,\lambda. \tag{4.11b}\]
Since \(\nu>0\) is a given physical constant,1 we have two parameter regimes: (i) \(\nu\in(0,2\sqrt{\Lambda_{0}})\), or (ii) \(\nu\in[2\sqrt{\Lambda_{0}},\infty)\). Let us examine the first case. From (4.11a) we either have \(\lambda\in\mathbb{R}\) or \(\mathrm{Re}\,\lambda=-\tfrac{1}{2}\nu\). Assuming \(\lambda\in\mathbb{R}\), we readily observe that (4.11b) has no real \(\lambda\)-solutions if \(\nu\in(0,2\sqrt{\Lambda_{0}})\). Indeed, with basic calculus tools one can easily verify that the real polynomial \(q(\lambda)=\lambda^{2}+\nu\lambda+\Lambda_{0}\) has a unique global minimum at \(\lambda=-\tfrac{1}{2}\nu\) with \(q(-\tfrac{1}{2}\nu)=\Lambda_{0}-\tfrac{1}{4}\nu^{2}>0\). Thus, we are left with the case \(\mathrm{Re}\,\lambda=-\tfrac{1}{2}\nu\) which clearly satisfies (4.9).
Footnote 1: Notice that \(\mathcal{L}\) and its spectral bound \(\Lambda_{0}\) do not depend on \(\nu\)
In the second parameter regime with \(\nu\in[2\sqrt{\Lambda_{0}},\infty)\), again we either have \(\lambda\in\mathbb{R}\) or \(\mathrm{Re}\,\lambda=-\tfrac{1}{2}\nu\). If \(\lambda\) is real then \(\lambda^{2}+\nu\lambda\leq-\Lambda_{0}\) holds only for
\[\lambda\in\big{[}-\tfrac{1}{2}\nu-\tfrac{1}{2}\sqrt{\nu^{2}-4\Lambda_{0}},- \tfrac{1}{2}\nu+\tfrac{1}{2}\sqrt{\nu^{2}-4\Lambda_{0}}\big{]}.\]
Clearly in both cases the bound (4.9) holds. This shows the lemma.
**Corollary 4.6** (point spectral stability).: \[\sigma_{\mathrm{pt}}(\mathcal{A})\subset\{0\}\cup\{\lambda\in\mathbb{C}\,: \,\mathrm{Re}\,\lambda\leq-\tfrac{1}{2}\nu+\tfrac{1}{2}\sqrt{\nu^{2}-4 \Lambda_{0}}\,\mathbf{1}_{[2\sqrt{\Lambda_{0}},\infty)}(\nu)\}.\] (4.12)
Proof.: Follows immediately from Lemmata 4.4 and 4.5.
### Stability of the essential spectrum
In this section, we study the essential spectrum of the block operator \(\mathcal{A}\). To that end, we define the following auxiliary asymptotic matrix block operator,
\[\mathcal{A}_{\infty}:H^{1}\times L^{2}\to H^{1}\times L^{2},\qquad\mathcal{A}_{ \infty}:=\begin{pmatrix}0&\mathrm{I}\\ -\mathcal{L}_{\infty}&-\nu\mathrm{I}\end{pmatrix}, \tag{4.13}\]
with dense domain \(D(\mathcal{A}_{\infty})=H^{2}\times H^{1}\). Once again, with a slight abuse in notation the operator \(\mathcal{L}_{\infty}\) in (4.13) refers to the restriction of the operator defined in (3.8) to the space \(H^{1}\), namely, to the operator
\[\widetilde{\mathcal{L}}_{\infty}:=\mathcal{L}_{\infty|H^{1}},\qquad\widetilde{\mathcal{L}}_{\infty}:H^{1}\to L^{2},\] \[D(\widetilde{\mathcal{L}}_{\infty})=H^{2}\subset H^{1},\qquad\widetilde{\mathcal{L}}_{\infty}u:=\mathcal{L}_{\infty}u,\quad\forall\,u\in H^{2},\]
so that the energy base space of the asymptotic operator \(\mathcal{A}_{\infty}\) is \(H^{1}\times L^{2}\). In the sequel, we write \(\mathcal{L}_{\infty}\) to denote this restriction. Therefore, for any \(U=(u,v)\in D(\mathcal{A}_{\infty})=H^{2}\times H^{1}\) we clearly have \(\mathcal{A}_{\infty}U=(v,-\mathcal{L}_{\infty}u-\nu v)\in H^{1}\times L^{2}\).
**Lemma 4.7**.: _The asymptotic block operator \(\mathcal{A}_{\infty}:H^{1}\times L^{2}\to H^{1}\times L^{2}\) is closed and onto._
Proof.: The proof of the closedness of \(\mathcal{A}_{\infty}\) is the same as that of Lemma 4.3 and we omit it. To show that \(\mathcal{A}_{\infty}\) is onto, notice that for any \(F=(f,g)\in H^{1}\times L^{2}\) the equation \(\mathcal{A}_{\infty}U=F\) with \(U=(u,v)\in D(\mathcal{A}_{\infty})=H^{2}\times H^{1}\) is equivalent to the system
\[v=f,\qquad-\mathcal{L}_{\infty}u=g+\nu f.\]
By defining \(v:=f\in H^{1}\) and by virtue of Lemma 3.15, given \(g+\nu f\in L^{2}\) there exists a unique solution \(u\in H^{2}\) to the equation \(-\mathcal{L}_{\infty}u=g+\nu f\). Hence, \(H^{1}\times L^{2}=\mathcal{R}(\mathcal{A}_{\infty})\), as claimed.
In this fashion, \(\mathcal{A}_{\infty}\) is a closed, densely defined operator with full range. The following result determines the location of its spectrum.
**Lemma 4.8**.: _If \(\lambda\in\sigma(\mathcal{A}_{\infty})\) then_
\[\operatorname{Re}\lambda\leq-\tfrac{1}{2}\nu+\tfrac{1}{2}\mathbf{1}_{[2, \infty)}(\nu)\sqrt{\nu^{2}-4}<0. \tag{4.14}\]
Proof.: Assume \(\lambda\in\mathbb{C}\), \(U=(u,v)\in D(\mathcal{A}_{\infty})=H^{2}\times H^{1}\) and \(F=(f,g)\in X=H^{1}\times L^{2}\) are such that \((\lambda-\mathcal{A}_{\infty})U=F\). This equation is equivalent to the system
\[\lambda u-v=f,\qquad\mathcal{L}_{\infty}u+(\lambda+\nu)v=g.\]
Substituting the first equation into the second, we arrive at
\[\big{(}\mathcal{L}_{\infty}+\lambda(\lambda+\nu)\big{)}u=g+(\lambda+\nu)f.\]
For any \(\nu>0\) and \(\lambda\in\mathbb{C}\) fixed, we have \(g+(\lambda+\nu)f\in L^{2}\). Thus, from Lemma 3.16 (d) and the resolvent estimate from Lemma 3.18, this equation has a unique solution \(u\in H^{2}\) provided that \(\lambda(\lambda+\nu)\in\mathbb{C}\backslash(-\infty,-1]\). Moreover, by Young's inequality
\[\|U\|_{H^{1}\times L^{2}}^{2}=\|u\|_{H^{1}}^{2}+\|v\|_{L^{2}}^{2}=\|u\|_{H^{1}}^{2}+\|\lambda u-f\|_{L^{2}}^{2}\leq(1+2|\lambda|^{2})\|u\|_{H^{1}}^{2}+2\|f\|_{L^{2}}^{2}.\]
From Lemma 3.18, if \(u\in H^{2}\) solves \((\mathcal{L}_{\infty}+\lambda(\lambda+\nu))u=g+(\lambda+\nu)f\) with \(\mu=\lambda(\lambda+\nu)\in\mathbb{C}\backslash(-\infty,-1]\), then there exists a constant \(C=C(\lambda,\nu)>0\) such that
\[\|u\|_{H^{1}}\leq\|u\|_{H^{2}}\leq C(\lambda,\nu)\|g+(\lambda+\nu)f\|_{L^{2}}.\]
Therefore, we obtain that
\[\|U\|_{H^{1}\times L^{2}}^{2}\leq(1+2|\lambda|^{2})\|u\|_{H^{1}}^{2}+2\|f\|_{L^{2}}^{2} \leq(1+2|\lambda|^{2})C(\lambda,\nu)^{2}\|g+(\lambda+\nu)f\|_{L^{2}}^{2}+2\|f\|_{L^{2}}^{2}\] \[\leq\overline{C}(\lambda,\nu)\big{(}\|f\|_{H^{1}}^{2}+\|g\|_{L^{2}}^{2}\big{)}=\overline{C}(\lambda,\nu)\|F\|_{H^{1}\times L^{2}}^{2},\]
for some \(\overline{C}(\lambda,\nu)>0\). This shows that \(\lambda\in\rho(\mathcal{A}_{\infty})\). To sum up, we have proved that \(\lambda(\lambda+\nu)\in\mathbb{C}\backslash(-\infty,-1]\,\Rightarrow\,\lambda \in\rho(\mathcal{A}_{\infty})\), or, equivalently, that
\[\sigma(\mathcal{A}_{\infty})\subset\big{\{}\lambda\in\mathbb{C}\,:\,\lambda( \lambda+\nu)\in(-\infty,-1]\big{\}}. \tag{4.15}\]
Now, notice that the relation that defines the set on the right hand side of (4.15) can be recast as
\[\begin{split}\operatorname{Im}\left(\lambda(\lambda+\nu)\right)& =(\operatorname{Im}\lambda)(\nu+2\mathrm{Re}\,\lambda)=0,\\ -1&\geq\mathrm{Re}\left(\lambda(\lambda+\nu)\right) =(\mathrm{Re}\,\lambda)^{2}-(\operatorname{Im}\lambda)^{2}+\nu\mathrm{Re}\, \lambda.\end{split} \tag{4.16}\]
First, let us assume that \(\nu\in(0,2)\). Then the first equation in (4.16) implies that either \(\operatorname{Im}\lambda=0\) or \(\mathrm{Re}\,\lambda=-\frac{1}{2}\nu\). In the latter case there is nothing to prove. If, on the other hand, \(\lambda\in\mathbb{R}\), then the second relation in (4.16), namely \(\lambda^{2}+\nu\lambda\leq-1\), has no real solutions. Thus, (4.14) holds if \(\nu\in(0,2)\).
Second, suppose that \(\nu\geq 2\). Once again, we have two cases, either \(\lambda\in\mathbb{R}\) or \(\mathrm{Re}\,\lambda=-\frac{1}{2}\nu\). In the latter case (4.14) clearly holds. In the former case, the inequality \(\lambda^{2}+\lambda\nu\leq-1\) is satisfied only if
\[\lambda\in\big{[}-\tfrac{1}{2}\nu-\tfrac{1}{2}\sqrt{\nu^{2}-4},-\tfrac{1}{2} \nu+\tfrac{1}{2}\sqrt{\nu^{2}-4}\big{]},\]
determining values of \(\lambda\) for which (4.14) also holds. The proof is complete.
The following lemma is the key ingredient to locate the essential spectrum of the block operator \(\mathcal{A}\).
**Lemma 4.9**.: _The block operator \(\mathcal{A}\) is a relatively compact perturbation of \(\mathcal{A}_{\infty}\)._
Proof.: Suppose \(\lambda\in\rho(\mathcal{A}_{\infty})\) and let \(\{U_{j}\}_{j\in\mathbb{N}}\) be a bounded sequence in \(H^{1}\times L^{2}\). Therefore, \((\lambda-\mathcal{A}_{\infty})^{-1}U_{j}\in D(\mathcal{A}_{\infty})=H^{2} \times H^{1}\) is a bounded sequence in \(H^{2}\times H^{1}\) because \((\lambda-\mathcal{A}_{\infty})^{-1}\) is a bounded operator. Hence, if we denote
\[\begin{pmatrix}f_{j}\\ g_{j}\end{pmatrix}:=(\lambda-\mathcal{A}_{\infty})^{-1}U_{j},\]
we have,
\[(\mathcal{A}_{\infty}-\mathcal{A})(\lambda-\mathcal{A}_{\infty})^{-1}U_{j}= \begin{pmatrix}0&0\\ \mathcal{L}_{\infty}-\mathcal{L}&0\end{pmatrix}\begin{pmatrix}f_{j}\\ g_{j}\end{pmatrix}=\begin{pmatrix}0\\ (\mathcal{L}_{\infty}-\mathcal{L})f_{j}\end{pmatrix}.\]
Since \(\{f_{j}\}_{j\in\mathbb{N}}\) is bounded in \(H^{2}\) and \(\mathcal{L}_{\infty}-\mathcal{L}:H^{2}\to H^{1}\) is compact in \(L^{2}\) (see Theorem 3.23 above), the bounded sequence \(\{(\mathcal{L}_{\infty}-\mathcal{L})f_{j}\}\subset H^{1}\) has a convergent subsequence in \(L^{2}\). This implies that \((\mathcal{A}_{\infty}-\mathcal{A})(\lambda-\mathcal{A}_{\infty})^{-1}U_{j}\) has a convergent subsequence in \(H^{1}\times L^{2}\). Thus, the operator \((\mathcal{A}_{\infty}-\mathcal{A})(\lambda-\mathcal{A}_{\infty})^{-1}\) is compact on \(H^{1}\times L^{2}\) for every \(\lambda\in\rho(\mathcal{A}_{\infty})\), and the proof is complete.
The most important consequence of the last result is the location of the essential spectrum of \(\mathcal{A}\).
**Corollary 4.10**.: \(\sigma(\mathcal{A}_{\infty})=\sigma_{\rm ess}(\mathcal{A})\)_. Moreover, any \(\lambda\in\sigma_{\rm ess}(\mathcal{A})\) satisfies estimate (4.14)._
Proof.: This is a direct consequence of Weyl's essential spectrum theorem (see [13], p. 29), which applies thanks to Lemma 4.9, together with Lemma 4.8.
### Spectral stability with uniform spectral gap
Let us summarize the content of Corollaries 4.6 and 4.10 into the following result, which conveys the spectral stability of the Neel wall's phase in the appropriate energy space with a uniform spectral gap, that is, a positive distance from the eigenvalue zero to the rest of the spectrum.
**Theorem 4.11**.: _For each fixed \(\nu>0\) there exists a uniform positive constant_
\[\zeta_{0}(\nu):=\tfrac{1}{2}\nu-\max\Big{\{}\tfrac{1}{2}\mathbf{1}_{[2,\infty )}(\nu)\sqrt{\nu^{2}-4},\tfrac{1}{2}\mathbf{1}_{[2\sqrt{\Lambda_{0}},\infty)}( \nu)\sqrt{\nu^{2}-4\Lambda_{0}}\Big{\}}>0,\]
_such that_
\[\sigma(\mathcal{A})\subset\{0\}\cup\big{\{}\lambda\in\mathbb{C}\,:\,\mathrm{ Re}\,\lambda\leq-\zeta_{0}(\nu)<0\big{\}}.\]
**Remark 4.12**.: The positive constant \(\zeta_{0}(\nu)\) is uniform because the spectral bound \(\Lambda_{0}\) does not depend on the parameter \(\nu\). This spectral gap determines an exponential decay for the solutions to the evolutionary equation, as we shall see in the sequel.
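To check the positivity of \(\zeta_{0}(\nu)\) directly (using that the spectral bound satisfies \(\Lambda_{0}>0\), which is implicit in the statement of Theorem 4.11), observe that
\[\nu\geq 2\ \Rightarrow\ \tfrac{1}{2}\sqrt{\nu^{2}-4}<\tfrac{1}{2}\nu,\qquad\qquad\nu\geq 2\sqrt{\Lambda_{0}}\ \Rightarrow\ \tfrac{1}{2}\sqrt{\nu^{2}-4\Lambda_{0}}<\tfrac{1}{2}\nu,\]
so that the maximum subtracted from \(\tfrac{1}{2}\nu\) in the definition of \(\zeta_{0}(\nu)\) is always strictly smaller than \(\tfrac{1}{2}\nu\).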
## 5. Semigroup generation and decay
### The adjoint operator
It is known (see Kato [14], Remark 6.23, p. 184) that if \(\lambda\in\mathbb{C}\) is an eigenvalue of a closed operator \(\mathcal{T}:D(\mathcal{T})\subset H\to H\) on a Hilbert space \(H\), then \(\lambda^{*}\) is an eigenvalue of \(\mathcal{T}^{*}\) (formal adjoint) with the same geometric and algebraic multiplicities. In the present context, since \(H^{1}\) and \(L^{2}\) are reflexive Hilbert spaces, then \(\mathcal{A}:H^{1}\times L^{2}\to H^{1}\times L^{2}\) and \(D(\mathcal{A})=H^{2}\times H^{1}\) has a formal adjoint which is also densely defined and closed. Moreover, \(\mathcal{A}^{**}=\mathcal{A}\) (cf. [14], Theorem 5.29, p. 168). Upon these observations we immediately have the following
**Lemma 5.1**.: \(\lambda=0\) _is an isolated, simple eigenvalue of \(\mathcal{A}^{*}:X^{*}\to X^{*}\)._
The following result determines the form of the adjoint of the linearized block matrix operator around the Neel wall's phase.
**Lemma 5.2**.: _The formal adjoint \(\mathcal{A}^{*}\), restricted to the domain \(D(\mathcal{A})\), is given by_
\[\mathcal{A}^{*}|_{D(\mathcal{A})}=\begin{pmatrix}0&\mathcal{F}\\ -\partial_{xx}+\mathrm{I}&-\nu\end{pmatrix} \tag{5.1}\]
_where the operator \(\mathcal{F}:H^{1}\to H^{-1}\) is formally defined as the map_
\[v\mapsto-(\mathcal{S}v-c_{\theta}v,\partial_{x}v)=:\mathcal{F}v.\]
_Moreover, \(\mathcal{F}|_{H^{2}}=[1+(-\Delta)]^{-1}\,\mathcal{L}\), where \([1+(-\Delta)]^{-1}\,\mathcal{L}\,v\) denotes the convolution of the Bessel potential for \(k=2\) with \(\mathcal{L}\,v\)._
Proof.: First, let \(U=(u,v)\) and \(V=(w,z)\) be both in \(D(\mathcal{A})=H^{2}\times H^{1}\). Then by definition of the inner product in \(X\), we have
\[\left\langle\mathcal{A}U\,,V\right\rangle_{X}=\left\langle v\,,w\right\rangle _{H^{1}}-\left\langle\mathcal{L}\,u+\nu v\,,z\right\rangle_{L^{2}}=\left\langle v \,,w-\nu z\right\rangle_{L^{2}}-\left\langle\mathcal{L}\,u\,,z\right\rangle_{ L^{2}}+\left\langle\partial_{x}v\,,\partial_{x}w\right\rangle_{L^{2}}.\]
Since \(w\in H^{2}\), integration by parts on the last term leads us to
\[\left\langle\mathcal{A}U\,,V\right\rangle_{X}=\left\langle v\,,-\partial_{x}^ {2}w+w-\nu z\right\rangle_{L^{2}}-\left\langle\mathcal{L}\,u\,,z\right\rangle_{ L^{2}}. \tag{5.2}\]
Also, by the symmetry of the linear operator \(\mathcal{S}\) (see Lemma 3.9), we recast the last inequality as
\[\left\langle\mathcal{A}U\,,V\right\rangle_{X}=\left\langle v\,,-\partial_{x}^{2}w +w-\nu z\right\rangle_{L^{2}}-\left\langle\partial_{x}u\,,\partial_{x}z\right \rangle_{L^{2}}-\left\langle u\,,\mathcal{S}z-c_{\theta}z\right\rangle_{L^{2}},\]
since \(z\in H^{1}\). Therefore, \(\left\langle\mathcal{A}U\,,V\right\rangle_{X}=\left\langle U\,,\mathcal{A}^{* }V\right\rangle_{X}\) for \(\mathcal{A}^{*}\) as in (5.1) where \(\mathcal{F}z\in H^{-1}\) is represented by the pair \(-(\mathcal{S}z-c_{\theta}z,\partial_{x}z)\in L^{2}\times L^{2}\).
Finally, assume that \(z\in H^{2}\) and let \(\mathcal{K}\) be the Bessel potential with parameter \(k=2\) on \(L^{2}\) functions, defined by the Fourier symbol \((1+|\xi|^{2})^{-1}\). Apply Plancherel's identity twice to the last term of (5.2) in order to get
Last equality holds because \(\mathcal{K}\,\mathcal{L}\,z\in H^{1}\) with \(\left\|\mathcal{K}\,\mathcal{L}\,z\right\|_{H^{1}}^{2}\leq\left\|\mathcal{L} \,z\right\|_{L^{2}}^{2}\). This shows the result.
**Corollary 5.3**.: _Let \(\mathcal{A}^{*}\) be the formal adjoint of \(\mathcal{A}\). Also assume that \(\left.\mathcal{A}^{*}\right|_{D(\mathcal{A})}\) and \(\mathcal{F}\) be as in Lemma 5.2 and define_
\[\Phi:=(\nu[1+(-\Delta)]^{-1}\ \partial_{x}\overline{\theta},\partial_{x} \overline{\theta}). \tag{5.3}\]
_Then \(\Phi\in X^{*}\) is an eigenvector of the adjoint \(\mathcal{A}^{*}:X^{*}\to X^{*}\), associated to the isolated, simple eigenvalue \(\lambda=0\)._
Proof.: First, we claim that \([1+(-\Delta)]^{-1}\,\partial_{x}\overline{\theta}\in H^{2}\). This can easily be seen by Plancherel's identity, since
\[\left\|\nu[1+(-\Delta)]^{-1}\,\partial_{x}\overline{\theta}\right\|_{H^{2}}^{2}=\nu^{2}\int_{\mathbb{R}}(1+|\xi|^{2})^{2}(1+|\xi|^{2})^{-2}\left|\widehat{\partial_{x}\overline{\theta}}\right|^{2}d\xi=\nu^{2}\left\|\partial_{x}\overline{\theta}\right\|_{L^{2}}^{2}. \tag{5.4}\]
Thanks to property (c) in Proposition 2.1, we know that \(\partial_{x}\overline{\theta}\in H^{2}\); therefore \(\Phi\in H^{2}\times H^{2}\subset D(\mathcal{A})\). Since \(H^{2}\subset H^{1}\subset L^{2}\subset H^{-1}\) and \((H^{1}\times L^{2})^{*}=H^{-1}\times L^{2}\) holds due to the used norm in \(X\), it follows that \(\Phi\in X^{*}\). Also, Lemma 5.2 yields
\[\mathcal{A}^{*}\Phi=\left.\mathcal{A}^{*}\right|_{D(\mathcal{A})}\Phi=( \mathcal{F}\partial_{x}\overline{\theta},0)=(\mathcal{K}\,\mathcal{L}\, \partial_{x}\overline{\theta},0)=(0,0).\]
The last equality holds since the Bessel potential is an invertible linear operator on \(L^{2}\) and \(\mathcal{L}\,\partial_{x}\overline{\theta}=0\) in \(L^{2}\).
If we define \(\Phi_{0}:=(\nu\partial_{x}\overline{\theta},\partial_{x}\overline{\theta})\) then it is clear that \(\Phi_{0}\in L^{2}\times L^{2}\). The following result shows that \(\left\langle\cdot\,,\Phi\right\rangle_{X}\in(H^{1}\times L^{2})^{*}\) has a natural extension to the dual of \(L^{2}\times L^{2}\).
**Corollary 5.4**.: _Let \(F\in H^{1}\times L^{2}\) and \(\Phi_{0}=(\nu\partial_{x}\overline{\theta},\partial_{x}\overline{\theta})^{\top}\), then_
\[\left\langle F\,,\Phi\right\rangle_{X}=\left\langle F\,,\Phi_{0}\right\rangle_{ L^{2}}.\]
Proof.: The result follows by a straightforward calculation in the Fourier space; indeed, for any \(F=(f,g)\in H^{1}\times L^{2}\) there holds
\[\left\langle F\,,\Phi\right\rangle_{X}= \left\langle f\,,\nu[1+(-\Delta)]^{-1}\partial_{x}\overline{ \theta}\right\rangle_{H^{1}}+\left\langle g\,,\partial_{x}\overline{\theta} \right\rangle_{L^{2}}\] \[= \nu\int_{\mathbb{R}}\widehat{f}(\xi)(\widehat{\partial_{x} \overline{\theta}}(\xi))^{*}d\xi+\left\langle g\,,\partial_{x}\overline{ \theta}\right\rangle_{L^{2}}\] \[= \left\langle f\,,\nu\partial_{x}\overline{\theta}\right\rangle_{L^ {2}}+\left\langle g\,,\partial_{x}\overline{\theta}\right\rangle_{L^{2}}= \left\langle F\,,\Phi_{0}\right\rangle_{L^{2}}.\]
Now, let us denote the inner product
\[\Xi:=\left\langle\Theta\,,\Phi\right\rangle_{X}=\nu\left\|\partial_{x}\overline{ \theta}\right\|_{L^{2}}^{2}>0, \tag{5.5}\]
and define the Hilbert space \(X_{1}\subset H^{1}\times L^{2}\) as the range of the spectral projection
\[\mathcal{P}U:=U-\Xi^{-1}\left\langle U\,,\Phi\right\rangle_{X}\Theta,\qquad U \in H^{1}\times L^{2}, \tag{5.6}\]
that is, \(X_{1}:=\mathcal{R}(\mathcal{P})\). In this fashion we project out the eigenspace spanned by the single eigenfunction, \(\Theta=(\partial_{x}\overline{\theta},0)\). We shall verify that, outside this eigenspace, the associated semigroup decays exponentially. First, it is to be observed that Corollary 5.4 implies the following explicit characterization of the space \(X_{1}\).
**Lemma 5.5**.: _Let \(\mathcal{P}\) be the spectral projector defined in (5.6) and let \(X_{1}\) be its range. Then_
\[X_{1}=\left\{F\in H^{1}\times L^{2}\ \big{|}\ \left\langle F\,,\Phi_{0} \right\rangle_{L^{2}}=0\right\}. \tag{5.7}\]
Proof.: Let \(F\in X_{1}\). Hence, \(F=\mathcal{P}F\) because \(\mathcal{P}\) is a projector. By (5.6), we have \(F=F-\Xi^{-1}\left\langle F\,,\Phi\right\rangle_{X}\Theta\), which implies \(0=\left\langle F\,,\Phi\right\rangle_{X}=\left\langle F\,,\Phi_{0}\right\rangle _{L^{2}}\), due to Corollary 5.4. The converse holds trivially.
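In the first step of the proof above we used that \(\mathcal{P}\) is a projector. This is immediate from (5.5) and (5.6): for every \(U\in H^{1}\times L^{2}\),
\[\left\langle\mathcal{P}U\,,\Phi\right\rangle_{X}=\left\langle U\,,\Phi\right\rangle_{X}-\Xi^{-1}\left\langle U\,,\Phi\right\rangle_{X}\left\langle\Theta\,,\Phi\right\rangle_{X}=0,\qquad\text{hence}\qquad\mathcal{P}^{2}U=\mathcal{P}U-\Xi^{-1}\left\langle\mathcal{P}U\,,\Phi\right\rangle_{X}\Theta=\mathcal{P}U,\]
and, in particular, \(\mathcal{P}\Theta=\Theta-\Xi^{-1}\Xi\,\Theta=0\).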
### Generation of the semigroup and decay estimates
In this section we prove that a restriction of the linearized block operator around the Neel wall's phase is the infinitesimal generator of an exponentially-decaying semigroup. For that purpose we need to show some resolvent estimates. Let us recall the growth bound for a semigroup \(e^{t\mathcal{T}}\) (where \(\mathcal{T}\) denotes its infinitesimal generator),
\[\omega_{0}=\inf\big\{\omega\in\mathbb{R}\,:\,\lim_{t\to+\infty}e^{-\omega t}\|e^{t\mathcal{T}}\|=0\big\}.\]
We say a semigroup is uniformly (exponentially) stable whenever \(\omega_{0}<0\). The spectral bound of the generator is defined as
\[s(\mathcal{T}):=\sup\{\operatorname{Re}\lambda\,:\,\lambda\in\sigma( \mathcal{T})\}.\]
Since the spectral mapping theorem (that is, \(\sigma(e^{t\mathcal{T}})\backslash\{0\}=e^{t\sigma(\mathcal{T})}\) for all \(t\geq 0\)) is not true in general for \(C_{0}\)-semigroups (see [10]), for stability purposes we rely on the Gearhart-Pruss theorem (cf. [11, 12]), which restricts our attention to semigroups on Hilbert spaces (see also [13, 14] and the references therein). It states that any \(C_{0}\)-semigroup \(\{e^{t\mathcal{T}}\}_{t\geq 0}\) on a Hilbert space \(H\) is uniformly exponentially stable if and only if \(s(\mathcal{T})<0\) and the resolvent satisfies \(\sup_{\operatorname{Re}\lambda>0}\|(\mathcal{T}-\lambda)^{-1}\|<\infty\) (see Lemma 5.21 below).
It is well known that the generalized Hille-Yosida theorem (see, e.g., [14], p. 69) requires estimates on all powers of the resolvent in order to conclude the existence of a \(C_{0}\)-semigroup, unless the semigroup is quasi-contractive. Therefore, we apply the classical Lumer-Phillips theorem instead. For that purpose we need some preparations.
Following Capella _et al._[13], we define \(L^{2}_{\perp}:=\{\partial_{x}\overline{\theta}\}_{L^{2}}^{\perp}\). For \(k=1\) and \(2\), we define \(H^{k}_{\perp}\) as \(H^{k}\cap L^{2}_{\perp}\). Next lemma describes the structure of these subspaces.
**Lemma 5.6**.: _Let \(L^{2}_{\perp}\) be the \(L^{2}\)-orthogonal complement of \(\partial_{x}\overline{\theta}\). For \(k=1,2\) define \(H^{k}_{\perp}\) as the intersection between \(H^{k}\) and \(L^{2}_{\perp}\). Then, for every \(\bar{u}\in H^{k}\),_
\[\bar{u}=u+\alpha\partial_{x}\overline{\theta} \tag{5.8}\]
_for some \(u\in H^{k}_{\perp}\) and \(\alpha\in\mathbb{C}\)._
Notice that this lemma needs to be proved since, in general, the intersection does not distribute over the direct sum.
Proof.: Assume \(k\) is fixed and \(\bar{u}\in H^{k}\). The spectral decomposition theorem (see Theorem III-6.17, p. 178 in [10]) and Corollary 3.12 yield \(L^{2}=L^{2}_{\perp}\oplus\operatorname{Span}\{\partial_{x}\overline{\theta}\}\) and because \(H^{k}\subset L^{2}\) there exist \(u\in L^{2}_{\perp}\) and \(\alpha\in\mathbb{C}\) such that \(\bar{u}=u+\alpha\partial_{x}\overline{\theta}\). Since \(\partial_{x}\overline{\theta}\in H^{k}\), by Proposition 2.1 (c) there holds \(u=\bar{u}-\alpha\partial_{x}\overline{\theta}\in H^{k}\). Thus \(u\in H^{k}_{\perp}\).
This splitting also extends to the working (product) space \(H^{1}\times L^{2}\). The proof of the following corollary is omitted.
**Corollary 5.7**.: _For every \(\bar{U}\in H^{1}\times L^{2}\) there exist \(U\in H^{1}_{\perp}\times L^{2}\) and \(\alpha\in\mathbb{C}\) such that \(\bar{U}=U+\alpha\Theta\)._
**Lemma 5.8**.: _Define \(a:H^{1}_{\perp}\times H^{1}_{\perp}\to\mathbb{C}\) as_
\[a\left[u,v\right]:=\left\langle\partial_{x}u\,,\partial_{x}v\right\rangle_{L^{ 2}}+b[s_{\theta}u,s_{\theta}v]-\left\langle c_{\theta}u\,,v\right\rangle_{L^{ 2}}, \tag{5.9}\]
_with \(b\) as in (2.8). Then, \(a[\cdot,\cdot]\) is a positive, Hermitian, sesquilinear form. Moreover, if \(u\in H^{2}_{\perp}\)_
\[\left\langle\mathcal{L}\,u\,,v\right\rangle_{L^{2}}=a[u,v]\quad\text{for every $v\in H^{1}_{\perp}$.} \tag{5.10}\]
Proof.: The sesquilinearity and hermiticity of \(a\) follow trivially from its definition, while its positivity is due to item (c) in Proposition 3.8. Finally, relation (5.10) follows from an integration by parts and from Corollary 3.7.
With slight changes to the arguments presented in [1], we can prove that \(a[\cdot,\cdot]\) induces an inner product in \(H^{1}_{\perp}\) equivalent to the \(H^{1}\)-inner product. The norm induced by this sesquilinear form is denoted by \(\|\cdot\|_{a}:H^{1}_{\perp}\to\mathbb{C}\). In other words,
\[\|u\|_{a}:=\sqrt{a[u,u]},\qquad\text{for every $u\in H^{1}_{\perp}$.}\]
**Proposition 5.9**.: _Let us define_
\[Z:=H^{1}_{\perp}\times L^{2}. \tag{5.11}\]
_Then \(\left(Z,\left\langle\cdot\,,\cdot\right\rangle_{X}\right)\) is a Hilbert space. In addition, if \(\|\cdot\|_{Z}:Z\to\mathbb{C}\) and \(\|\cdot\|_{2}:Z\to\mathbb{C}\) are defined by_
\[\|U\|_{Z}:=\sqrt{\|u\|_{a}^{2}+\|v\|_{L^{2}}^{2}}, \tag{5.12}\]
\[\|U\|_{2}:=\|u\|_{a}+\|v\|_{L^{2}}\,, \tag{5.13}\]
_where \(U=(u,v)\in Z\), then \(\|\cdot\|_{Z}\) and \(\|\cdot\|_{2}\) are norms in \(Z\), both equivalent to \(\left\|\cdot\right\|_{X}\)._
Proof.: It suffices to show that \(Z\) is a closed linear subspace of \(X=H^{1}\times L^{2}\). The linearity of \(Z\) follows from the linearity of \(L^{2}\) and the linearity of \(H^{1}_{\perp}\). Now, assume \(\{U_{j}=(u_{j},v_{j})\}_{j\in\mathbb{N}}\) is a Cauchy sequence in \(Z\). Therefore, \(\{u_{j}\}_{j\in\mathbb{N}}\) and \(\{v_{j}\}_{j\in\mathbb{N}}\) are Cauchy sequences in \(H^{1}\) and in \(L^{2}\), respectively, and \(u_{j}\to u\) and \(v_{j}\to v\) for some \(u\in H^{1}\) and \(v\in L^{2}\). Note that \(u\in H^{1}_{\perp}\) since \(H^{1}\)-convergence implies weak \(L^{2}\)-convergence and \(0=\left\langle u_{j}\,,\partial_{x}\overline{\theta}\right\rangle_{L^{2}}\) for every \(j\in\mathbb{N}\). Therefore \(Z\) is a closed linear subspace of \(X\).
Next, we will show that \(\|\cdot\|_{Z}\) and \(\|\cdot\|_{2}\) are norms in \(Z\). Clearly, both functions are positive definite and absolutely homogeneous, since \(\|\cdot\|_{L^{2}}\) and \(\|\cdot\|_{a}\) are norms in \(L^{2}\) and \(H^{1}_{\perp}\), respectively. Also, subadditivity of \(\|\cdot\|_{2}\) readily follows from the
subadditivity of \(\|\cdot\|_{a}\) and of \(\left\|\cdot\right\|_{L^{2}}\). To verify the subadditivity of \(\|\cdot\|_{Z}\), let \(U=(u_{1},v_{1})\) and \(V=(u_{2},v_{2})\) belong to \(Z\); then we obtain
\[\begin{split}\|U+V\|_{Z}^{2}&=\|u_{1}+u_{2}\|_{a}^{ 2}+\left\|v_{1}+v_{2}\right\|_{L^{2}}^{2}\\ &\leq(\|u_{1}\|_{a}+\|u_{2}\|_{a})^{2}+(\left\|v_{1}\right\|_{L^ {2}}+\left\|v_{2}\right\|_{L^{2}})^{2}\\ &=(\|u_{1}\|_{a}^{2}+\left\|v_{1}\right\|_{L^{2}}^{2})+(\|u_{2}\|_ {a}^{2}+\|v_{2}\|_{L^{2}}^{2})+2(\|u_{1}\|_{a}\|u_{2}\|_{a}+\left\|v_{1}\right\| _{L^{2}}\left\|v_{2}\right\|_{L^{2}})\\ &\leq(\|U\|_{Z}+\left\|V\right\|_{Z})^{2}\,.\end{split}\]
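The last inequality above uses the Cauchy-Schwarz inequality in \(\mathbb{R}^{2}\):
\[\|u_{1}\|_{a}\|u_{2}\|_{a}+\left\|v_{1}\right\|_{L^{2}}\left\|v_{2}\right\|_{L^{2}}\leq\bigl(\|u_{1}\|_{a}^{2}+\left\|v_{1}\right\|_{L^{2}}^{2}\bigr)^{1/2}\bigl(\|u_{2}\|_{a}^{2}+\left\|v_{2}\right\|_{L^{2}}^{2}\bigr)^{1/2}=\|U\|_{Z}\,\|V\|_{Z}.\]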
Finally, we prove that both norms are equivalent to \(\left\|\cdot\right\|_{X}\). Indeed, since \(\|\cdot\|_{a}\) and \(\left\|\cdot\right\|_{H^{1}}\) are equivalent in \(H^{1}_{\perp}\), there exist \(k_{0}\) and \(K_{0}\) two positive constants such that \(k_{0}\|u\|_{a}\leq\|u\|_{H^{1}}\leq K_{0}\|u\|_{a}\) for each \(u\in H^{1}_{\perp}\). Hence
\[k_{0}^{2}\|u\|_{a}^{2}+\|v\|_{L^{2}}^{2}\leq\left\|(u,v)^{\top}\right\|_{X}^{ 2}\leq K_{0}^{2}\|u\|_{a}^{2}+\left\|v\right\|_{L^{2}}^{2}.\]
By choosing \(k_{1}=\sqrt{\min\{1,k_{0}^{2}\}}\) and \(K_{1}=\sqrt{\max\{1,K_{0}^{2}\}}\)
\[k_{1}\|U\|_{Z}\leq\left\|U\right\|_{X}\leq K_{1}\|U\|_{Z},\qquad\text{for every $U=(u,v)^{\top}\in Z$.}\]
Thus, \(\|\cdot\|_{Z}\) and \(\left\|\cdot\right\|_{X}\) are equivalent in \(Z\). Since, clearly,
\[(\left\|u\right\|_{H^{1}}+\left\|v\right\|_{L^{2}})^{2}\leq 2(\left\|u\right\|_{H^{1}}^{2}+\left\|v\right\|_{L^{2}}^{2})\leq 2(\left\|u\right\|_{H^{1}}+\left\|v\right\|_{L^{2}})^{2},\]
taking the square root and using the equivalence between \(\|\cdot\|_{a}\) and \(\left\|\cdot\right\|_{H^{1}}\) one obtains
\[(k_{0}\|u\|_{a}+\left\|v\right\|_{L^{2}})\leq\sqrt{2}\left\|U\right\|_{X}\leq \sqrt{2}(K_{0}\|u\|_{a}+\left\|v\right\|_{L^{2}}).\]
Again, choosing \(k_{2}=\min\{1,k_{0}\}/\sqrt{2}\) and \(K_{2}=\max\{1,K_{0}\}\), we get
\[k_{2}\|U\|_{2}\leq\left\|U\right\|_{X}\leq K_{2}\|U\|_{2},\qquad\text{for every $U=(u,v)^{\top}\in Z$.}\]
**Remark 5.10**.: Note that \(\|\cdot\|_{Z}\) is induced by the inner product \(\langle\cdot,\cdot\rangle_{Z}:Z\times Z\to\mathbb{C}\) given by
\[\langle U,V\rangle_{Z}:=a[u,w]+\langle v\,,z\rangle_{L^{2}}\,,\quad\text{with $U=(u,v),\;V=(w,z)$.}\]
Henceforth, \(\langle\cdot,\cdot\rangle_{Z}\) is equivalent to \(\langle\cdot\,,\cdot\rangle_{X}\) in \(Z\).
**Lemma 5.11**.: _Let \(\langle\cdot,\cdot\rangle_{\tilde{X}}:X\times X\to\mathbb{C}\) be defined as_
\[\langle\bar{U},\bar{V}\rangle_{\tilde{X}}:=\langle U,V\rangle_{Z}+\left\langle \bar{U}\,,\beta\Theta\right\rangle_{X}+\left\langle\alpha\Theta\,,\bar{V} \right\rangle_{X}-\alpha\beta^{*}\left\|\Theta\right\|_{X}^{2},\]
_where \(\bar{U}=U+\alpha\Theta\) and \(\bar{V}=V+\beta\Theta\) for some \(U,V\in Z\) and \(\alpha,\beta\in\mathbb{C}\) (see Corollary 5.7). Then \(\langle\cdot,\cdot\rangle_{\tilde{X}}\) is an inner product in \(X\), equivalent to \(\langle\cdot\,,\cdot\rangle_{X}\)._
Proof.: First, we prove that \(\langle\cdot,\cdot\rangle_{\tilde{X}}:X\times X\to\mathbb{C}\) is an inner product. It is clearly a Hermitian sesquilinear form, because each of the four terms in its definition depends linearly on \(\bar{U}\) and antilinearly on \(\bar{V}\). In view of Corollary 5.7, if \(\bar{U}\in X\), then \(\bar{U}=U+\alpha\Theta\) for some \(U\in Z\) and \(\alpha\in\mathbb{C}\), which yields
\[\langle\bar{U},\bar{U}\rangle_{\tilde{X}}=\left\|U\right\|_{Z}^{2}+2\mathrm{ Re}\langle U\,,\alpha\Theta\rangle_{X}+\left\|\alpha\Theta\right\|_{X}^{2}.\]
Thus, by adding and subtracting \(\left\|U\right\|_{X}^{2}\), one gets
\[\langle\bar{U},\bar{U}\rangle_{\tilde{X}}=\left\|\bar{U}\right\|_{X}^{2}-\left\| U\right\|_{X}^{2}+\left\|U\right\|_{Z}^{2}\geq\left\|U\right\|_{Z}^{2}. \tag{5.14}\]
Last inequality holds since \(\left\|\bar{U}\right\|_{X}^{2}\geq\left\|U\right\|_{X}^{2}\) with equality if and only if \(\alpha=0\). Henceforth, \(\langle\bar{U},\bar{U}\rangle_{\tilde{X}}=0\) if and only if \(\bar{U}=0\).
Second, we prove that \(\left\langle\cdot,\cdot\right\rangle_{\tilde{X}}\) and \(\left\langle\cdot\,,\cdot\right\rangle_{X}\) are equivalent. Since \(\left\langle\cdot,\cdot\right\rangle_{Z}\) and \(\left\langle\cdot\,,\cdot\right\rangle_{X}\) are equivalent in \(Z\), there exist two positive constants \(k,K\) such that \(0<k\leq 1\leq K\) and \(k\left\|U\right\|_{X}\leq\|U\|_{Z}\leq K\left\|U\right\|_{X}\) (see the proof of Proposition 5.9). Applying this relation to the equality in (5.14), we obtain
\[\left(k^{2}-1\right)\left\|U\right\|_{X}^{2}+\left\|\bar{U}\right\|_{X}^{2} \leq\left\langle\bar{U},\bar{U}\right\rangle_{\tilde{X}}\leq\left(K^{2}-1 \right)\left\|U\right\|_{X}^{2}+\left\|\bar{U}\right\|_{X}^{2}\]
Since \(\left\|U\right\|_{X}\leq\left\|\bar{U}\right\|_{X}\), we conclude that
\[k^{2}\left\|\bar{U}\right\|_{X}^{2}\leq\left\langle\bar{U},\bar{U}\right\rangle _{\tilde{X}}\leq K^{2}\left\|\bar{U}\right\|_{X}^{2}.\]
and the proof is complete.
The following resolvent estimate is the key ingredient to apply Lumer-Phillips theorem. We use the appropriate choice of a metric in order to prove it.
**Lemma 5.12**.: _There exists \(\eta_{0}\in\mathbb{R}\) such that_
\[\operatorname{Re}\left\langle\mathcal{A}\bar{U}\,,\bar{U}\right\rangle_{X} \leq\eta_{0}\|\bar{U}\|_{X}^{2}\]
_for every \(\bar{U}\in D(\mathcal{A})\)._
Proof.: Note that if \(\bar{U}\in D(\mathcal{A})\subset X\), then \(\bar{U}=U+\alpha\Theta\) for some \(U\in Z\) and \(\alpha\in\mathbb{C}\), due to Corollary 5.7. Moreover, \(U=(u,v)\) with \(u\in H_{\perp}^{2}\) and \(v\in H^{1}\); also, by Lemma 5.6, \(v=w+\beta\partial_{x}\overline{\theta}\) for some \(w\in H_{\perp}^{1}\) and \(\beta\in\mathbb{C}\). Since \(\lambda=0\) is an eigenvalue of \(\mathcal{A}\) with eigenfunction \(\Theta\) (see Lemma 4.4), we have that
\[\mathcal{A}\bar{U}=\mathcal{A}U=V+\beta\Theta,\quad\text{and}\quad\ V:= \begin{pmatrix}w\\ -\nu v-\mathcal{L}\,u\end{pmatrix}\in Z.\]
Then,
\[\left\langle\mathcal{A}\bar{U},\bar{U}\right\rangle_{\tilde{X}}=\left\langle V\,,U\right\rangle_{Z}+\left\langle V\,,\alpha\Theta\right\rangle_{X}+\left\langle\beta\Theta\,,U\right\rangle_{X}+\left\langle\beta\Theta\,,\alpha\Theta\right\rangle_{X}.\]
In view of Remark 5.10 and (5.10), the term \(\langle V,U\rangle_{Z}\) is recast as
\[\left\langle V,U\right\rangle_{Z}=a[w,u]-\left\langle\mathcal{L}\,u\,,v\right \rangle_{L^{2}}-\nu\left\|v\right\|_{L^{2}}^{2}=2i\operatorname{Im}a[w,u]-\nu \left\|v\right\|_{L^{2}}^{2}\]
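The second equality above can be seen as follows: since \(v=w+\beta\partial_{x}\overline{\theta}\), and using the symmetry of \(\mathcal{L}\) (inherited from that of \(\mathcal{S}\), Lemma 3.9) together with \(\mathcal{L}\,\partial_{x}\overline{\theta}=0\) and relation (5.10),
\[\left\langle\mathcal{L}\,u\,,v\right\rangle_{L^{2}}=\left\langle\mathcal{L}\,u\,,w\right\rangle_{L^{2}}+\beta^{*}\left\langle u\,,\mathcal{L}\,\partial_{x}\overline{\theta}\right\rangle_{L^{2}}=a[u,w],\]
so that \(a[w,u]-\left\langle\mathcal{L}\,u\,,v\right\rangle_{L^{2}}=a[w,u]-a[w,u]^{*}=2i\operatorname{Im}a[w,u]\), by the hermiticity of \(a\).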
Upon substitution into the expression for \(\left\langle\mathcal{A}\bar{U},\bar{U}\right\rangle_{\tilde{X}}\), one gets
\[\left\langle\mathcal{A}\bar{U},\bar{U}\right\rangle_{\tilde{X}}=2i\operatorname{Im}a[w,u]-\nu\left\|v\right\|_{L^{2}}^{2}+\left\langle V\,,\alpha\Theta\right\rangle_{X}+\left\langle\beta\Theta\,,U\right\rangle_{X}+\left\langle\beta\Theta\,,\alpha\Theta\right\rangle_{X}.\]
Now, using the explicit form of \(\Theta\) and the fact that \(\left\langle w\,,\partial_{x}\overline{\theta}\right\rangle_{L^{2}}=0\) we obtain
\[\left\langle V\,,\alpha\Theta\right\rangle_{X}=\left\langle w\,,\alpha \partial_{x}\overline{\theta}\right\rangle_{H^{1}}=\left\langle\partial_{x}w \,,\alpha\partial_{x}^{2}\overline{\theta}\right\rangle_{L^{2}}=-\left\langle w \,,\alpha\partial_{x}^{3}\overline{\theta}\right\rangle_{L^{2}},\]
where the last equality follows upon integration by parts. Hence,
\[\left\langle\mathcal{A}\bar{U},\bar{U}\right\rangle_{\tilde{X}}=2i \operatorname{Im}a[w,u]-\nu\left\|v\right\|_{L^{2}}^{2}+\left\langle\beta \Theta\,,U\right\rangle_{X}+\left\langle\beta\Theta\,,\alpha\Theta\right\rangle _{X}-\left\langle w\,,\alpha\partial_{x}^{3}\overline{\theta}\right\rangle_{L^{2 }}. \tag{5.15}\]
Taking the real part of (5.15) and applying the Cauchy-Schwarz inequality yields
\[2\operatorname{Re}\left\langle\mathcal{A}\bar{U},\bar{U}\right\rangle_{ \tilde{X}}\leq-2\nu\left\|v\right\|_{L^{2}}^{2}+\left\|U\right\|_{X}^{2}+2 \left\|\beta\Theta\right\|_{X}^{2}+\left\|\alpha\Theta\right\|_{X}^{2}+\left\| \alpha\partial_{x}^{3}\overline{\theta}\right\|_{L^{2}}^{2}+\left\|w\right\|_ {L^{2}}^{2}.\]
Note that \(\left\|\partial_{x}^{3}\overline{\theta}\right\|_{L^{2}}<\infty\) and \(\left\|\partial_{x}\overline{\theta}\right\|_{L^{2}}\neq 0\) due to Proposition 2.1. Thereby, we may define the positive constants \(C_{1}:=\left\|\Theta\right\|_{X}^{2}/\left\|\partial_{x}\overline{\theta} \right\|_{L^{2}}^{2}\) and \(C_{2}:=\left\|\partial_{x}^{3}\overline{\theta}\right\|_{L^{2}}^{2}/\left\| \partial_{x}\overline{\theta}\right\|_{L^{2}}^{2}\)
depending only on \(\overline{\theta}\), so that
\[2\mathrm{Re}\left\langle\mathcal{A}\bar{U},\bar{U}\right\rangle_{ \tilde{X}}\leq -2\nu\left\|v\right\|_{L^{2}}^{2}+\left\|U\right\|_{X}^{2}+2C_{1} \left\|\beta\partial_{x}\overline{\theta}\right\|_{L^{2}}^{2}+\left\|\alpha \Theta\right\|_{X}^{2}+C_{2}\left\|\alpha\partial_{x}\overline{\theta}\right\| _{L^{2}}^{2}+\left\|w\right\|_{L^{2}}^{2}\] \[\leq -2\nu\left\|v\right\|_{L^{2}}^{2}+\left(2+C_{2}\right)\left\| \bar{U}\right\|_{X}^{2}+\left(1+2C_{1}\right)\left\|v\right\|_{L^{2}}^{2}\] \[\leq \left(3+2C_{1}+C_{2}\right)\left\|\bar{U}\right\|_{X}^{2}\]
The last two inequalities hold because \(\left\|\bar{U}\right\|_{X}\geq\left\|U\right\|_{X}\geq\left\|v\right\|_{L^{2}}\geq\max\{\left\|w\right\|_{L^{2}},\left\|\beta\partial_{x}\overline{\theta}\right\|_{L^{2}}\}\) and \(\left\|\bar{U}\right\|_{X}\geq\left\|\alpha\Theta\right\|_{X}\geq\left\|\alpha\partial_{x}\overline{\theta}\right\|_{L^{2}}\). Finally, the equivalence between \(\left\langle\cdot\,,\cdot\right\rangle_{X}\) and \(\left\langle\cdot,\cdot\right\rangle_{\tilde{X}}\) implies the existence of \(K>0\) such that
\[\mathrm{Re}\,\left\langle\bar{U},\mathcal{A}\bar{U}\right\rangle_{X}\leq\tfrac {1}{2}K(3+2C_{1}+C_{2})\|\bar{U}\|_{X}^{2},\]
yielding the result with \(\eta_{0}=\tfrac{1}{2}K(3+2C_{1}+C_{2})>0\).
**Lemma 5.13**.: _There exists \(\tau>\eta_{0}\) such that \(\mathcal{A}-\tau\) is onto._
Proof.: First we notice that, from the proof of Lemma 5.12, \(\eta_{0}>0\). In addition, we know that every \(\lambda>0\) belongs to \(\rho\left(\mathcal{A}\right)\) due to Theorem 4.11. Therefore, the proof is complete by choosing any \(\tau>\eta_{0}\).
As an immediate consequence of Lemmata 5.12 and 5.13, we are now able to apply the classical Lumer-Phillips theorem (see, e.g., Theorem 12.22, p. 407, in [10]) and to claim the following result.
**Lemma 5.14**.: _The operator \(\mathcal{A}:H^{1}\times L^{2}\to H^{1}\times L^{2}\) with \(D(\mathcal{A})=H^{2}\times H^{1}\) is the infinitesimal generator of a \(C_{0}\)-semigroup of quasicontractions \(\{e^{t\mathcal{A}}\}_{t\geq 0}\)._
**Corollary 5.15**.: _For each \(U\in D(\mathcal{A})=H^{2}\times H^{1}\) there holds_
\[\frac{d}{dt}\big{(}e^{t\mathcal{A}}U\big{)}=e^{t\mathcal{A}}\mathcal{A}U= \mathcal{A}(e^{t\mathcal{A}}U).\]
Proof.: Follows from Lemma 5.14 and basic properties of semigroups (cf. [11, 12]).
We now observe that on a reflexive Banach space, weak and weak\({}^{*}\) topologies coincide, and therefore the family of dual operators \(\{(e^{t\mathcal{A}})^{*}\}_{t\geq 0}\), consisting of all the formal adjoints in \(L^{2}\) is a \(C_{0}\)-semigroup as well (cf. [11], p. 44). Moreover, the infinitesimal generator of this semigroup is simply \(\mathcal{A}^{*}\) (see Corollary 10.6 in [12]), so we denote \((e^{t\mathcal{A}})^{*}=e^{t\mathcal{A}^{*}}\). By semigroup properties we readily have
\[e^{t\mathcal{A}}\Theta=\Theta,\qquad\text{and,}\qquad e^{t\mathcal{A}^{*}} \Phi=\Phi.\]
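Indeed, since \(\Theta\in D(\mathcal{A})\) with \(\mathcal{A}\Theta=0\) and \(\Phi\in D(\mathcal{A}^{*})\) with \(\mathcal{A}^{*}\Phi=0\), both orbits are constant in time:
\[\frac{d}{dt}\bigl(e^{t\mathcal{A}}\Theta\bigr)=e^{t\mathcal{A}}\mathcal{A}\Theta=0,\qquad\frac{d}{dt}\bigl(e^{t\mathcal{A}^{*}}\Phi\bigr)=e^{t\mathcal{A}^{*}}\mathcal{A}^{*}\Phi=0,\qquad t\geq 0.\]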
As a result of these identities and of the definition of the projector, we have
**Lemma 5.16**.: _For all \(t\geq 0\) there holds \(e^{t\mathcal{A}}\mathcal{P}=\mathcal{P}e^{t\mathcal{A}}\)._
Proof.: Let \(U\in H^{2}\times H^{1}\); then
\[\mathcal{P}e^{t\mathcal{A}}U=e^{t\mathcal{A}}U-\Xi^{-1}\left\langle e ^{t\mathcal{A}}U\,,\Phi\right\rangle_{X}\Theta =e^{t\mathcal{A}}U-\Xi^{-1}\left\langle U\,,e^{t\mathcal{A}^{*}} \Phi\right\rangle_{X}\Theta\] \[=e^{t\mathcal{A}}U-\Xi^{-1}\left\langle U\,,\Phi\right\rangle_{X }e^{t\mathcal{A}}\Theta\] \[=e^{t\mathcal{A}}\mathcal{P}U,\]
as claimed.
The last result implies that \(X_{1}\) is an \(e^{t\mathcal{A}}\)-invariant closed (Hilbert) subspace of \(X=H^{1}\times L^{2}\). Hence, we define the domain
\[D_{1}:=\{U\in\mathcal{D}\cap X_{1}\,:\,\mathcal{A}U\in X_{1}\},\]
and the operator
\[\mathcal{A}_{1}:D_{1}\subset X_{1}\to X_{1},\]
\[\mathcal{A}_{1}U:=\mathcal{A}U,\qquad U\in D_{1},\]
as the restriction of \(\mathcal{A}\) on \(X_{1}\). Therefore, \(\mathcal{A}_{1}\) is a closed, densely defined operator on the Hilbert space \(X_{1}\). Moreover,
**Lemma 5.17**.: \(\lambda=0\) _is not in the spectrum of \(\mathcal{A}_{1}\)._
Proof.: It suffices to verify that \(\Theta\notin X_{1}\). By (5.5), we compute \(\mathcal{P}\Theta=\Theta-\Xi^{-1}\langle\Theta,\Phi\rangle_{X}\Theta=0\). Hence \(0\neq\Theta\in\ker\mathcal{P}\), and therefore the eigenfunction associated to the eigenvalue \(\lambda=0\) is not in \(\mathcal{R}(\mathcal{P})=X_{1}\).
In this fashion we project out \(\lambda=0\) from the spectrum. As a consequence of spectral stability (see Theorem 4.11 above), we obtain the following
**Corollary 5.18**.: \(\sigma(\mathcal{A}_{1})\) _is a strict subset of the stable complex plane,_
\[\sigma(\mathcal{A}_{1})\subset\{\lambda\in\mathbb{C}\,:\,\mathrm{Re}\,\lambda \leq-\zeta_{0}(\nu)<0\},\]
_and the spectral bound of \(\mathcal{A}_{1}\) is strictly negative, \(s(\mathcal{A}_{1})<0\)._
**Lemma 5.19**.: _The family of operators \(\{e^{t\mathcal{A}_{1}}\}_{t\geq 0}\), \(e^{t\mathcal{A}_{1}}:X_{1}\to X_{1}\), defined as_
\[e^{t\mathcal{A}_{1}}U:=e^{t\mathcal{A}}U,\quad U\in X_{1},\;t\geq 0,\]
_is a \(C_{0}\)-semigroup of quasicontractions in the Hilbert space \(X_{1}\) with infinitesimal generator \(\mathcal{A}_{1}\)._
Proof.: The semigroup properties are inherited from those of \(e^{t\mathcal{A}}\) in \(X=H^{1}\times L^{2}\). That \(\mathcal{A}_{1}\) is the infinitesimal generator follows from Corollary in Section 2.2 of [1], p. 61.
Finally, in order to prove that the semigroup is exponentially decaying, we rely on Gearhart-Pruss theorem and we need to show that
\[\sup_{\mathrm{Re}\,\lambda>0}\|(\lambda-\mathcal{A}_{1})^{-1}\|_{X_{1}\to X_{ 1}}<\infty.\]
This condition is satisfied if any solution \(U\) to the linear equation \((\lambda-\mathcal{A}_{1})U=F\) for \(F\in H^{1}\times L^{2}\) satisfies a resolvent estimate, \(\|U\|_{X}\leq C(\lambda)\left\|F\right\|_{X}\), in which the constant \(C(\lambda)\) remains bounded in \(\mathrm{Re}\,\lambda>0\). The next result goes in that direction.
**Lemma 5.20**.: _Let \(\lambda\in\rho\left(\mathcal{A}\right)\) and \(f,g,u,v\in L^{2}_{\perp}\) be such that \(F=(f,g)^{\top}\in X_{1}\), \(U=(u,v)^{\top}\in D_{1}\) and \((\lambda-\mathcal{A}_{1})U=F\). Then,_
\[\left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|\leq \|U\|_{2}\|F\|_{2} \tag{5.16}\]
_Moreover, if \(C_{0}\) and \(C_{1}\) are two fixed positive numbers, then there exists a constant \(K(C_{0},C_{1})>0\) such that \(\left\|U\right\|_{X}\leq K(C_{0},C_{1})\left\|F\right\|_{X}\) for all \(\lambda\) with \(\operatorname{Re}\lambda\geq 0\) such that either \(\mathrm{Re}\,\lambda>C_{0}\) or \(\left|\mathrm{Im}\,\lambda\right|>C_{1}\)._
Proof.: First, we write the vectorial equation as a system of linear equations,
\[\lambda u-v=f, \tag{5.17}\]
\[\mathcal{L}\,u+(\lambda+\nu)v=g. \tag{5.18}\]
Take the \(L^{2}\)- product on the left of (5.17) with \(\mathcal{L}\,u\), and the \(L^{2}\)-product on the right of (5.18) with \(v\). The result is
\[\lambda^{*}\left\langle\mathcal{L}\,u\,,u\right\rangle_{L^{2}}-\left\langle \mathcal{L}\,u\,,v\right\rangle_{L^{2}}=\left\langle\mathcal{L}\,u\,,f\right \rangle_{L^{2}},\quad\left\langle\mathcal{L}\,u\,,v\right\rangle_{L^{2}}+( \lambda+\nu)\left\|v\right\|_{L^{2}}^{2}=\left\langle g\,,v\right\rangle_{L^{2 }}.\]
Notice that \(u\in H^{2}_{\perp}\) and \(v,f\in H^{1}_{\perp}\). By Lemma 5.8, these equations can be written in terms of the sesquilinear form \(a[\cdot,\cdot]\) as
\[\lambda^{*}a[u,u]-a[u,v]=a[u,f],\]
\[a[u,v]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}=\left\langle g\,,v\right\rangle _{L^{2}}.\]
Then, the complex modulus of the sum of these equations satisfies
\[\left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|\leq \left|a[u,f]\right|+\left|\left\langle g\,,v\right\rangle_{L^{2}}\right|\]
Since \(a\) is a nonnegative, Hermitian, sesquilinear form, the Cauchy-Schwarz inequality remains valid for \(a\) in \(H^{1}_{\perp}\), just as it does for the inner product in \(L^{2}\). Hence,
\[\left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|\leq a ^{1/2}[u,u]a^{1/2}[f,f]+\left\|g\right\|_{L^{2}}\left\|v\right\|_{L^{2}}.\]
Also note that the right-hand side of last equation is bounded by
\[\left[\left\|v\right\|_{L^{2}}+a^{1/2}[u,u]\right]\left[\left\|g\right\|_{L^{ 2}}+a^{1/2}[f,f]\right]=\|U\|_{2}\|F\|_{2}.\]
Thus, inequality (5.16) follows.
Second, use (5.16) to get
\[\left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|\leq \left[a^{1/2}[u,u]+\left\|v\right\|_{L^{2}}\right]\|F\|_{2}.\]
Notice that \(\left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|=0\) if and only if \((u,v)=(0,0)\) because \(\operatorname{Re}\lambda\geq 0\) and \(\nu>0\). Hence, if \((u,v)\neq 0\), we have
\[a^{1/2}[u,u]+\left\|v\right\|_{L^{2}}\leq\frac{\left(a^{1/2}[u,u]+\left\|v \right\|_{L^{2}}\right)^{2}}{\left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v \right\|_{L^{2}}^{2}\right|}\|F\|_{2}.\]
If \(\operatorname{Re}\lambda>C_{0}>0\), for some \(C_{0}>0\), then
\[\frac{\left(\left\|v\right\|_{L^{2}}+a^{1/2}[u,u]\right)^{2}}{\left|\lambda^{ *}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|}\leq\frac{2}{ \operatorname{Re}\lambda}\ \frac{a[u,u]+\left\|v\right\|_{L^{2}}^{2}}{a[u,u]+\left\|v\right\|_{L^{2}}^ {2}+\frac{\nu}{\operatorname{Re}\lambda}\left\|v\right\|_{L^{2}}^{2}}\leq \frac{2}{C_{0}}.\]
Now, if \(|\operatorname{Im}\lambda|\geq C_{1}>0\) and \(\operatorname{Re}\lambda\geq 0\) then
\[\frac{\left(a^{1/2}[u,u]+\left\|v\right\|_{L^{2}}\right)^{2}}{\left|\lambda^{ *}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|}\leq 2\frac{a[u,u]+\left\|v \right\|_{L^{2}}^{2}}{\sqrt{C_{1}^{2}(a[u,u]-\left\|v\right\|_{L^{2}}^{2})^{2} +\nu^{2}\left\|v\right\|_{L^{2}}^{4}}}.\]
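In both cases, the lower bound for the modulus in the denominator follows by taking real and imaginary parts: since \(a[u,u]\geq 0\) is real, \(\operatorname{Re}\lambda\geq 0\) and \(\nu>0\),
\[\operatorname{Re}\bigl(\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\bigr)=\operatorname{Re}\lambda\,\bigl(a[u,u]+\left\|v\right\|_{L^{2}}^{2}\bigr)+\nu\left\|v\right\|_{L^{2}}^{2},\qquad\operatorname{Im}\bigl(\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\bigr)=\operatorname{Im}\lambda\,\bigl(\left\|v\right\|_{L^{2}}^{2}-a[u,u]\bigr),\]
together with the elementary bound \(\bigl(a^{1/2}[u,u]+\left\|v\right\|_{L^{2}}\bigr)^{2}\leq 2\bigl(a[u,u]+\left\|v\right\|_{L^{2}}^{2}\bigr)\).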
Let us write \(a[u,u]=r^{2}\sin^{2}t\) and \(\left\|v\right\|_{L^{2}}^{2}=r^{2}\cos^{2}t\) for some \(r>0\) and \(t\in[0,\pi/2]\). This change of variables implies that
\[\frac{\left(\left\|v\right\|_{L^{2}}+a^{1/2}[u,u]\right)^{2}}{ \left|\lambda^{*}a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|}\leq \frac{2}{\sqrt{C_{1}^{2}\cos^{2}2t+\nu^{2}\cos^{4}t}}\] \[\leq \frac{2}{\sqrt{C_{1}^{2}\cos^{2}2t+\frac{1}{4}\nu^{2}(1+\cos 2t)^{ 2}}}\] \[\leq \frac{2}{\sqrt{\left(C_{1}^{2}+\frac{1}{4}\nu^{2}\right)\cos^{2}2 t+\frac{1}{2}\nu^{2}\cos 2t+\frac{1}{4}\nu^{2}}}.\]
Let us denote,
\[h(t):=\left(C_{1}^{2}+\tfrac{1}{4}\nu^{2}\right)\cos^{2}2t+\tfrac{1}{2}\nu^{2 }\cos 2t+\tfrac{1}{4}\nu^{2},\qquad t\in[0,\pi/2].\]
This is a non-vanishing \(C^{1}\)-function with a global minimum at \(t_{c}\in(\pi/4,\pi/2)\), determined by the relation \(\cos 2t_{c}=-\nu^{2}/(4C_{1}^{2}+\nu^{2})\). Thus, a straightforward computation implies that
\[\frac{\left(\left\|v\right\|_{L^{2}}+a^{1/2}[u,u]\right)^{2}}{\left|\lambda^{* }a[u,u]+(\lambda+\nu)\left\|v\right\|_{L^{2}}^{2}\right|}\leq\frac{2}{\sqrt{h (t_{c})}}=\frac{2\sqrt{\nu^{2}+4C_{1}^{2}}}{\nu C_{1}}.\]
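The computation of \(h(t_{c})\) amounts to minimizing a quadratic: writing \(x=\cos 2t\in[-1,1]\) and \(q(x):=h(t)\),
\[q(x)=\Bigl(C_{1}^{2}+\tfrac{1}{4}\nu^{2}\Bigr)x^{2}+\tfrac{1}{2}\nu^{2}x+\tfrac{1}{4}\nu^{2},\qquad q^{\prime}(x_{c})=0\ \Longleftrightarrow\ x_{c}=-\frac{\nu^{2}}{4C_{1}^{2}+\nu^{2}},\qquad q(x_{c})=\frac{\nu^{2}}{4}-\frac{\nu^{4}/4}{4C_{1}^{2}+\nu^{2}}=\frac{\nu^{2}C_{1}^{2}}{4C_{1}^{2}+\nu^{2}},\]
which gives the value displayed above.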
Therefore, if \(K=2\max\{\sqrt{\nu^{2}+4C_{1}^{2}}/(\nu C_{1}),1/C_{0}\}\), we obtain
\[\|U\|_{2}\leq K\|F\|_{2}.\]
Finally, we conclude the existence of a constant \(K(C_{0},C_{1})>0\) such that \(\left\|U\right\|_{X}\leq K(C_{0},C_{1})\left\|F\right\|_{X}\), due to the equivalence between the norms \(\|\cdot\|_{2}\) and \(\left\|\cdot\right\|_{X}\); see Proposition 5.9. Thus, the second statement also holds. This completes the proof.
We are left to prove the following estimate.
**Lemma 5.21**.: \[\sup_{\operatorname{Re}\lambda>0}\|(\lambda-\mathcal{A}_{1})^{-1}\|_{X_{1} \to X_{1}}<\infty.\]
Proof.: Let \(\operatorname{Re}\lambda\geq 0\), so that \(\lambda\in\rho(\mathcal{A}_{1})\) by Corollary 5.18, and choose two positive numbers \(C_{0}\) and \(C_{1}\). Then, we split the set \(\{\lambda\in\mathbb{C}\ |\ \operatorname{Re}\lambda\geq 0\}\) into three disjoint sets, namely
\[\begin{array}{l}S_{0}=\{\lambda\in\mathbb{C}\ |\ 0\leq\operatorname{Re} \lambda\leq C_{0},\ |\operatorname{Im}(\lambda)\,|\leq C_{1}\},\\ S_{1}=\{\lambda\in\mathbb{C}\ |\ 0\leq\operatorname{Re}\lambda\leq C_{0},\ C_{1} <|\operatorname{Im}(\lambda)\,|\},\\ S_{2}=\{\lambda\in\mathbb{C}\ |\ C_{0}<\operatorname{Re}\lambda\}.\end{array}\]
In the rest of the proof, we will show that for every \(\bar{F}\in X_{1}\) the solution \(\bar{U}\in D_{1}\subset X_{1}\) to the equation \((\lambda-\mathcal{A}_{1})\bar{U}=\bar{F}\) is uniformly bounded for \(\lambda\in S_{k}\) with \(k=0,1\), or \(2\).
We analyze the behavior on \(S_{0}\). We claim that \(\lambda\mapsto\|(\lambda-\mathcal{A}_{1})^{-1}\|\) is a continuous mapping. Indeed, this follows from the continuity of the mapping \(\lambda\mapsto(\lambda-\mathcal{A}_{1})^{-1}\) and the reverse triangle inequality, since for every \(\lambda,\ \mu\in\rho(\mathcal{A}_{1})\) there holds
\[\big{|}\ \|(\lambda-\mathcal{A}_{1})^{-1}\|-\|(\mu-\mathcal{A}_{1})^{-1}\|\ \big{|}\leq\|(\lambda-\mathcal{A}_{1})^{-1}-(\mu-\mathcal{A}_{1})^{-1}\|.\]
Now, we observe that \(S_{0}\) is a compact subset contained in \(\rho(\mathcal{A}_{1})\), where the mapping \(\lambda\to\|(\lambda-\mathcal{A}_{1})^{-1}\|\) is continuous. Then, it follows that there exists \(K_{1}>0\) such that \(\|(\lambda-\mathcal{A}_{1})^{-1}\|\leq K_{1}\).
The analysis on \(S_{1}\) and \(S_{2}\) is as follows. Since \(H^{k}\subset L^{2}=H^{0}\) for \(k>0\), we write the entries in \(\bar{F}\) and \(\bar{U}\) as the sum of two terms, one in \(\operatorname{Span}\{\partial_{x}\overline{\theta}\}\) and the other in \(H^{k}_{\perp}\). More precisely, by Lemma 5.5 we know that there exist \(u\in H^{2}_{\perp}\), \(v,f\in H^{1}_{\perp}\), \(g\in L^{2}_{\perp}\) and \(\alpha,\gamma\in\mathbb{C}\) such that \(\bar{U}=(u,v)+\alpha(1,-\nu)\partial_{x}\overline{\theta}\) and \(\bar{F}=(f,g)+\gamma(1,-\nu)\partial_{x}\overline{\theta}\). The vectorial equation \((\lambda-\mathcal{A}_{1})\bar{U}=\bar{F}\) translates into three equations:
\[\lambda u-v=f,\qquad\mathcal{L}\,u+(\lambda+\nu)v=g,\qquad\text{and}\qquad \alpha(\lambda+\nu)=\gamma.\]
Now let \(U=(u,v)\) and \(F=(f,g)\). Since \(u,v,f\), and \(g\) satisfy the hypotheses of Lemma 5.20, we obtain
\[\left\|U\right\|_{X}\leq K(C_{0},C_{1})\left\|F\right\|_{X}.\]
Thus,
\[\left\|\bar{U}\right\|_{X}\leq\left\|U\right\|_{X}+\frac{\left\|\gamma(1,-\nu )\partial_{x}\overline{\theta}\right\|_{X}}{|\lambda+\nu|}\leq\left(K(C_{0},C _{1})+\frac{1}{|\lambda+\nu|}\right)\left\|\bar{F}\right\|_{X}.\]
Since \(|\lambda+\nu|\geq\operatorname{Re}\lambda+\nu\geq\nu\) on \(S_{1}\cup S_{2}\), the factor in parentheses is bounded by \(K(C_{0},C_{1})+1/\nu\). Hence, \((\lambda-\mathcal{A}_{1})^{-1}\) is uniformly bounded on \(S_{1}\cup S_{2}\) and the proof is complete.
Now from Lemma 5.21 and Corollary 5.18, we may apply Gearhart-Pruss theorem directly to conclude the following:
**Theorem 5.22**.: _There exists a uniform \(M\geq 1\) and \(\omega_{1}>0\) such that_
\[\|e^{t\mathcal{A}_{1}}U\|_{H^{1}\times L^{2}}\leq Me^{-\omega_{1}t}\|U\|_{H^{ 1}\times L^{2}}, \tag{5.19}\]
_for all \(t\geq 0\), \(U\in X_{1}\)._
## 6. Nonlinear (orbital) stability
In this section we study the stability of the solution \(\theta(x,t)\), if it exists, to the Cauchy problem (2.15),
\[\begin{split}\partial_{t}^{2}\theta+\nu\partial_{t}\theta+\nabla \mathcal{E}(\theta)=0,&\quad x\in\mathbb{R},\ t>0,\\ \theta(x,0)=u_{0}(x),&\quad x\in\mathbb{R},\\ \partial_{t}\theta(x,0)=v_{0}(x),&\quad x\in\mathbb{ R},\end{split} \tag{6.1}\]
when the initial conditions are close to the static Neel wall \(\overline{\theta}\). This problem can be rewritten as a nonlinear vector system of equations by setting \(\varphi=\partial_{t}\theta\). Hence, if \(W=(\theta,\varphi)\), \(W_{0}=(u_{0},v_{0})\), and \(F(W)=(\varphi,-\nu\varphi-\nabla\mathcal{E}(\theta))^{\top}\), we get
\[\begin{split}\partial_{t}W&=F(W),\qquad x\in \mathbb{R},\ t>0,\\ W(x,0)&=W_{0}(x),\qquad x\in\mathbb{R}.\end{split} \tag{6.2}\]
**Remark 6.1**.: It is known that the nonlinear term in (6.1) is invariant to translations in the spatial variable (see Lemma 2.6 in [10]). Thus, if \(\overline{\theta}\) denotes the phase of the static Neel wall, then \(\nabla\mathcal{E}(\overline{\theta}(\cdot+\delta))=0\) for every \(\delta\in\mathbb{R}\). This symmetry is inherited by equation (6.2). Indeed,
\[F(\phi(\delta))=0,\quad\text{for}\quad\ \phi(\delta)=(\overline{\theta}(\cdot+ \delta),0)^{\top}.\]
Hence, taking the derivative with respect to \(\delta\), we get \(DF(\phi(\delta))\phi^{\prime}(\delta)=0\). Therefore, zero is an eigenvalue of \(DF(\phi(\delta))\) with eigenfunction \(\phi^{\prime}(\delta)\), expressing, once again, translation invariance.
The linearized system around \(\phi(\delta)\) now reads,
\[\partial_{t}V=\mathcal{A}^{\delta}V \qquad x\in\mathbb{R},\ t>0, \tag{6.3}\] \[V(x,0)=V_{0}(x) x\in\mathbb{R},\]
where,
\[\mathcal{A}^{\delta}:=\begin{pmatrix}0&\mathrm{I}\\ -\mathcal{L}^{\delta}&-\nu\mathrm{I}\end{pmatrix},\qquad\text{and}\qquad \mathcal{L}^{\delta}\,u=\left.\frac{d}{d\epsilon}\nabla\mathcal{E}\left( \overline{\theta}(\cdot+\delta)+\epsilon u\right)\right|_{\epsilon=0}.\]
These operators are defined on the same base spaces as before: \(H^{1}\times L^{2}\) and \(L^{2}\), respectively. Notice that \(\mathcal{L}^{\delta}\) is similar to \(\mathcal{L}\), but the only difference lies on the dependence on \(\delta\) due to the translation in the argument of the Neel wall's phase. Then, the following identification is well justified,
\[\mathcal{A}^{0}:=\mathcal{A},\qquad\text{and}\qquad\mathcal{L}^{0}:=\mathcal{ L}\,.\]
Due to previous results, the system (6.3) for \(\delta=0\) has a unique solution in \(X_{1}\) given by the action of a \(C_{0}\)-semigroup, generated by \(\mathcal{A}_{1}\), on the initial condition \(V_{0}\in X_{1}\). It is not difficult to see that all the arguments before Section 6 are easily adapted to the case \(\delta\neq 0\), since the translation by \(\delta\) in the argument of \(\overline{\theta}\) can be interpreted as the action of the left translation operator \(T_{l}(\delta)\), which is an \(L^{2}\)-isometry belonging to the \(C_{0}\)-semigroup generated by \(\partial_{x}\) (see [10]). Therefore, since \(\overline{\theta}\in H^{1}\), there holds
\[\left\|\partial_{x}\overline{\theta}(x+\delta)\right\|_{L^{2}}=\left\|\partial _{x}T_{l}(\delta)\overline{\theta}(x)\right\|_{L^{2}}=\left\|T_{l}(\delta) \partial_{x}\overline{\theta}(x)\right\|_{L^{2}}=\left\|\partial_{x}\overline {\theta}(x)\right\|_{L^{2}},\]
which implies that the \(H^{1}\)-norm and the \(L^{2}\)-norm remain invariant. Thus, we must emphasize this \(\delta\)-dependence in all the terms that depend on the profile \(\overline{\theta}\). For example, we replace \(\overline{\theta}\) by \(\overline{\theta}_{\delta}=\overline{\theta}(\cdot+\delta)\), as well as the vector \(\Theta\), the projector \(\mathcal{P}\) and the space \(X_{1}\), which are replaced by \(\Theta(\delta)\), \(\mathcal{P}(\delta)\) and \(X_{1}(\delta)\), respectively. We represent these functions, operators and spaces for the case \(\delta\neq 0\) with the explicit dependence on this variable. It is important to point out that the spectral and growth bounds _do not change_, because they depend on the invariant \(H^{1}\) and \(L^{2}\) norms.
As a result, from the previous analysis we know that system (6.3) has a unique solution in \(X=H^{1}\times L^{2}\) given by the action of the \(C_{0}\)-semigroup \(\{e^{t\mathcal{A}^{\delta}}\}_{t\geq 0}\) on the initial condition \(V_{0}\in X\). Moreover, due to Theorem 5.22, there exist uniform constants \(M\geq 1\) and \(\tilde{\omega}>0\) such that
\[\|e^{t\mathcal{A}^{\delta}_{1}}V_{0}\|_{H^{1}\times L^{2}}\leq Me^{-\tilde{\omega}t}\|V_{0}\|_{H^{1}\times L^{2}},\qquad\text{for all }t\geq 0\text{ and }V_{0}\in X_{1}(\delta).\]
Notice that if \(\mathcal{P}(\delta)\) is the projector defined in (5.6) for \(\delta\neq 0\) and \(V_{0}\in H^{1}\times L^{2}\), then \(\mathcal{P}(\delta)V_{0}\in X_{1}(\delta)\) and the linear system (6.3) has at least one solution given by
\[V=e^{t\mathcal{A}^{\delta}}\mathcal{P}(\delta)V_{0}+(\mathrm{I}-\mathcal{P}( \delta))V_{0}, \tag{6.4}\]
since
\[\partial_{t}V= \partial_{t}\left[e^{t\mathcal{A}^{\delta}}\mathcal{P}(\delta)V_{ 0}+(\mathrm{I}-\mathcal{P}(\delta))V_{0}\right]\] \[= \mathcal{A}^{\delta}[e^{t\mathcal{A}^{\delta}}\mathcal{P}(\delta) V_{0}]\] \[= \mathcal{A}^{\delta}\left[e^{t\mathcal{A}^{\delta}}\mathcal{P}( \delta)V_{0}+(\mathrm{I}-\mathcal{P}(\delta))V_{0}\right]\] \[= \mathcal{A}^{\delta}V.\]
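The second equality in the computation above uses that the term \((\mathrm{I}-\mathcal{P}(\delta))V_{0}\) is annihilated by \(\mathcal{A}^{\delta}\): by the definition of the projector,
\[(\mathrm{I}-\mathcal{P}(\delta))V_{0}=\Xi^{-1}\left\langle V_{0}\,,\Phi(\delta)\right\rangle_{X}\Theta(\delta)\in\operatorname{Span}\{\Theta(\delta)\},\qquad\text{and}\qquad\mathcal{A}^{\delta}\,\Theta(\delta)=0,\]
where \(\Xi=\nu\left\|\partial_{x}\overline{\theta}\right\|_{L^{2}}^{2}\) is unchanged by the translation.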
Moreover, due to standard properties of \(C_{0}\)-semigroups, it follows that
\[\lim_{t\to 0}V=\lim_{t\to 0}\left[e^{t\mathcal{A}^{\delta}}\mathcal{P}(\delta)V_{0}+( \mathrm{I}-\mathcal{P}(\delta))V_{0}\right]=\mathcal{P}(\delta)V_{0}+(\mathrm{I }-\mathcal{P}(\delta))V_{0}=V_{0}.\]
In order to establish nonlinear stability we rely on an application of the implicit function theorem in Hilbert spaces given by Lattanzio _et al._[13] based on a similar result for Banach spaces presented by Sattinger [14]. We present this result here to ease the reading.
**Theorem 6.2**.: _Let \(X\) be a Hilbert space and \(I\subset\mathbb{R}\) be an open neighborhood of \(\delta=0\). Assume that \(F:\mathcal{D}\subset X\to X\) and \(\phi:I\subset\mathbb{R}\to\mathcal{D}\) satisfies \(F(\phi)=0\). If \(\mathcal{P}(\delta)\) is the projector onto \(\{\phi^{\prime}(\delta)\}_{X}^{\perp}\) and there exist positive constants \(C_{0},\delta_{0},M,\omega,\) and \(\gamma\) such that_
1. _for every solution_ \(V=V(t,V_{0},\delta)\) _to (_6.3_),_ \[\|\mathcal{P}(\delta)V(t,V_{0},\delta)\|_{X}\leq C_{0}e^{-\omega t}\|\mathcal{ P}(\delta)V_{0}\|_{X},\] (6.5)
2. \(\phi\) _is differentiable at_ \(\delta=0\) _with_ \[\|\phi(\delta)-\phi(0)-\phi^{\prime}(0)\delta\|_{X}\leq C_{0}|\delta|^{1+\gamma},\] (6.6) _for_ \(|\delta|<\delta_{0}\)_, and_
3. \(F\) _is differentiable at_ \(\phi(\delta)\) _for every_ \(\delta\in(-\delta_{0},\delta_{0})\) _with_ \[\|F(\phi(\delta)+W)-F(\phi(\delta))-DF(\phi(\delta))W\|_{X}\leq C_{0}\|W\|_{X} ^{1+\gamma},\] (6.7) _for_ \(|\delta|<\delta_{0}\) _and_ \(\|W\|_{X}\leq M\)_._
_Then there exists \(\epsilon>0\) such that for any \(W_{0}\in B_{\epsilon}(\phi(0))\subset X\) there exists \(\delta\in I\) and a positive constant C for which the solution \(W(t;W_{0})\) to the nonlinear system (6.2) satisfies_
\[\|W(t,W_{0})-\phi(\delta)\|_{X}\leq C\ \|W_{0}-\phi(0)\|_{X}\ e^{-\omega t}. \tag{6.8}\]
We proceed with the nonlinear stability result for the Neel wall's phase.
### Proof of Theorem 2.3
We begin the proof by setting \(X=H^{1}\times L^{2}\) and
\[\phi(\delta)=(T_{l}(\delta)\overline{\theta},0),\qquad F(W)=\begin{pmatrix} \varphi\\ -\nu\varphi-\nabla\mathcal{E}(\theta)\end{pmatrix},\qquad\mathcal{D}:=H^{2} \times H^{1}.\]
Due to Remark 6.1, we know that \(F(\phi(\delta))=0\) for every \(\delta\in\mathbb{R}\).
Now, let \(V_{0}\in\mathcal{D}\) be an initial condition such that \(V(t,V_{0},\delta)\) is a solution to the linear system (6.3). By setting \(\mathcal{P}(\delta)\) as the projector in Theorem 6.2, it follows that (6.5) is satisfied (see Theorem 5.22).
We turn our attention to the second hypothesis in Theorem 6.2. We know that \(\overline{\theta}\in H^{2}\) is a smooth real-valued function. Hence \(\phi\in H^{1}\times L^{2}\) and
\[\|\phi(\delta)-\phi(0)-\phi^{\prime}(0)\delta\|_{H^{1}\times L^{2}}=\|T_{l}( \delta)\overline{\theta}-\overline{\theta}-\partial_{x}\overline{\theta} \delta\|_{H^{1}}.\]
This term is easily estimated with the integral representation of the remainder for Taylor polynomials, yielding
\[|T_{l}(\delta)\overline{\theta}-\overline{\theta}-\partial_{x} \overline{\theta}\delta|^{2} =\delta^{4}\left|\int_{0}^{1}(1-t)\,\partial_{x}^{2}\overline{ \theta}(x+t\delta)\,dt\right|^{2}\] \[\leq\delta^{4}\int_{0}^{1}(1-t)^{2}\,\left(\partial_{x}^{2} \overline{\theta}(x+t\delta)\right)^{2}\,dt,\]
where the last inequality follows from Jensen's inequality. Now, integrating in \(x\) leads us to
\[\left\|T_{l}(\delta)\overline{\theta}-\overline{\theta}-\partial_{x}\overline{ \theta}\delta\right\|_{L^{2}}^{2}\leq\delta^{4}\int_{\mathbb{R}}\int_{0}^{1}(1 -t)^{2}\,\left(\partial_{x}^{2}\overline{\theta}(x+t\delta)\right)^{2}\,dt\,dx.\]
Since the integrand is not negative, we can interchange the order of integration. Also, by noticing that \(\partial_{x}\) is the generator of the left translation semigroup, we have
\[\partial_{x}^{2}\overline{\theta}(x+t\delta)=\partial_{x}^{2}T_{l}(t\delta) \overline{\theta}=T_{l}(t\delta)\partial_{x}^{2}\overline{\theta}.\]
Therefore,
\[\left\|T_{l}(\delta)\overline{\theta}-\overline{\theta}-\partial _{x}\overline{\theta}\delta\right\|_{L^{2}}^{2} \leq\delta^{4}\int_{0}^{1}(1-t)^{2}\int_{\mathbb{R}}\,\left( \partial_{x}^{2}\overline{\theta}(x+t\delta)\right)^{2}\,dx\,dt\] \[=\delta^{4}\int_{0}^{1}(1-t)^{2}\int_{\mathbb{R}}\,\left(T_{l}(t \delta)\partial_{x}^{2}\overline{\theta}\right)^{2}\,dx\,dt\] \[=\delta^{4}\left\|\partial_{x}^{2}\overline{\theta}\right\|_{L^ {2}}^{2}\int_{0}^{1}(1-t)^{2}\,dt,\]
where the last equality follows because \(T_{l}(t\delta)\) is an isometry in \(L^{2}\). A similar argument is applied to \(\partial_{x}\overline{\theta}\). Indeed,
\[\left\|T_{l}(\delta)\partial_{x}\overline{\theta}-\partial_{x} \overline{\theta}-\partial_{x}^{2}\overline{\theta}\delta\right\|_{L^{2}}^{2} \leq\delta^{4}\int_{0}^{1}(1-t)^{2}\,\int_{\mathbb{R}}\left( \partial_{x}^{3}\overline{\theta}(x+t\delta)\right)^{2}\,dx\,dt\] \[=\delta^{4}\int_{0}^{1}(1-t)^{2}\int_{\mathbb{R}}\,\left(T_{l}(t \delta)\partial_{x}^{3}\overline{\theta}\right)^{2}\,dx\,dt\] \[=\delta^{4}\left\|\partial_{x}^{3}\overline{\theta}\right\|_{L^{ 2}}^{2}\int_{0}^{1}(1-t)^{2}\,dt.\]
With the last two results, we conclude that
\[\left\|\phi(\delta)-\phi(0)-\phi^{\prime}(0)\delta\right\|_{X}\leq\frac{ \left\|\partial_{x}^{2}\overline{\theta}\right\|_{H^{1}}}{\sqrt{3}}\,\delta^{ 2}.\]
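In more detail, the preceding bound follows by adding the last two estimates, using \(\int_{0}^{1}(1-t)^{2}\,dt=\tfrac{1}{3}\) and \(\left\|\phi(\delta)-\phi(0)-\phi^{\prime}(0)\delta\right\|_{X}=\left\|T_{l}(\delta)\overline{\theta}-\overline{\theta}-\partial_{x}\overline{\theta}\,\delta\right\|_{H^{1}}\):
\[\left\|T_{l}(\delta)\overline{\theta}-\overline{\theta}-\partial_{x}\overline{\theta}\,\delta\right\|_{H^{1}}^{2}\leq\frac{\delta^{4}}{3}\Bigl(\left\|\partial_{x}^{2}\overline{\theta}\right\|_{L^{2}}^{2}+\left\|\partial_{x}^{3}\overline{\theta}\right\|_{L^{2}}^{2}\Bigr)=\frac{\delta^{4}}{3}\left\|\partial_{x}^{2}\overline{\theta}\right\|_{H^{1}}^{2}.\]
In particular, hypothesis (6.6) of Theorem 6.2 holds with \(\gamma=1\).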
Finally, we prove that (6.7) holds. If \(W=(w_{1},w_{2})^{\top}\in H^{2}\times H^{1}\), the expressions for \(F\) and \(\phi(\delta)\) imply that
\[F(\phi(\delta))=\begin{pmatrix}0\\ -\nabla\mathcal{E}\left(T_{l}(\delta)\overline{\theta}\right)\end{pmatrix}, \quad F(\phi(\delta)+W)=\begin{pmatrix}w_{2}\\ -\nu w_{2}-\nabla\mathcal{E}\left(w_{1}+T_{l}(\delta)\overline{\theta}\right) \end{pmatrix},\]
and
\[DF(\phi(\delta))W=\mathcal{A}^{\delta}W=\begin{pmatrix}w_{2}\\ -\nu w_{2}-\mathcal{L}^{\delta}w_{1}\end{pmatrix}.\]
In order to simplify the notation, we denote \(T_{l}(\delta)\overline{\theta}\) by \(\overline{\theta}_{\delta}\). Then, a substitution on the left hand side of (6.7) implies
\[\left\|F(\phi(\delta)+W)-F(\phi(\delta))-DF(\phi(\delta))W\right\|_{X}=\left\| \nabla\mathcal{E}(\overline{\theta}_{\delta}+w_{1})-\nabla\mathcal{E}( \overline{\theta}_{\delta})-\mathcal{L}^{\delta}\,w_{1}\right\|_{L^{2}}.\]
From Proposition 2.1 2 we have that
\[\nabla\mathcal{E}(\overline{\theta}_{\delta}+w_{1}) =-\partial_{x}^{2}\overline{\theta}_{\delta}-\partial_{x}^{2}w_{1 }-\sin(\overline{\theta}_{\delta}+w_{1})\,\left(1+(-\Delta)^{1/2}\right)\cos( \overline{\theta}_{\delta}+w_{1}),\] \[-\nabla\mathcal{E}(\overline{\theta}_{\delta}) =\partial_{x}^{2}\overline{\theta}_{\delta}+\sin\overline{\theta }_{\delta}\,\left(1+(-\Delta)^{1/2}\right)\cos\overline{\theta}_{\delta},\] \[-\mathcal{L}^{\delta}\,w_{1} =\partial_{x}^{2}w_{1}-\sin\overline{\theta}_{\delta}(1+(-\Delta)^ {1/2})\sin\overline{\theta}_{\delta}w_{1}+w_{1}\cos\overline{\theta}_{\delta}( 1+(-\Delta)^{1/2})\cos\overline{\theta}_{\delta}.\]
By letting \(\mathcal{K}:=\nabla\mathcal{E}(\overline{\theta}_{\delta}+w_{1})-\nabla\mathcal{E}( \overline{\theta}_{\delta})-\mathcal{L}^{\delta}\,w_{1}\), we have
\[\mathcal{K}= -\sin(\overline{\theta}_{\delta}+w_{1})\,\left(1+(-\Delta)^{1/2} \right)\cos(\overline{\theta}_{\delta}+w_{1})+\sin\overline{\theta}_{\delta} \,\left(1+(-\Delta)^{1/2}\right)\cos\overline{\theta}_{\delta}+\] \[-\sin\overline{\theta}_{\delta}(1+(-\Delta)^{1/2})\sin\overline{ \theta}_{\delta}w_{1}+w_{1}\cos\overline{\theta}_{\delta}(1+(-\Delta)^{1/2}) \cos\overline{\theta}_{\delta}.\]
Next, we rearrange the last expression by adding and subtracting the term \(\sin(\overline{\theta}_{\delta}+w_{1})\,\left[1+(-\Delta)^{1/2}\right](\cos( \overline{\theta}_{\delta})-w_{1}\sin(\overline{\theta}_{\delta}))\). Hence \(\mathcal{K}=A_{1}+A_{2}+A_{3}\) where
\[A_{1} :=-\sin(\overline{\theta}_{\delta}+w_{1})\,\left[1+(-\Delta)^{1 /2}\right]\left(\cos(\overline{\theta}_{\delta}+w_{1})-\cos\overline{\theta}_{ \delta}+w_{1}\sin\overline{\theta}_{\delta}\right),\] \[A_{2} :=(\sin(\overline{\theta}_{\delta}+w_{1})-\sin\overline{\theta}_{ \delta})[1+(-\Delta)^{1/2}](w_{1}\sin\overline{\theta}_{\delta}),\] \[A_{3} :=-(\sin(\overline{\theta}_{\delta}+w_{1})-\sin\overline{\theta}_{ \delta}-w_{1}\cos\overline{\theta}_{\delta})[1+(-\Delta)^{1/2}]\cos\overline{ \theta}_{\delta}.\]
From standard calculus, we know that
\[\sin(\overline{\theta}_{\delta}+w_{1})-\sin\overline{\theta}_{ \delta} =w_{1}\int_{0}^{1}\cos(\theta_{\delta}+\xi w_{1})d\xi,\] \[\cos(\overline{\theta}_{\delta}+w_{1})-\cos\overline{\theta}_{ \delta} =-w_{1}\int_{0}^{1}\sin(\theta_{\delta}+\xi w_{1})d\xi.\]
Then, by applying the same procedure, we achieve
\[\sin(\overline{\theta}_{\delta}+w_{1})-\sin\overline{\theta}_{ \delta}-w_{1}\cos\overline{\theta}_{\delta}= w_{1}\int_{0}^{1}\left[\cos(\theta_{\delta}+\xi w_{1})-\cos \overline{\theta}_{\delta}\right]\,d\xi\] \[= -w_{1}^{2}\int_{0}^{1}\int_{0}^{1}\xi\sin(\overline{\theta}_{ \delta}+\xi\eta w_{1})\,d\eta d\xi,\]
and
\[\cos(\overline{\theta}_{\delta}+w_{1})-\cos\overline{\theta}_{ \delta}+w_{1}\sin\overline{\theta}_{\delta}= -w_{1}\int_{0}^{1}\left[\sin(\theta_{\delta}+\xi w_{1})-\sin \overline{\theta}_{\delta}\right]\,d\xi\] \[= -w_{1}^{2}\int_{0}^{1}\int_{0}^{1}\xi\cos(\overline{\theta}_{ \delta}+\xi\eta w_{1})\,d\eta d\xi.\]
Therefore, we have that:
\[|\sin(\overline{\theta}_{\delta}+w_{1})-\sin\overline{\theta}_{\delta}| \leq|w_{1}|,\] \[|\sin(\overline{\theta}_{\delta}+w_{1})-\sin\overline{\theta}_{\delta}-w_{1}\cos\overline{\theta}_{\delta}| \leq\tfrac{1}{2}|w_{1}|^{2}, \tag{6.9}\] \[|\cos(\overline{\theta}_{\delta}+w_{1})-\cos\overline{\theta}_{\delta}+w_{1}\sin\overline{\theta}_{\delta}| \leq\tfrac{1}{2}|w_{1}|^{2}.\]
Notice that \(w_{1}\in L^{\infty}\), due to \(w_{1}\in H^{2}(\mathbb{R})\) and the Sobolev embedding theorems. This fact and Hölder's inequality imply that \(w_{1}^{2}\in L^{2}\), because \(\left\|w_{1}^{2}\right\|_{L^{2}}^{2}\leq\left\|w_{1}\right\|_{L^{2}}^{2}\left\|w_{1}\right\|_{L^{\infty}}^{2}\). Moreover, \(w_{1}^{2}\in H^{1}\) with \(\left\|w_{1}^{2}\right\|_{H^{1}}\leq 2\left\|w_{1}\right\|_{H^{1}}^{2}\) since
\[\left\|w_{1}^{2}\right\|_{H^{1}}^{2}= \left\|w_{1}^{2}\right\|_{L^{2}}^{2}+\left\|2w_{1}\partial_{x}w_{ 1}\right\|_{L^{2}}^{2}\] \[\leq \left\|w_{1}\right\|_{L^{\infty}}^{2}\left(\left\|w_{1}\right\|_{ L^{2}}^{2}+4\left\|\partial_{x}w_{1}\right\|_{L^{2}}^{2}\right)\] \[\leq 4\left\|w_{1}\right\|_{L^{\infty}}^{2}\left\|w_{1}\right\|_{H^{ 1}}^{2}\] \[\leq 4\left\|w_{1}\right\|_{H^{1}}^{4}.\]
This property allows us to easily estimate \(L^{2}\)-norm of \(A_{1}\), since
\[\left\|A_{1}\right\|_{L^{2}} \leq C\left\|\cos(\overline{\theta}_{\delta}+w_{1})-\cos\overline{ \theta}_{\delta}+w_{1}\sin\overline{\theta}_{\delta}\right\|_{H^{1}}\] \[\leq C\left\|\frac{w_{1}^{2}}{2}\right\|_{H^{1}}\] \[\leq C\left\|w_{1}\right\|_{H^{1}}^{2},\]
where the very first inequality followed since \(\left\|[1+(-\Delta)^{1/2}]u\right\|_{L^{2}}\leq C\left\|u\right\|_{H^{1}}\) for every \(u\in H^{1}\). Also the \(L^{2}\)-norm for the terms \(A_{2}\) and \(A_{3}\) can be bounded using (6.9),
\[\left\|A_{2}\right\|_{L^{2}}^{2} \leq \left\||w_{1}|[1+(-\Delta)^{1/2}](w_{1}\sin\overline{\theta}_{ \delta})\right\|_{L^{2}}^{2}\] \[\leq \left\|w_{1}\right\|_{L^{\infty}}^{2}\left\|[1+(-\Delta)^{1/2}]( w_{1}\sin\overline{\theta}_{\delta})\right\|_{L^{2}}^{2}\] \[\leq C^{2}\left\|w_{1}\right\|_{L^{\infty}}^{2}\left\|w_{1}\sin \overline{\theta}_{\delta}\right\|_{H^{1}}^{2},\]
\[\left\|A_{3}\right\|_{L^{2}}^{2} \leq \left\|\frac{\left|w_{1}\right|^{2}}{2}[1+(-\Delta)^{1/2}]\cos \overline{\theta}_{\delta}\right\|_{L^{2}}^{2}\] \[\leq \frac{1}{4}\left\|w_{1}\right\|_{L^{\infty}}^{4}\left\|[1+(- \Delta)^{1/2}]\cos\overline{\theta}_{\delta}\right\|_{L^{2}}^{2}\] \[\leq \frac{C^{2}}{4}\left\|w_{1}\right\|_{L^{\infty}}^{4}\left\|\cos \overline{\theta}_{\delta}\right\|_{H^{1}}^{2}.\]
Due to the Sobolev inequality \(\left\|w_{1}\right\|_{L^{\infty}}^{2}\leq 2\left\|w_{1}\right\|_{L^{2}}\left\| \partial_{x}w_{1}\right\|_{L^{2}}\), we have that \(\left\|w_{1}\right\|_{L^{\infty}}\leq\left\|w_{1}\right\|_{H^{1}}\). Also, we notice that
\[\left\|w_{1}\sin\overline{\theta}_{\delta}\right\|_{H^{1}}^{2}\leq\left\|w_{1 }\right\|_{L^{2}}^{2}+\left\|\partial_{x}w_{1}\right\|_{L^{2}}^{2}+\left\|w_{ 1}\partial_{x}\overline{\theta}_{\delta}\right\|_{L^{2}}^{2}\leq\left(1+ \left\|\partial_{x}\overline{\theta}_{\delta}\right\|_{L^{\infty}}^{2}\right) \left\|w_{1}\right\|_{H^{1}}^{2}.\]
Thus, we obtain
\[\left\|A_{2}\right\|_{L^{2}}\leq C\sqrt{1+\left\|\partial_{x}\overline{\theta} _{\delta}\right\|_{L^{\infty}}^{2}}\left\|w_{1}\right\|_{H^{1}}^{2},\quad \text{and}\quad\left\|A_{3}\right\|_{L^{2}}\leq\frac{C\left\|\cos\overline{ \theta}_{\delta}\right\|_{H^{1}}}{2}\left\|w_{1}\right\|_{H^{1}}^{2}.\]
Gluing together these three inequalities, we have that
\[\left\|\mathcal{K}\right\|_{L^{2}}\leq\left\|A_{1}\right\|_{L^{2}}+\left\|A_{2 }\right\|_{L^{2}}+\left\|A_{3}\right\|_{L^{2}}\leq\tilde{C}\left\|w_{1} \right\|_{H^{1}}^{2}\leq\tilde{C}\left\|W\right\|_{X}^{2}\]
and condition (6.7) of Theorem 6.2 is verified with \(\gamma=1\). The proof is complete.
## Acknowledgements
A. Capella and R. G. Plaza thank Professors Yuri Latushkin and Jaime Angulo Pava for enlightening conversations and useful suggestions during a workshop at the Casa Matematica Oaxaca (BIRS-CMO). The work of A. Capella and R. G. Plaza was partially supported by CONAHCyT, Mexico, grant CF-2023-G-122. The work of L. Morales was supported by CONAHCyT, Mexico, through the Program "Estancias Postdoctorales por Mexico 2022".
|
2309.14621 | Confidence Intervals for the F1 Score: A Comparison of Four Methods | In Natural Language Processing (NLP), binary classification algorithms are
often evaluated using the F1 score. Because the sample F1 score is an estimate
of the population F1 score, it is not sufficient to report the sample F1 score
without an indication of how accurate it is. Confidence intervals are an
indication of how accurate the sample F1 score is. However, most studies either
do not report them or report them using methods that demonstrate poor
statistical properties. In the present study, I review current analytical
methods (i.e., Clopper-Pearson method and Wald method) to construct confidence
intervals for the population F1 score, propose two new analytical methods
(i.e., Wilson direct method and Wilson indirect method) to do so, and compare
these methods based on their coverage probabilities and interval lengths, as
well as whether these methods suffer from overshoot and degeneracy. Theoretical
results demonstrate that both proposed methods do not suffer from overshoot and
degeneracy. Experimental results suggest that both proposed methods perform
better, as compared to current methods, in terms of coverage probabilities and
interval lengths. I illustrate both current and proposed methods on two
suggestion mining tasks. I discuss the practical implications of these results,
and suggest areas for future research. | Kevin Fu Yuan Lam, Vikneswaran Gopal, Jiang Qian | 2023-09-26T02:20:13Z | http://arxiv.org/abs/2309.14621v2 | # Confidence Intervals for the \(F_{1}\) Score:
###### Abstract
In Natural Language Processing (NLP), binary classification algorithms are often evaluated using the \(F_{1}\) score. Because the sample \(F_{1}\) score is an estimate of the population \(F_{1}\) score, it is not sufficient to report the sample \(F_{1}\) score without an indication of how accurate it is. Confidence intervals are an indication of how accurate the sample \(F_{1}\) score is. However, most studies either do not report them or report them using methods that demonstrate poor statistical properties. In the present study, I review current analytical methods (i.e., Clopper-Pearson method and Wald method) to construct confidence intervals for the population \(F_{1}\) score, propose two new analytical methods (i.e., Wilson direct method and Wilson indirect method) to do so, and compare these methods based on their coverage probabilities and interval lengths, as well as whether these methods suffer from overshoot and degeneracy. Theoretical results demonstrate that both proposed methods do not suffer from overshoot and degeneracy. Experimental results suggest that both proposed methods perform better, as compared to current methods, in terms of coverage probabilities and interval lengths. I illustrate both current and proposed methods on two suggestion mining tasks. I discuss the practical implications of these results, and suggest areas for future research.
Confidence Intervals, Delta Method, \(F_{1}\) Score, Natural Language Processing, Supervised Learning.
## I Introduction
### _Background_
Natural Language Processing (NLP) is a subfield of computer science which uses computational techniques to learn, understand and produce human language content [1]. In NLP, computational techniques are used to address problems which include, but are not limited to, supervised learning [2].
In supervised learning, all of the data are labelled: the problem is a regression problem if the label is quantitative; and it is a classification problem if the label is qualitative [3]. In a classification problem, the labels can include two (i.e., binary classification problem) or more (i.e., multi-class classification problem) categories [4].
Regardless of whether the problem is a regression problem or a classification problem, the data are often split into a training set, a validation set and a test set: the training set is used to train the model, the validation set is used to tune the hyperparameters in the model, and the test set is used to evaluate the model [4].
In a binary classification problem, some metrics used to evaluate the performance of the model are accuracy, precision, recall and the \(F_{1}\) score [4]. Although accuracy might seem to be a natural metric, it is seldom used in NLP because it does not perform well if the categories are imbalanced. Instead, precision, recall and the \(F_{1}\) score, a single metric that incorporates both precision and recall, are preferred [4, 5, 6]. In fact, the \(F_{1}\) score has been used to evaluate the most recent developments in NLP, including Large Language Models (LLMs), from Bidirectional Encoder Representations from Transformers (BERT) by Google [7] to Large Language Model Meta AI-2 (LLaMA-2) by Meta [8]. Given both its prevalence in and its relevance to NLP, the \(F_{1}\) score will be the focus of the present paper.
### _Problem_
Regardless of the metric that is used to evaluate the performance of a model on the test set, it is assumed that both observations in the test set and the observations in the future are drawn from the same distribution [9]. In other words, the test set is a sample from a population, and the metrics (e.g., accuracy, precision, recall, \(F_{1}\) score) are sample estimates of the corresponding population parameters. If so, then it is not sufficient to report the sample estimate of the population parameter without an indication of how accurate it is [10].
Confidence intervals are an indication of how accurate a sample estimate is [10]. However, most studies do not report confidence intervals for the population \(F_{1}\) score, both in NLP [11] and elsewhere [5, 12]. Even among the studies that do, the confidence intervals either have coverage probabilities that are far from the nominal confidence level, have long interval lengths, suffer from overshoot and degeneracy or are computationally intensive [11, 12, 13, 14, 15, 16].
### _Contribution_
Given the limitations of current methods to construct confidence intervals for the population \(F_{1}\) score, I propose two analytical methods to do so. In the process, I answer the following three research questions:
* **Research Question 1**: What are the current analytical methods to construct confidence intervals for the population \(F_{1}\) score?
* **Research Question 2**: What are proposed analytical methods (i.e., Wilson direct method and Wilson indirect method) to construct confidence intervals for the population \(F_{1}\) score?
* **Research Question 3**: How do the proposed analytical methods perform, as compared to the current analytical methods, to construct confidence intervals for the population \(F_{1}\) score?
### _Outline_
The outline of the present paper is as follows: In Section II, I review the literature to define the \(F_{1}\) score, to define the \(F^{*}\) score and to describe the relationship between the \(F_{1}\) score and the \(F^{*}\) score. In Section III, I review the literature to define the sample \(F_{1}\) score, to define the sample \(F^{*}\) score, to demonstrate that the sample \(F_{1}\) score is the maximum likelihood estimator of the population \(F_{1}\) score, and to derive the asymptotic distribution of the sample \(F_{1}\) score.
In Section IV, I review the literature to describe the current analytical methods to construct confidence intervals for the population \(F_{1}\) score, describe the proposed analytical methods to construct confidence intervals for the population \(F_{1}\) score, and prove that the proposed methods do not suffer from overshoot and degeneracy.
In Section V, I perform a simulation study to compare the confidence intervals constructed using the Clopper-Pearson method, the Wald method, the Wilson direct method and the Wilson indirect method, across the different simulation conditions, based on different evaluation criteria.
In Section VI, I illustrate the Clopper-Pearson method, the Wald method, the Wilson direct method and the Wilson indirect method, to construct confidence intervals for the population \(F_{1}\) score, to evaluate the performance of Bidirectional Encoder Representations from Transformers (BERT) on two suggestion mining tasks using a public dataset and a private dataset.
In Section VII, I discuss the theoretical and practical implications of the study, and suggest directions for future research. In Section VIII, I conclude the present study.
## II \(F_{1}\) Score
The \(F_{1}\) score is the weighted harmonic mean of precision and recall in which both precision and recall are given equal weights [4, 6]:
\[F_{1} = \left(\frac{\text{precision}^{-1}+\text{recall}^{-1}}{2}\right)^{-1} \tag{1}\]
In the literature, the \(F_{1}\) score is also known as Sorensen-Dice coefficient [17, 18]. In this section, I review the literature to define the \(F_{1}\) score, to define the \(F^{*}\) score and to describe the relationship between the \(F_{1}\) score and the \(F^{*}\) score.
### \(F_{1}\)_Score_
Flores _et al._[12] demonstrated that the \(F_{1}\) score can be stated in terms of either unconditional probabilities or conditional probabilities.
#### II-A1 Unconditional Probabilities
Let the unconditional probabilities \(\boldsymbol{p}=(p_{11},p_{10},p_{01},p_{00})\), where \(p_{11}\) is the proportion of true positives, \(p_{10}\) is the proportion of false positives, \(p_{01}\) is the proportion of false negatives and \(p_{00}\) is the proportion of true negatives, among all documents. Table 1 summarises the unconditional probabilities \(\boldsymbol{p}\) in a confusion matrix.
The unconditional probabilities \(\boldsymbol{p}\) can be used to obtain both precision and recall.
Precision, also known as the positive predictive value [19], is the proportion of true positives among all documents that are predicted positives [4]:
\[\text{precision}=\frac{p_{11}}{p_{11}+p_{10}} \tag{2}\]
Recall, also known as sensitivity [19], is the proportion of true positives among all documents that are actual positives [4]:
\[\text{recall}=\frac{p_{11}}{p_{11}+p_{01}} \tag{3}\]
Then the \(F_{1}\) score can be stated in terms of the unconditional probabilities \(\boldsymbol{p}\):
\[F_{1} = \left(\frac{\text{precision}^{-1}+\text{recall}^{-1}}{2}\right)^{-1} \tag{4}\] \[= \left(\frac{1}{2}\cdot\frac{p_{11}+p_{10}}{p_{11}}+\frac{1}{2} \cdot\frac{p_{11}+p_{01}}{p_{11}}\right)^{-1}\] (5) \[= \frac{2p_{11}}{2p_{11}+p_{10}+p_{01}} \tag{6}\]
#### II-A2 Conditional Probabilities
Let the conditional probabilities \(\boldsymbol{\pi}=(\pi_{11},\pi_{10},\pi_{01})\), where \(\pi_{11}=p_{11}/(p_{11}+p_{10}+p_{01})\) is the proportion of true positives, \(\pi_{10}=p_{10}/(p_{11}+p_{10}+p_{01})\) is the proportion of false positives, \(\pi_{01}=p_{01}/(p_{11}+p_{10}+p_{01})\) is the proportion of false negatives, among all relevant documents (i.e., all documents that are either actual positives or predicted positives). Table 2 summarises the conditional probabilities \(\boldsymbol{\pi}\) in a confusion matrix.
Then the \(F_{1}\) score can also be stated in terms of the conditional probabilities \(\boldsymbol{\pi}\):
\[F_{1} = \frac{2p_{11}}{2p_{11}+p_{10}+p_{01}} \tag{7}\] \[= \frac{2\pi_{11}(p_{11}+p_{10}+p_{01})}{(2\pi_{11}+\pi_{10}+\pi_{ 01})(p_{11}+p_{10}+p_{01})}\] (8) \[= \frac{2\pi_{11}}{1+\pi_{11}} \tag{9}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline Predicted\(\backslash\)Actual & Positive & Negative \\ \hline Positive & \(p_{11}\) & \(p_{10}\) \\ \hline Negative & \(p_{01}\) & \(p_{00}\) \\ \hline \end{tabular}
\end{table}
Table 1: Confusion Matrix For All Documents
\begin{table}
\begin{tabular}{|c|c|c|} \hline Predicted\(\backslash\)Actual & Positive & Negative \\ \hline Positive & \(\pi_{11}\) & \(\pi_{10}\) \\ \hline Negative & \(\pi_{01}\) & – \\ \hline \end{tabular}
\end{table}
Table 2: Confusion Matrix For All Relevant Documents
### \(F^{*}\) _Score_
Hand _et al._[20] termed \(\pi_{11}\) as the \(F^{*}\) score, also known as the Jaccard coefficient [21]. In other words, the \(F_{1}\) score can be stated in terms of the \(F^{*}\) score:
\[F_{1} = \frac{2F^{*}}{1+F^{*}} \tag{10}\]
### \(F_{1}\) _Score and \(F^{*}\) Score_
Hand _et al._[20] demonstrated that the \(F_{1}\) score is a monotonic function of the \(F^{*}\) score on the interval [0,1].
**Proposition 1.** The \(F_{1}\) score is a monotonic function of the \(F^{*}\) score on the interval [0,1].
**Proof.** From (10), the first derivative of the \(F_{1}\) with respect to \(F^{*}\) is as follows:
\[\frac{\partial F_{1}}{\partial F^{*}} = \frac{2}{1+F^{*}}-\frac{2F^{*}}{(1+F^{*})^{2}} \tag{11}\] \[= \frac{2}{(1+F^{*})^{2}} \tag{12}\]
By the First Derivative Test (e.g., [22, p. 250]), because \(\frac{\partial F_{1}}{\partial F^{*}}>0\) at each point \(F^{*}\in(0,1)\), \(F_{1}\) is increasing on [0,1]. Therefore, the \(F_{1}\) score is a monotonic function of the \(F^{*}\) score on the interval [0,1]. Figure 1 illustrates the relationship between the \(F_{1}\) score and the \(F^{*}\) score on the interval [0,1].
## III Sample \(F_{1}\) Score
In this section, I review the literature to define the sample \(F_{1}\) score, to define the sample \(F^{*}\) score, to demonstrate that the sample \(F_{1}\) score is the maximum likelihood estimator of the population \(F_{1}\) score, and to derive the asymptotic distribution of the sample \(F_{1}\) score.
### _Sample \(F_{1}\) Score_
Flores _et al._[12] demonstrated that the sample \(F_{1}\) score can be stated in terms of either unconditional probabilities or conditional probabilities.
#### III-A1 Unconditional Probabilities
Let \(\boldsymbol{n}=(n_{11},n_{10},n_{01},n_{00})\), where \(n_{11}\) is the number of true positives, \(n_{10}\) is the number of false positives, \(n_{01}\) is the number of false negatives, and \(n_{00}\) is the number of true negatives, among all \(n\) documents in a sample.
Then \(\boldsymbol{n}\) can be assumed to follow a multinomial distribution with parameters \(n\) and \(\boldsymbol{p}\)[12, 16, 5, 23]:
\[\textbf{{n}}\sim Multinomial(n;\textbf{{p}}) \tag{13}\]
Both \(n\) and \(\boldsymbol{n}\) can be used to obtain the unconditional probabilities \(\hat{\boldsymbol{p}}=(\hat{p}_{11},\hat{p}_{10},\hat{p}_{01},\hat{p}_{00})\), where \(\hat{p}_{11}=n_{11}/n\) is the proportion of true positives, \(\hat{p}_{10}=n_{10}/n\) is the proportion of false positives, \(\hat{p}_{01}=n_{01}/n\) is the proportion of false negatives and \(\hat{p}_{00}=n_{00}/n\) is the proportion of true negatives, among all \(n\) documents in the sample.
Then the sample \(F_{1}\) score can be stated in terms of the unconditional probabilities \(\hat{\textbf{{p}}}\):
\[\hat{F}_{1} = \frac{2\hat{p}_{11}}{2\hat{p}_{11}+\hat{p}_{10}+\hat{p}_{01}} \tag{14}\]
#### III-A2 Conditional Probabilities
Let \(\boldsymbol{\nu}=(n_{11},n_{10},n_{01})\), where \(n_{11}\) is the number of true positives, \(n_{10}\) is the number of false positives and \(n_{01}\) is the number of false negatives, among all \(\nu=n-n_{00}\) documents in a sample.
Then \(\nu\) can be assumed to follow a binomial distribution with parameters \(n\) and \(p_{11}+p_{10}+p_{01}\)[12]:
\[\nu\sim Binomial(n,p_{11}+p_{10}+p_{01}) \tag{15}\]
And \(n_{11}\) can be assumed to follow a binomial distribution with parameters \(\nu\) and \(F^{*}\), conditioned on the observed value of \(\nu\)[12]:
\[n_{11}\sim Binomial(\nu,F^{*}) \tag{16}\]
Both \(\nu\) and \(\boldsymbol{\nu}\) can be used to obtain the conditional probabilities \(\hat{\boldsymbol{\pi}}=(\hat{\pi}_{11},\hat{\pi}_{10},\hat{\pi}_{01})\), where \(\hat{\pi}_{11}=n_{11}/\nu\) is the proportion of true positives, \(\hat{\pi}_{10}=n_{10}/\nu\) is the proportion of false positives and \(\hat{\pi}_{01}=n_{01}/\nu\) is the proportion of false negatives, among all \(\nu\) relevant documents in the sample.
Then the sample \(F_{1}\) score can also be stated in terms of the conditional probabilities \(\hat{\boldsymbol{\pi}}\):
\[\hat{F}_{1} = \frac{2\hat{\pi}_{11}}{1+\hat{\pi}_{11}} \tag{17}\]
### _Sample \(F^{*}\) Score_
If \(\hat{\pi}_{11}\) is termed the sample \(F^{*}\) score, then the sample \(F_{1}\) score can also be stated in terms of the sample \(F^{*}\) score:
\[\hat{F}_{1} = \frac{2\hat{F}^{*}}{1+\hat{F}^{*}} \tag{18}\]
Figure 1: \(F_{1}\) is a monotonic function of \(F^{*}\) on the interval [0,1].
### _Maximum Likelihood Estimation of the Population \(F_{1}\) Score_
Regardless of whether the sample \(F_{1}\) score is stated in terms of the unconditional probabilities \(\hat{\mathbf{p}}\) or the conditional probabilities \(\hat{\mathbf{\pi}}\) (i.e., the sample \(F^{*}\) score), the sample \(F_{1}\) score is the maximum likelihood estimator of the population \(F_{1}\) score.
**Proposition 2.** The sample \(F_{1}\) score is the maximum likelihood estimator of the population \(F_{1}\) score.
**Proof.** By the invariance property of maximum likelihood estimators, (e.g., [24, p. 320]), because the sample proportion (i.e., \(\hat{F}^{*}\)) is the maximum likelihood estimator of the population proportion (i.e., \(F^{*}\)) (e.g., [24, p. 318]), and \(F_{1}\) is a one-to-one function of \(F^{*}\) on the interval [0,1] (Proposition 1), the sample \(F_{1}\) score is also the maximum likelihood estimator of the population \(F_{1}\) score.
### _Asymptotic Distribution of the Sample \(F_{1}\) Score_
Flores _et al._[12] used the Delta Method to derive the asymptotic distribution of the sample \(F_{1}\) score.
**Proposition 3.** If \(\nu\) is sufficiently large, then \((\hat{F}_{1}-F_{1})/\sigma_{\hat{F}_{1}}\) has approximately the standard normal distribution.
**Proof.** By the Central Limit Theorem (e.g., [25, p. 107]), or the De Moivre-Laplace Theorem (e.g., [25, p. 108]), \((\hat{F}^{*}-F^{*})/\sigma_{\hat{F}^{*}}\) converges in distribution to the standard normal distribution:
\[\frac{(\hat{F}^{*}-F^{*})}{\sigma_{\hat{F}^{*}}}\xrightarrow{D}N(0,1) \tag{19}\]
where \(\sigma_{\hat{F}^{*}}^{2}\) is the variance of \(\hat{F}^{*}\):
\[\sigma_{\hat{F}^{*}}^{2} = \frac{F^{*}(1-F^{*})}{\nu} \tag{20}\]
By the Delta Method (e.g., [25, p. 131; 26, p. 637]), \((\hat{F}_{1}-F_{1})/\sigma_{\hat{F}_{1}}\) also converges in distribution to the standard normal distribution:
\[\frac{(\hat{F}_{1}-F_{1})}{\sigma_{\hat{F}_{1}}}\xrightarrow{D}N(0,1) \tag{21}\]
where \(\sigma_{\hat{F}_{1}}^{2}\) is the variance of \(\hat{F}_{1}\):
\[\sigma_{\hat{F}_{1}}^{2} = \left[\frac{\partial F_{1}}{\partial F^{*}}\right]^{2}\sigma_{\hat{F}^{*}}^{2} \tag{22}\] \[= \left[\frac{2}{(1+F^{*})^{2}}\right]^{2}\frac{F^{*}(1-F^{*})}{\nu} \tag{23}\] \[= \frac{4F^{*}(1-F^{*})}{\nu(1+F^{*})^{4}} \tag{24}\] \[= \frac{F_{1}(1-F_{1})(2-F_{1})^{2}}{2\nu} \tag{25}\]
Therefore, if \(\nu\) is sufficiently large, then \((\hat{F}_{1}-F_{1})/\sigma_{\hat{F}_{1}}\) has approximately the standard normal distribution.
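A minimal sketch of the resulting plug-in standard error, obtained by substituting the sample \(F_{1}\) score into the variance (25) (the counts used are hypothetical):

```
import math

def f1_standard_error(f1_hat, nu):
    """Plug-in standard error of the sample F1 score, from the variance in (25)."""
    return math.sqrt(f1_hat * (1 - f1_hat) * (2 - f1_hat) ** 2 / (2 * nu))

print(f1_standard_error(0.8, 131))   # about 0.030 for a hypothetical test set with 131 relevant documents
```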
## IV Confidence Intervals for the \(F_{1}\) Score
A \((1-\alpha)100\%\) confidence interval for a population parameter, where \(\alpha\) is the level of statistical significance (e.g., 0.05), is an interval constructed by a method which, if repeated samples from the population are taken and a confidence interval is constructed using the same method for each possible sample, includes the true population parameter in \((1-\alpha)100\%\) of those samples (e.g., [10]).
In this section, I review the literature to describe the current analytical methods to construct confidence intervals for the population \(F_{1}\) score (i.e., Clopper-Pearson method and Wald method), describe the proposed analytical methods to construct confidence intervals for the population \(F_{1}\) score (i.e., Wilson direct method and Wilson indirect method), and prove that the proposed methods do not suffer from both overshoot and degeneracy.
### _Clopper-Pearson Method_
The Clopper-Pearson method assumes that \(n_{11}\) has a binomial distribution with parameters \(\nu\) and \(F^{*}\) (16) and inverts the binomial test for the sample \(F^{*}\) score [14].
The endpoints of the confidence interval for the population \(F^{*}\) score are the solutions in \(F^{*}\) to the following equations:
\[P_{F^{*}}(n_{11}<\tilde{n}_{11})=1-\alpha/2 \tag{26}\]
and
\[P_{F^{*}}(n_{11}>\tilde{n}_{11})=1-\alpha/2 \tag{27}\]
where \(\tilde{n}_{11}\) is the realisation of \(n_{11}\).
In particular, the lower endpoint is the \(\alpha/2\) quantile of a beta distribution with parameters \(\tilde{n}_{11}\) and \(\nu-\tilde{n}_{11}+1\), and the upper endpoint is the \(1-\alpha/2\) quantile of a beta distribution with parameters \(\tilde{n}_{11}+1\) and \(\nu-\tilde{n}_{11}\) (e.g., [14]).
The endpoints of the confidence interval for the population \(F_{1}\) score are the abovementioned solutions' transformation via (10).
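A minimal sketch of this construction (scipy is assumed here; the paper does not prescribe any software). With the counts used later in the public-dataset illustration of Section VI (77 true positives, 44 false positives, 10 false negatives, so \(\nu=131\)), it reproduces, up to rounding, the interval reported there. The boundary cases \(n_{11}=0\) and \(n_{11}=\nu\) follow the usual Clopper-Pearson convention.

```
from scipy.stats import beta

def clopper_pearson_f1(n11, nu, alpha=0.05):
    """Clopper-Pearson interval for F*, mapped to F1 through F1 = 2F*/(1+F*)."""
    lo = 0.0 if n11 == 0 else beta.ppf(alpha / 2, n11, nu - n11 + 1)
    hi = 1.0 if n11 == nu else beta.ppf(1 - alpha / 2, n11 + 1, nu - n11)
    return 2 * lo / (1 + lo), 2 * hi / (1 + hi)

print(clopper_pearson_f1(77, 131))   # close to the (0.665, 0.805) reported in Section VI
```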
### _Wald Method_
The Wald method assumes that \((\hat{F}_{1}-F_{1})/\sigma_{\hat{F}_{1}}\) has approximately a standard normal distribution (Proposition 3) and inverts the Wald test for the sample \(F_{1}\) score [14]:
\[\left|\frac{\hat{F}_{1}-F_{1}}{\hat{\sigma}_{\hat{F}_{1}}}\right|<z_{\alpha/2} \tag{28}\]
where \(z_{\alpha/2}=\Phi^{-1}(1-\alpha/2)\), \(\Phi(z)\) is the standard normal distribution function, and \(\hat{\sigma}_{\hat{F}_{1}}^{2}\) is the estimated variance of the sample \(F_{1}\) score using the maximum likelihood estimator of the population \(F_{1}\) score (Proposition 2):
\[\hat{\sigma}_{\hat{F}_{1}}^{2} = \frac{\hat{F}_{1}(1-\hat{F}_{1})(2-\hat{F}_{1})^{2}}{2\nu} \tag{29}\]
The endpoints of the confidence interval for the population \(F_{1}\) score using the Wald method are the solutions in \(F_{1}\) to the
following equations:
\[F_{1} = \hat{F}_{1}-z_{\alpha/2}\times\hat{\sigma}_{\hat{F}_{1}} \tag{30}\]
and
\[F_{1} = \hat{F}_{1}+z_{\alpha/2}\times\hat{\sigma}_{\hat{F}_{1}} \tag{31}\]
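The corresponding sketch for the Wald interval, using the same counts as above:

```
from scipy.stats import norm

def wald_f1(tp, fp, fn, alpha=0.05):
    """Wald interval for F1, equations (29)-(31); can overshoot [0, 1] or degenerate for small samples."""
    nu = tp + fp + fn
    f1 = 2 * tp / (2 * tp + fp + fn)
    se = (f1 * (1 - f1) * (2 - f1) ** 2 / (2 * nu)) ** 0.5
    z = norm.ppf(1 - alpha / 2)
    return f1 - z * se, f1 + z * se

print(wald_f1(77, 44, 10))   # roughly (0.674, 0.807)
```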
### _Wilson Direct Method_
The Wilson direct method also assumes that \((\hat{F}_{1}-F_{1})/\sigma_{\hat{F}_{1}}\) has approximately a standard normal distribution (Proposition 3). However, it uses the null variance, as provided in (25), instead of the estimated variance, as provided in (29), when inverting the score test for the sample \(F_{1}\) score [14]:
\[\left|\frac{\hat{F}_{1}-F_{1}}{\sigma_{\hat{F}_{1}}}\right|<z_{\alpha/2} \tag{32}\]
The endpoints of the confidence interval for the population \(F_{1}\) score are the real (i.e., not imaginary) solutions in \(F_{1}\) to the following quartic equation, most conveniently solved by iteration (e.g., [27]):
\[\left(\frac{\hat{F}_{1}-F_{1}}{\sigma_{\hat{F}_{1}}}\right)^{2} = z_{\alpha/2}^{2} \tag{33}\] \[\left(\hat{F}_{1}-F_{1}\right)^{2} = z_{\alpha/2}^{2}\frac{F_{1}(1-F_{1})(2-F_{1})^{2}}{2\nu} \tag{34}\]
If \(k=z_{\alpha/2}^{2}/\nu\), then
\[2F_{1}^{2}-4\hat{F}_{1}F_{1}+2\hat{F}_{1}^{2}=k(F_{1}-F_{1}^{2})(4-4F_{1}+F_{ 1}^{2}) \tag{35}\]
\[2F_{1}^{2}-4\hat{F}_{1}F_{1}+2\hat{F}_{1}^{2}=k(4F_{1}-8F_{1}^{2} +5F_{1}^{3}-F_{1}^{4}) \tag{36}\]
\[kF_{1}^{4}-5kF_{1}^{3}+2(4k+1)F_{1}^{2}-4(k+\hat{F}_{1})F_{1}+2\hat{F}_{1}^{2}=0 \tag{37}\]
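Equation (37) can be solved by iteration, as noted above; an alternative minimal sketch simply asks a polynomial root finder for the two real roots in [0,1] (numpy and scipy are assumed, and the counts are again those of the public-dataset illustration):

```
import numpy as np
from scipy.stats import norm

def wilson_direct_f1(tp, fp, fn, alpha=0.05):
    """Wilson 'direct' interval for F1: the two real roots of the quartic (37) in [0, 1]."""
    nu = tp + fp + fn
    f1_hat = 2 * tp / (2 * tp + fp + fn)
    k = norm.ppf(1 - alpha / 2) ** 2 / nu           # Theorem 1 requires 0 < k < 16/11
    coeffs = [k, -5 * k, 2 * (4 * k + 1), -4 * (k + f1_hat), 2 * f1_hat ** 2]
    roots = np.roots(coeffs)
    real = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and -1e-9 <= r.real <= 1 + 1e-9)
    return real[0], real[-1]

print(wilson_direct_f1(77, 44, 10))   # roughly (0.664, 0.799)
```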
### _Wilson Indirect Method_
The Wilson indirect method assumes that \((\hat{F}^{*}-F^{*})/\sigma_{\hat{F}^{*}}\) has approximately the standard normal distribution (19) and inverts the score test for the sample \(F^{*}\) score:
\[\left|\frac{\hat{F}^{*}-F^{*}}{\sigma_{\hat{F}^{*}}}\right|<z_{\alpha/2} \tag{38}\]
where \(\sigma_{\hat{F}^{*}}^{2}\) is the null variance of \(\hat{F}^{*}\) as provided in (20).
The endpoints of the confidence interval for the population \(F^{*}\) score using the Wilson indirect method are the solutions in \(F^{*}\) to the following quadratic equation:
\[\left(\frac{\hat{F}^{*}-F^{*}}{\sigma_{\hat{F}^{*}}}\right)^{2} = z_{\alpha/2}^{2} \tag{39}\] \[\left(\hat{F}^{*}-F^{*}\right)^{2} = z_{\alpha/2}^{2}\frac{F^{*}(1-F^{*})}{\nu} \tag{40}\]
If \(k=z_{\alpha/2}^{2}/\nu\), then
\[\hat{F}^{*^{2}}-2\hat{F}^{*}F^{*}+{F^{*}}^{2}=kF^{*}-k{F^{*}}^{2} \tag{41}\] \[(1+k){F^{*}}^{2}-(2\hat{F}^{*}+k)F^{*}+\hat{F}^{*^{2}}=0 \tag{42}\]
The endpoints of the confidence interval for the population \(F_{1}\) score using the Wilson indirect method are the abovementioned solutions' transformation via (10).
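A minimal sketch of the Wilson indirect construction, solving the quadratic (42) in closed form and mapping the endpoints through (10), with the same counts as above:

```
from scipy.stats import norm

def wilson_indirect_f1(tp, fp, fn, alpha=0.05):
    """Wilson interval for F* (the roots of (42)), mapped to F1 through F1 = 2F*/(1+F*)."""
    nu = tp + fp + fn
    f_star = tp / nu
    k = norm.ppf(1 - alpha / 2) ** 2 / nu
    centre = (f_star + k / 2) / (1 + k)
    half = (k * (f_star * (1 - f_star) + k / 4)) ** 0.5 / (1 + k)
    lo, hi = centre - half, centre + half
    return 2 * lo / (1 + lo), 2 * hi / (1 + hi)

print(wilson_indirect_f1(77, 44, 10))   # roughly (0.669, 0.801)
```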
### _Overshoot and Degeneracy_
Because confidence intervals for the population \(F_{1}\) score constructed using the Wald method produce intervals centred on the point estimate, these intervals suffer from both overshoot, in which either the upper limit is greater than 1 or the lower limit is less than 0, and degeneracy, in which the confidence interval has zero width [5, 12, 15]. And because confidence intervals for the population \(F^{*}\) score, constructed using either the Clopper-Pearson method or the Wilson score method, do not suffer from overshoot and degeneracy [15], those for the population \(F_{1}\) score, constructed using either the Clopper-Pearson method or the Wilson indirect method, obtained from a strictly monotonic transformation via (10), also do not suffer from overshoot and degeneracy.
However, the properties of confidence intervals constructed using the Wilson direct method are less apparent and have not been studied in the literature, to the best of my knowledge. In the remainder of the section, I prove that confidence intervals for the population \(F_{1}\) score constructed using the Wilson direct method also do not suffer from overshoot and degeneracy, because the defining quartic (37) has exactly 2 distinct real roots in the interval [0,1] under almost all conditions encountered in practice (e.g., at least 3 relevant observations in the test set when the level of statistical significance is set at 0.05).
In the proof, let \(f(F_{1})\) be the quartic polynomial in the left-hand-side of (37). Then \(f(F_{1})\) can also expressed as follows, from (34), in order to facilitate the proof:
\[f(F_{1})=kF_{1}^{4}-5kF_{1}^{3}+2(4k+1)F_{1}^{2}\\ -4(k+\hat{F}_{1})F_{1}+2\hat{F}_{1}^{2} \tag{43}\]
\[=kF_{1}(F_{1}-1)(F_{1}-2)^{2}+2(F_{1}-\hat{F}_{1})^{2} \tag{44}\]
The first derivative is as follows:
\[f^{\prime}(F_{1})=\frac{\partial}{\partial F_{1}}f(F_{1}) \tag{45}\]
\[=4kF_{1}^{3}-15kF_{1}^{2}+16kF_{1}+4F_{1}-4(k+\hat{F}_{1}) \tag{46}\]
\[=4kF_{1}^{3}-15kF_{1}^{2}+16kF_{1}+4F_{1}-4k-4\hat{F}_{1} \tag{47}\]
\[=4kF_{1}^{3}-15kF_{1}^{2}+16kF_{1}-4k+4(F_{1}-\hat{F}_{1}) \tag{48}\]
\[=k(4F_{1}^{3}-15F_{1}^{2}+16F_{1}-4)+4(F_{1}-\hat{F}_{1}) \tag{49}\]
\[=k(F_{1}-2)(4F_{1}^{2}-7F_{1}+2)+4(F_{1}-\hat{F}_{1}) \tag{50}\]
And the second derivative is as follows:
\[f^{\prime\prime}(F_{1}) =\frac{\partial^{2}}{\partial F_{1}^{2}}f(F_{1}) \tag{51}\] \[=12kF_{1}^{2}-30kF_{1}+16k+4 \tag{52}\]
**Lemma 1.**\(f(F_{1})\) has at least 2 distinct real roots in the interval [0,1] if \(k>0\).
**Proof.** First, suppose that \(\hat{F}_{1}=0\). Then \(F_{1}=0\) is a root. By the Intermediate Value Theorem (e.g., [22, p. 127]), because \(f^{\prime}(0)<0\) (i.e., \(f(0+h)<0\) for some small increment \(h\)) if \(k>0\), and \(f(1)>0\), \(f(F_{1})\) has a real root in the interval (0,1) if \(k>0\).
Second, suppose that \(0<\hat{F}_{1}<1\). By the Intermediate Value Theorem, because \(f(\hat{F}_{1})<0\) if \(k>0\), and \(f(0)>0\), \(f(F_{1})\) has a root in the interval (0, \(\hat{F}_{1}\)) if \(k>0\). And because \(f(1)>0\), \(f(F_{1})\) also has a root in the interval (\(\hat{F}_{1}\), 1) if \(k>0\).
Last, suppose that \(\hat{F}_{1}=1\). Then \(F_{1}=1\) is a root. By the Intermediate Value Theorem, because \(f^{\prime}(1)>0\) (i.e., \(f(1-h)<0\) for some small decrement \(h\)) if \(k>0\), and \(f(0)>0\), \(f(F_{1})\) has a real root in the interval (0,1) if \(k>0\).
Therefore \(f(F_{1})\) has at least 2 distinct real roots in the interval [0,1] if \(k>0\).
**Lemma 2.**\(f(F_{1})\) has at most 2 distinct real roots if \(0<k<16/11\).
**Proof.** If \(f(F_{1})\) is concave up, then it has at most 2 distinct real roots. By the Second Derivative Test for Concavity (e.g., [22, p. 256]), \(f(F_{1})\) is concave up if \(f^{\prime\prime}(F_{1})>0\). And \(f^{\prime\prime}(F_{1})>0\) if it is also concave up (i.e., \(12k>0\)) and its discriminant is strictly negative:
\[(-30k)^{2}-4(12k)(16k+4) <0 \tag{53}\] \[132k^{2}-192k <0 \tag{54}\]
Which implies that
\[0<k<16/11 \tag{55}\]
Therefore, \(f(F_{1})\) has at most 2 distinct real roots if \(0<k<16/11\).
**Theorem 1.**\(f(F_{1})\) has exactly 2 distinct roots in the interval [0,1] if \(0<k<16/11\).
**Proof.** The theorem follows from Lemma 1 and Lemma 2.
Therefore, the confidence intervals for the population \(F_{1}\) score constructed using the Wilson direct method do not suffer from overshoot and degeneracy if \(\nu>(11/16)\times z_{\alpha/2}^{2}\) and \(z_{\alpha/2}^{2}>0\). For example, if the level of statistical significance is 0.05, then the number of relevant observations in the test set should be at least 3.
## V Simulation Study
In this section, I perform a simulation study to compare the 95% confidence intervals for the population \(F_{1}\) score constructed using the Clopper-Pearson method, the Wald method, the Wilson direct method and the Wilson indirect method, across the different simulation conditions, based on different evaluation criteria. In particular, I describe the simulation conditions, the evaluation criteria and the study results.
### _Simulation Conditions_
The simulation conditions were adapted from Takahashi _et al._[5]. In particular, 18 simulation conditions were obtained from crossing six \(n\) (i.e., 25, 50, 100, 500, 1000, 5000) against three scenarios:
* **Scenario 1:** The positive class has moderate prevalence (50%), high precision (80%) and high recall (80%). Therefore, \(p_{11}=0.4\), \(p_{10}=0.1\), \(p_{01}=0.1\), \(p_{00}=0.4\) and \(F_{1}=0.8\).
* **Scenario 2:** The positive class has high prevalence (80%), high precision (80%) and high recall (80%). Therefore, \(p_{11}=0.64\), \(p_{10}=0.16\), \(p_{01}=0.16\), \(p_{00}=0.04\) and \(F_{1}=0.8\).
* **Scenario 3:** The positive class has high prevalence (80%), high precision (80%) and low recall (20%). Therefore, \(p_{11}=0.16\), \(p_{10}=0.04\), \(p_{01}=0.64\), \(p_{00}=0.16\) and \(F_{1}=0.32\).
For each simulation condition, I generated \(10^{6}\) replicates. Each replicate was generated from a multinomial distribution with parameters corresponding to the \(n\), \(p_{11}\), \(p_{10}\), \(p_{01}\) and \(p_{00}\) for that condition. The generation of replicates from multinomial distributions has been applied in previous studies which examined the properties of confidence intervals for the population \(F_{1}\) score [5, 12].
### _Evaluation Criteria_
In each simulation condition, I evaluated each method based on their coverage probabilities, expected lengths, overshoot probabilities and degeneracy probabilities. The coverage probability is the proportion of times the confidence interval includes the true population parameter [15]. The expected length is the expected difference between the upper and lower limits of the confidence interval [14]. The overshoot probability is the proportion of times the confidence interval has either an upper limit that is greater than 1 or a lower limit that is less than 0 [15]. The degeneracy probability is the proportion of times the confidence interval has zero width (i.e., the difference between the upper and lower limits of the confidence interval is zero) [15]. It is desirable to have confidence intervals with coverage probabilities near the nominal confidence level, with small expected lengths, and that do not suffer from overshoot and degeneracy.
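A minimal sketch of this evaluation loop for a single condition and a single method (the Wald interval is used here purely for illustration, and the replicate count is reduced relative to the study):

```
import numpy as np
from scipy.stats import norm

# Scenario 1: p = (p11, p10, p01, p00) = (0.4, 0.1, 0.1, 0.4), so the population F1 score is 0.8.
rng = np.random.default_rng(0)
n, p, true_f1, alpha, reps = 100, [0.4, 0.1, 0.1, 0.4], 0.8, 0.05, 100_000
z = norm.ppf(1 - alpha / 2)

covered, lengths, evaluated = 0, [], 0
for n11, n10, n01, n00 in rng.multinomial(n, p, size=reps):
    nu = n11 + n10 + n01
    if nu == 0:                                              # no relevant documents: no interval
        continue
    evaluated += 1
    f1 = 2 * n11 / (2 * n11 + n10 + n01)
    se = (f1 * (1 - f1) * (2 - f1) ** 2 / (2 * nu)) ** 0.5   # Wald method, equation (29)
    lo, hi = f1 - z * se, f1 + z * se
    covered += (lo <= true_f1 <= hi)
    lengths.append(hi - lo)

print(covered / evaluated, sum(lengths) / evaluated)         # coverage probability, expected length
```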
### _Study Results_
#### V-C1 Coverage Probabilities
For small \(n\), across all scenarios, the Clopper-Pearson method demonstrated coverage probabilities that were far greater than the nominal confidence level, the Wald method demonstrated coverage probabilities that were far less than the nominal confidence level, and both the Wilson direct method and the Wilson indirect method demonstrated coverage probabilities that were near the nominal confidence level. For large \(n\), across all scenarios, all four methods demonstrated coverage probabilities that were near the nominal confidence level. Table III summarises the coverage probabilities for each method across all simulation conditions.
#### V-C2 Expected Lengths
For small \(n\), in Scenarios 1 and 2, the Wilson indirect method demonstrated the shortest expected lengths as compared to the Clopper-Pearson method, the Wald method and the Wilson direct method; but in Scenario 3, the Wilson direct method demonstrated the shortest expected lengths as compared to the Clopper-Pearson method, the Wald method and the Wilson indirect method. For large \(n\), across all scenarios, all four methods demonstrated comparable expected lengths. For each scenario, all methods' expected lengths decreased as \(n\) increased, as expected. Table IV summarises the expected lengths for each method across all simulation conditions.
The difference in expected lengths between the confidence intervals constructed using the Wilson direct method and the Wilson indirect method occurs because both methods' interval lengths depend on both \(\nu\) and \(\hat{F}_{1}\). Figure 2 illustrates a comparison of the interval lengths between the Wilson direct method, obtained from the absolute difference between the roots to (37), and the Wilson indirect method, obtained from the absolute difference between the transformed roots to (42) (i.e., via (10)), across different \(\nu\) and \(\hat{F}_{1}\).
If \(\nu\) is small and \(\hat{F}_{1}\) is small, then the Wilson direct method produces confidence intervals that are shorter as compared to the Wilson indirect method. However, if \(\nu\) is small and \(\hat{F}_{1}\) is large, then the Wilson indirect method produces confidence intervals that are shorter as compared to the Wilson direct method. And if \(\nu\) is large, then both methods produce confidence intervals that are of comparable lengths.
For example, in Scenario 3 where \(n=25\) and \(F_{1}=0.32\), the Wilson direct method demonstrated a shorter expected length (0.395) as compared to the Wilson indirect method (0.414) (Table IV). This occurred because \(n\) was small, and therefore \(\nu\leq n\) was small, and because the population \(F_{1}\) score was small, and therefore \(\hat{F}_{1}\) tended to be small.
#### V-C3 Overshoot Probabilities
For small \(n\), across all scenarios, the Wald method demonstrated overshoot. However, for large \(n\), across all scenarios, the Wald method did not demonstrate overshoot. Regardless of \(n\), across all scenarios, neither the Clopper-Pearson method, the Wilson direct method nor the Wilson indirect method demonstrated overshoot or degeneracy in any of the simulation conditions.
## VI Illustration

In this section, I illustrate the Clopper-Pearson method, the Wald method, the Wilson direct method and the Wilson indirect method, to construct confidence intervals for the population \(F_{1}\) score, to evaluate the performance of Bidirectional Encoder Representations from Transformers (BERT) on two suggestion mining tasks using a public dataset and a private dataset.
On the one hand, suggestion mining is the automatic extraction of suggestions using techniques in NLP [28]. In suggestion mining, an explicit suggestion directly suggests or recommends an action or entity, and an implicit suggestion indirectly suggests or recommends an action or entity [29]. In the literature, suggestion mining often focuses on explicit suggestions, and has been applied to extract suggestions across a number of industries including, but not limited to, education, software and tourism (e.g., [30, 31]).
On the other hand, BERT is a language model built on a multi-layer bidirectional transformer encoder architecture based on the original implementation described in Vaswani _et al._[32]. BERT was pre-trained on a masked language modelling task and a next sentence prediction task using data from BooksCorpus and English Wikipedia [7]. BERT\({}_{\text{BASE}}\) comprises 12 layers (i.e., transformer blocks), 768 hidden units and 12 self-attention heads (i.e., 110 million parameters); and BERT\({}_{\text{LARGE}}\) comprises 24 layers, 1024 hidden units and 16 attention heads (i.e., 340 million parameters).
In the present examples, I used the fine-tuning approach on BERT\({}_{\text{BASE}}\) in which a simple classification layer was added to the pre-trained model and all parameters were jointly fine-tuned on the corresponding downstream task [7]. As suggested by Devlin et al. [7], I ran an exhaustive search over the following hyperparameters and chose the model that performed best on the validation set:
* **Batch size:** 16, 32
* **Learning rate (Adam):** 5e-5, 3e-5, 2e-5
* **Number of epochs:** 2, 3, 4
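A minimal sketch of this exhaustive search; the helper fine_tune_and_evaluate is hypothetical and stands in for fine-tuning BERT\({}_{\text{BASE}}\) with the given hyperparameters and returning the validation-set \(F_{1}\) score:

```
from itertools import product

batch_sizes = [16, 32]
learning_rates = [5e-5, 3e-5, 2e-5]
epochs = [2, 3, 4]

best_config, best_val_f1 = None, -1.0
for bs, lr, ep in product(batch_sizes, learning_rates, epochs):
    # fine_tune_and_evaluate is a hypothetical helper: it fine-tunes BERT_BASE with the
    # given hyperparameters and returns the F1 score on the validation set.
    val_f1 = fine_tune_and_evaluate(batch_size=bs, learning_rate=lr, num_epochs=ep)
    if val_f1 > best_val_f1:
        best_config, best_val_f1 = (bs, lr, ep), val_f1
```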
### _Public Dataset_
Yamamoto and Sekiya [33] used the fine-tuning approach on BERT\({}_{\text{BASE}}\) and applied it to the Semantic Evaluation 2019 (SemEval2019) Task 9 Subtask A dataset [31]. The dataset was scraped from a software forum. It was annotated in two phases. In Phase 1, crowdsourced annotators labelled all observations as either a suggestion or a non-suggestion. And in Phase 2, expert annotators labelled some samples, in particular only those labelled as a suggestion in Phase 1, as either a suggestion or a non-suggestion. In the final dataset, an observation was labelled as a suggestion if it was labelled as such in both Phase 1 and Phase 2. Otherwise, the observation was labelled as a non-suggestion.
The final dataset comprised 9925 observations among which 2468 (25%) were suggestions. The dataset was split into a training set, a validation set and a test set. The training set comprised 8,500 observations among which 2,085 (25%) were suggestions. The validation set comprised 592 observations among which 296 were suggestions (50%). And the test set comprised 833 observations among which 87 (10%) were suggestions.
In Yamamoto and Sekiya [33], BERT\({}_{\text{BASE}}\) achieved an \(F_{1}\) score of 0.731 on the test set. In the present example, BERT\({}_{\text{BASE}}\) achieved 77 true positives, 44 false positives, 10 false negatives and 702 true negatives on the same test set. This corresponds to an \(F_{1}\) score of 0.740.
Using the Clopper-Pearson method, the 95% confidence interval is [0.665,0.805] and has an interval length of 0.139. Using the Wald method, the 95% confidence interval is [0.674,0.807] and has an interval length of 0.134. Using the Wilson direct method, the 95% confidence interval is [0.664,0.799] and has an interval length of 0.135. And using the Wilson indirect method, the 95% confidence interval is [0.669,0.801] and has an interval length of 0.133.
As expected, the confidence interval constructed using the Clopper-Pearson method was the longest. The confidence interval constructed using the Wilson indirect method was shorter as compared to that constructed using the Wilson direct method because the sample \(F_{1}\) score was relatively large.
### _Private Dataset_
I applied BERT\({}_{\text{BASE}}\) to the course feedback for a programme offered, to more than 4300 executive and administrative staff, as well as laboratory technicians, at a university. Upon completion of the programme, all participants were required to complete questionnaires which comprised, among others, the following questions:
1. What do you like about the onsite workshop?
2. Were there any issues or problems that you encountered in the onsite workshop?
3. Overall, what is the most useful topic you learnt in the programme?
4. Have the skills that you learnt in the programme helped you in your role at the university? Please give a brief description.
I labelled all responses as either a suggestion or a non-suggestion based on the definition provided in Negi _et al._[29]. The final dataset comprised 12303 observations among which 802 (7%) were suggestions. The final dataset was split into a training set, a validation set and a test set. The training set comprised 9841 samples among which 634 (6%) were suggestions. The validation set comprised 1231 observations among which 71 (6%) were suggestions. And the test set comprised 1231 observations among which 97 (8%) were suggestions.
In the present example, BERT\({}_{\text{BASE}}\) achieved 83 true positives, 9 false positives, 14 false negatives and 1125 true negatives on the test set. This corresponds to an \(F_{1}\) score of 0.878.
Using the Clopper-Pearson method, the 95% confidence interval is [0.818,0.923] and has an interval length of 0.105. Using the Wald method, the 95% confidence interval is [0.829,0.928] and has an interval length of 0.099. Using the Wilson direct method, the 95% confidence interval is [0.817,0.918] and has an interval length of 0.102. And using the Wilson indirect method,
the 95% confidence interval is [0.820,0.919] and has an interval length of 0.099.
As expected, the confidence interval constructed using the Clopper-Pearson method was the longest. The confidence interval constructed using the Wilson indirect method was shorter as compared to that constructed using the Wilson direct method because the sample \(F_{1}\) score was relatively large.
## VII Discussion
In the present paper, I reviewed the current analytical methods to construct confidence intervals for the population \(F_{1}\) score, proposed two analytical methods to do so, and compared their performances. The answers to the three research questions formulated in Section I are as follows:
* First, the current analytical methods to construct the confidence interval for the population \(F_{1}\) score include the Clopper-Pearson method, which inverts the Binomial test for the sample count, and the Wald method, which inverts the Wald test for the sample \(F_{1}\) score.
* Second, the proposed analytical methods to construct confidence intervals for the population \(F_{1}\) score include the Wilson direct method, which inverts the score test for the sample \(F_{1}\) score, and the Wilson indirect method, which inverts the score test for the sample \(F^{*}\) score.
* Last, both the Wilson direct method and the Wilson indirect method perform better, in terms of coverage probabilities and average lengths, as compared to the Clopper-Pearson method and the Wald method to construct confidence intervals for the population \(F_{1}\) score. In addition, unlike the Wald method, neither the Wilson direct method nor the Wilson indirect method suffer from overshoot and degeneracy.
In accordance with these findings, Takahashi _et al._[5] also reported that the coverage probabilities for the Wald method to construct confidence intervals for the population \(F_{1}\) score tended to be far smaller as compared to the nominal confidence level when \(n<100\) (p. 10).
Because the sample \(F_{1}\) score is an estimate of the population \(F_{1}\) score [10], confidence intervals should be used to indicate how accurate the estimate is. In the construction of confidence intervals for the population \(F_{1}\) score, if the test set is small (i.e., \(n\leq 100\)), then both the Wilson direct method and the Wilson indirect method are recommended, especially since both methods demonstrate better coverage probabilities and interval lengths as compared to the Clopper-Pearson method and the Wald method. Because both methods demonstrate comparable coverage probabilities for small \(n\), the choice between them depends on the interval length. If the test set is large (i.e., \(n>100\)), then both the Wilson direct method and the Wilson indirect method are also recommended, especially since both methods do not suffer from overshoot and degeneracy. Because both methods demonstrate comparable coverage probabilities and interval lengths for large \(n\), the choice between them depends on individual preference.
The recommendation of methods which construct confidence intervals using the score test (i.e., Wilson direct method and Wilson indirect method), as compared to using either the binomial test (i.e., Clopper-Pearson method) or the Wald test (i.e., Wald method), is also consistent with the literature. In the construction of confidence intervals for the population proportion, Brown et al. [14] recommended the Wilson method regardless of sample size. The authors also argued that the Clopper-Pearson method is "wastefully conservative and is not a good choice for practical use" (p. 113), and that the Wald method is "persistently chaotic and unacceptably poor" (p. 115). And in the comparisons of predictive values for binary diagnostic tests for paired designs, Leisenring et al. [34] argued that although "the gains are small, the score statistic has consistently better size and power than a generalised Wald statistic" (p. 349).
To the best of my knowledge, the present paper is the first to propose the Wilson direct method and the Wilson indirect method to construct confidence intervals for the population \(F_{1}\) score, to prove that the Wilson direct method does not suffer from both overshoot and degeneracy, and to compare the performance of both methods against the Clopper-Pearson method and the Wald method. Nonetheless, it is not without limitations. First, the present paper focused on constructing confidence intervals for a single population \(F_{1}\) score but not for the difference between two or more population \(F_{1}\) scores. Confidence intervals for the difference between two or more population \(F_{1}\) scores should account for non-independence between observations, especially if the confidence intervals are constructed using the same test set. Future research can investigate this, perhaps through the use of Generalised Estimating Equations (GEEs; e.g., [34]). Second, the present paper focused on analytical but not computational, and frequentist but not Bayesian, methods to construct either confidence or credible intervals for the population \(F_{1}\) score. Future research can investigate this, perhaps building on current research focused on computational and/or Bayesian methods [11, 12, 16, 35]. Last, the present paper focused on the \(F_{1}\) score but not the \(F\)-beta score. The \(F\)-beta score is the generalised form of the \(F_{1}\) score in which the weights for both precision and recall are not constrained to be equal [4, 6]. Future research can investigate this, perhaps through the use of the multivariate Delta Method [25, 26].
## VIII Conclusion
In conclusion, both the Wilson direct method and the Wilson indirect method are promising alternatives to both the Clopper-Pearson method and the Wald method for analytically constructing confidence intervals for the population \(F_{1}\) score. Given the stochastic nature of evaluation in machine learning, it is recommended to construct and report confidence intervals when evaluating the performance of NLP algorithms.
## Acknowledgements
I would like to extend my heartfelt gratitude to both Dr. Vik Gopal and Dr. Qian Jiang for their invaluable inputs to improve this paper. |
2310.01429 | Chatmap : Large Language Model Interaction with Cartographic Data | The swift advancement and widespread availability of foundational Large
Language Models (LLMs), complemented by robust fine-tuning methodologies, have
catalyzed their adaptation for innovative and industrious applications.
Enabling LLMs to recognize and interpret geospatial data, while offering a
linguistic access to vast cartographic datasets, is of significant importance.
OpenStreetMap (OSM) is the most ambitious open-source global initiative
offering detailed urban and rural geographic data, curated by a community of
over 10 million contributors, which constitutes a great potential for LLM
applications. In this study, we demonstrate the proof of concept and details of
the process of fine-tuning a relatively small scale (1B parameters) LLM with a
relatively small artificial dataset curated by a more capable teacher model, in
order to provide a linguistic interface to the OSM data of an arbitrary urban
region. Through this interface, users can inquire about a location's
attributes, covering a wide spectrum of concepts, such as its touristic appeal
or the potential profitability of various businesses in that vicinity. The
study aims to provide an initial guideline for such generative artificial
intelligence (AI) adaptations and demonstrate early signs of useful emerging
abilities in this context even in minimal computational settings. The
embeddings of artificially curated prompts including OSM data are also
investigated in detail, which might be instrumental for potential geospatially
aware urban Retrieval Augmented Generation (RAG) applications. | Eren Unlu | 2023-09-28T15:32:36Z | http://arxiv.org/abs/2310.01429v1 | # Chatmap : Large Language Model Interaction with Cartographic Data
###### Abstract
The swift advancement and widespread availability of foundational Large Language Models (LLMs), complemented by robust fine-tuning methodologies, have catalyzed their adaptation for innovative and industrious applications. Enabling LLMs to recognize and interpret geospatial data, while offering a linguistic access to vast cartographic datasets, is of significant importance. OpenStreetMap (OSM) is the most ambitious open-source global initiative offering detailed urban and rural geographic data, curated by a community of over 10 million contributors, which constitutes a great potential for LLM applications. In this study, we demonstrate the proof of concept and details of the process of fine-tuning a relatively small scale (1B parameters) LLM with an artificial dataset curated by a more capable teacher model, in order to provide a linguistic interface to the OSM data of an arbitrary urban region. Through this interface, users can inquire about a location's attributes, covering a wide spectrum of concepts, such as its touristic appeal or the potential profitability of various businesses in that vicinity. The study aims to provide an initial guideline for such generative artificial intelligence (AI) adaptations and demonstrate early signs of useful emerging abilities in this context even in minimal computational settings. The embeddings of artificially curated prompts including OSM data are also investigated in detail, which might be instrumental for potential geospatially aware urban Retrieval Augmented Generation (RAG) applications.
Generative AI Cartographic Data Large Language Models
## 1 Introduction
In recent years, the explosive growth in the capabilities and utilities of Large Language Models (LLMs) has brought forth a paradigm shift in how we interact with and leverage data [2][3]. Traditionally, extracting value from vast datasets, especially those with specialized content like cartographic data, required a combination of domain expertise, time-consuming analysis, and specialized tools. With the advent of LLMs, there is an enticing opportunity to simplify this extraction process, making it more accessible and intuitive. The concept of linguistically querying geospatial datasets presents a confluence of the advances in natural language processing (NLP) and the ubiquity of current open digital cartographic data.
The implications of such advancements are vast and transformational. Corporate officials with no technical background can seek insights about whether a commercial venture they are considering is viable in an area that they simply selected by clicking on a map. Tourists could ask about the historical significance of regions they plan to visit and automatic touristic itinerary generation applications powered by LLMs with linguistic interfaces can be developed. Policy makers can profit from such a framework to optimally plan infrastructure investments such as new lines of subway in more human centric and geospatially aware interactions. Therefore, integration of LLMs with OSM like cartographic data is crucial.
In this paper, we present a basic framework aligned with this goal requiring very minimal computational budget and human labeled data in order to constitute a pioneering guideline and demonstrate that such productive applications
in future can be developed with a reasonable amount of effort and time. The central idea of the paper is to fine-tune a foundational language model to comprehend the OSM features in an arbitrary area and align the content with the human intentions. The OSM database contains a vast amount of highly variant attributes from detailed mappings of power lines, road networks, buildings, designated areas to any type of entities such as cafes, schools, public drinking water fountains [1][4]. Consequently, for minimal proof of concept demonstration, we have strictly limited the OSM content we use in this study.
In our exemplary study, without loss of generality, circular areas with a 300-meter radius in the most densely tagged districts of Istanbul, Turkey, were selected. Specific quantitative aspects of selected OSM attributes within these circular areas are described verbally, which we refer to as 'preprompts'. Using a foundational LLM as a competent teacher, various prompt-response pairs were generated to form a training dataset for our fine-tuned model. Details on preprompt construction and guidance from the teacher model for effective artificial dataset curation are provided in the article.
An unaligned, open source foundational model of approximately just 1 billion parameters to fine-tune which is effectively pretrained on vast datasets is preferred to demonstrate the development of such abilities with significantly low amounts of resources. Using Low Rank Adaptation (LORA) and 8-bit quantization the fine-tuning process is performed on a single, moderate GPU [5][6].
Thanks to the effective training dataset and relatively advanced linguistic capabilities of the pretrained base LLM compared to its moderate size, our fine-tuned model shows early signs of emergent abilities in this context. For locations which were not included in the fine-tuning dataset the model is queried to answer concepts mostly abstained in the training. In addition to LLM development, we have also investigated the embeddings of curated prompts including OSM attributes, which reflects the latent structure of urban landscape.
We believe that the minimal framework represented in this paper can encourage researchers to develop much advanced cartographic data aware generative AI applications. The prospects of such potential paradigms are also discussed.
## 2 OSM Data and Preprompts
OSM contains an immense amount of detailed data about specific geolocations. For our minimal setting, we have limited the quantitative data to a few key urban details. The concept presented in this paper is to define a point within an urban environment and then consider a circular area around it. For simplicity, we consistently chose radii of 300 meters in this study. However, varying radii or alternative methods of area selection could be considered. Urban attributes within this circular area are then retrieved from the OSM database. These attributes are subsequently articulated verbally. This verbal articulation serves as a "pre-prompt," establishing the context for the questions posed to the LLM. For this study's purposes, we have limited our scope to the more densely tagged central districts of Istanbul, Turkey.
Chosen urban data to be included in the preprompts are :
* Number of different types of amenities : Amenities are primary categories that denote facilities and services vital for both visitors and residents. Examples of amenities include but are not limited to schools, ATMs, banks, pharmacies, libraries, and bus stops. An amenity essentially highlights features that offer convenience or serve essential functions in a particular geographical area. The number of each type of amenity in the area is verbally described.
* Intersecting administrative divisions : OSM database includes boundaries of all levels of administrative divisions of that particular country of interest. In order to incorporate urban geolocation awareness in the fine-tuning process, we have included the names of administrative bodies intersecting this area. For this case, only "districts" and "provinces" are considered. Note that, all geolocations in this study are in the province of "Istanbul".
* Number and Surface Area of Buildings : In-detail tagged areas include polygon representations of buildings. As a measure of dense urbanization we have included the number of buildings residing in the circular area and the percentage of building surface area to the total circular area.
* Landuses : Landuses are polygon areas in OSM database defined according to their usage such as "military", "park", "construction", "commercial", "industrial" etc. As tagged residential areas are most of the time lacking in the region, they are excluded in this study. The percentages of each type of landuse surface to the total area are verbally expressed. For the sake of efficient contextualisation only landuse areas exceeding 2% of the total surface area are included.
* Leisures : Leisures are polygon areas where they are defined based on their recreational purposes such as "water park", "stadium", "marina" etc. For the sake of efficient contextualisation only leisure areas exceeding 2% of the total surface area are included.
* Roads and Railways : OSM offers a very detailed description of all types of land roads with their types such as "motorway", "residential street", "secondary road", "pedestrian road" etc., and railways, also including tram and subway lines. We calculate the total length in meters of each type of road and railway in the circular area and express it verbally.
For instance, the preprompt of an arbitrary area is as follows :
```
This is a circular area of radius of 300 meters that intersects province(s) of Istanbul and district(s) of Fatih. There are 3 atm(s), 2 bank(s), 1 bureau_de_change(s), 15 cafe(s), 2 clinic(s), 1 court_house(s), 2 dentist(s), 1 driving_school(s), 2 events_venue(s), 11 fast_food(s), 1 guest_house(s), 3 hospital(s), 11 parking(s), 33 pharmacy(s), 9 place_of_worship(s), 1 post_office(s), 43 restaurant(s), 5 school(s), 1 shower(s). There are 525 buildings which cover 81% of the total area. It contains 289 meters of platform rail, 100 meters of footway road, 80 meters of pedestrian road, 44 meters of primary_link road, 2786 meters of residential road, 283 meters of service road, 20 meters of steps road, 1005 meters of tertiary road, 62 meters of tertiary_link road, 249 meters of unclassified road.
```
Note that the native text of the OSM values is used, on the assumption that the advanced teacher model (which is also familiar with OSM nomenclature) is able to process it properly and generate proper prompt-answer pairs.
## 3 Artificial Dataset Curation with a Teacher Model
Using advanced publicly available LLMs to curate supervised datasets for training or fine-tuning smaller models is a well-established concept; [7] is one of the most widely known examples of such approaches. By properly leveraging the detailed information and cognitive power of these models, one can generate a quasi-infinite number of datapoints of sufficient quality. We used OpenAI ChatGPT 3.5-turbo [8] to generate prompt-answer pairs for given preprompts. Some examples of the prompts queried to ChatGPT 3.5-turbo are as follows :
```
I will give these types of preprompts and you will generate prompt-answer pairs in python list of dictionaries format. These prompts should be questions that businessmen, citizens, tourists would demand based on the data in the preprompt. Generate 60 prompt-answer pairs with very diverse topics. Important: Do not generate prompts that data in preprompt is not sufficient to answer!
```
```
Preprompt = 'This is a circular area of radius of 300 meters that intersects province(s) of Istanbul and district(s) of Fatih. There are 10 atm(s), 2 bank(s), 1 bar(s), 1 bench(s), 1 bureau_de_change(s), 22 cafe(s), 1 car_rental(s), 1 clinic(s), 7 fast_food(s), 5 fountain(s), 1 ice_cream(s), 4 library(s), 1 money_transfer(s), 5 parking(s), 7 pharmacy(s), 14 place_of_worship(s), 2 police(s), 1 post_office(s), 66 restaurant(s), 1 school(s), 1 theatre(s), 6 toilets(s), 1 university(s), 2 waste_basket(s), 1 waste_disposal(s). There are 200 buildings which cover 2% of the total area. It contains 112 meters of platform rail, 291 meters of footway road, 730 meters of pedestrian road, 227 meters of residential road, 236 meters of service road, 270 meters of steps road, 330 meters of tertiary road.'
```

```
I will give these types of preprompts and you will generate prompt-answer pairs in python list of dictionaries format. These prompts should be questions that businessmen, citizens, tourists would demand based on the data in the preprompt. Generate 60 prompt-answer pairs with very diverse topics. Important: Do not generate prompts that data in preprompt is not sufficient to answer!

Generate 50 prompt-answer pairs, but be creative. Try to cover very different aspects in prompts, such as which type of commercial venture would suit here, whether it is residential or touristic, how you can describe this area etc.
Important Warning: "Do not include questions that we do not have sufficient data in preprompt to answer". Before generating, repeat this last Important Warning I gave.
```
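A single call to the teacher model for one preprompt might look like the sketch below. It is written against the legacy openai Python client (ChatCompletion interface); the exact prompt wording, response parsing, and error handling used to build the actual dataset are not specified in the paper, so everything beyond the model name is an assumption.

```
# Sketch of querying the teacher model for prompt-answer pairs (one preprompt).
# Uses the legacy openai<1.0 ChatCompletion interface; parsing is illustrative.
import ast
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_pairs(preprompt: str, n_pairs: int = 60):
    instruction = (
        "I will give these types of preprompts and you will generate prompt-answer "
        "pairs in python list of dictionaries format. These prompts should be "
        "questions that businessmen, citizens, tourists would demand based on the "
        f"data in the preprompt. Generate {n_pairs} prompt-answer pairs with very "
        "diverse topics. Important: Do not generate prompts that data in preprompt "
        "is not sufficient to answer!\n\nPreprompt = '" + preprompt + "'"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": instruction}],
        temperature=0.9,
    )
    text = response["choices"][0]["message"]["content"]
    # The model is asked for a python list of dicts; literal_eval may fail and
    # would need a retry/cleanup step in practice.
    return ast.literal_eval(text)
```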
## 5 Results
The fine-tuned model is queried at geolocations different from those in the training dataset (but within the same districts), using the question types present in the training data and sometimes with a different semantic context. The results suggest early signs of emerging abilities in this type of paradigm and the applicability of similar frameworks even with a very minimal amount of resources. Some examples of the responses received from the model are as follows :
```
Area : This is a circular area of radius of 300 meters that intersects province(s) of Istanbul and district(s) of Atasehir, Kadikoy. There are 10 atm(s), 1 bank(s), 4 bench(s), 2 cafe(s), 1 clinic(s), 1 dentist(s), 1 fast_food(s), 1 fountain(s), 9 parking(s), 5 pharmacy(s), 1 place_of_worship(s), 3 restaurant(s), 3 school(s), 1 waste_basket(s). There are 140 buildings which cover 18% of the total area. It contains 205 meters of footway road, 66 meters of pedestrian road, 94 meters of residential road, 532 meters of service road, 64 meters of tertiary road.
Question : How can we classify this region of Istanbul based on the data in the preprompt? Explain in detail.
Answer : Based on the data, this area appears to be a residential area with a blend of residential and commercial buildings. It could be classified
```
Figure 2: Training (blue) and validation (orange) losses during approximately 2 epochs long fine-tuning process.
... marketplace(s), 1 place_of_worship(s), 1 restaurant(s), 1 police(s). There are 26 buildings which cover 4% of the total area. The area is covered by 7% construction. It contains 603 meters of platform rail, 60 meters of tram rail, 1361 meters of footway road, 287 meters of residential road, 164 meters of steps road. Question : Which age group of people would enjoy visiting here? Answer : The presence of cafes and restaurants suggests that this area could be suitable for a variety of age groups.

Area : This is a circular area of radius of 300 meters that intersects province(s) of Istanbul and district(s) of Istanbul. There are 1 bank(s), 3 cafe(s), 4 fast_food(s), 2 hospital(s), 13 pharmacy(s), 2 place_of_worship(s), 1 post_office(s), 5 restaurant(s), 1 theatre(s). There are 106 buildings which cover 1% of the total area. It contains 83 meters of footway road, 72 meters of pedestrian road, 1233 meters of primary road, 157 meters of primary_link road, 504 meters of residential road, 37 meters of secondary_link road, 210 meters of service road, 78 meters of tertiary road. Question : I am a lawyer. Should I open an office in this area? Answer : The presence of a bank and a hospital suggests a need for legal services. Opening an office may be a good option.
## 6 Embeddings of Preprompts
It is convenient to inspect the embeddings of the curated preprompts to gain insight into the general attributes of various geolocations. Without loss of generality, we have used average GloVe word embeddings [14] and projected them onto a two-dimensional space using the Uniform Manifold Approximation and Projection (UMAP) algorithm [15]. Fig. 3 shows the plot of the projected UMAP values, where the colour channels are set linearly proportional to these values for illustrative purposes. The locations corresponding to these preprompt embeddings are displayed on the map with the same colour values in Fig. 4. As expected, even with basic embedding mechanisms and models, the latent structure of the verbal descriptions of OSM data yields insightful patterns. Bright red colours indicate more touristic locations, dark red colours indicate business/commercial districts, and bright greenish colours indicate relatively empty spaces and residential areas. Note that, as mentioned previously, this study is a preliminary, minimalistic proof of concept, and much more complex frameworks can be imagined, enabling creative Retrieval Augmented Generation (RAG) applications in this context.
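The embedding and projection step can be reproduced along the following lines. The sketch assumes gensim's downloadable glove-wiki-gigaword-100 vectors and the umap-learn package, which may differ from the exact models and parameters used for Fig. 3.

```
# Sketch: average GloVe word embeddings of each preprompt, projected with UMAP.
# Model names and parameters are illustrative assumptions.
import numpy as np
import gensim.downloader as api
import umap

glove = api.load("glove-wiki-gigaword-100")  # 100-dimensional GloVe vectors

def embed(text: str) -> np.ndarray:
    tokens = [t for t in text.lower().split() if t in glove]
    if not tokens:
        return np.zeros(glove.vector_size)
    return np.mean([glove[t] for t in tokens], axis=0)

preprompts = ["This is a circular area of radius of 300 meters ..."]  # placeholder list
X = np.vstack([embed(p) for p in preprompts])

proj = umap.UMAP(n_components=2, random_state=0).fit_transform(X)
# proj[:, 0], proj[:, 1] can then be rescaled linearly into colour channels
# and plotted at the corresponding map locations.
```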
Figure 3: Two dimensional UMAP values of various preprompts, where color values are linearly proportional to UMAP scores.
## 7 Conclusion and Prospects
In this research, we explored the integration of LLMs with intricate cartographic datasets. By expressing the OSM data of an urban region verbally and using these phrases as pre-context for queries, we have shown that such frameworks can be developed even with very minimal resources. Advanced publicly available LLMs can be used to automatically generate artificial datasets for this purpose, as was done in this study. Blending cartographic data with generative linguistic AI models offers a vast range of possibilities.
|
2301.00076 | Higgs Boson Mass Corrections at Three-Loops in the Top-Yukawa Sector of
the Standard Model | The search for new physics signals in Higgs precision measurements plays a
pivotal role in the High-Luminosity Large Hadron Collider (HL-LHC) and future
colliders programs. The Higgs properties are expected to be measured with great
experimental precision, implying higher-order perturbative computations of the
electroweak parameters from the theoretical side. In particular, the
renormalized Higgs boson mass parameter in the Standard Model shows significant
variation around the electroweak scale, resulting in a lower-bound theoretical
uncertainty that exceeds future collider expectations. A more stable result
under the renormalization group can be computed from a non-zero external
momentum Higgs self-energy, for which available calculations include 3-loop
corrections in the QCD sector. In this work, we present an additional
contribution by estimating the leading non-QCD 3-loop corrections to the mass
of the Higgs boson in the top-Yukawa sector of order $y_t^6$. The
momentum-dependent Higgs self-energy is computed in the tadpole-free scheme for
the Higgs vacuum expectation value in the Landau gauge, and the explicit
dependence upon the Higgs boson and top quark masses is shown. The obtained
result is expressed in dimensional regularization as a superposition of a set
of master integrals with coefficients that are free of poles in four space-time
dimensions, and the corrections are evaluated numerically by the sector
decomposition method. | E. A. Reyes R., A. R. Fazio | 2022-12-31T00:03:11Z | http://arxiv.org/abs/2301.00076v3 | # Higgs Boson Mass Corrections at N\({}^{3}\)LO in the Top-Yukawa Sector of the Standard Model
###### Abstract
The search of new physics signals in the Higgs precision measurements plays a pivotal role in the High-Luminosity Large Hadron Collider (HL-LHC) and the future colliders programs. The Higgs properties are expected to be measured with great experimental precision, implying higher-order perturbative computations of the electroweak parameters from the theoretical side. In particular, the Higgs boson mass parameter in the Standard Model runs over several tens of MeV with a corresponding large theoretical uncertainty. A more stable result under the renormalization group can be computed from a non-zero external momentum Higgs self-energy, for which available calculations include three-loop corrections in the QCD sector. In this work we present an additional contribution, by estimating the leading non-QCD three-loop corrections to the mass of the Higgs boson in the Top-Yukawa sector of order \(y_{t}^{6}\). The momentum dependent Higgs self-energy is computed in the tadpole-free scheme for the Higgs vacuum expectation value in the Landau gauge and the explicit dependence upon the Higgs boson and top quark masses is shown. The obtained result is expressed in dimensional regularization as a superposition of a set of master integrals with coefficients that are free of poles in four space-time dimensions and the corrections are evaluated numerically by the sector decomposition method.
## I Introduction
Experiments have recently shown that high-precision measurements of the observables in the electroweak (EW) sector of the Standard Model (SM) are moving away from the theoretical expectations. In the past year, the Fermilab MUON g-2 collaboration [1] published its results concerning the muon anomalous magnetic moment, showing a discrepancy between the experimental value and the SM predictions corresponding to a \(4.2\sigma\) difference. Recently, another EW observable joined this list of anomalous measurements, namely the mass of the \(W\)-boson. The CDF collaboration [2] reported a new and more precise value, \(M_{W}=80433.5\pm 9.4\,MeV\), together with the complete dataset collected by the CDF II detector at the Fermilab Tevatron. The current SM prediction shows a tension of \(7\sigma\) with the CDF measurement, suggesting the possibility to improve the SM calculations or to extend the SM. New and more precise measurements of the SM observables can help to explain the origin of those discrepancies, but this also requires an improvement in the precision of the theoretical calculations. In particular, the Higgs boson mass is an input parameter in the theoretical expressions for the above mentioned observables, and an improvement of its theoretical uncertainties can lead to more precise predictions to be compared with measurements at future accelerators. The improvement can come from the computation of the missing higher order corrections to the Higgs mass which are left out due to the assumption of some kinematic limit or due to the truncation of the perturbative expansions at some level. In the SM, the truncation is done at three-loop order. The one- and two-loop level corrections to the Higgs self-energy have been completely computed [3; 4; 5; 6; 7] and implemented in the public computer codes mr[8] and SMDR[9]. In the former mr code the renormalized vacuum expectation value of the Higgs field is defined as the minimum of the tree-level Higgs potential. The corrections to the mass parameters are consequently gauge invariant due to the explicit insertion of the tadpole diagrams. The disadvantage of this approach is that the Higgs tadpoles can include negative powers of the Higgs quartic self-coupling, leading to very large corrections in \(\overline{MS}\) schemes that deteriorate the perturbative stability. On the other hand, the corrections included in SMDR typically lead to stable perturbative predictions but suffer from gauge dependences since the vacuum is defined as the minimum of the Higgs effective potential and therefore the tadpoles are removed by imposing an appropriate renormalization condition. It would be convenient to have a gauge independent prediction with a stable perturbative behaviour, as highlighted in [10; 11] where the longstanding discussion about a suitable prescription for tadpole contributions in EW renormalization is solved at one-loop level. Additionally, the three-loop corrections have been evaluated in the gauge-less limit where the EW contributions are disregarded. In this computation the external momentum dependence of the contributions that are proportional to \(g_{s}^{4}y_{t}^{2}M_{t}^{2}\) is included, where \(g_{s}\) is the strong coupling constant, \(y_{t}\) is the top quark Yukawa coupling and \(M_{t}\) is the top quark mass. Also included are the three-loop contributions proportional to \(g_{s}^{2}y_{t}^{4}M_{t}^{2}\) and \(y_{t}^{6}M_{t}^{2}\) obtained from the 1PI effective potential, from which the 1PI self-energies at vanishing external momenta can be derived. All those three-loop corrections are implemented in the latest version of SMDR [12; 13]. Although these SMDR predictions are rather precise, they contain a renormalization scale dependence of several tens of MeV, implying theoretical uncertainties larger than the expected experimental ones, of about 10-20 MeV, for the Higgs boson mass measurements at the HL-LHC, ILC and FCCee [14]. A more refined calculation including the missing higher order contributions is therefore required.
In this paper we compute an additional three-loop contribution to the mass of the Higgs boson coming from the non-QCD Top-Yukawa sector in the gaugeless limit, where the three-loop Higgs self-energy corrections at order \(y_{t}^{6}\) are calculated. These three-loop corrections are meant to be included in the prediction of the physical Higgs boson mass (\(M_{h}\)), which comes from the complex pole of the Higgs propagator in an on-shell scheme, and therefore the Higgs self-energies are evaluated at non-vanishing external momentum, \(p^{\mu}\neq 0\). Since the ratio \(M_{h}/M_{t}\approx 0.6\) is not a really small expansion parameter, the leading three-loop corrections may receive significant contributions from the external momentum dependent terms evaluated at \(p^{2}=M_{h}^{2}\). Additionally, the inclusion of the non-vanishing external momentum self-energies is expected to cancel the renormalization scale dependence introduced in the propagator pole by the running Higgs mass computed in the effective potential approach [15; 16].
Finally, we point out that the electroweak contributions at three-loop level are still missing, but the analytic results for all master integrals contributing to the three-loop Higgs self-energy diagrams in the mixed EW-QCD sector at order \(\alpha^{2}\alpha_{s}\), including terms proportional to the product of the bottom and top Yukawa couplings, \(y_{b}y_{t}\), have been presented in [17]. Besides, additional identities satisfied by three-loop self-energy Master Integrals (MIs) with four and five propagators, which enable a straightforward numerical evaluation for a generic configuration of the masses in the propagators, have been recently reported in [18].
The paper is organized as follows. In Section II we show the technical details about the generation and regularization of the amplitudes for the three-loop Higgs self-energies involved in our calculation. In Section III a Feynman integral reduction procedure is presented and the election of a good basis of master integrals is discussed. A numerical analysis, where the obtained three-loop corrections to the Higgs mass at O(\(y_{t}^{6}\)) is evaluated as a function of the renormalization scale, is shown in Section IV. Finally, we give our conclusions and a further research outlook in Section V.
## II Regularized Higgs self-energies
In this work we have focused our attention on the contributions coming from the three-loop self-energy corrections to the Higgs boson mass including the external momentum dependence. The Higgs self-energies have been computed at order \(y_{t}^{6}\) in the non-QCD sector of the SM. Thus, the non-light fermion limit is assumed and therefore the Yukawa couplings and the masses of the other fermions are disregarded with respect to the top quark ones. The complete expression is written as
\[\Sigma_{hh}^{(3l)}=y_{t}^{6}\left(\Delta_{0}+t\Delta_{1}+t^{2}\Delta_{2}+t^{3}\Delta_{3}\right)+s\,y_{t}^{6}\left(\Delta_{0}^{s}+t\Delta_{1}^{s}+t^{2}\Delta_{2}^{s}\right), \tag{1}\]
where \(t\) represents the squared top mass, \(t=M_{t}^{2}\), while \(s\) stands for the squared momentum in the external lines of the Higgs self-energies, \(s=p^{2}\).
In order to obtain the expressions of \(\Delta_{i}\) and \(\Delta_{j}^{s}\) it is necessary to generate the Higgs self-energy diagrams and their corresponding amplitudes. This has been done with the help of the Mathematica package FeynArts[19; 20]. At the considered perturbative order, only the nine different self-energy topologies depicted in FIG. 1 contribute. Note that only topologies with cubic vertices are required; this is equivalent to imposing an adjacency of three lines in the CreateTopologies function of FeynArts. Moreover, the computation was done in the so called _Parameter Renormalized Tadpole_ (PRT) scheme [21; 22; 10; 23] where the renormalized vacuum expectation value of the Higgs field is the minimum of the Higgs effective potential and therefore the self-energies are made of 1PI diagrams that do not contain tadpole insertions. Although this scheme is known to be numerically stable as terms with negative powers of the Higgs self-coupling are not included, it has the unpleasant feature that self-energies are gauge-dependent quantities. In this work we have adopted the
Figure 1: Examples of diagrams contributing to the O(\(y_{t}^{6}\)) Higgs self-energy corrections in the non-QCD sector. The external dashed lines represent the Higgs boson field. The internal dashed lines represent all possible contributing scalar fields, while the solid lines represent a top or a bottom quark. Only propagators with a top quark line are massive.
Landau gauge, where the Goldstone bosons are massless, in order to minimize the number of energy scales appearing in the Feynman integrals.
Once the particle content is included in the nine topologies, with the help of the InsertFields function of FeynArts, the number of generated self-energy diagrams, whose amplitudes are different from zero at \(y_{t}^{6}\), increases to \(125\). Examples of such diagrams are also shown in FIG. 1. Note that the external dashed lines propagate only the Higgs field (\(h\)), while the internal lines in the non-light fermion limit of the non-QCD sector can propagate fermions (solid lines) like the top quark (\(t\)) and bottom quark (\(b\)) fields, as well as scalars like the Higgs and the Goldstone boson (\(G^{0}\) and \(G^{\pm}\)) fields. The cubic vertices involved in the computation are \(hht\), \(G^{0}G^{0}t\) and \(G^{\pm}tb\). The contribution of the bottom mass to the latter vertex is disregarded when it appears in the numerators of the integrands.
The considered three-loop self-energy integrals are ultraviolet divergent in four-dimensions since all of them contain two scalar and six fermionic propagators; therefore, they are analytically continued to \(D=4-2\varepsilon\) dimensions using the dimensional regularization (DREG) scheme [24; 25; 26; 27]. In order to implement the regularization prescription, the FeynArts amplitudes are exported to the language of FeynCalc[28; 29] which is a Mathematica code useful in general to perform the necessary algebraically manipulations involved in the calculation of multi-loop Feynman integrals. The gamma matrices are defined as a set of \(D\) matrices obeying
\[\{\gamma^{\mu},\gamma^{\nu}\}=2g^{\mu\nu}I;\qquad\text{Tr}I=4. \tag{2}\]
Feynman diagrams involving the charged Goldstone bosons, \(G^{\pm}\), where traces with \(\gamma_{5}\) and an arbitrary number of gamma matrices appear, require some care. In that case we use the practical non-cyclicity prescription [30; 31] where \(\gamma_{5}\) is an anticommuting object satisfying
\[\{\gamma_{5},\,\gamma^{\mu}\}=0;\qquad\gamma_{5}^{2}=1, \tag{3}\]
together with the condition that the use of cyclicity in traces involving an odd number of \(\gamma_{5}\) matrices is forbidden. Using the above anticommutation relation and the Clifford algebra in eq. (2) any product of Dirac matrices can be ordered in a canonical way. That is, the \(\gamma_{5}\) matrices are completely eliminated for an even number of them, while for an odd number only one \(\gamma_{5}\) survives and it is always moved to the right of the product. In particular, due to the presence of four independent momentum scales, namely, the external momentum \(p\) and the loop-momenta \(q_{1}\), \(q_{2}\) and \(q_{3}\), diagrams can contain traces with a single \(\gamma_{5}\) and at most four \(\gamma\) matrices. Thus, the next relations are also required:
\[\text{Tr}\left[\gamma_{5}\right]=\text{Tr}\left[\gamma^{\mu_{1}}\dots\gamma^ {\mu_{2n-1}}\gamma_{5}\right]=0, \tag{4}\]
\[\text{Tr}\left[\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma}\gamma_{5} \right]=\begin{cases}-4i\epsilon^{\mu\nu\rho\sigma}&\mu,\nu,\rho,\sigma\in\{ 0,1,2,3\}\\ 0&\text{otherwise}\end{cases}. \tag{5}\]
A further examination of all the Feynman diagrams for each topology in FIG. 1 shows that topologies 1, 4, 6 and 9 do not contain traces with the matrix \(\gamma_{5}\). Topologies 5 and 8 contain traces with one \(\gamma_{5}\) and at most three \(\gamma\) matrices, which vanish according to eq. (4). For the topologies 2 and 7 the sum of the amplitudes produces a cancellation of the terms with any trace involving the matrix \(\gamma_{5}\). Finally, topology 3 contains contributions with a trace of a single \(\gamma_{5}\) and four \(\gamma\) matrices that have to be evaluated according to eq. (5).
In addition, it is worth mentioning that for amplitudes with closed fermion-loops, which is the case of all the topologies in FIG. 1, the usual Breitenlohner-Maison scheme [32; 33] and the non-cyclicity scheme considered in our calculation produce identical results.
## III Good master integrals
Once the amplitudes are regularized, each of them can be written as a superposition of a large set of about one thousand of integrals with the following structure:
\[\left\langle\frac{\mathcal{N}(q_{i}\cdot q_{j},q_{i}\cdot p,p^{2 })}{D_{1}^{\nu_{1}}D_{2}^{\nu_{2}}D_{3}^{\nu_{3}}D_{4}^{\nu_{4}}D_{5}^{\nu_{5} }D_{6}^{\nu_{6}}D_{7}^{\nu_{7}}D_{8}^{\nu_{8}}D_{9}^{\nu_{9}}D_{0}^{\nu_{0}}} \right\rangle_{3l}, \tag{6}\] \[\left\langle(\dots)\right\rangle_{3l}=(Q^{2})^{3\varepsilon}\int \frac{d^{D}q_{1}}{(2\pi)^{D}}\int\frac{d^{D}q_{2}}{(2\pi)^{D}}\int\frac{d^{D} q_{3}}{(2\pi)^{D}},\]
where \(Q\) is the renormalization scale defined as in the \(\overline{MS}\) scheme, \(Q^{2}=4\pi e^{-\gamma_{E}}\mu^{2}\), in terms of the unit mass \(\mu\) and of the Euler-Mascheroni constant \(\gamma_{E}\). The denominators \(D_{j}\) are inverse scalar propagators:
\[\begin{array}{ll}D_{1}=\left(q_{1}^{2}-m_{1}^{2}\right),&D_{2}=\left(q_{2}^{2}-m_{2}^{2}\right),\\ D_{3}=\left(q_{3}^{2}-m_{3}^{2}\right),&D_{4}=\left((q_{1}-q_{2})^{2}-m_{4}^{2}\right),\\ D_{5}=\left((q_{1}-q_{3})^{2}-m_{5}^{2}\right),&D_{6}=\left((q_{2}-q_{3})^{2}-m_{6}^{2}\right),\\ D_{7}=\left((q_{1}+p)^{2}-m_{7}^{2}\right),&D_{8}=\left((q_{2}+p)^{2}-m_{8}^{2}\right),\\ D_{9}=\left((q_{3}+p)^{2}-m_{9}^{2}\right),&D_{0}=\left((q_{1}-q_{2}+q_{3}+p)^{2}-m_{0}^{2}\right),\end{array} \tag{7}\]
while the numerator \(\mathcal{N}\) is a function of scalar products involving the three loop momenta and the external momenta. At this point the coefficients of the integrals depend on \(y_{t}\), \(t\) and \(s\), while the masses in the propagators \(D_{j}^{-1}\) can be \(m_{j}=0,M_{t}\). The precise configuration of the masses defines the family to which the integrals belong, while the set of exponents \(\{\nu_{j}\}\) defines sectors from the families. For the planar diagrams, represented by the topologies 1 to 8, one must remove the denominator \(D_{0}\) which is equivalent to set \(\nu_{0}=0\), while the non-planar diagrams contained in the topology 9 satisfy \(\nu_{8}=0\). Note that, in order to express any scalar product in \(\mathcal{N}\) as a combination of inverse propagators, we need a basis of nine propagators for each family. Thus, the numerator \(\mathcal{N}\) is rewritten, as usual, in terms of the \(D_{j}\)'s leading to scalar integrals which can also contain irreducible numerators, that is, denominators with negative integer exponents. The resulting integral families for each topology are listed in Table 1. An individual topology can contain
multiple families and each family can contain at most six massive propagators. Besides, the exponents \(\{\nu_{j}\}\) take values from \(-3\) to \(3\).
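The rewriting of numerator scalar products in terms of the inverse propagators of eq. (7) is purely algebraic. The small sympy sketch below checks one such relation, \(q_1\cdot p\) expressed through \(D_1\) and \(D_7\), under the definitions given above; it is only an illustration and is not part of the Mathematica/FeynCalc tool chain actually used here.

```
# Check q1.p = ((q1+p)^2 - q1^2 - p^2)/2 = (D7 + m7^2 - D1 - m1^2 - s)/2,
# with D1 = q1^2 - m1^2, D7 = (q1+p)^2 - m7^2 and s = p^2.
# Scalar symbols stand in for the Lorentz invariants q1^2, p^2, q1.p.
import sympy as sp

q1sq, q1p, s, m1sq, m7sq = sp.symbols("q1sq q1p s m1sq m7sq")

D1 = q1sq - m1sq                # q1^2 - m1^2
D7 = (q1sq + 2*q1p + s) - m7sq  # (q1+p)^2 - m7^2

expr = (D7 + m7sq - D1 - m1sq - s) / 2
print(sp.simplify(expr - q1p))  # -> 0, so q1.p is reducible to D1 and D7
```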
The scalar integrals obtained in this way are not independent of each other; they can be related through additional recurrence relations coming from the integration by parts (IBP) and Lorentz invariance (LI) identities. We have used the code Reduze[34; 35] to reduce any scalar integral to a linear superposition of a basis of Master Integrals
\[\tilde{G}_{\{\nu_{0},\ldots,\nu_{0}\}}=\left\langle\prod_{j=0}^{9}D_{j}^{-\nu _{j}}\right\rangle_{3l}, \tag{8}\]
with coefficients that are rational functions of polynomials depending on the space-time dimension and all the kinematical invariants involved in the calculation. As expected, in complicated situations like the IBP reduction of three-loop self-energy integrals with at least two energy scales, the basis provided by Reduze, \(\tilde{G}_{\{i\}}\), can be inefficient since the denominators of some of the MI coefficients are quite cumbersome, containing large expressions that require long processing times and a lot of memory, but also containing kinematical singularities (independent of \(D\)) described by the Landau conditions [36] and/or divergences in \(D-4=2\varepsilon\) (independent of the kinematical invariants), which would imply the evaluation of finite parts of the Laurent expansion in \(\varepsilon\) of the MIs [37; 38; 39]. In order to handle this situation we follow the prescription discussed in [40], based on Sabbah's theorem [41], and we have therefore implemented in Mathematica, with the help of FIRE[42; 43], a transition from the "bad" basis of MIs to an appropriate basis, \(G_{\{j\}}\), whose coefficient denominators are "good", i.e. simple expressions free of kinematical and non-kinematical singularities. Thus, the choice of the new master integrals has been made by imposing that the polynomials in the denominators of the coefficients do not vanish in the limit where \(D-4\) goes to zero. Sabbah's theorem guarantees the existence of such a good basis, but in practice this implies finding extra relations between the master integrals, such that
\[\tilde{G}_{\{i\}}=\sum_{j=1}^{|\sigma|}\frac{n_{i,j}}{d_{i,j}}G_{\{j\}}, \tag{9}\]
for a given sector \(\sigma\), where \(|\sigma|\) represents the length of the related multi-index, and where the coefficients \(n_{i,j}\) must contain products of polynomials that cancel the bad denominators of the coefficients of the masters \(\tilde{G}_{\{i\}}\) in the original IBP reduction, while \(d_{i,j}\) must be a good denominator. A simple example can be found in the family \(\{134679\}\) of the first topology (see FIG. 1 and Table 1). A bad choice of the basis in the reduction procedure can lead to coefficients whose denominators vanish for \(D=4\), of the form
\[(-5+D)(-4+D)(-3+D)(-10+3D)st^{2}(-s+2t)\] \[\times(-s+4t)(-s+10t)(s^{2}-16st+24t^{2}) \tag{10}\]
or an even worse coefficient can arise with denominator
\[2(-4+D)(s-4t)^{2}t(-38997504s^{18}+159422976Ds^{18})\] \[\times t(-288550464D^{2}s^{18}+\cdots+244\text{ terms}), \tag{11}\]
manifesting moreover threshold singularities. The denominator of eq. (11) is generated by the sector with the MIs \(\tilde{G}_{\{-1,0,1,1,0,1,1,0,0\}}\), \(\tilde{G}_{\{0,0,2,1,0,2,1,0,0\}}\) and \(\tilde{G}_{\{0,0,1,1,0,1,1,0,0\}}\). A better choice of the basis, with the master integrals \(G_{\{1,-1,1,1,1,1,1,0\}}\), \(G_{\{1,0,1,1,1,1,1,1\}}\), \(G_{\{0,0,1,1,1,1,1,1\}}\), \(G_{\{0,0,1,1,1,1,1,1,1\}}\), can avoid this problem and produce a simpler result of the total amplitude for the first topology:
\[\mathcal{A}_{1}^{\{134679\}}=y_{t}^{6} \left[t\right. \left(4G_{\{0,0,1,1,1,0,1,1\}}+2G_{\{0,0,1,1,1,0,1,1\}}\right.\] \[-4G_{\{0,0,1,1,1,1,1,0,1\}}-4G_{\{1,-1,1,0,1,1,1,1\}}\] \[+2G_{\{1,-1,1,1,1,1,0,1\}}+2G_{\{1,-1,1,1,1,1,1,0\}}\] \[+4G_{\{1,0,0,0,1,1,1,1,1\}}-4G_{\{1,0,0,1,1,1,0,1\}}\] \[+2G_{\{1,0,0,1,1,1,1,1,0\}}-4G_{\{1,0,1,1,0,1,0,1,1\}}\] \[+4G_{\{1,0,1,1,0,1,0,1,0\}}-4G_{\{1,0,1,0,1,1,1,0\}}\] \[+2G_{\{1,0,1,1,1,-1,1,1\}}-2G_{\{1,0,1,1,1,1,0,0,1\}}\] \[-2G_{\{1,0,1,1,1,1,1,0,0\}}+2G_{\{1,0,1,1,1,1,1,1\}}\] \[+t^{2}\left(8G_{\{0,0,1,1,1,1,1,1\}}+8G_{\{1,0,0,1,1,1,1,1\}}\right.\] \[+8G_{\{1,0,0,1,1,1,1,1\}}-16G_{\{1,0,1,0,1,1,1,1\}}\] \[+8G_{\{1,0,1,1,1,0,1,1,1\}}+8G_{\{1,0,1,1,1,1,0,1\}}\] \[\left.-16G_{\{1,0,1,1,1,1,1,0,1\}}+8G_{\{1,0,1,1,1,1,1,1\}}\right)\] \[+t^{3}\,32G_{\{1,0,1,1,1,1,1,1\}}\big{]}\] \[+sy_{t}^{6} \left[t\right. \left(-2G_{\{1,0,1,0,1,1,1,1,1\}}+4G_{\{1,0,1,0,1,1,1\}}\right.\] \[-2G_{\{1,0,1,1,1,0,1,1,1\}}-2G_{\{1,0,1,1,1,0,1\}}\] \[\left.+4G_{\{1,0,1,1,1,1,0,1\}}-2G_{\{1,0,1,1,1,1,1,0\}}\right.\] \[\left.-t^{2}\ 8G_{\{1,0,1,1,1,1,1,1\}}\right] \tag{12}\]
without pathological denominators. Note that master integrals contain 9 indices because \(D_{0}\) is omitted in the
\begin{table}
\begin{tabular}{|c|c|} \hline Topology & Propagator \\ \hline \hline
1 & \(\{134679\}\) \\ \hline
2 & \(\{1278\}\), \{12378\} \\ \hline
3 & \(\{1379\}\), \{123789\}, \{134679\} \\ \hline
4 & \(\{24589\}\) \\ \hline
5 & \(\{258\}\), \{278\}, \{2578\}, \{24589\} \\ \hline
6 & \(\{125678\}\) \\ \hline
7 & \(\{17\}\), \{147\}, \{157\}, \{1457\} \\ \hline
8 & \(\{17\}\), \{127\}, \{157\}, \{1257\} \\ \hline
9 & \(\{123790\}\) \\ \hline \end{tabular}
\end{table}
Table 1: Integral families. An integral family is represented with a list \(\{ijk...\}\). Each number in the list gives the position “\(j\)” of a massive propagator \(D_{j}^{-1}\). The missing propagators are massless.
planar topologies, while \(D_{8}\) is removed in non-planar diagrams. Analogous simple expressions have been derived for topologies 2, 4 and 6; the results for the amplitudes \(\mathcal{A}_{3}\), \(\mathcal{A}_{5}\), \(\mathcal{A}_{7}\), \(\mathcal{A}_{8}\) and \(\mathcal{A}_{9}\) are instead somewhat lengthy. All the amplitudes can be consulted at the following link [https://github.com/fisicateoticAUDP/HiggsSM](https://github.com/fisicateoticAUDP/HiggsSM), together with the list of good master integrals, the useful IBP reductions and the main Mathematica routines implemented to carry out this computation. In particular, the planar diagrams can be reduced to a superposition of 212 MIs, while the non-planar diagrams can be expressed in terms of 82 masters. Even if a good basis of MIs could be found with the help of Sabbah's theorem in this computation, when the number of energy scales is increased the coefficients of the master integrals get even worse and make any IBP reduction procedure inefficient. This kind of problem also appears in beyond-the-SM theories, as is the case for the SUSY calculations of \(M_{h}\), where the analogous contribution at order \(y_{t}^{6}\) is missing [44] and at least one additional scale (the squark mass scale) has to be included. Analytical approaches where an IBP reduction can be avoided and the amplitudes can be directly evaluated for an arbitrary number of energy scales, as is done for instance with the Loop-Tree Duality technique [45; 46], could be an interesting alternative.
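The criterion used to select a "good" basis, namely that the coefficient denominators do not vanish as \(D\to 4\), is easy to test symbolically. The sketch below does so for the denominator quoted in eq. (10); it is only meant to illustrate the check, not the actual basis-change machinery built on FIRE.

```
# Illustration of the "good basis" criterion: the denominator of eq. (10)
# vanishes at D = 4 because of the (-4 + D) factor, so it is a bad denominator.
import sympy as sp

D, s, t = sp.symbols("D s t")

bad = ((-5 + D)*(-4 + D)*(-3 + D)*(-10 + 3*D)*s*t**2*(-s + 2*t)
       *(-s + 4*t)*(-s + 10*t)*(s**2 - 16*s*t + 24*t**2))
good = (-3 + D)*(-10 + 3*D)*s*t**2          # an example without a zero at D = 4

print(bad.subs(D, 4))    # 0: rejected, finite parts of the MIs would be needed
print(good.subs(D, 4))   # nonzero polynomial in s, t: acceptable
```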
## IV Numerical analysis
In this section we discuss the numerical evaluation of the three-loop Higgs self-energy corrections at O(\(y_{t}^{6}\)) obtained after summing the amplitudes \(\mathcal{A}_{j}\) of the 21 families reported in Table 1. The final amplitude of the genuine three-loop 1PI Higgs self energy
\[\Sigma_{hh}^{(3l)}(s,Q,M_{t},y_{t})=\sum_{j}\mathcal{A}_{j}, \tag{13}\]
requires the evaluation of 294 MIs which are functions of the top quark mass \(M_{t}\) and the squared external momentum \(s\) of the self-energies. We set the value of the external momentum at the experimental central value of the Higgs boson mass \(M_{h}\), \(\sqrt{s}=125.09\) GeV [47]. In order to numerically generate the Laurent \(\varepsilon\)-expansion of each master integral, we have used the code FIESTA 5.0[48] which implements the sector decomposition approach. The expansion goes up to order \(\varepsilon^{0}\); the evanescent terms of order \(\varepsilon^{n}\) with \(n>0\) are not needed since the coefficients of the good master integrals do not contain poles at \(D=4\). Besides, the evaluation of the amplitude has to include the evolution of the top Yukawa coupling \(y_{t}\) and the mass parameter \(M_{t}\) as a function of the energy scale \(Q\) in the \(\overline{MS}\) scheme.
In this analysis we use the full three-loop \(\overline{MS}\) renormalization group equations (RGEs) of the SM parameters [49; 50; 51; 52; 53; 54; 55; 56] plus the \(O(\alpha_{s}^{5})\) QCD contributions to the strong coupling beta function [57; 58; 59; 60] and the \(O(\alpha_{s}^{5})\) QCD contributions to the beta functions of the Yukawa couplings [61; 62; 63]. This is in order to obtain the running of \(y_{t}\) from 10 to 500 GeV as is shown in FIG. 2. To draw the evolution we chose the initial benchmark model point specified on the top of the plot, which yields at \(Q_{0}=0.1731\) TeV the central values of the SM masses (\(M_{h}=125.1\) GeV, \(M_{t}=173.1\) GeV, etc.) as given in the last edition of the Review of Particle Properties [64].
The next plots also follow this boundary condition.
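The analysis relies on the full three-loop SM RGEs quoted above. As a much cruder illustration of how the running in FIG. 2 is obtained, the sketch below integrates only the textbook one-loop beta functions for \(y_t\) and \(g_s\) (all other couplings neglected), with illustrative boundary values at \(Q_0=173.1\) GeV; it is not the RGE setup actually used here.

```
# Toy one-loop running of y_t and g_s (all other couplings neglected).
# 16*pi^2 dy_t/dlnQ ~ y_t*(9/2*y_t^2 - 8*g_s^2),  16*pi^2 dg_s/dlnQ = -7*g_s^3 (n_f = 6).
import numpy as np
from scipy.integrate import solve_ivp

def beta(lnQ, y):
    yt, gs = y
    pref = 1.0 / (16 * np.pi**2)
    return [pref * yt * (4.5 * yt**2 - 8 * gs**2),
            -7 * pref * gs**3]

Q0, yt0, gs0 = 173.1, 0.94, 1.16        # illustrative boundary values at Q0 [GeV]
lnQ = np.linspace(np.log(60.0), np.log(500.0), 200)

sol = solve_ivp(beta, (np.log(Q0), lnQ[-1]), [yt0, gs0], dense_output=True)
yt_up = sol.sol(lnQ[lnQ >= np.log(Q0)])[0]
# Running downwards from Q0 to 60 GeV is done analogously with a reversed span.
```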
On the other hand, the top quark pole mass is evolved in the tadpole-free \(\overline{MS}\) scheme with the help of SMDR, as shown in FIG. 3, including the pure QCD 1-loop [65], 2-loop [66], 3-loop [67] and 4-loop [68; 69; 70; 71] contributions plus the non-QCD 1-loop, mixed EW-QCD 2-loop and full 2-loop EW [72] corrections to the top quark mass. Those contributions have also been computed in different renormalization schemes [73; 74; 75]; however, for our first numerical analysis of the Higgs corrections we use the black curve in FIG. 3, which contains all the contributions together in the tadpole-free scheme. A discussion of the differences between all the schemes for the running top quark mass is necessary, as is emphasized in [75], but this is left for a future publication. The other lines represent the theoretical predictions of \(M_{t}\) at different perturbative orders and are shown to illustrate that the pure QCD predictions have a very large scale dependence of a few GeV when \(Q\) is varied from 60 to 500 GeV; therefore the EW corrections cannot be neglected and must be included in the numerical analysis, as has already been pointed out in [72; 73; 74; 75], since our amplitudes are sensitive to the precise value of \(M_{t}\). When the full 2-loop EW contribution is added, the renormalization scale dependence decreases by about 97% in the range of \(Q\) considered.
Finally, we study the numerical behaviour of the resulting new contributions to the Higgs self-energies
Figure 2: Renormalization group evolution of the top Yukawa coupling \(y_{t}\) in the \(\overline{MS}\) scheme including the full 3-loop RGEs for all the SM parameters and the QCD beta functions of \(y_{t}\) and \(\alpha_{s}\) up to 5-loops. Here \(g_{1}\) and \(g_{2}\) stand for the EW gauge couplings, \(v\) is the Higgs vev and \(\lambda\) represents the quartic Higgs self-coupling.
containing all momentum dependence which are obtained from the difference
\[\Delta M_{h}=\text{Re}\left[\Sigma_{hh}^{(3l)}(p^{2}=M_{h}^{2})-\Sigma_{hh}^{(3l)} (p^{2}=0)\right]. \tag{14}\]
In FIG. 4, \(\Delta M_{h}\) is shown as a function of the renormalization scale from \(Q=60\) GeV to \(Q=500\) GeV. The plot includes the real contributions from the finite part (black curve) and, separately, the coefficients of the simple (yellow) \(\frac{1}{\varepsilon}\), double (green) \(\frac{1}{\varepsilon^{2}}\) and triple (red) \(\frac{1}{\varepsilon^{3}}\) poles. Note from FIG. 2 that the coupling \(y_{t}\) leaves the perturbative regime below \(Q=60\) GeV and therefore this region was excluded from the analysis.
The coefficients of the poles have a mild dependence on the renormalization scale: the triple pole coefficient varies by about \(0.5\) MeV for \(60\text{ GeV}\leq Q\leq 500\) GeV; in this case the dependence on \(Q\) is not explicit, the variation being due to the RG evolution of \(y_{t}\) and \(M_{t}\). The double pole coefficient contains an explicit logarithmic dependence on \(Q\), implying a variation of about \(1.5\) MeV. The simple pole coefficient contains a squared logarithmic dependence on \(Q\), which amounts to a variation of about \(6.2\) MeV. Finally, the finite part has a size of about \(51\) MeV for \(Q=173.1\) GeV and contains a significant renormalization scale dependence: it decreases by about \(47\%\) over the complete \(Q\) range considered. In particular, when \(Q\) is varied around the EW scale from \(100\) GeV to \(300\) GeV the correction is reduced by about \(16\) MeV, which is of the same order of magnitude as the anticipated experimental precision at the HL-LHC (\(10-20\) MeV [76]) and at the future colliders ILC (\(14\) MeV [77]) and FCC-ee (\(11\) MeV [78]). The inclusion of the new three-loop corrections \(\Delta M_{h}\) in the complex pole mass, \(s_{pole}^{h}\), of the SM Higgs boson and the further analysis of the numerical impact on the theoretical prediction of the Higgs boson pole mass are non-trivial tasks. They require the iterative evaluation of the MIs and amplitudes at \(s=\text{Re}(s_{\text{pole}}^{h})\), instead of the naive evaluation at \(s=M_{h}^{2}\), and an additional prescription for the renormalization of the UV sub-divergences in order to get the correct values of the \(M_{h}\) predictions at three-loop level. The numerical evaluation of the Higgs boson pole mass, including the pure three-loop corrections presented in this work, will be done in a future analysis.
## V Conclusions and perspectives
In this article we have presented a new contribution to the SM Higgs boson mass perturbative corrections coming from the pure three-loop Higgs self-energies at order \(y_{t}^{6}\), including the external momentum dependence. This implies a Feynman diagrammatic evaluation of eight planar and one non-planar topologies with only cubic vertices and a fermion loop in the internal lines. The Higgs self-energies do not contain the tadpole contributions since the renormalized vev of the Higgs field is considered as the minimum of the Higgs effective potential. As a consequence, the considered contributions have a good perturbative behaviour but acquire an additional gauge dependence; we have used the Landau gauge in order to reduce the number of energy scales in the Feynman amplitudes. Besides, we worked in the gaugeless and non-light fermion limits, where the EW vector boson masses and all the light fermion masses are disregarded; thus, the final result is expressed in terms of the top quark mass \(M_{t}\) and the Higgs boson mass \(M_{h}\). The DREG procedure was adopted in order to regularize the Feynman amplitudes associated with the Higgs self-energies; in particular, a non-cyclicity prescription was applied to deal with the regularization of the \(\gamma_{5}\) matrix. The resulting regularized amplitudes are expressed in terms of thousands of scalar integrals which are reduced to a superposition of a basis of master integrals through the IBP and LI identities implemented in the code Reduze. This automated
Figure 4: Renormalization group scale dependence coming from the external momentum contribution to the three-loop Higgs self-energy correction at order \(y_{t}^{6}\) in the SM. The evolution of the finite part and the coefficients of the simple, double and triple poles have been included.
Figure 3: Evolution of the top quark mass \(M_{t}\) as a function of the renormalization scale \(Q\) in the \(\overline{MS}\) scheme. The different perturbative contributions are shown. In particular, the black line contains the 4-loop QCD and the full 2-loop EW corrections.
reduction leads to a set of master integrals whose coefficients are large and contain kinematic singularities and non-kinematic divergences at \(D=4\) space-time dimensions. The above mentioned singular behaviour, as well as the length of the expressions of the coefficients, gets worse when the number of scales is increased. However, we have shown that those divergences are spurious and can be removed with a suitable redefinition of the basis, whose existence is guaranteed by Sabbah's theorem. The expressions obtained for the amplitudes of the involved topologies are thus linear combinations of a set of 212 planar and 82 non-planar good MIs with coefficients that do not contain poles at \(D\to 4\); this has the advantage that the evanescent terms of the Laurent expansion of the masters are not required. A first numerical analysis allows us to measure the size of the new momentum dependent Higgs self-energy contributions, showing a value of \(\sim 51\) MeV at the benchmark model point which reproduces the experimental values of the SM masses, but also displaying a significant renormalization scale dependence of a few tens of MeV, which is of the same order of magnitude as the expected precision at the upcoming collider experiments.
Several research perspectives are left open for future work. The inclusion of the new momentum dependent corrections in the complex mass pole of the Higgs propagator, and the study of the numerical impact on the theoretical prediction together with the perturbative stability of the \(\overline{MS}\) renormalization of the Higgs mass, will be faced in a forthcoming publication. A further numerical analysis including the different renormalization prescriptions for the top quark mass must also be considered. Besides, the routines developed for this computation will be extended to include the quantum corrections to the SM gauge boson masses \(M_{Z}\) and \(M_{W}\) at the same perturbative order considered here. An extension of the momentum dependent Higgs self-energies at order \(y_{t}^{6}\) to include supersymmetric contributions coming from the stop sector of the MSSM in the Dimensional Reduction scheme [27] is also under consideration. The theoretical uncertainties in the MSSM scenarios amount to between 1 and 5 GeV, which is one order of magnitude greater than the experimental error in \(M_{h}\); in this context the calculation of the missing higher order corrections is mandatory. This implies, nevertheless, the inclusion of at least one additional scale, the SUSY scale, and therefore we finally point out that an alternative approach to the IBP reductions must be considered to deal with the problem of the large divergent MI coefficients; this is valid in general for higher order perturbative calculations involving an arbitrary number of energy scales.
|
2303.18040 | Investigating the amplitude and rotation of the phase spiral in the
Milky Way outer disc | Context: With the data releases from the astrometric space mission Gaia, the
exploration of the structure of the Milky Way has developed in unprecedented
detail and unveiled many previously unknown structures in the Galactic disc and
halo. One such feature is the phase spiral where the stars in the Galactic disc
form a spiral density pattern in the $Z-V_Z$ plane. Aims: We aim to
characterize the shape, rotation, amplitude, and metallicity of the phase
spiral in the outer disc of the Milky Way. This will allow us to better
understand which physical processes caused the phase spiral and can give
further clues to the Milky Way's past and the events that contributed to its
current state. Methods: We use Gaia data release 3 (DR3) to get full position
and velocity data on approximately 31.5 million stars, and metallicity for a
subset of them. We then compute the angular momenta of the stars and develop a
model to characterise the phase spiral in terms of amplitude and rotation at
different locations in the disc. Results: We find that the rotation angle of
the phase spiral changes with Galactic azimuth and Galactocentric radius,
making the phase spiral appear to rotate about $3^\circ$ per degree in Galactic
azimuth. Furthermore, we find that the phase spiral in the $2200 - 2400$ kpc km
s$^{-1}$ range of angular momentum is particularly strong compared to the phase
spiral that can be observed in the solar neighbourhood. The metallicity of the
phase spiral appears to match that of the Milky Way disc field stars.
Conclusions: We created a new model capable of fitting several key parameters
of the phase spiral. We have been able to determine the rotation rate of the
phase spiral and found a peak in the phase spiral amplitude which manifests as
a very clear phase spiral when using only stars with similar angular momentum. | S. Alinder, P. J. McMillan, T. Bensby | 2023-03-31T13:20:21Z | http://arxiv.org/abs/2303.18040v3 | # Investigating the amplitude and rotation of the phase spiral
###### Abstract
Context:With the data releases from the astrometric space mission _Gaia_, the exploration of the structure of the Milky Way has developed in unprecedented detail and unveiled many previously unknown structures in the Galactic disc and halo. One such feature is the _Gaia_ phase spiral where the stars in the Galactic disc form a spiral density pattern in the \(Z-V_{Z}\) plane. Many questions regarding the phase spiral remain, particularly how its amplitude and rotation change with position in the Galaxy.
Aims:We aim to characterize the shape, rotation, amplitude, and metallicity of the phase spiral in the outer disc of the Milky Way. This will allow us to better understand which physical processes caused the phase spiral and can give further clues to the Milky Way's past and the events that contributed to its current state.
Methods:We use _Gaia_ data release 3 (DR3) to get full position and velocity data on approximately 31.5 million stars, and metallicity for a subset of them. We then compute the angular momenta of the stars and develop a model to characterise the phase spiral in terms of amplitude and rotation at different locations in the disc.
Results:We find that the rotation angle of the phase spiral changes with Galactic azimuth and Galactocentric radius, making the phase spiral appear to rotate with these quantities. Furthermore, we find that the phase spiral in the \(2200-2400\) kpc km s\({}^{-1}\) range of angular momentum is particularly strong compared to the phase spiral that can be observed in the solar neighbourhood. The metallicity of the phase spiral appears to match that of the Milky Way disc field stars.
Conclusions:We created a new model capable of fitting several key parameters of the _Gaia_ phase spiral. We have been able to determine the rotation rate of the phase spiral to be about \(2^{\circ}\) per degree in Galactic azimuth. We find a peak in the amplitude of the phase spiral at \(L_{Z}\approx 2300\) km kpc s\({}^{-1}\) which manifests as a very clear phase spiral when using only stars with similar angular momentum. These results provide insights into the physical processes that led to the formation of the phase spiral and contribute to our understanding of the Milky Way's past and present state.
## 1 Introduction
How large spiral galaxies form and which processes contribute to their formation are open questions. By studying the structure of our own galaxy, the Milky Way, we can find traces of these processes and start to piece together its formation history. However, detailed structures that carry signatures of galaxy evolution and accretion events tend to phase mix and disappear with time. The outer disc of the Galaxy has longer dynamical timescales, meaning that dynamical and physical structures there remain for longer times (Freeman & Bland-Hawthorn, 2002). Therefore, the outer Galactic disc is a good place to study when trying to answer questions about the Milky Way's past.
The European Space Agency's _Gaia_ mission (Gaia Collaboration et al., 2016) has provided accurate astrometric data for almost two billion stars in the Milky Way, and its different data releases (DR1 Gaia Collaboration et al., 2016, DR2 Gaia Collaboration et al., 2018, 2018) have allowed us to reveal ever more detailed and delicate structures in our Galaxy. Examples include the _Gaia_-Enceladus-Sausage, the remnants of an ancient merger with a massive galaxy (Belokurov et al., 2018; Helmi et al., 2018); the Radcliffe wave, a large nearby structure of gas that contains several stellar nurseries (Alves et al., 2020); the three-dimensional velocities of stars in the satellite dwarf galaxy Sculptor, allowing a close look at the kinematics of a dark matter dominated system (Massari et al., 2018); many details about the structure of the Galactic halo leading to insights into its formation (Helmi et al., 2017); and the phase spiral (or 'snail shell'), a spiral pattern that was discovered by Antoja et al. (2018) in the phase plane defined by the vertical distance from the Galactic plane (\(Z\)) and the vertical velocity component (\(V_{Z}\)).
The existence of the phase spiral physically means that the distribution of the \(V_{Z}\)-velocities for the stars at certain \(Z\)-positions is uneven in a way that looks like a spiral when plotted on a phase space diagram. For example, when looking at stars in the solar neighbourhood with \(Z\approx 0\) pc, there are more stars with \(V_{Z}\approx-20\) km s\({}^{-1}\) and fewer stars with \(V_{Z}\approx-15\) km s\({}^{-1}\) than expected from a smooth symmetrical distribution. The phase spiral was mapped within a Galactocentric range of \(7.2<R/\) kpc \(<9.2\) and within \(15^{\circ}\) of the anti-centre direction (opposite to the Galactic centre) by Bland-Hawthorn et al. (2019), to \(6.6<R/\) kpc \(<10\) by Laporte et al. (2019), to \(6.34<R/\) kpc \(<12.34\) by Wang et al. (2019), and Xu et al. (2020) extended the furthest outer detection to 15 kpc from the Galactic centre. When investigations and simulations of the phase spiral were done across a larger range of positions in the Galaxy, these studies found that the phase spiral changes shape with Galactocentric radius. Close to the solar radius, it has a greater extent in the \(V_{Z}\) direction, and
at greater Galactocentric radii it has a larger extent in the \(Z\) direction. This increase in vertical extent at greater Galactocentric distances is due to the change in gravitational potential and a reduction in vertical restoring force.
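As a purely illustrative aside, a number-count map of the \(Z\)-\(V_Z\) plane of the kind in which the spiral appears can be built with a simple two-dimensional histogram. The sketch below uses random placeholder data and is not the procedure used for the figures in this paper.

```
# Minimal sketch of a Z - V_Z phase-plane density map for a stellar sample.
# Z in kpc and V_Z in km/s would normally be precomputed arrays for real stars.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
Z = rng.normal(0.0, 0.4, 100_000)      # placeholder values
V_Z = rng.normal(0.0, 20.0, 100_000)   # placeholder values

counts, z_edges, vz_edges = np.histogram2d(
    Z, V_Z, bins=[np.linspace(-1, 1, 101), np.linspace(-60, 60, 101)])

plt.pcolormesh(z_edges, vz_edges, counts.T, cmap="viridis")
plt.xlabel("Z [kpc]")
plt.ylabel("V_Z [km/s]")
plt.show()
```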
The phase spiral is thought to be a response of the Galactic disc to a perturbation that pushed it out of equilibrium. This response, over time, winds up in the \(Z\)-\(V_{Z}\) plane into a spiral due to phase-mixing. In this simple picture, the time since the perturbation determines how wound the phase spiral has become, while any variation with Galactic azimuth, such as a rotation of the phase spiral in the \(Z\)-\(V_{Z}\) plane, corresponds to a difference in the initial perturbation felt by stars at different azimuths. Wang et al. (2019) looked at the phase spiral at different Galactic azimuths and found that the amplitude of the spiral pattern changes. Widmark et al. (2022) show that the orientation of the phase spiral changes with Galactic azimuth and that the difference across 180\({}^{\circ}\) of the Galactic azimuth in a heliocentric system will be about 140\({}^{\circ}\). They show a very slight positive change in angle with radial distance, but only in cells they have marked as less reliable (see Widmark et al. 2022, Figs. D.1 and D.2 for details). Bland-Hawthorn and Tepper-Garcia (2021) show the rotation of the phase spiral at different galactic azimuths in their N-body simulation of the effects of the passage of the Sagittarius dwarf galaxy on the disc. The rotation of the phase spiral is an important part of any attempt at modelling it directly, and an important property to capture in any simulation because it is tied to the potential of the disc. In this study, we will present measurements of the propagation of the rotation angle of the phase spiral.
The chemical composition of the phase spiral was investigated by Bland-Hawthorn et al. (2019) using elemental abundances from the GALAH survey (Buder et al., 2018). They found no evidence that the phase spiral is a single-age population (such as a star cluster or similar) because the trend in metallicity is smoothly varying. This indicates that the stars in the phase spiral are part of the general population of the Milky Way disc. An (2019), using data from APOGEE DR14 (Abolfathi et al., 2018), examined the metallicity of the disc and found an asymmetry in the \(Z\)-direction, with higher mean metallicity above the plane of the Galaxy than below. They explain this asymmetry as being caused by the phase spiral as it would push stars to greater \(Z\)-distances. These results are reported as being in agreement with the findings of Bland-Hawthorn et al. (2019). In this study, we use global metallicity data on a large number of stars to investigate the chemical properties of the phase spiral.
Several theories for the origin of the phase spiral exist in the literature. Among the proposed scenarios, the most popular one is that the phase spiral was caused by gravitational interactions between the Milky Way and a massive external object. The primary observational evidence for this scenario is the presence of the Sagittarius dwarf galaxy (Ibata et al., 1994) which is undergoing disruption by the Milky Way (Binney and Schonrich, 2018; Laporte et al., 2019; Bland-Hawthorn et al., 2019). If the Sagittarius dwarf galaxy is the cause, then the properties of the phase spiral and the properties of the Sagittarius dwarf galaxy at the time when the interaction took place are linked and knowledge of one can be used to derive the properties of the other, for example, the mass history of the Sagittarius dwarf galaxy, and the time of impact (Bland-Hawthorn and Tepper-Garcia, 2021). Darling and Widrow (2019) discusses the possibility that the phase spiral is caused by bending waves (physical displacement of stars). Several phenomena can cause these waves, including dwarf galaxy impacts and gravitational effects from the bar or spiral structure of the Galaxy. Frankel et al. (2022) and Antoja et al. (2022) both find that a simple model with a single cause for the perturbation fails to explain the observations and calls for more complex models. Hunt et al. (2022); Bennett et al. (2022) and Tremaine et al. (2023) suggest, in different ways, that the formation history of the phase spiral cannot be explained with a single impact but perhaps rather originates from several small disturbances.
The primary goal of this paper is to map the rotational angle, amplitude, and chemical composition of the phase spiral. By using the most recent data from _Gaia_, DR3, we aim to investigate these properties in higher definition than before. As we learn more about the extent, amplitude, rotation, and shape of the phase spiral, we might be able to strengthen the evidence for one of the proposed formation scenarios, leading to a greater understanding of the formation history of the Milky Way. We start by presenting how the stellar sample is selected in Sect. 2. In Sect. 3 we develop the model that we use to analyse the phase spiral and how it changes across the Galactic disc. In Sect. 3.6 we examine the chemical composition of the phase spiral and in Sect. 4 we discuss our results. Finally, we summarise our findings and conclusions in Sect. 5.
## 2 Data
To study the phase spiral we need stars with known three-dimensional velocities. We use _Gaia_ DR3 (Gaia Collaboration et al., 2016, 2022) to get positions, proper motions, and radial velocities for the stars. The distances were calculated by Bailer-Jones et al. (2021) who used a Bayesian approach with a direction-dependent prior on distance, the measured parallaxes, and _Gaia_ photometry, exploiting the fact that stars of different colours have different ranges of probable absolute magnitudes. The ADQL-query used to retrieve this data from the public _Gaia_ database1 was:
Footnote 1: [https://gea.esac.esa.int/archive/](https://gea.esac.esa.int/archive/)
SELECT source_id, ra, dec, pmra, pmdec, r_med_photogeo, radial_velocity FROM external.gaiaedr3_distance JOIN gaiadr3.gaia_source USING (source_id) WHERE parallax_over_error>=3 and radial_velocity IS NOT NULL and r_med_photogeo IS NOT NULL

This query resulted in 31,552,449 stars being selected. We require parallax_over_error>=3 as this removes the most uncertain distance measurements.
For the chemical investigation, we use the global metallicity [M/H] data from _Gaia_ DR3 RVS spectra (Recio-Blanco et al., 2022) with the ADQL-query:
SELECT source_id, mh_gspspec, flags_gspspec FROM gaiadr3.astrophysical_parameters JOIN gaiadr3.gaia_source USING (source_id) WHERE parallax_over_error>=3 and teff_gspspec > 3500 and logg_gspspec BETWEEN 0 and 5 and teff_gspspec_upper - teff_gspspec_lower < 750 and logg_gspspec_upper - logg_gspspec_lower < 1 and mh_gspspec_upper - mh_gspspec_lower < 5 and mh_gspspec IS NOT NULL and radial_velocity IS NOT NULL

This query resulted in 4,516,021 stars being selected. We use quality cuts as recommended by Recio-Blanco et al. (2022) combined with those used for the main sample. These cuts remove
stars with low temperatures as these stars are known to have complex, crowded spectra and stars with \(\log(g)\) and \(T_{\rm eff}\) not between the upper and lower confidence levels. We also filter out the least reliable K and M-type giant stars using the supplied flag as there exists a parameterisation problem for cooler and metal-rich stars of this type. The final sample for the chemical investigation consists of this table combined with the previous one to get positions, velocities and spectral data in the same sample and is 4,303,484 stars after quality cuts.
For reasons given in Sect. 3, we will base our analysis on samples defined by the angular momenta of the stars. The angular momentum2 is computed as \(L_{Z}=R\,|V_{\phi}|\).
Footnote 2: We use a Galactocentric coordinate system centred on the Galactic centre with the Sun on the (negative) \(X\)-axis at a distance of 8.122 kpc and a height of 20.8 pc with the \(Y\)-axis pointing towards \(l=90^{\circ}\) and the \(Z\)-axis pointing to \(b=90^{\circ}\). Galactic azimuth (\(\phi\)) is decreasing in the direction of Galactic rotation and the Sun is located at \(\phi=180^{\circ}\). The velocity of the Sun is \([V_{R,\odot}=-12.9,\,V_{\phi,\odot}=-245.6,\,V_{Z,\odot}=-7.78]\) km s\({}^{-1}\)(Reid & Brunthaler, 2004; Drimmel & Poggio, 2018; GRAVITY Collaboration et al., 2018; Bennett & Bovy, 2019). For the computations and definitions of coordinates, we use Astropy v5.2 (Astropy Collaboration et al., 2022).
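A minimal sketch of this coordinate set-up, using Astropy's Galactocentric frame, is given below. The function name and input arrays are illustrative, and the solar motion is entered with Astropy's default sign convention (positive \(V_{\phi}\) in the direction of rotation) rather than the azimuth convention of the footnote; this is an illustration, not the pipeline used for the paper.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord, Galactocentric, CartesianDifferential

def galactocentric_Lz(ra, dec, dist_pc, pmra, pmdec, rv):
    """Transform ICRS observables to the Galactocentric frame and return
    R, V_phi and L_Z = R * |V_phi| for each star."""
    icrs = SkyCoord(ra=ra * u.deg, dec=dec * u.deg,
                    distance=dist_pc * u.pc,
                    pm_ra_cosdec=pmra * u.mas / u.yr,
                    pm_dec=pmdec * u.mas / u.yr,
                    radial_velocity=rv * u.km / u.s)
    frame = Galactocentric(galcen_distance=8.122 * u.kpc,
                           z_sun=20.8 * u.pc,
                           galcen_v_sun=CartesianDifferential(
                               [12.9, 245.6, 7.78] * u.km / u.s))
    gc = icrs.transform_to(frame)
    x, y = gc.x.to(u.kpc), gc.y.to(u.kpc)
    vx, vy = gc.v_x.to(u.km / u.s), gc.v_y.to(u.km / u.s)
    R = np.sqrt(x ** 2 + y ** 2)
    v_phi = (x * vy - y * vx) / R      # azimuthal velocity component
    L_z = R * np.abs(v_phi)            # kpc km/s, as used for the L_Z bins
    return R, v_phi, L_z
```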
The distribution of the stars in the Galactocentric Cartesian \(X\)-\(Y\) plane is shown in Fig. 1. We can see that the sample mostly contains stars with Galactocentric distances \(5-12\) kpc. This allows us to study the phase spiral in regions far from the solar neighbourhood and measure how it changes with location. The top row shows the full sample to the left and the sample of stars with [M/H] values to the right. The bottom row is split into three bins with different angular momentum. In the bin with the highest angular momentum (right-most panel), most stars are \(\sim\)2 kpc further out than those in the low angular momentum bin (left-most panel).
## 3 The Gaia phase spiral
Figure 2 shows the density of stars within \(5^{\circ}\) of the anti-centre direction plotted in the \(L_{Z}\)-\(V_{Z}\) plane. The thick line is the Galactic disc and the "\(V_{Z}\) feature 2" at \(L_{Z}\approx 2700\) kpc km s\({}^{-1}\) is the bifurcation which was discovered by Gaia Collaboration et al. (2021) and investigated by McMillan et al. (2022) who found that it may be an effect of the passage of the Sagittarius dwarf galaxy. Several other features can be seen, but a particularly clear one that we focus on is the wrinkle labelled "\(V_{Z}\) feature 1" at \(L_{Z}\approx 2300\) kpc km s\({}^{-1}\) and the apparent overdensity centred on \((L_{Z},V_{Z})=(2300\) kpc km s\({}^{-1}\), \(-20\) km s\({}^{-1}\)). These regions and features are marked with lines in Fig. 2. Finding this seemingly isolated overdensity of stars sitting below the thick line was surprising since the stars otherwise show a smooth falloff from the centre in the vertical directions. As we will show, the cause for the highlighted overdensity and \(V_{Z}\) feature 1 in Fig. 2 seems to be that, in the range \(2200<L_{Z}/\) kpc km s\({}^{-1}<2400\), a higher proportion of the stars are part of the phase spiral, giving the stars in that region an unusual \(V_{Z}\) distribution and a very prominent phase spiral.
### Selection of stars
For an investigation of a structure of stars like the phase spiral, a velocity-dependent selection will produce a more sharply defined phase spiral than a position-dependent selection because the phase spiral is a dynamical structure (Bland-Hawthorn et al., 2019; Li, 2021; Antoja et al., 2022). Samples based on position will contain stars with a wide range of velocities and orbital frequencies, meaning one will indirectly sample a large part of the Galaxy which can be useful when addressing other research questions (Hunt et al., 2021; Gandhi et al., 2022), such as those in Bensby et al. (2014) where relatively nearby stars were sampled
Figure 1: Top middle panel: Number density of stars used in our investigation in the Galactocentric Cartesian \(X\)-\(Y\) plane. This panel contains stars in the \(2000<L_{Z}/\) kpc km s\({}^{-1}<2600\) range. Top right panel: Number density of stars within our sample that have global metallicity [M/H] values. Bottom panels: Number density of the selected stars in the used angular momentum bins in the \(X\)-\(Y\) plane. The circled red dot is the location of the Sun in all panels. The bin size for all panels is 200 pc by 200 pc.
to map the age and abundance structure of several components of the Milky Way.
Here we do a quick comparison of samples selected by Galactocentric position and by angular momentum. Using the Galactic potential from McMillan (2017), we compute the guiding centres for hypothetical stars with \(L_{Z}=2200\) kpc km s\({}^{-1}\) and \(L_{Z}=2400\) kpc km s\({}^{-1}\) to be \(R\approx 9.5\) kpc and \(R\approx 10.4\) kpc, respectively. The \(Z\)-\(V_{Z}\) phase space for the stars between these Galactocentric radii is shown in Fig. 3a, and the same for stars in the angular momentum range in Fig. 3b. The phase spiral based on stars in the \(9.5<R/\) kpc \(<10.4\) range is visible but less clear, while the phase spiral based on stars in the \(2200<L_{Z}/\) kpc km s\({}^{-1}<2400\) range is more prominent. This is because panel a contains stars that are part of different-looking phase spirals, making the stars fill in the gaps in each other's patterns, whereas panel b mostly contains stars that are part of one single phase spiral, making the pattern clear. Panel a contains a total of 1,045,921 stars, panel b contains 1,348,886 stars.
### Model of the phase spiral
To quantify the properties of the phase spiral as functions of \(R,\phi\) and \(L_{Z}\) we construct a model inspired by those used by Widmark et al. (2021) and Guo et al. (2022). The model is built by creating a smoothed background from observed data, then a spiral-shaped perturbation that, when multiplied by the background, recreates
Figure 3: a: The number density of stars in the \(Z\)-\(V_{Z}\) phase plane in the \(9.5<R/\) kpc \(<10.4\) range. b: The number density of stars in the \(Z\)-\(V_{Z}\) phase plane in the \(2200<L_{Z}/\) kpc km s\({}^{-1}<2400\) range. a shows a less clearly defined phase spiral than b.
Figure 2: Column normalized histogram of star number density in the \(L_{Z}-V_{Z}\) plane in the Galactic outer disc. The region of interest is marked by solid white lines at \(L_{Z}=[2200,2400]\) kpc km s\({}^{-1}\), with dashed white lines at \(L_{Z}=2000\) and \(2600\) kpc km s\({}^{-1}\) marking the areas used for comparisons in Figs. 1, 11, and 13. Features mentioned in the text are also marked. The figure contains all stars in our sample with \(175^{\circ}<\phi<185^{\circ}\), 12,723,513 in total.
the observed distribution of stars in the \(Z\)-\(V_{Z}\) plane. This way, the spiral can be isolated and quantified. In this model, we compute values for the phase distance \(r\) and the phase angle \(\theta\) by using
\[r=r(Z,V_{Z})=\sqrt{Z^{2}+\left(\frac{V_{Z}}{S}\right)^{2}}, \tag{1}\]
\[\theta=\theta(Z,V_{Z})=\arctan\left(\frac{1}{S}\frac{V_{Z}}{Z}\right), \tag{2}\]
where \(S\) is a scale factor which determines the ratio between the axes and is a free parameter in the model. These coordinates are illustrated in Fig. 4 with a simple diagram. A larger value of \(S\) stretches the \(V_{Z}\) axis, thus controlling the axis ratio of the spiral; see the bottom-left panel of Fig. 5. A star experiencing a harmonic motion in \(Z\) will trace a circle in the phase plane for the right value of \(S\), so \(S\) is closely related to the vertical frequency of oscillations in the Galactic disc. We therefore restrict it to \(30<S<70\) where \(S\) is in units of \(\mathrm{kms^{-1}\,kpc^{-1}}\).
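A minimal sketch of Eqs. (1)-(2) in code (the function name is illustrative); `arctan2` is used instead of the plain `arctan` of Eq. (2) so that the phase angle covers the full \(-\pi\) to \(\pi\) range.

```python
import numpy as np

def phase_coords(Z, Vz, S):
    """Phase-plane coordinates of Eqs. (1)-(2).
    Z in kpc, Vz in km/s, S in km/s/kpc (the axis-ratio scale factor)."""
    r = np.sqrt(Z ** 2 + (Vz / S) ** 2)     # phase distance (Eq. 1)
    theta = np.arctan2(Vz / S, Z)           # phase angle   (Eq. 2)
    return r, theta
```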
Starting from Guo et al. (2022)'s discussion of the shape of the phase spiral, we consider the quadratic spiral
\[r=a+b\phi_{s}+c\phi_{s}^{2}, \tag{3}\]
where \(\phi_{s}\) is the angle of the spiral. They claim that an Archimedean spiral3 (\(a=0\), \(c=0\)) fits the data well enough. We, however, found that our model fits better when we do not require that \(c=0\). We can assume \(a=0\) without loss of generality. As we construct the model we will be referring to Fig. 5 for illustrations of the effects of each parameter. Figure 5 contains six panels. Panel A shows the spiral perturbation for a certain set of parameters. Each of the other panels shows the spiral perturbation with one parameter increased and will be referred to as that parameter is introduced. We write the equation for the radial distance of the phase spiral as
Footnote 3: An Archimedean spiral is a spiral with a linear relation between angle and radial distance. Expressed in polar coordinates the spiral traces \(r=b\phi_{s}\).
\[r=b\phi_{s}+c\phi_{s}^{2}, \tag{4}\]
which means
\[\phi_{s}(r)=-\frac{b}{2c}+\sqrt{\left(\frac{b}{2c}\right)^{2}+\frac{r}{c}}. \tag{5}\]
The parameter \(b\) is the linear winding term of the spiral. Higher values of \(b\) mean the spiral winds slower and moves further in \(r\) per turn, see the top-middle panel in Fig. 5. The value of \(b\) has to be positive and by inspection we find it to provide reasonable results between 0.01 and 0.175. \(c\) is the quadratic winding term. It has a similar meaning to \(b\) except it does not act equally at all \(r\), having a smaller effect close to the middle of the spiral and a greater further out, see the top-right panel of Fig. 5. \(c=0\) means that the spiral is Archimedean and its radius has a constant increase with increasing angle. \(c\) has to be positive and, by inspection, we find that by limiting its upper value to 0.005 we get reasonable results.
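Eq. (5) can be evaluated directly as below; this is an illustrative helper (not from the released model) that also falls back to the Archimedean case \(r=b\phi_{s}\) when \(c\) is numerically zero, where Eq. (5) would otherwise divide by zero.

```python
import numpy as np

def spiral_angle(r, b, c):
    """Spiral angle phi_s(r) from Eq. (5), inverting r = b*phi_s + c*phi_s**2."""
    r = np.asarray(r, dtype=float)
    if np.isclose(c, 0.0):
        return r / b                                      # Archimedean limit (c = 0)
    return -b / (2 * c) + np.sqrt((b / (2 * c)) ** 2 + r / c)
```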
Following Widmark et al. (2021) we take the form of the perturbation to be
\[f(r,\Delta\theta)=1-\alpha\cdot\mathrm{mask}(r)\cos(\Delta\theta), \tag{6}\]
where \(\alpha\) is a free parameter of the model that defines the amplitude of the phase spiral. This spiral perturbation can have values in the range \(1+\alpha\) to \(1-\alpha\). If \(\alpha=0\) then the smoothed background is unperturbed by the spiral; if \(\alpha=1\), all stars are part of the spiral. We define \(\Delta\theta\) as the phase angle relative to the peak of the perturbation as a function of phase distance as
\[\Delta\theta=\theta-\phi_{s}(r)-\theta_{0}, \tag{7}\]
where \(\theta_{0}\) is the angle offset, which is a free parameter, giving us
\[f(r,\theta)=1-\alpha\cdot\mathrm{mask}(r)\cos(\theta-\phi_{s}(r)-\theta_{0}). \tag{8}\]
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Name & min & max & Description & Unit \\ \hline \(\alpha\) & 0 & 1 & Amplitude of spiral pattern & – \\ \(b\) & 0.01 & 0.175 & Linear winding parameter & pc rad\({}^{-1}\) \\ \(c\) & 0.0 & 0.005 & Quadratic winding parameter & pc rad\({}^{-2}\) \\ \(\theta_{0}\) & \(-\pi\) & \(\pi\) & Angle offset & rad \\ \(S\) & 30 & 70 & Scale factor & \(\mathrm{kms^{-1}\,kpc^{-1}}\) \\ \(\rho\) & 0 & 0.3 & Inner mask distance & kpc \\ \hline \end{tabular}
\end{table}
Table 1: Free parameters in the model of the phase spiral.
Figure 4: Illustration of the phase-plane coordinates used. \(r\) is the phase distance and \(\theta\) is the phase angle. In this example, \(\theta=45^{\circ}\). The scale factor \(S\) has been chosen such that the \(r\)-vector could be drawn with constant length regardless of angle.
The innermost part of the phase spiral cannot be accurately fitted with this kind of model because the part of the Galactic disc with low \(Z\)-displacement and velocity is subject to small perturbations which wash out the phase spiral. We therefore apply a mask to reduce the strength of the spiral perturbation in this region. Like Widmark et al. (2021), we use the logistic function (a sigmoid function) for our masking function. The logistic function has the property that it is bounded by zero and one, and smoothly (exponentially) changes between them, thereby bringing any value into the zero-to-one range in a naturalistic way. We define the masking function as
\[\text{mask}_{\text{inner}}(r)=\text{sigm}\left(\frac{r-\rho}{0.1\text{ kpc}}\right), \tag{9}\]
where
\[\text{sigm}(x)=\frac{1}{1+e^{-x}}, \tag{10}\]
is the sigmoid function and \(\rho\) is the radius of the mask, which is a free parameter in the model. The mask reduces the impact of the inner part of the spiral by "flattening" it, bringing it closer to one, see the bottom-right panel of Fig. 5. A larger value of \(\rho\) means a larger part of the spiral is flattened. By inspection, we find that this value should be less than 0.3 which we apply as a prior.
We also use an outer mask to reduce the influence of the most distant regions of the phase plane with very few stars. Similarly
Figure 5: Examples of the effects on the spiral perturbation when changing (increasing) the different parameters in the model. Panel A) shows the spiral perturbation for a certain set of parameters. This is taken as the default for the comparison in this figure. Panel B) shows the spiral perturbation with an increased linear winding parameter. Panel C) shows the spiral perturbation with an increased quadratic winding parameter. Note that the inner part of the spiral is still similar to panel A). Panel D) shows the spiral perturbation with an increased phase angle, rotating it half a revolution. Panel E) shows the spiral perturbation with an increased scale factor which increases the \(V_{Z}\)-\(Z\) axis ratio. Panel F) shows the spiral perturbation with the inner mask distance increased which makes the inner parts less distinct.
Figure 6: Example of the process from data to fitted model. a: Data used for the model, a two–dimensional histogram showing number density consisting of 1,396,320 stars, b: Initial background, c: Data / initial background, d: Extracted spiral perturbation, e: Initial fit spiral. f: Final background, g: Data / final background, h: Best fit spiral. See text for details on individual panels. This example consists of stars with \(8.4<R/\text{kpc}<10.4\) and \(165^{\circ}<\phi<195^{\circ}\).
to Widmark et al. (2021) we use
\[{\rm mask}_{\rm outer}(Z,V_{Z})=-{\rm sign}\left(\left(\frac{Z}{1\,{\rm kpc}} \right)^{2}+\left(\frac{V_{Z}}{40\,{\rm km\,s}^{-1}}\right)^{2}-1\right)-1, \tag{11}\]
for the outer mask. This mask is applied to both data and model when evaluating how good the fit is. Note that these two masks have different purposes. The inner mask is only applied to the model to reduce the strength of the perturbation in a small area, while the outer mask reduces the importance of the outermost data in our results.
Combining Eqs. 5, 8, and 9 gives the spiral perturbation as
\[f(r,\theta)=1-\alpha\cdot{\rm sigm}\left(\frac{r-\rho}{0.1\,{\rm kpc}}\right)\cos(\theta-\phi_{s}(r)-\theta_{0}), \tag{12}\]
where \(\alpha,\rho\), and \(\theta_{0}\) as well as \(b\) and \(c\) are free parameters of the model. The prior we use is based on observations of the phase spiral and chosen in a way to ensure that the sampler converges to the most reasonable solution. The prior uses uniform probabilities for all parameters between the values listed in Table 1. This table also contains a summary of the parameters with their units.
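Combining Eqs. (9), (10), and (12), the perturbation can be sketched as follows. The helper names are illustrative, the 0.1 kpc width is the value quoted in Eq. (9), and \(\phi_{s}\) is assumed to be precomputed from Eq. (5) (for instance with the `spiral_angle` sketch above).

```python
import numpy as np

def sigm(x):
    """Logistic (sigmoid) function of Eq. (10)."""
    return 1.0 / (1.0 + np.exp(-x))

def spiral_perturbation(r, theta, phi_s, alpha, theta0, rho, width=0.1):
    """Eq. (12): r and theta from Eqs. (1)-(2), phi_s from Eq. (5);
    the inner mask of Eq. (9) uses a fixed width of 0.1 kpc."""
    return 1.0 - alpha * sigm((r - rho) / width) * np.cos(theta - phi_s - theta0)
```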
### Fitting procedure
The spiral perturbation is found through an algorithm that involves an iterative procedure to create a smooth background. With a smooth background, we can define the perturbation which has the parameters of the phase spiral. This background is not a two-dimensional Gaussian or other simple shape; it is complicated and changes depending on where in the Galaxy you look, in part because of interstellar extinction. The procedure for fitting a spiral perturbation to the data is illustrated in Fig. 6 and the letters in this subsection refer to the panels in the figure. The figure contains stars with \(8.4<R/\,{\rm kpc}<10.4\) and \(165^{\circ}<\phi<195^{\circ}\) in the \(2200<L_{Z}/\,{\rm kpc\,km\,s}^{-1}<2400\) range. The procedure contains the following steps.
1. Collect the data into a two-dimensional number density histogram in the \(Z-V_{Z}\) phase plane (panel a). The model uses a bin size of 25 pc by 2 km s\({}^{-1}\) except in cases with fewer stars when larger bins are used. For example, Fig. 7 uses bins of \(33\frac{1}{5}\) pc by \(2\frac{2}{5}\) km s\({}^{-1}\).
2. Create the first background using the observed data (panel b). The background is created from data that has been smoothed by a SciPy Gaussian kernel-density algorithm using Scott's rule (Scott, 1992) to determine the kernel size, and mirrored along the \(V_{Z}=0\) axis because the background velocity distribution is here assumed to be approximately symmetric.
In panels b and c we can see that this background still contains some structure from the data and that the spiral pattern in panel c is not clear.
3. Find the spiral perturbation (Eq. 12) that, multiplied by this background, fits the data best (panel d). The parameter space is explored and the best fit is found by using a Markov Chain Monte Carlo (MCMC)4 approach. To find a fit, we need to define a probability function of a given model that takes the data and our prior into account. Since we are using an MCMC sampler we can ignore any multiplicative constants, so the relevant value is \(p\), where \[\ln p=-\frac{1}{2}\sum\left(\frac{(N-f(r,\theta)\cdot B)^{2}}{f(r,\theta)\cdot B}\right)+\ln(P_{\rm prior}), \tag{13}\] where \(N\) is the data in the form of number counts for each bin, \(B\) is the background, and \(P_{\rm prior}\) is the prior probability. A minimal sketch of this log-probability is given after this list.
Footnote 4: The model is implemented in Python using the package emcee (Foreman-Mackey et al., 2013) as an MCMC sampler.
4. Multiply this perturbation by the background and the outer mask (Eq. 11) and compare it to the data (panel e).
5. Divide the data by the spiral perturbation produced in the fit to create an improved background which lacks some of the structure of the initial one (panel f). This new background is smoothed by averaging every pixel with its nearest neighbours (including diagonals) and is not necessarily symmetric in \(V_{Z}\) anymore.
The process from point 3 to here is repeated until the new background no longer provides a better fit. The background converges quickly, usually not improving further after three iterations. The difference this process makes for the background can be seen by comparing panels c and g and noting in panel g the clearer spiral pattern.
6. When making a new background no longer improves the fit, take the final background and perturbation and make the final best fit (panel g). The final parameters are the median of the final samples found by the MCMC sampler.
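A compressed, self-contained sketch of steps 2-4 is given below, using `scipy.stats.gaussian_kde` (whose default bandwidth rule is Scott's rule) for the starting background and `emcee` for the sampling mentioned in the footnote. The function names, the walker settings, and the choice to apply the outer mask as a per-bin weight are illustrative assumptions; the actual implementation may differ in these details.

```python
import numpy as np
import emcee
from scipy.stats import gaussian_kde

# Parameter vector p = (alpha, b, c, theta0, S, rho); flat priors from Table 1.
BOUNDS = [(0.0, 1.0), (0.01, 0.175), (0.0, 0.005),
          (-np.pi, np.pi), (30.0, 70.0), (0.0, 0.3)]

def initial_background(Z, Vz, Z_grid, Vz_grid):
    """Step 2: V_Z-mirrored Gaussian KDE evaluated on the bin centres.
    Returns a density; scale by the number of stars times the bin area
    to compare with the binned counts."""
    kde = gaussian_kde(np.vstack([np.r_[Z, Z], np.r_[Vz, -Vz]]))
    return kde(np.vstack([Z_grid.ravel(), Vz_grid.ravel()])).reshape(Z_grid.shape)

def spiral_model(p, Z_grid, Vz_grid):
    """Spiral perturbation of Eq. (12) evaluated on the binned phase plane."""
    alpha, b, c, theta0, S, rho = p
    r = np.sqrt(Z_grid ** 2 + (Vz_grid / S) ** 2)
    theta = np.arctan2(Vz_grid / S, Z_grid)
    if c > 0:
        phi_s = -b / (2 * c) + np.sqrt((b / (2 * c)) ** 2 + r / c)   # Eq. (5)
    else:
        phi_s = r / b                                                # Archimedean limit
    mask = 1.0 / (1.0 + np.exp(-(r - rho) / 0.1))                    # Eqs. (9)-(10)
    return 1.0 - alpha * mask * np.cos(theta - phi_s - theta0)

def log_prob(p, N, Z_grid, Vz_grid, B, outer_mask):
    """Steps 3-4: log-probability of Eq. (13); the outer mask is used here
    as a per-bin weight to down-weight the outermost bins."""
    if any(not (lo <= x <= hi) for x, (lo, hi) in zip(p, BOUNDS)):
        return -np.inf                                               # outside flat prior
    model = np.clip(spiral_model(p, Z_grid, Vz_grid) * B, 1e-12, None)
    return -0.5 * np.sum(outer_mask * (N - model) ** 2 / model)

# sampler = emcee.EnsembleSampler(nwalkers=32, ndim=6, log_prob_fn=log_prob,
#                                 args=(N, Z_grid, Vz_grid, B, outer_mask))
# sampler.run_mcmc(p0, 2000)
```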
The model is robust and capable of fitting spirals even to regions with relatively few stars. This is because the quality of the fit is judged by how well the smooth background is made to look like the data, and the spiral perturbation is the way in which it can change this background. In Fig. 7 we show an example of the process when dust severely obscures the sample. The figure contains data in the \(8<R/{\rm kpc}<12\), \(150^{\circ}<\phi<155^{\circ}\), and \(2200<L_{Z}/\,{\rm kpc\,km\,s}^{-1}<2400\) ranges. The model still produces a reasonable fit and provides the parameters of the phase spiral.
Figure 7: Example of a selection of stars near the edge of our considered area, containing only about 22,000 stars. The sample contains stars with \(8<R/{\rm kpc}<12\) and \(150^{\circ}<\phi<155^{\circ}\). Upper left: The phase plane showing strong extinction by dust. Upper right: The background produced by the model. Lower left: The spiral perturbation produced by the model (this panel does not share the colour bar with the rest). Lower right: The best fit. We can see that even absent a clear spiral pattern in the data, the model still produces a convincing spiral and fit.
### Rotation of the phase spiral
The animation5 shows the phase spiral smoothly transitioning from stars at \(\phi\approx 210^{\circ}\) to stars at \(\phi\approx 150^{\circ}\) with bins \(10^{\circ}\) wide. Here the phase spiral can be seen to spin clockwise about half a rotation as we decrease the Galactocentric azimuthal angle from \(\phi=210^{\circ}\) to \(\phi=150^{\circ}\). At either end of the range, there is a reduction of the number counts of stars in the mid-plane of the disc, because interstellar dust blocks our view of these stars. Figure 8 shows the phase spirals at three different Galactic azimuths for stars with \(2200<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2400\), along with the perturbations fitted to the data. It is evident that the rotation angle of the phase spiral increases (rotating counterclockwise) with Galactic azimuth, changing by roughly \(80^{\circ}\) over the \(30^{\circ}\) change in azimuth from \(\sim\)\(165^{\circ}\) to \(\sim\)\(195^{\circ}\).
Footnote 5: View the animation at
[https://lu.box.com/s/lkd@3c3gzcj29eqfgprsfmuit8rbqyrl](https://lu.box.com/s/lkd@3c3gzcj29eqfgprsfmuit8rbqyrl)
The parameter \(\theta_{0}\) which we fit in our model is not a convenient or particularly helpful description of the rotation of the phase spiral. The angle parameter (\(\theta_{0}\)) has a degeneracy with the winding parameter (\(b\)) and, to a certain extent, with the quadratic winding parameter (\(c\)): different sets of these values can produce very similar spirals except in the most central regions, which are removed by the inner mask. Therefore, we describe the rotation of the phase spiral by the angle which maximises Eq. 12 (i.e. \(\Delta\theta=0\)) at a fixed phase distance of \(r=150\,\mathrm{pc}\) and call this angle \(\theta_{0,\mathrm{model}}\). This angle is shown in Fig. 8 with a red line, and the phase distance is shown with a white ring (scaled to the same axis ratio as the phase spiral) in
Figure 8: Upper row: The phase spiral at low, medium, and high galactic azimuth (\(\phi\)) with the angle \(\theta_{0,\mathrm{model}}\) marked with a red line and \(\theta_{0,\mathrm{model}}=0\) marked with a white dashed line. Lower row: The corresponding spiral perturbations fitted to the data with \(\theta_{0,\mathrm{model}}\) marked with a red line and the measurement distance for \(\theta_{0,\mathrm{model}}\) marked with a white ring.
Figure 9: Normalized \(Z\) distributions for stars at \(2200<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2400\) at different Galactic azimuths. Note the seemingly bimodal distribution at Galactic azimuth far from \(180^{\circ}\); this is an effect of dust hiding stars in the middle of the disc.
the lower row. The angle \(0^{\circ}\) is shown with a dashed white line in the upper row.
Figure 9 shows the \(Z\)-distribution for 6 different ranges of Galactic azimuth, each \(10^{\circ}\) wide, between \(150^{\circ}\) and \(210^{\circ}\), for stars in the \(2200<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2400\) range. Here, we can clearly see the reduction in the number of stars close to the Galactic plane (\(Z\approx 0\)) at high and low Galactic azimuth. This is because of dust in the plane of the Galactic disc, obscuring the true distribution of stars. Despite this, we see a shift as Galactic azimuth increases with more stars with low \(Z\) at low Galactic azimuth than at higher Galactic azimuth, where there are more stars at high \(Z\) instead. This is because stars in the phase spiral get pushed to greater \(Z\) distances.
Figure 10 shows a map of the phase spiral's rotation angle (\(\theta_{0,\,\mathrm{model}}\)) on a top-down radial grid of the Galactic disc. Three plots are shown, each containing stars in a different angular momentum and Galactocentric radial distance range. The left plot contains stars in the \(2000<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2200\) and \(7.5<R/\mathrm{kpc}<10\) range, the middle plot has stars in the \(2200<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2400\) and \(8.5<R/\mathrm{kpc}<11\) range, and the right plot has stars in the \(2400<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2600\) and \(9.5<R/\mathrm{kpc}<12\) range, all between \(150^{\circ}\) and \(210^{\circ}\) in Galactic azimuth. The zero-point of the rotation angle is set to be 0 at the \(V_{Z}=0\) line at \(Z>0\) (the positive \(x\)-axis as indicated in Fig. 8). In Fig. 10, we see a change in this rotation angle from high to low Galactic azimuth. In the left and middle plots, we see a relatively smooth decrease in rotation angle as Galactic azimuth decreases, changing by about \(180^{\circ}\) over \(60^{\circ}\) in the Galactic azimuth. The right panel shows the same trend but less smoothly. The left panel shows a radial increase in rotation angle by about \(40^{\circ}\) over \(2.5\,\mathrm{kpc}\) while the middle panel shows a radial decrease in this angle by about \(70^{\circ}\) over \(2.5\,\mathrm{kpc}\). The right panel appears to show an increase in angle with radial distance.
### Amplitude of the phase spiral
Figure 11 shows the \(Z\)-\(V_{Z}\) phase plane for the three regions marked with lines in Fig. 2. The left and right panels contain stars in the \(2000<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2200\) and \(2400<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2600\) ranges, respectively. Both these regions show weak and/or almost dissolved phase spiral patterns. The middle panel, which corresponds to the \(2200<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2400\) range, shows a clear, single-armed, phase spiral pattern.
Our model contains a parameter for the amplitude, or strength, of the phase spiral pattern (\(\alpha\)). The bottom panel of Fig. 11 shows the amplitude of the phase spiral as a function of angular momentum. There is a peak at \(L_{Z}\approx 2300\) kpc km s\({}^{-1}\), which is what we expected from Fig. 2 and the top row of Fig. 11. The shaded region in the plot is between the 84th and 16th percentiles. These are used to show the statistical uncertainties in the model. The systematic uncertainties are expected to be larger. The jagged parts between \(L_{Z}\approx 2000\,\mathrm{kpc\,km\,s^{-1}}\) and \(2100\,\mathrm{kpc\,km\,s^{-1}}\) are examples of the modelling procedure finding an alternate solution. By visual inspection, we can conclude that the phase spirals found at these points are not the best fits. The line rises at the high end of the plot, indicating another peak beyond \(L_{Z}=2600\,\mathrm{kpc\,km\,s^{-1}}\). This seems to correspond to "\(V_{Z}\) feature 2" in Fig. 2 and the bifurcation discussed by McMillan et al. (2022). The bottom plot contains points based on bins that are \(1000\) pc by \(30^{\circ}\). These bins are centred on the guiding centre radius corresponding to that angular momentum. Because the bins are \(30^{\circ}\) wide, we are measuring phase spirals with rotational angles over a \(\sim\)70\({}^{\circ}\) range, as large as that seen in Fig. 8.
Figure 12 shows a map of the amplitude (\(\alpha\)) on a top-down radial grid of the Galactic disc. Three plots are shown, each containing stars in a different angular momentum and Galactocentric radial distance range, the same as in Fig. 10. The figure shows that the brightest region, with the highest amplitude, is the middle panel with stars in the \(2200<L_{Z}/\,\mathrm{kpc\,km\,s^{-1}}<2400\) range as we would expect from Fig. 11. The figure also shows that the region of the highest amplitude moves outward with \(L_{Z}\), as well as in each bin. There is a slight trend for the amplitude to decrease at higher and lower Galactic azimuths in the figure. This is believed to be an observational effect.
### Chemical composition of the phase spiral
Figure 13 shows the \(Z\)-\(V_{Z}\) phase plane coloured by the mean global metallicity for the sample of stars that we have metallicities for in the same three ranges in angular momentum as in Fig. 11. A similar, but weaker, spiral pattern can be seen here. We observe that stars in the phase spiral have a slightly higher metallicity than those outside the pattern, indicating a common origin between the stars in the arm of the phase spiral and those
Figure 10: Angle (\(\theta_{0,\,\mathrm{model}}\)) of the phase spiral as measured by the model, showing the rotation across the Galactic disc. The plots show data for different regions of the Galaxy as seen from above for the three angular momentum ranges marked in Fig. 2. The colour bar is periodic and the zero point is arbitrary.
in the Galactic thin disc. A clear decreasing trend in mean metallicity with angular momentum can also be seen. The similarities between the spiral patterns in Figs. 11 and 13 are noteworthy but not surprising as stars in the Galactic thin disc are known to have higher metallicity and the phase spiral is assumed to be a perturbation of the Galactic disc which moves stars away from the midplane. They both show stars in the same angular momentum ranges and the same phase spiral pattern appears. In the left panel, the central region of phase space (the thin disc) shows high [M/H] values. In this panel, an arm of the phase spiral can be seen emerging from the top of this central region, at about \(Z\approx-300\) pc, \(V_{Z}\approx 20\) km s\({}^{-1}\). In the middle panel, a one-armed spiral is visible in stars with mean metallicity of \(\langle[{\rm M/H}]\rangle\approx-0.15\) against a background of \(\langle[{\rm M/H}]\rangle\approx-0.22\). This panel lacks the high metallicity region in the centre of the phase plane, instead having the region of highest metallicity be in the arm of the phase spiral. Even the less dense gap between the wraps of the phase spiral arm is visible as a darker curve
Figure 11: Measurements of the amplitude of the phase spiral as a function of angular momentum. Top: Number density of stars at low, medium, and high angular momentum, showing the phase spiral change shape and amplitude. Bottom: Amplitude of the phase spiral pattern as a function of angular momentum. The lines are the same as in Fig. 2. The shaded area shows the 84th and 16th percentiles.
Figure 12: Amplitude (\(\alpha\)) of the phase spiral as measured by the model for different regions of the Galaxy as seen from above for three angular momentum ranges marked in Fig. 2. The brightness of the plots corresponds to the height of the line in the bottom panel in Fig. 11, showing the change in amplitude across the Galactic disc.
near the centre of the phase plane. The right panel shows a faint trace of a spiral arm at \(Z\approx 500\,\mathrm{pc}\), \(V_{Z}\approx-20\,\mathrm{km\,s^{-1}}\). Note that the colour scale in this panel is shifted slightly towards lower metallicity values in order to bring out the remaining structure.
## 4 Discussion
### Formation
The results presented in the previous section challenge certain proposed formation mechanisms for the phase spiral. A smoothly changing angle of the phase spiral across a wide range of different Galactic azimuths and radii, such as we observe in Fig. 10 and our animation6, would seem to indicate a single-impact formation mechanism. However, numerous recent papers are pointing in the opposite direction, that a single-impact origin scenario is too simple to explain all the observations (e.g. Tremaine et al., 2023; Frankel et al., 2022). In this context, more advanced models that consider multiple galactic components and the wider cosmological context may be more suited for studying a complicated system like the phase spiral. Garcia-Conde et al. (2022) look at phase spirals in a cosmological simulation and conclude that phase spirals still appear, even if the interacting satellite galaxies are less massive or more distant than the Sagittarius dwarf galaxy is thought to have been (Niederste-Ostholt et al., 2010).
Footnote 6: [https://lu.box.com/s/lkd@3c3gzcjj29eqfgprsfmuit8rbqyrl](https://lu.box.com/s/lkd@3c3gzcjj29eqfgprsfmuit8rbqyrl)
Knowledge of how the phase spiral shifts across the Galactic disc can be related to the properties of the disc and the cause of the perturbation. Widmark et al. (2021) used the velocities of stars in the disc and phase spiral to infer the potential of the disc, and thereby its mass. Comparing how the phase spiral has propagated through the disc with results from modelling studies can lead to better constraints for these methods. It should be noted that, according to Grand et al. (2022), the torque on the disc caused by the dark matter wake of a passing satellite galaxy can be significantly stronger than the direct interaction with the satellite galaxy. This means that the connection between the perturbing satellite galaxy and the perturbation in the disc may not be as simple as some previous idealised models have assumed. Explaining the physics on scales as small as those focused on in this paper (roughly the area covered by Fig. 10) in the context of a cosmological simulation presents a challenge for the modelling community.
### Hot/cold orbits
Our results can also be seen as being in tension with those of Li & Shen (2020) and Bland-Hawthorn et al. (2019), who argue that the phase spiral is more clearly defined in stars on dynamically cold (close to circular) orbits than in stars on dynamically hot (far from circular) orbits. Li & Shen (2020) argue that stars on hotter orbits should be excluded from samples used in phase-mixing studies to provide clearer samples. The results of this paper combined with those of Frankel et al. (2022) are in tension with these conclusions. Figure 11 (bottom row) contains stars on cold orbits by only including stars that are within \(500\,\mathrm{pc}\) radially of where a star with the same angular momentum on a circular orbit would be. This result is similar to that by Frankel et al. (2022), who show that stars on hot orbits at \(L_{Z}\approx 2300\,\mathrm{kpc}\) km s\({}^{-1}\) still produce a phase spiral with higher amplitude than stars on cold orbits at \(L_{Z}\approx 1800\,\mathrm{kpc}\) km s\({}^{-1}\) (see their Fig. 6 for details). Both results show the same feature, a region with a more prominent phase spiral, despite containing separate populations of stars with different dynamics.
Frankel et al. (2022) conduct a similar investigation of the amplitude of the phase spiral pattern as a function of angular momentum in the range \(1250<L_{Z}/\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}<2300\). Their sample consists of stars within a \(0.5\,\mathrm{kpc}\) cylinder centred on the Sun meaning that the stars included in this volume with high angular momentum (\(>2000\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}\)) are all going to be on relatively dynamically hot orbits. Our sample contains stars whose position and guiding centre are further out, meaning that when considering the high angular momentum case, the stars are on dynamically cooler orbits. They show a general increase in amplitude with angular momentum, with the highest peak at \(L_{Z}\approx 2350\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}\), (see their Fig. 6 for details). The bins containing stars with high angular momentum in their sample hold few stars leading to a relatively large scatter in the results. Our results also extend to higher angular momentum meaning that, in Fig. 11, we can see the dip in amplitude at \(L_{Z}\approx 2500\) kpc km s\({}^{-1}\).
The questions posed in Bland-Hawthorn et al. (2019) are still relevant. How are different populations of stars affected by whatever mechanism caused the phase spiral? How is the gas affected? Were the stars in the phase spiral formed in it or were they swept up into it after they had already formed? These questions are mostly outside the scope of this paper but could bring significant insights into the dynamic processes that shape our galaxy.
Figure 13: Phase spirals coloured by mean global metallicity at low, medium, and high angular momentum, showing that the spiral pattern is visible, compare to Fig. 11. Note that the rightmost panel has different values on the colour bar. The data is split into the three angular momentum ranges marked in Fig. 2.
### Metallicity
Widrow et al. (2012) discovered an asymmetry in the \(Z\)-distribution of stars in the Galactic disc which we now associate with the phase spiral. They found that when looking at the number density of stars as \(\left(\mathrm{North}-\mathrm{South}\right)/\left(\mathrm{North}+\mathrm{South}\right)\) the result is \(<0\) at \(\left|Z\right|\approx 400\,\mathrm{pc}\) and \(>0\) at \(\left|Z\right|\approx 800\,\mathrm{pc}\). An (2019) analysed this asymmetry further, specifically looking at the metallicity of the stars. They found that the vertical metallicity distribution is asymmetric in a similarly complicated manner. Our results suggest that the arm of the phase spiral drives stars to greater \(Z\)-distances in the region of the Galaxy we study. This would push stars from the disc vertically away from it, and preferentially in the positive \(Z\)-direction (An, 2019).
Bland-Hawthorn et al. (2019) looked at the difference in the phase spiral when using different cuts in the elemental abundance plane. They found that more metal-rich (\(\mathrm{[Fe/H]}>0.1\)) stars were concentrated in the central part of the phase spiral. In Fig. 13, we also see that stars with higher mean metallicity can be found in the centre of the phase spiral. The conclusion is that these stars were formed in the Galactic thin disc and then perturbed to move out of it. This would explain the asymmetry in the \(Z\)-distribution and the concentration of metal-rich stars in the phase spiral.
### Effects of rotation
Previous studies have shown that the phase spiral changes shape and becomes more flattened along the \(Z\)-axis with increasing Galactocentric radius (Bland-Hawthorn et al., 2019; Wang et al., 2019; Khanna et al., 2019; Laporte et al., 2019; Xu et al., 2020; Hunt et al., 2021; Li, 2021; Antoja et al., 2022). If the rotation of the phase spiral is not taken into consideration when studying it, some features are at risk of being washed out. For example, in Fig. 2, the sample is restricted to stars with Galactic azimuth of \(175^{\circ}<\phi<185^{\circ}\), otherwise, the feature of interest is not clearly visible. Future authors should be aware of this phenomenon and how it may affect their results.
In Fig. 9 it appears that stars are missing in the centre of the Galactic disc at high or low Galactic azimuth. This is attributed to dust. We also see an asymmetry in the \(Z\)-distribution when comparing regions at high and low Galactic azimuth. This effect could be caused by the rotation of the phase spiral as it brings the phase spiral arm out of the high \(Z\) region at lower Galactic azimuth. We do not believe this is caused by the warp of the Galactic disc, as the warp only starts being measurable at Galactocentric distances greater than those considered here, at about 10 kpc (Cheng et al., 2020). However, it seems like the phase spiral and the Galactic warp overlap in certain regions of the Galaxy and are perhaps related.
## 5 Summary and Conclusions
In this work, we use data from _Gaia_ DR3 to investigate the _Gaia_ phase spiral by making a new model capable of fitting several of its key characteristics. We use a sample of stars with measured radial velocities to get full three-dimensional information on both their position and velocity, a sample of about 31.5 million stars. Using our model, we have been able to determine the rate of rotation of the phase spiral with Galactic azimuth and the amplitude of the phase spiral as a function of angular momentum. We find that, for the data we explore, the phase spiral rotates with Galactic azimuth. We find a peak in the amplitude of the phase spiral at \(L_{Z}\approx 2300\) kpc km s\({}^{-1}\) which manifests as a very clear phase spiral pattern in number density when using only stars with similar angular momentum.
Our main findings in this paper are listed here:
1. The phase spiral changes orientation along both Galactic radial distance and Galactic azimuth, and it rotates at a rate which is three times the rate of the azimuthal angle, a rate of \(\sim\)180\({}^{\circ}\) per 60\({}^{\circ}\) Galactic azimuth for stars with angular momenta from 2000 kpc km s\({}^{-1}\) to 2400 kpc km s\({}^{-1}\), corresponding to orbits typically found outside the Sun's position in the Galaxy.
2. The amplitude of the phase spiral pattern changes with angular momentum with a peak at about 2300 \(\pm\) 100 kpc km s\({}^{-1}\), producing a substantially clearer spiral pattern in number density.
3. The stars in the phase spiral arm are chemically very similar to those in the \(Z\)-centre of the Galactic disc. This indicates that the stars in the phase spiral originally belonged to the Galactic thin disc.
4. We can confirm the conclusions of An (2019) and Bland-Hawthorn et al. (2019) that the Z-asymmetry of the metallicity gradient of the Galaxy is caused by the metal-rich arm of the phase spiral pushing such stars to greater \(Z\)-positions.
The reason for the change in the \(L_{Z}\)-\(V_{Z}\) distribution between the solid lines in Fig. 2, the overdensity seen below the thick line, was found to be the phase spiral. The line is raised to about 15 km s\({}^{-1}\), corresponding to when the phase spiral first turns onto the negative \(Z\)-values; the lower clump sits at \(-20\) km s\({}^{-1}\), which corresponds to when the spiral arm turns back to the positive \(Z\)-values.
By combining the data from _Gaia_ with that coming from the soon-to-be operational spectrographs 4MOST (Bensby et al., 2019; Chiappini et al., 2019) and WEAVE (Jin et al., 2023), more light will be shed on the origins of the phase spiral by revealing detailed chemical abundances for millions of stars in all parts of the Milky Way.
###### Acknowledgements.
PM gratefully acknowledges support from project grants from the Swedish Research Council (Vetenskapsrådet, Reg: 2017-03721; 2021-04153). TB and SA acknowledge support from project grant No. 2018-04857 from the Swedish Research Council. Some of the computations in this project were completed on computing equipment bought with a grant from The Royal Physiographic Society in Lund. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This research has made use of NASA's Astrophysics Data System. This paper made use of the following software packages for Python: NumPy (Harris et al. 2020), Astropy (Astropy Collaboration et al. 2022), emcee (Foreman-Mackey et al. 2013), and SciPy (Virtanen et al. 2020).
|
2309.12250 | SQUARE: Automatic Question Answering Evaluation using Multiple Positive
and Negative References | Evaluation of QA systems is very challenging and expensive, with the most
reliable approach being human annotations of correctness of answers for
questions. Recent works (AVA, BEM) have shown that transformer LM encoder based
similarity metrics transfer well for QA evaluation, but they are limited by the
usage of a single correct reference answer. We propose a new evaluation metric:
SQuArE (Sentence-level QUestion AnsweRing Evaluation), using multiple reference
answers (combining multiple correct and incorrect references) for sentence-form
QA. We evaluate SQuArE on both sentence-level extractive (Answer Selection) and
generative (GenQA) QA systems, across multiple academic and industrial
datasets, and show that it outperforms previous baselines and obtains the
highest correlation with human annotations. | Matteo Gabburo, Siddhant Garg, Rik Koncel Kedziorski, Alessandro Moschitti | 2023-09-21T16:51:30Z | http://arxiv.org/abs/2309.12250v1 | # SQUARE: Automatic Question Answering Evaluation using
###### Abstract
Evaluation of QA systems is very challenging and expensive, with the most reliable approach being human annotations of correctness of answers for questions. Recent works (AVA, BEM) have shown that transformer LM encoder based similarity metrics transfer well for QA evaluation, but they are limited by the usage of a single correct reference answer. We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation), using multiple reference answers (combining multiple correct and incorrect references) for sentence-form QA. We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems, across multiple academic and industrial datasets, and show that it outperforms previous baselines and obtains the highest correlation with human annotations.
## 1 Introduction
Automatic evaluation of Question Answering systems to gauge correctness of an answer for a question is a challenging task. This task is important for maintaining a fast pace of evaluation and development of new QA systems, and for creating large, high-quality training corpora for LLM-based QA systems. The most common approach for this task is to obtain human annotations of correctness of answers for questions, which is slow, expensive, and challenging (annotating complete answer sentences for questions has been shown to achieve poor inter-annotator agreement).
Span extraction (MR) based QA systems are typically evaluated using token matching metrics such as EM (Exact Match) or F1, however, these cannot be extended for evaluating complete sentence-form answers coming from Answer Sentence Selection (AS2) systems [1, 13, 14]. Token/segment-level similarity metrics such as EM, F1, BLEU, etc. fail to capture the semantic coherence between entities/concepts of the answer sentence and the question. Recently, AVA [20] and BEM [1] have proposed transformer LM encoder based similarity metrics for sentence-form extractive QA evaluation by encoding the question, target answer (which needs to be evaluated) and a reference answer (which is treated as a gold standard (GS)).
One of the major limitations of AVA/BEM is the use of a single reference answer. There are several types of questions that have multiple diverse correct answers, other questions that have relevant information spread across multiple reference answers, and other ambiguous/under-specified or opinion seeking questions that may have several possible answers (we motivate this with examples in Section 3). Additionally, AVA/BEM only use information from a correct reference answer for evaluating a target answer, but information and semantics from an incorrect reference answer (which are readily available for several datasets) can also help refine the accuracy of the prediction.
Figure 1: An illustration of SQuArE: an automatic question answering evaluation metric that uses multiple references: positive and negative to evaluate the correctness of a target answer for a particular question.
Motivated by the above shortcomings of AVA/BEM, we propose SQuArE (Sentence-level QUestion AnsweRing Evaluation), a supervised transformer LM encoder based automatic QA evaluation metric that uses multiple reference answers by combining multiple correct and incorrect answers to assign a correctness score for an answer to a question. We evaluate SQuArE on four sentence-level extractive QA datasets, and show that it outperforms previous baselines and achieves the highest correlation with human annotations.
The last few years have seen several research works Hsu et al. (2021); Muller et al. (2021); Gabburo et al. (2022) transition from extractive sentence-form QA towards generating natural sounding sentence-form answers. This paradigm (termed GenQA) synthesizes pieces of information spread across many relevant candidates (while suppressing any irrelevant information) to improve the answering accuracy and style suitability. AVA/BEM have only been evaluated on extractive QA, and not for GenQA, so it is unclear if a transformer encoder based semantic matching metric will correlate with human annotations on a sentence-form generated answer. We strengthen the generality of SQuArE as a QA evaluation metric by showing that it outperforms the AVA/BEM baselines for GenQA systems in addition to extractive QA systems. We will release the code and trained model checkpoints for SQuArE at [https://github.com/amazon-science/square](https://github.com/amazon-science/square) for the NLP and QA community to use our automatic QA evaluation metric.
## 2 Related Work
**Automatic Text Similarity Evaluation:** Token/N-grams level similarity metrics like BLEU Papineni et al. (2001) and ROUGE Lin (2004) are not suitable for QA evaluation, and have been shown to achieve poor correlation with human judgements Reiter (2018); Gabburo et al. (2022). Kusner et al. (2015) propose using a distance function between word embeddings for text similarity. Other research works Kusner et al. (2015); Clark et al. (2019) have proposed evaluation metrics based on Wasserstein distance. Recent years have seen a number of automatic evaluation metrics being proposed for Neural Machine Translation (NMT) and summarization tasks like BERT-Score Zhang et al. (2020), BLEURT Sellam et al. (2020), COMET Rei et al. (2020), etc. that use contextual embeddings from transformer encoders. Similar approaches extend for text style Wegmann and Nguyen (2021) and summarization Cao et al. (2020); Zeng et al. (2021).
**QA Evaluation:** For entity level span-extraction MR tasks, Yang et al. (2018) adapt BLEU, ROUGE for answer comparison, with a focus on "yes-no" and "entity" questions. Si et al. (2021) mine entities from KBs to use them as additional gold answers for MR tasks, our approach shares this intuition of using multiple diverse reference answers for evaluation. Chen et al. (2019) propose a modification of BERTScore for QA by using the question and the paragraph context along with the answer. Empirically however, they demonstrate that for extractive MR tasks, F1 works as a reasonable metric, but this does not transfer well for generative QA. Min et al. (2021) uses human annotations to evaluate correct answers that are not contained in the GS answer. For sentence-level extractive QA (AS2), AVA Vu and Moschitti (2021) and BEM Bulian et al. (2022) are two recently proposed learned metrics.
## 3 Methodology
Being a knowledge-intensive task, automatic QA evaluation typically requires leveraging knowledge from external sources to evaluate the correctness of an answer (e.g., Knowledge Bases, Gold standard reference answers). We can formalize automatic QA evaluation with the notation: \(f(q,a,c){\rightarrow}p\), where \(f\) is the automatic evaluation function applied to question \(q\), target answer \(a\) and reference context \(c\), and outputs a correctness score \(p\in[0,1]\).
Previous works (AVA, BEM) show that using a single GS reference answer as the context \(c\) achieves higher correlation with human annotations than only using \(q\) and \(a\). In this paper, we propose a supervised learned metric SQuArE that enriches the reference context \(c\) for QA evaluation using: (i) multiple gold standard references, and (ii) negatively annotated answers as negative references.
**Multiple Reference Answers** In AVA/ BEM, using a single correct reference limits the evaluation scope of QA system predictions.
* Several types of questions may have multiple and diverse correct answers: for example _"What is a band?"_ is correctly answered by both _"A flat, thin strip or loop of material, used as a fastener"_ and _"A band is a group of people who perform instrumental and/or vocal music"_
* Knowledge seeking questions may have pieces of relevant information spread across multiple references: for example _"Who is Barack Obama"_ can be answered by combining information across multiple answers _"He served as the 44th president of the U.S. from 2009-2017"_, _"He was a member of the Democratic Party, and served as a U.S. senator from 2005-2008"_, etc.
* For ambiguous/under-specified questions that do not have a single correct answer or opinion seeking questions, using a single GS reference answer can be limiting and provide an incorrect evaluation of the answering capability of a QA system. Consider the question _"When is the next world cup"_ for which both the answers _"The next FIFA football world cup is in 2026"_ and _"The next ICC cricket world cup is in 2023 in India"_ are correct as the question fails to specify the name of the sport (many more possible answers).
**Negative Reference Answers** An automatic QA evaluation system can use the information and semantics from an incorrect answer to help refine the accuracy of its prediction. Consider the question _"Which movies of Dwayne Johnson released in 2017"_ with the positive reference _"Dwayne The Rock' Johnson starrer Baywatch premiered in 2017"_. Only using this reference, both the answers _"Baywatch and Jungle Cruise"_ and _"The Fate of the Furious and Baywatch"_ appear to be equally correct for this question. However when we add in an incorrect reference for the question _"Jungle Cruise is a movie starring the Rock and Emily Blunt that released in 2021"_, the automatic QA evaluation can identify that the second answer is probably more correct than the first one. Several sentence-form extractive QA datasets such as ASNQ Garg et al. (2020), WikiQA, TREC-QA, etc. have a large number of negatively labeled answer candidates for each question, which can be exploited for automatic evaluation of QA systems for these datasets.
**SQuArE** Motivated by the above reasons, we modify the context \(c\) of automatic evaluation \(f(q,a,c){\rightarrow}p\) to include a combination of \(n_{+}\) correct and \(n_{-}\) incorrect reference answers, i.e., \(c:c^{+}{=}\{c^{+}_{1},...,c^{+}_{n_{+}}\}\cup c^{-}{=}\{c^{-}_{1},...,c^{-}_{n_{-}}\}\). During supervised learning, SQuArE learns to minimize the semantic distance of a correct target answer from the set of correct references \(c^{+}\) while maximizing its semantic distance from the set of incorrect references \(c^{-}\). We prefix a prompt (_Pos_Ref / Neg_Ref_) to each reference to indicate the correctness/incorrectness of the reference to the model. Specifically, a \((q,a,c^{+},c^{-})\) input for SQuArE is encoded as "**Question:** \(q\) **Target:** \(a\) **Pos_Ref:** \(c^{+}_{1}\)\(\cdots\)**Pos_Ref:** \(c^{+}_{n_{+}}\) **Neg_Ref:** \(c^{-}_{1}\)\(\cdots\)**Neg_Ref:** \(c^{-}_{n_{-}}\)" as illustrated in Figure 1.
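To make the input layout concrete, the sketch below builds the prompted sequence and scores it with a DeBERTa sequence-classification head through HuggingFace Transformers. It is an illustrative stand-in rather than the released SQuArE code: `build_square_input` is our own helper, the public `microsoft/deberta-v3-large` checkpoint is used in place of the fine-tuned SQuArE model, and its untrained head must be trained on annotated data before the scores are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def build_square_input(question, target, pos_refs, neg_refs):
    """Assemble the prompted sequence described above."""
    parts = [f"Question: {question}", f"Target: {target}"]
    parts += [f"Pos_Ref: {r}" for r in pos_refs]
    parts += [f"Neg_Ref: {r}" for r in neg_refs]
    return " ".join(parts)

# The public base checkpoint stands in for the fine-tuned SQuArE model.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-large", num_labels=1)

text = build_square_input(
    question="when is the next world cup",
    target="The next FIFA football world cup is in 2026.",
    pos_refs=["The next ICC cricket world cup is in 2023 in India."],
    neg_refs=["The first FIFA world cup was held in 1930."])

with torch.no_grad():
    logits = model(**tokenizer(text, return_tensors="pt", truncation=True)).logits
score = torch.sigmoid(logits).item()   # correctness score p in [0, 1]
```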
The choice of reference answers can create biases in automatic QA evaluation. For a given question, collecting a set of diverse reference answers and ensuring they exhaustively cover all the concepts needed to answer the question is challenging and very expensive. In this paper, we utilize existing annotated answer candidates (both positive and negative) in high-quality labeled datasets as references. Extending automatic QA evaluation to previously unseen questions (without any references) is a challenging open problem in NLP QA.
## 4 Experiments and Results
### Datasets
**WQA** Web Question Answers (WQA) is a public dataset Zhang et al. (2021) containing 149,513 questions, each associated with \(\sim\)15 answer candidates retrieved from a large-scale web index with human annotations.
**WikiQA** A small AS2 dataset Yang et al. (2015) with questions from Bing search, and answers extracted from Wikipedia. We use the most popular clean setting (questions having at least one positive and one negative answer).
**TREC-QA** A small AS2 dataset Wang et al. (2007) containing factoid questions. We only retain questions with at least one positive and one negative answer in the development and test sets.
**IQAD** A large scale Industrial QA Dataset containing non-representative de-identified user questions from a commercial personal assistant. IQAD contains 10k questions, and \(\sim\)200 answer candidates retrieved for each question using a large scale web index that contains over 100M web documents. Results on IQAD are presented relative to a baseline to comply with company policies.
**GenQA-MTURK** This dataset is composed of 3k questions from 3 datasets (1k each): MS-MARCO Bajaj et al. (2018), WikiQA and TREC-QA, using GenQA models evaluated in Hsu et al. (2021); Gabburo et al. (2022). For each question we generate an answer using 8 different GenQA models (details in Appendix B) based on T5-Large. We annotate all the answers of this dataset for their correctness on MTurk, collecting \(5\) independent annotations for each QA pair, and use majority voting over the 5 annotations.
**Answer Equivalence (AE):** A question answering dataset released by Bulian et al. (2022) where each sample contains a question, a candidate answer (typically short answers), and a positive reference (typically entity-based) carefully selected to avoid the candidate-reference exact match (EM).
### Models and Baselines
We use DeBERTaV3-Large (He et al., 2021) for SQuArE, and compare with three baselines (proposed in AVA/BEM): **QT: Question-Target** that takes as input a question and the target answer, **TR: Target-Reference** that takes as input a reference GS answer and the target answer, and **TQR: Target-Question-Reference** that takes as input a question, the target answer and a reference GS answer. For our experiments, we set the total number of references \((n_{+})+(n_{-}){=}5\) per question.
We also compare SQuArE against two additional baselines: (i) **BEM**(Bulian et al., 2022), a recently released reference-based automatic evaluation metric (trained on the AE dataset), and (ii) a large language model (**LLM**) based approach using two versions of the Falcon (Almazrouei et al., 2023) model. For fair comparison with the baselines, we perform evaluation in the zero-shot setting for the WikiQA and TrecQA datasets, and after fine-tuning on the AE dataset. For more details on the implementation of these baselines, refer to Appendix A.2.
### Results
We present results comparing SQuArE with the baselines on large datasets (from both extractive QA: AS2 and generative QA: GenQA) in Table 1. Using GS human annotations for each dataset, we compute accuracy, Area Under the Curve (AUROC), and Pearson Correlation of each automatic QA metric. We observe that on all datasets, SQuArE significantly outperforms the baselines and achieves the highest accuracy and AUROC with human annotations.
**Zero-shot Setting:** To show strong generalization to out-of-distribution datasets (zero-shot setting), we train SQuArE and the other baselines on the WQA dataset, and use this for evaluation on other datasets. Specifically, we use two small datasets: WikiQA and TREC-QA (exploring both extractive: AS2 and generative settings), and one large dataset MS-MARCO. Results presented in Table 2 highlight that SQuArE achieves the highest accuracy and correlation with human annotations.
**Comparison with BEM and LLMs:** We present comparison with BEM and LLM baselines in Table 3 on WikiQA, TrecQA and Answer Equivalence (AE) datasets. On the WikiQA and TrecQA datasets, the results show that SQuArE outperforms both the baselines, which stems from (i) the usage of multiple references, and (ii) the references for these datasets being complete sentences in comparison to entities/short-answers which are used for training BEM. On the AE dataset, zero-shot SQuArE (which is trained on the WQA dataset) performs inferior (0.572 vs 0.897 in accuracy) to the
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline
**Dataset** & **Technique** & **\# Refs** & **Accuracy** & **AUROC** & **Correlation** \\ \hline \multicolumn{6}{c}{**Answer Sentence Selection (AS2)**} \\ \hline \multirow{4}{*}{WQA} & AVA-TR & 1 & 0.734 & 0.809 & 0.716 \\ & AVA-QT & 0 & 0.790 & 0.851 & 0.750 \\ & AVA-TQR & 1 & 0.809 & 0.873 & 0.771 \\ & SQuArE & **5** & **0.833** & **0.896** & **0.793** \\ \hline \multirow{4}{*}{IQAD} & AVA-TR & 1 & Baseline & Baseline & Baseline \\ & AVA-QT & 0 & +1.946 & -0.393\% & +0.682\% \\ & AVA-TQR & 1 & +8.02\% & +5.75 & +6.178\% \\ & SQuArE & **5** & **+22.24\% & **+14.01\%** & **+16.062\%** \\ \hline \multicolumn{6}{c}{**Answer Generation (GenQA)**} \\ \hline \multirow{4}{*}{MS-MARCO} & AVA-TR & 1 & 0.882 & 0.768 & 0.610 \\ & AVA-QT & 0 & 0.882 & 0.777 & 0.623 \\ & AVA-TQR & 1 & 0.878 & 0.790 & **0.636** \\ & SQuArE & 5 & **0.895** & **0.832** & 0.629 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on WQA, IQAD, MS-MARCO measured using Accuracy, Area under the curve and Pearson Correlation with gold labels. Results on IQAD are relative to AVA-TR baseline (due to data being internal). # Refs refers to the total number of reference answers used for the metric.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**Dataset** & **Technique** & **Accuracy** & **AUROC** & **Correlation** \\ \hline \multicolumn{6}{c}{**Answer Sentence Selection (AS2)**} \\ \hline \multirow{4}{*}{WikiQA} & AVA-TR & 0.701 & 0.633 & 0.532 \\ & AVA-QT & 0.900 & 0.804 & 0.637 \\ & AVA-TQR & 0.903 & 0.805 & 0.632 \\ & SQuArE & **0.919** & **0.851** & **0.676** \\ \hline \multirow{4}{*}{TrecQA} & AVA-TR & 0.911 & 0.913 & 0.816 \\ & AVA-QT & 0.885 & 0.927 & 0.737 \\ & AVA-TQR & 0.906 & **0.972** & 0.797 \\ & SQuArE & **0.924** & 0.969 & **0.842** \\ \hline \multicolumn{6}{c}{**Answer Generation (GenQA)**} \\ \hline \multirow{4}{*}{MS-MARCO} & AVA-TR & 0.843 & 0.683 & 0.587 \\ & AVA-QT & 0.772 & 0.693 & 0.580 \\ & AVA-TQR & 0.839 & 0.738 & 0.601 \\ & SQuArE & **0.845** & **0.773** & **0.620** \\ \hline \multirow{4}{*}{WikiQA} & AVA-TR & 0.692 & 0.670 & 0.602 \\ & AVA-QT & 0.627 & 0.798 & 0.667 \\ & AVA-TQR & 0.671 & 0.811 & 0.678 \\ & SQuArE & **0.694** & **0.819** & **0.690** \\ \hline \multirow{4}{*}{TrecQA} & AVA-TR & 0.847 & 0.784 & 0.615 \\ & AVA-QT & 0.709 & 0.816 & 0.612 \\ \cline{1-1} & AVA-TQR & 0.779 & **0.857** & 0.647 \\ \cline{1-1} & SQuArE & **0.890** & 0.818 & **0.671** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Zero-shot evaluation using QA evaluation models trained on WQA. Same metrics used as Table 1.
BEM baseline (which is trained on the AE dataset). This drop in zero-shot performance of SQuArE compared to BEM can be attributed to (i) the lack of multiple references, and (ii) the references in AE being of a different style/format than those used for training SQuArE (entities/short answers v/s complete sentences). In a fair comparison (when SQuArE(AE) is fine-tuned on the AE dataset), it beats the BEM baseline in both accuracy (0.908 vs 0.897) and AUROC (0.966 vs 0.859).
**Comparison with text similarity metrics:** We also compare SQuArE with learned text similarity metrics: BLEURT and BERTScore in Table 4. Results show that SQuArE achieves a higher correlation with manual annotations than BLEURT and BERTScore. For complete details, see Appendix C.
### Ablation studies
To assess the improvements from different design choices used in SQuArE, we conduct ablation studies to show how the use of negative and multiple references improves the performance and correlation with human annotations. To perform these studies we pick one dataset (WQA) and present comparisons in Tab. 5.
**Usage of Negative references:** To support our claim that using negative references can improve the automatic QA evaluation, we compare two additional models/baselines: (i) AVA-TQR(-) which refers to an AVA baseline which only uses a single negative reference, and (ii) SQuArE(+) which refers to a SQuArE model which only uses multiple positive references. On comparison with results in Table 1, AVA-TQR(-) outperforms both AVA-QT (model without references) and AVA-TR (model without the question). This validates our intuition on the importance of negative references. SQuArE(+) outperforms the AVA-TQR baseline, but performs inferior to the SQuArE using a combination of both positive and negative references, thereby validating our claim that the combination of positive and negative references improves the accuracy and the generalizability of SQuArE.
**Number of references:** We hypothesize that a higher number of labeled references improves the correlation of SQuArE with human evaluation. To support this intuition, we present an ablation study where we vary the total number of references from 5 per question to: (i) using 3 references per question, and (ii) randomly sampling \(\in[1,5]\) references per question. We observe that SQuArE using 5 references outperforms SQuArE using 3 references (0.833 v/s 0.821 in accuracy), while SQuArE using a random sample of \(\in[1,5]\) references (0.820 accuracy) performs comparably to SQuArE using 3 references.
## 5 Conclusion
In this paper, we propose SQuArE, a transformer LM encoder-based learned metric that uses multiple reference answers (positive + negative) for automatically evaluating sentence-level QA systems. We evaluate sentence-level extractive QA (AS2) and answer generation (GenQA) systems across multiple academic and industrial datasets and show that SQuArE achieves the highest correlation with human annotations, beating previous baselines.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Dataset** & **SQuArE** & **BLEURT** & **BERTScore** \\ \hline MS-MARCO & **0.238** & 0.142 & 0.168 \\ WikiQA & **0.425** & 0.219 & 0.233 \\ TrecQA & **0.862** & 0.341 & 0.646 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Pearson Correlation of evaluation metrics with human annotations on GenQA-MTURK.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**Dataset** & **Approach** & **\# Refs** & **Accuracy** & **AUROC** \\ \hline \multirow{4}{*}{WikiQA} & BEM & 1 & 0.863 & 0.553 \\ & Falcon-7B & 1 & 0.081 & 0.448 \\ & Falcon-40B & 1 & **0.963** & 0.499 \\ & SQuArE & 5 & 0.919 & **0.851** \\ \hline \multirow{4}{*}{TrecQA} & BEM & 1 & 0.866 & 0.819 \\ & Falcon-7B & 1 & 0.601 & 0.529 \\ \cline{1-1} & Falcon-40B & 1 & 0.848 & 0.509 \\ & SQuArE & 5 & **0.924** & **0.969** \\ \hline \multirow{4}{*}{AE} & BEM & 1 & 0.897 & 0.959 \\ & SQuArE & 1 & 0.572 & 0.718 \\ \cline{1-1} & SQuArE(AE) & 1 & **0.908** & **0.966** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparing SQuArE against BEM and LLM baselines on the WikiQA, TrecQA and AE datasets. The BEM baseline is trained on the AE dataset. We use the same metrics as Table 1.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Technique** & **\# Refs** & **Accuracy** & **AUROC** & **Correlation** \\ \hline AVA-TQR(-) & 1 & 0.800 & 0.864 & 0.763 \\ SQuArE(+) & 5 & 0.815 & 0.885 & 0.783 \\ SQuArE & 3 & 0.821 & 0.889 & 0.787 \\ SQuArE & [1,5] & 0.820 & 0.889 & 0.786 \\ SQuArE & 5 & **0.833** & **0.896** & **0.793** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation studies evaluating the benefits of using negative references, and the impact of number of references on the performance of SQuArE. AVA-TQR(-) and SQuArE(+) refer to an AVA model only using negative references and a SQuArE model only using positive references. # Refs is the total number of references used for the metric. [1,5] refers to the number of references being randomly sampled \(\in[1,5]\).
### Limitations
Our approach of training QA evaluation metrics requires access to large GPU resources for training large transformer encoders such as DeBERTa, etc. For the experiments in this paper, we only consider datasets from the English language, however we conjecture that our techniques should work similarly for languages with a similar morphology. Since SQuArE is a learned evaluation metric based on large transformers, it might be challenging to effectively learn in a scarce-data setting. While we have shown impressive zero-shot evaluation results in Table 2, extending to a completely new data domain/new language might be challenging for SQuArE to adapt to without access to any labeled data. As visible from Tables 1 and 2, SQuArE's accuracy on human annotations is in the range of 80-90%, highlighting that there is still a gap with respect to human evaluation. For safety critical applications, human evaluation still remains the best means to evaluate Question Answering systems.
|
2305.00581 | Multimodal Graph Transformer for Multimodal Question Answering | Despite the success of Transformer models in vision and language tasks, they
often learn knowledge from enormous data implicitly and cannot utilize
structured input data directly. On the other hand, structured learning
approaches such as graph neural networks (GNNs) that integrate prior
information can barely compete with Transformer models. In this work, we aim to
benefit from both worlds and propose a novel Multimodal Graph Transformer for
question answering tasks that requires performing reasoning across multiple
modalities. We introduce a graph-involved plug-and-play quasi-attention
mechanism to incorporate multimodal graph information, acquired from text and
visual data, to the vanilla self-attention as effective prior. In particular,
we construct the text graph, dense region graph, and semantic graph to generate
adjacency matrices, and then compose them with input vision and language
features to perform downstream reasoning. Such a way of regularizing
self-attention with graph information significantly improves the inferring
ability and helps align features from different modalities. We validate the
effectiveness of Multimodal Graph Transformer over its Transformer baselines on
GQA, VQAv2, and MultiModalQA datasets. | Xuehai He, Xin Eric Wang | 2023-04-30T21:22:35Z | http://arxiv.org/abs/2305.00581v1 | # Multimodal Graph Transformer for Multimodal Question Answering
###### Abstract
Despite the success of Transformer models in vision and language tasks, they often learn knowledge from enormous data implicitly and cannot utilize structured input data directly. On the other hand, structured learning approaches such as graph neural networks (GNNs) that integrate prior information can barely compete with Transformer models. In this work, we aim to benefit from both worlds and propose a novel Multimodal Graph Transformer for question answering tasks that requires performing reasoning across multiple modalities. We introduce a graph-involved plug-and-play quasi-attention mechanism to incorporate multimodal graph information, acquired from text and visual data, to the vanilla self-attention as effective prior. In particular, we construct the text graph, dense region graph, and semantic graph to generate adjacency matrices, and then compose them with input vision and language features to perform downstream reasoning. Such a way of regularizing self-attention with graph information significantly improves the inferring ability and helps align features from different modalities. We validate the effectiveness of Multimodal Graph Transformer over its Transformer baselines on GQA, VQAv2, and MultiModalQA datasets.
## 1 Introduction
A myriad of complex real-world tasks require both prior knowledge and reasoning intelligence Yi et al. (2018); Ilievski and Feng (2017). Vision-and-language reasoning tasks such as visual question answering (VQA) Antol et al. (2015) and multimodal question answering (MultimodalQA) Talmor et al. (2021) further require integrating structured information from different input modalities in order to perform reasoning. This raises two questions: What is the best way to integrate prior knowledge and reasoning components from multiple modalities in a single model? How would such an integration lead to accurate models, while being more computationally efficient and allowing for significantly more interpretability? Such questions are important to address when scaling reasoning systems to real-world use cases.
In recent years, a spectrum of methods in the literature has explored different ways of integrating structured prior information. Graph neural networks (GNNs) Wu et al. (2020) have been widely used in representation learning on graphs, and several works have investigated embedding structured information by resorting to them. However, GNNs are inefficient Wu et al. (2020) and they can barely compete with Transformer models. Besides, most GNNs are designed to learn node representations on fixed and homogeneous graphs. Consequently, it is suboptimal to operate GNNs on vision-and-language tasks such as visual question answering (VQA), where the graphs
Figure 1: Overview of Multimodal Graph Transformer. It takes visual features, text features, and their corresponding generated graphs as inputs. The generated graph is first converted to an adjacency matrix to induce the mask matrix \(\mathbf{G}\). The modified quasi-attention score in the Transformer is computed to infer the answer. In the formula, \(\mathbf{G}\) is the graph-induced matrix constructed by concatenating adjacency matrices both from the vision and the language end. \(\mathbf{\hat{G}}\) is the trainable bias. The input features from different modalities are fused along with graph information to perform downstream reasoning.
in these problems (e.g. scene graphs) can be more complex. Alternatively, knowledge graphs (KGs), such as Freebase Bollacker et al. (2008), which have emerged in recent years, represent world-level factoid information about entities and their relations in a graph-based format. They have been successfully used in vision and language applications including VQA Marino et al. (2019). However, they have not been designed for our scenario; more concretely, we aim at filling the gap of capturing prior knowledge in Transformer models.
To mitigate deficiencies of the existing methods, this paper proposes a novel plug-and-play graph-involved Transformer-based method for multimodal question answering tasks. Our method is a _Multimodal Graph Transformer_ in the sense that it is built upon the well-established Transformer Vaswani et al. (2017) backbone, albeit with several key fundamental differences. First, we introduce a systematic scheme to convert text graphs, dense region graphs, and semantic graphs from vision and language tasks into adjacency matrices to use in our method. Second, instead of directly computing the attention score, we learn a newly proposed quasi-attention score with graph-induced adjacency matrices at its heart, treating relative importance as a highly effective inductive bias. Third, different from previous Transformer methods, where self-attention is fully learned from data, we introduce graph-structured information into the self-attention computation to guide the training of Transformers, as shown in Figure 1.
The main contributions are summarized below:
* We propose a novel Multimodal Graph Transformer learning framework that combines multimodal graph learning from unstructured data with Transformer models.
* We introduce a modular plug-and-play graph-involved quasi-attention mechanism with a trainable bias term to guide the information flow during training.
* The effectiveness of the proposed methods is empirically validated on GQA, VQA-v2, and MultiModalQA tasks.
## 2 Related Works
### Multimodal question answering
Visual Question Answering (VQA) Antol et al. (2015) has been a prominent topic in the field of multimodal question answering, garnering significant attention and advancing rapidly since the introduction of the first large-scale VQA dataset by Antol et al. (2015). To answer VQA questions, models typically leverage variants of attention to obtain a representation of the image that is relevant to the question Andreas et al. (2016); Yang et al. (2015); Xu and Saenko (2016); Fukui et al. (2016); Lu et al. (2016). A plethora of works Liang et al. (2021); Hudson and Manning (2018); Yi et al. (2018); Xiong et al. (2016); Kim et al. (2018); Teney et al. (2017) have attempted to enhance the reasoning capability of VQA models, with Teney et al. (2017) proposing to improve VQA using structured representations of the scene contents and questions. They developed a deep neural network that leverages the structure in these representations and builds graphs over scene objects and question words. The recent release of MultiModalQA Talmor et al. (2021), a dataset that demands joint reasoning over texts, tables, and images, has received widespread attention. However, similar to VQA, existing MultiModalQA methods have not fully utilized structured information from the input concepts. To address this, we propose a combination of multimodal graph learning and Transformer models to improve question answering across inputs from multiple different modalities.
### Attention mechanisms
The attention mechanism Xu et al. (2015, 2015); Devlin et al. (2018), has dramatically advanced the field of representation learning in machine learning. The attention mechanism is introduced in Vaswani et al. (2017) and widely used in language tasks (i.e., abstract summarization Xu et al. (2020)), machine translation Bahdanau et al. (2014), reading comprehension Dai et al. (2020), question answering Min et al. (2019), etc. Zhang et al. (2020) proposes using syntax to guide the text modeling by incorporating explicit syntactic constraints into attention mechanisms. Meanwhile, it has seen increasing application in multimodal tasks Li et al. (2020); Nam et al. (2017); Lu et al. (2016), where it is usually used for learning of interactions between multiple inputs. Following their success, Transformer models have also shown impressive results
on several vision-and-language tasks Chen et al. (2019); Hu et al. (2020); He et al. (2022); Sun et al. (2019). Yun et al. (2019) proposes Graph Transformer Networks (GTNs) that can generate new graph structures and learn effective node representation on the new graphs in an end-to-end fashion. Different from these works, our work incorporates graph information from different modalities into the Transformer to improve the reasoning ability.
### Exploiting graphs in multimodal reasoning
Considering that graph priors can transfer commonalities and mitigate the gap between visual and language domains, researchers explore how to use graphs Teney et al. (2017); Yu et al. (2020) properly in both tasks. In recent years, many classes of GNNs have been developed for both tasks which are divided into two approaches: spectral Bruna et al. (2013) and non-spectral methods Chen et al. (2018). Graphs can also be transferred into latent variables by GCN Yang et al. (2019); Yao et al. (2018), which can be directly utilized by models. However, the need for aligning graph priors from different modalities to do reasoning limits the use of graph priors. Our work addresses this problem via the graph-involved quasi-attention mechanism.
### Pretraining
Pretrained models in computer vision Simonyan and Zisserman (2014); He et al. (2016) and NLP Devlin et al. (2018); Yang et al. (2019); Liu et al. (2019), have achieved state-of-the-art performances in many downstream tasks Zhu et al. (2017); Jiang et al. (2022); Karpathy and Fei-Fei (2015); Lee et al. (2018). Other pretrained models Lu et al. (2019); Sun et al. (2019) based on BERT Devlin et al. (2018) and ViLT Kim et al. (2021) also demonstrate their effectiveness on downstream vision-language tasks. Recent works on vision-language pretraining such as OSCAR Li et al. (2020) perform cross-modal alignment in their visual-language pretraining models. Likewise, our proposed method includes cross-modality alignment, which is critical for reasoning. Our proposed modular plug-and-play graph-involved quasi-attention mechanism is also model-agnostic and can be also applied to other pretrained Transformer-based vision and language models.
## 3 Multimodal Graph Transformer
### Background on Transformers
The Transformer layer Vaswani et al. (2017) consists of two modules: a multi-head attention and a feed-forward network (FFN). Specifically, each head is represented by four main matrices: the query matrix \(\mathbf{W}_{i}^{q}\in\mathbb{R}^{d^{m}\times d^{q}/h}\), the key matrix \(\mathbf{W}_{i}^{k}\in\mathbb{R}^{d^{m}\times\frac{d^{k}}{h}}\), the value matrix \(\mathbf{W}_{i}^{v}\in\mathbb{R}^{d^{m}\times\frac{d^{v}}{h}}\), and the output matrix \(\mathbf{W}_{i}^{o}\in\mathbb{R}^{\frac{d^{v}}{h}\times d^{o}}\), and takes the hidden states \(\mathbf{H}\in\mathbb{R}^{l\times d^{m}}\) of the previous layer as input, where \(d\) denotes the dimension of the model, \(h\) represents the number of head, and \(i\) denotes the index of layer number. The output of attention is given by:
\[\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i}=\mathbf{H}\mathbf{W}_{i}^{q},\mathbf{H}\mathbf{W}_{i}^{k}, \mathbf{H}\mathbf{W}_{i}^{v} \tag{1}\]
\[\mathrm{Attention}\left(\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i}\right)=\mathrm{SoftMax }\left(\frac{\mathbf{Q}_{i}\mathbf{K}_{i}^{T}}{\sqrt{\frac{d^{q,k}}{h}}}\right)\mathbf{V}_ {i} \tag{2}\]
\[\mathbf{H}_{i}=\mathrm{Attention}\left(\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i}\right)\mathbf{W }_{i}^{o} \tag{3}\]
where \(\mathbf{Q}_{i}\in\mathbb{R}^{l\times\frac{d^{q}}{h}},\mathbf{K}_{i}\in\mathbb{R}^{l \times\frac{d^{k}}{h}},\mathbf{V}_{i}\in\mathbb{R}^{l\times\frac{d^{v}}{h}}\) are obtained by the linear transformations of \(\mathbf{W}_{i}^{q},\mathbf{W}_{i}^{k},\mathbf{W}_{i}^{v}\) respectively. \(\mathrm{Attention}(\cdot)\) is the scaled dot-product attention operation. Then output of each head is transformed to \(\mathbf{H}_{i}\in\mathbb{R}^{l\times d^{o}}\) by \(\mathbf{W}_{i}^{o}\).
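For reference, the minimal PyTorch sketch below implements Eqs. (1)-(3) for a single head; the randomly initialized matrices at the bottom only illustrate the expected shapes and are not part of any trained model.

```python
import torch

def attention_head(H, Wq, Wk, Wv, Wo):
    Q, K, V = H @ Wq, H @ Wk, H @ Wv             # Eq. (1): linear projections of the hidden states
    scores = Q @ K.T / (Q.shape[-1] ** 0.5)      # scaled dot products
    A = torch.softmax(scores, dim=-1) @ V        # Eq. (2): attention output
    return A @ Wo                                # Eq. (3): per-head output projection

l, d_m, d_head, d_o = 6, 16, 4, 16
H = torch.randn(l, d_m)
out = attention_head(H, torch.randn(d_m, d_head), torch.randn(d_m, d_head),
                     torch.randn(d_m, d_head), torch.randn(d_head, d_o))  # shape (l, d_o)
```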
### Framework overview
The entire framework of the proposed Multimodal Graph Transformer method is depicted in Figure 2. Without loss of generality, we assume the end task is VQA in the following discussion while noting that our framework can be applied to other vision-language tasks, such as multimodal question answering.
Given the input images and questions, the framework first constructs three graphs, including the semantic graph, dense region graph, and text graph, which will be described in more detail in the following sections. The graph \(G=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) represents the set of nodes in the graph and \(\mathcal{E}\) represents the edges connecting them, is fed into Transformers to guide the training process.
### Multimodal graph construction
We build three types of graphs and feed them into Transformers: _text graph_, _semantic graph_, and _dense region graph_. We now introduce them in detail.
**Text graph** The task of Visual Question Answering involves a combination of an image, a question, and its corresponding answer. To process the question, we extract the entities and create a text graph representation. We then build the graph \(G=(\mathcal{V},\mathcal{E})\) as shown in the left of Figure 2. The set of nodes, \(\mathcal{V}\), represents the entities and the set of edges, \(\mathcal{E}\), represents the relationships between the pairs of entities. This results in:
* A set of \(N\) entities, each represented by a vector of token embeddings, that constitute the nodes of the graph.
* A set of pairwise relations between entities, forming the edges of the text graph. The relationship between entities \(i\) and \(j\) is represented by a vector \(e_{ij}\) which encodes the relative relationships.
**Semantic graph** In tasks such as multimodal question answering, there might be additional inputs in the form of tables or lengthy paragraph sentences. To handle these inputs, a linear representation of the table can be created and a semantic graph can be constructed using a similar approach. They are processed using the scene graph parser Zhong et al. (2021), which transforms the text sentence into a graph of entities and relations, as depicted in Figure 3. The output of the scene graph parser includes:
* A set of \(N\) words that constitute the nodes of the semantic graph, where \(N\) is the number of parsed words in the texts.
* A set of possible pairwise relations between words, such as "left" and "on" as shown in Figure 3, which constitute the edges of our graph. An edge between words connecting \(j\) to \(i\) is represented by \(e_{ij}\), namely, the connectivity is indicated as: \(e_{ij}=\begin{cases}0,&i,j\ \text{ not connected}\\ 1,&i,j\ \text{ connected}\end{cases}\).
**Dense region graph** The visual features are extracted by slicing the input images into patches and flattening them. A dense region graph \(G=(\mathcal{V},\mathcal{E})\) is then converted into masks, with \(\mathcal{V}\) being the set of extracted visual features and \(\mathcal{E}\) being the set of edges connecting each feature node, following the method described in Kim et al. (2021). This results in a graph that is nearly fully connected.
The resulting three graphs are then transformed into adjacency matrices, where the elements are either -\(\infty\) or zero. The conversion process is depicted in Figure 3 using the semantic graph as an
Figure 3: The naive demonstration of converting a semantic graph into an adjacency matrix. Cells in blue means ’0’s for that element in the graph matrix, while white ones means ’-inf’s. We employ the matrix as the mask when computing the quasi-attention.
Figure 2: The figure illustrates the overall framework of our Multimodal Graph Transformer. The input from different modalities are processed and transformed into corresponding graphs, which are then converted into masks and combined with their features to be fed into Transformers for downstream reasoning. In detail, semantic graphs are created through scene graph generation methods, dense region graphs are extracted as densely connected graphs, and text graphs are generated through parsing.
example. These adjacency matrices are used inside the scaled dot-product attention to control the flow of information, by masking out (setting to \(-\infty\)) the values.
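As a small illustration of this conversion, the sketch below turns an edge list into the 0 / \(-\infty\) mask described above; treating relations as undirected and keeping the diagonal at zero are our assumptions rather than details stated in the text.

```python
import torch

def edges_to_mask(num_nodes, edges):
    # masked positions are -inf so they vanish after the softmax; connected pairs stay at 0
    mask = torch.full((num_nodes, num_nodes), float("-inf"))
    mask.fill_diagonal_(0.0)                 # keep self-connections (assumption)
    for i, j in edges:
        mask[i, j] = 0.0
        mask[j, i] = 0.0                     # relations treated as undirected (assumption)
    return mask

text_mask = edges_to_mask(4, [(0, 1), (1, 2), (2, 3)])
```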
### Graph-involved quasi-attention
In order to effectively utilize structured graph knowledge in our self-attention computation, we incorporate the graph as an extra constraint in each attention head by converting it into an adjacency matrix. The graph matrix, denoted as \(\mathbf{G}\), is constructed by combining various masks. An illustration of this process can be seen in Figure 4. The visual mask is generated from the dense region graph, while the text mask is derived from the text graph. Additionally, the cross-modal mask is set to an all-zero matrix to encourage the model to learn the cross-attention between visual and text features, thereby promoting alignment across the different modalities.
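A small sketch of this block structure is given below; placing the visual tokens before the text tokens in the concatenated sequence is our assumption for illustration.

```python
import torch

def build_graph_mask(visual_mask, text_mask):
    # diagonal blocks: within-modality graph masks; off-diagonal (cross-modal) blocks: all zeros,
    # so attention between image and text tokens is never masked out
    n_v, n_t = visual_mask.shape[0], text_mask.shape[0]
    top = torch.cat([visual_mask, torch.zeros(n_v, n_t)], dim=1)
    bottom = torch.cat([torch.zeros(n_t, n_v), text_mask], dim=1)
    return torch.cat([top, bottom], dim=0)   # joint mask G for the [visual; text] sequence
```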
Within the context of adding graph information, when vision graph mask and text graph mask are concatenated and aligned with image and text features, we believe that a more flexible masking-out mechanism is beneficial, rather than keeping a single constant mask matrix inside the Softmax operation. Drawing insights from Liu et al. (2021), where they include a relative position bias to each head in computing similarity, we also intuitively parameterize a trainable bias \(\hat{\mathbf{G}}\) and involve it in the training process. Finally, we compute the quasi-attention as follows:
\[\mathrm{Attention}=\mathrm{SoftMax}\left(\frac{\mathbf{Q}_{i}\mathbf{K}_{i}^{T}}{\sqrt{\frac{d^{q,k}}{h}}}+\mathbf{G}+\lambda\hat{\mathbf{G}}\right)\mathbf{V}_{i}, \tag{4}\]
where \(\lambda\) is the tradeoff hyper-parameter that controls the contribution of \(\hat{\mathbf{G}}\), and \(\mathbf{G}\) is our graph-induced matrix constructed by concatenating a graph matrix both from the vision and the language end. Here for clear clarification, we use \(\mathbf{G}\) and \(\hat{\mathbf{G}}\) to distinguish the graph matrices fixed and trainable, respectively. During training, \(\mathbf{G}\) is frozen as before and does not receive gradient updates, while \(\hat{\mathbf{G}}\) contains trainable parameters.
We now introduce the motivation behind adding two types of graph matrices. We perform the masking process by adding \(\mathbf{G}\) when computing the quasi-attention because it can be interpreted as a form of attentional pooling (learning to align), in which each element of \(\mathbf{G}\) pools all relevant information across all elements of the relative importance matrix computed by \(\left(\frac{\mathbf{Q}_{i}\mathbf{K}_{i}^{T}}{\sqrt{\frac{d^{q,k}}{h}}}\right)\). Hence during fine-tuning, the model ignores redundant features and only focuses on useful information. The mask also forces the model to learn the cross attention between features from the images and questions and to align them. The trainable bias \(\hat{\mathbf{G}}\) captures information gained during the training process. Such information is valuable for fine-tuning, making the Transformer more robust and helping it gain numerical stability.
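In code, Eq. (4) amounts to the small modification of scaled dot-product attention sketched below (single head, PyTorch); the function illustrates the formula rather than reproducing the authors' implementation.

```python
import torch

def quasi_attention(Q, K, V, G, G_hat, lam):
    # the fixed graph mask G and the trainable bias G_hat (scaled by lambda)
    # shift the attention scores before the softmax
    scores = Q @ K.transpose(-2, -1) / (Q.shape[-1] ** 0.5)
    return torch.softmax(scores + G + lam * G_hat, dim=-1) @ V
```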
### Training
The interdependence of output features from various modalities calls for a unified optimization approach for the Transformers in both the visual question answering and multimodal question answering tasks. To accomplish this, we train the models end to end. The final output of our models is a vector of classification logits, from which the VQA model selects the best answer among the available candidate answers. We compute the cross-entropy loss Zhang and Sabuncu (2018) on the output logits produced by the Transformer; this loss measures the difference between the predicted class probabilities and the actual class labels.
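The toy snippet below illustrates this objective; the batch size, the 3,129 answer classes of the VQA v2 setting described in Section 4, and the random logits are placeholders standing in for the model's actual outputs.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 3129, requires_grad=True)  # answer logits for a batch of 8 questions
labels = torch.randint(0, 3129, (8,))               # indices of the annotated answers
loss = F.cross_entropy(logits, labels)               # gap between predicted probabilities and labels
loss.backward()                                       # gradients flow back through the whole model
```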
## 4 Experiments
### Datasets
#### VQA v2
The VQA v2 dataset Goyal et al. (2017) extends the VQA Antol et al. (2015) dataset to better balance visual and textual information through the collection of complementary images. Each
Figure 4: A naive demonstration of adding the graph-induced mask while computing the quasi-attention when the inputs are from two modalities. The visual mask is the mask converted from the dense region graph and the text mask is converted from the text graph. The cross-modal mask, which is always set as an all-zero matrix, is imposed to encourage the model to learn the cross-attention between the image features and text features, thus facilitating the alignment across them.
question in VQA v2 is associated with a pair of similar images with different answers, resulting in a total of 1.1 million QA pairs and 204,000 images. The data split for VQA v2 includes a training set with 83,000 images and 444,000 questions, a validation set with 41,000 images and 214,000 questions, and a test set with 81,000 images and 448,000 questions. The annotated answers are in natural language, but they are commonly converted to a classification task with 3,129 answer classes. As described by Anderson et al. (2018), the model selects the answer to each question from a set of 3,129 most frequent answers. Following this convention, we fine-tune the multimodal graph transformer model on the VQAv2 training and validation sets, while reserving 1,000 validation images and related questions for internal validation.
**GQA** The GQA dataset contains 22M questions over 113K images. The questions in GQA are designed to require multi-hop reasoning to test the reasoning skills of VQA models. GQA greatly increases the complexity of the semantic structure of questions, leading to a more diverse function set. The real-world images in GQA also pose a bigger challenge in visual understanding. We treat the task as a classification task, following the VQA v2 setting.
**MultiModalQA** MultiModalQA (MMQA) contains 29,918 questions. We split the dataset following the public split. Around 60% of the questions in MMQA are compositional. The answer for each question can be a single answer or a list of answers.
### Baselines
We compare with four state-of-the-art VQA models: LXMERT Tan and Bansal (2019), NSM Hudson and Manning (2019), OSCAR Li et al. (2020), and VinVL Zhang et al. (2021).
* LXMERT Tan and Bansal (2019) designs five pretraining tasks: masked language modeling, feature regression, label classification, cross-modal matching, and image question answering to pretrain a large Transformer model. Towards this, a large-scale Transformer Vaswani et al. (2017) model is built that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modal encoder.
* NSM Hudson and Manning (2019) predicts a probabilistic graph that represents its underlying semantics and performs sequential reasoning over the graph to traversing its nodes to make the inference.
* OSCAR Li et al. (2020) uses object tags detected in images as anchor points to significantly ease the learning of alignments, improving previous methods and using self-attention to learn image-text semantic alignments.
* VinVL Zhang et al. (2021) developed a new object detection model to create better visual features of images than previous classical object detection models.
We compare with four baselines introduced in the MultiModalQA paper Talmor et al. (2019),
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Dataset & Method & Open questions & Binary questions & Overall accuracy \\ \hline \multirow{6}{*}{GQA} & LXMERT Tan and Bansal (2019) & - & - & 60.0 \\ & LXMERT w/ Graph Tan and Bansal (2019) & - & - & 61.4 \\ & HANS Kim et al. (2020) & - & - & 69.4 \\ & NSM Hudson and Manning (2019) & 49.3 & 78.9 & 63.2 \\ & OSCAR Li et al. (2020) & - & - & 61.6 \\ & VinVL Zhang et al. (2021) & - & - & 65.1 \\ & Multimodal Graph Transformer (Ours) & 59.4 & 80.5 & 68.7 \\ \hline \multirow{6}{*}{VQA v2} & LXMERT Tan and Bansal (2019) & - & - & 72.4 \\ & HANS Kim et al. (2020) & - & - & 65.1 \\ \cline{1-1} & NSM Hudson and Manning (2019) & - & - & 63.0 \\ \cline{1-1} & OSCAR Li et al. (2020) & - & - & 73.8 \\ \cline{1-1} & VinVL Zhang et al. (2021) & - & - & 76.6 \\ \cline{1-1} & Multimodal Graph Transformer (Ours) & 66.5 & 87.0 & 74.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy (%) comparison of different methods on the VQA task. Ours has the second best performance and is comparable to state-of-the-art methods. After applying our proposed quasi-attention mechanism and exploiting the use of graphs, there is also a 2% improvement of overall accuracy on the LXMERT baseline, suggesting the generalization ability of our method.
2021): Question-only (Kaushik and Lipton, 2018), Context-only (Kaushik and Lipton, 2018), AutoRouting, ImplicitDecomp.
* Question-only is a sequence-to-sequence model that directly generates the answer given the question.
* Context-only first predicts the question type using the classifier and then feed in the relevant context to predict the answer.
* AutoRouting first determines the modality where the answer is expected to occur, and then runs the corresponding single-modality module.
* ImplicitDecomp is a 2-hop implicit decomposition baseline and so far the state-of-the-art method on the MultiModalQA dataset.
### Implementation details
The input texts undergo preprocessing using a scene graph parser which extracts entities and their relationships. The text features are obtained through a pre-trained BERT tokenizer, allowing us to extract text spans of individual entities and text spans containing two related entities. As for images, we employ the methods described in Dosovitskiy et al. (2020); Kim et al. (2021) to extract visual features and create graph masks. This involves resizing the shorter edge of the input images while preserving the aspect ratio and limiting the longer edge, followed by patch projection and padding for batch training. The resulting patch embeddings are used as inputs along with constructed dense region graph that is densely connected. The Transformer backbone used in this setting is the pretrained VIT-B-32 (Dosovitskiy et al., 2020) version, consisting of 12 layers with a hidden size of \(H\) = 768, layer depth of \(D\) = 12, patch size of \(P\) = 32, a multi-layer perceptron size of 3072, and 12 attention heads. To test this setting, all inputs and graphs are merged and processed by the Transformer backbone, which learns from features from different modalities.
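A minimal ViT-style patch projection matching the numbers above (patch size \(P=32\), hidden size 768) is sketched below; the strided-convolution trick is a common way to "slice into patches and flatten" and is meant as an illustration, not the authors' exact code.

```python
import torch
import torch.nn as nn

patch_proj = nn.Conv2d(3, 768, kernel_size=32, stride=32)   # one 768-d embedding per 32x32 patch

img = torch.randn(1, 3, 224, 384)             # resized image: shorter edge fixed, longer edge capped
patches = patch_proj(img)                      # (1, 768, 7, 12)
tokens = patches.flatten(2).transpose(1, 2)    # (1, 84, 768): patch embeddings fed to the Transformer
```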
#### 4.3.1 MultiModalQA
We further investigate the effectiveness of our proposed method on MultiModalQA (Talmor et al., 2021), a recently introduced and demanding task that requires joint reasoning across various modalities such as texts, images, tables, etc. We employ a Multimodal Graph Transformer to tackle the task, using the same approach for extracting vision and text features as in VQA. Additional modalities, such as tables, are encoded by linearizing them and utilizing pre-trained models like RoBERTalarge (Liu et al., 2019). After generating text graphs, semantic graphs, and dense region graphs from input questions, text, tables, and images, we feed them along with the extracted features into the Transformer.
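The exact table serialization is not specified here; the sketch below shows one plausible way a table could be linearized into text before being encoded with RoBERTa-large.

```python
def linearize_table(header, rows):
    # row-wise "cell | cell" serialization; the separators are illustrative choices
    lines = [" | ".join(header)]
    lines += [" | ".join(str(cell) for cell in row) for row in rows]
    return " ; ".join(lines)

flat = linearize_table(["Movie", "Year"], [["Baywatch", 2017], ["Jungle Cruise", 2021]])
# "Movie | Year ; Baywatch | 2017 ; Jungle Cruise | 2021"
```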
### Results and analysis
Table 1 presents a comparison of the accuracy of our proposed method on the GQA dataset with previous state-of-the-art methods. Our proposed method ranks second in terms of accuracy and outperforms the third best method by a substantial margin, with an absolute improvement of over 3% in overall accuracy. The performance of our method is comparable to the state-of-the-art method.
We also conducted experiments on the VQA v2 dataset, and the results are summarized in Table 1 and Table 3. As shown, there are significant improvements over methods without graphs, suggesting that incorporating graph information into the Transformer is effective.
Additionally, after incorporating our proposed graph method into LXMERT, we can observe a boost in overall accuracy on the GQA dataset, demonstrating the generalization ability of the proposed method in incorporating graph information into quasi-attention computation.
Table 2 compares the Exact Match (EM) and average F1 score of our proposed method on the
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & EM & F1 \\ \hline Question-only & 16.9 & 19.5 \\ Context-only & 6.6 & 8.5 \\ \hline AutoRouting & 32.0 & 38.2 \\ ImplicitDecomp & 46.5 & 51.7 \\ \hline Human & 84.8 & 90.1 \\ \hline \hline Multimodal Transformer w/o Graph & 50.1 & 56.4 \\ Multimodal Graph Transformer (Ours) & 52.1 & 57.7 \\ \hline \hline \end{tabular}
\end{table}
Table 2: EM (%) and F1 (%) of Multimodal Graph Transformer and its Transformer baseline on questions in MultiModalQA that require reasoning over multiple modalities. We also quote the results from the MultiModalQA (Talmor et al., 2021) paper. Incorporating graph information into the Multimodal Graph Transformer can boost about 2% F1 and 4% EM performance.
MultiModalQA dataset with the baseline. The results show that our proposed method outperforms the baseline that does not use graph information, demonstrating the generalization of our method to more complicated vision-and-language reasoning tasks.
### Ablation studies
We perform ablation studies to verify the necessity of using two-stream inputs with the help of graphs to deal with input from different modalities, with GQA dataset as our testing bed. For all experiments, we use the overall accuracy as the evaluation metric.
The results presented in Table 3 show the superiority of our proposed Multimodal Graph Transformer over the method where a single modality input is fed into a Transformer. Our method, which involves dividing the input streams into two separate parts and processing each part through a Transformer, outperforms the Multimodal Transformer without Graph. This demonstrates the beneficial effect of incorporating graph information into the processing of the input data and performing training. The use of different input features with the help of graphs allows for a better alignment of the information from different modalities, which is reflected in the improved performance of our proposed method.
### Qualitative results
One qualitative example is shown in Figure 5. As can be seen, predictions from the Multimodal Graph Transformer are more relevant to the contents of the input image, as the graph information improves the inference ability of the Transformer, which further indicates the effectiveness of the Multimodal Graph Transformer.
## 5 Conclusions
In this paper, we have presented a novel method to integrate structured graph information to guide the Transformers training. Our method can model interactions between different modalities and achieves competitive performance on multimodal reasoning tasks such as VQA and MultiModalQA. Experimental results show that our method outperforms many other methods on the GQA dataset. More importantly, the proposed quasi-attention mechanism is model-agnostic and it is possible to apply it to other Transformer-based methods. We will test our methods on other vision-and-language reasoning tasks and include the comparison with existing graph representation learning methods in our future work.
## 6 Limitations and Potential Risks
The Limitations of the proposed Multimodal Graph Transformer include the potential preservation of fairness and bias issues inherent in the pretrained Transformer models, despite the involvement of graph information. Additionally, the integration of graphs may introduce new biases that can further exacerbate the problem. One potential source of bias is the vision-and-language dataset itself, which may favor majority cases and overlook minority
Figure 5: A qualitative comparison from VQA v2. _fresh_ is the ground truth. Predictions from the Multimodal Graph Transformer (ours) are more relevant to the contents of the input image and achieve a higher confidence score over the ground truth.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Dataset & Method & Open questions & Binary questions & Overall accuracy \\ \hline \multirow{3}{*}{GQA} & One-modality Transformer & 47.7 & 78.1 & 62.7 \\ & Multimodal Transformer w/o Graph & 49.9 & 81.0 & 65.4 \\ & Ours & **60.1** & **90.2** & **72.4** \\ \hline \multirow{3}{*}{VQA v2} & One-modality Transformer w/ one Transformer & 60.5 & 85.4 & 70.1 \\ & Multimodal Transformer w/o Graph & 64.8 & 86.3 & 72.1 \\ \cline{1-1} & Ours & **66.7** & **87.2** & **74.6** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation studies on the GQA and VQA v2 datasets. The table demonstrates the effectiveness of incorporating graph information into the Transformer architecture through ablation studies performed on GQA and VQA v2. The results of these studies clearly indicate that including graph information can lead to an improvement in performance.
cases. Unfortunately, the proposed method is not equipped to address these biases and issues, making further research and consideration crucial when building upon or directly using this method for vision and language tasks.
|
2309.11759 | Symbol Detection for Coarsely Quantized OTFS | This paper explicitly models a coarse and noisy quantization in a
communication system empowered by orthogonal time frequency space (OTFS) for
cost and power efficiency. We first point out, with coarse quantization, the
effective channel is imbalanced and thus no longer able to circularly shift the
transmitted symbols along the delay-Doppler domain. Meanwhile, the effective
channel is non-isotropic, which imposes a significant loss to symbol detection
algorithms like the original approximate message passing (AMP). Although the
algorithm of generalized expectation consistent for signal recovery (GEC-SR)
can mitigate this loss, the complexity in computation is prohibitively high,
mainly due to a dramatic increase in the matrix size of OTFS. In this context,
we propose a low-complexity algorithm that incorporates into the GEC-SR a quick
inversion of quasi-banded matrices, reducing the complexity from a cubic order
to a linear order while keeping the performance at the same level. | Junwei He, Haochuan Zhang, Chao Dong, Huimin Zhu | 2023-09-21T03:43:28Z | http://arxiv.org/abs/2309.11759v3 | # Symbol Detection for Coarsely Quantized OTFS
###### Abstract
This paper explicitly models a coarse and noisy quantization in a communication system empowered by orthogonal time frequency space (OTFS) for cost and power efficiency. We first point out that, with coarse quantization, the effective channel is imbalanced and thus no longer able to circularly shift the transmitted symbols along the delay-Doppler domain. Meanwhile, the effective channel is non-isotropic, which imposes a significant loss on symbol detection algorithms like the original approximate message passing. Although the algorithm of generalized expectation consistent for signal recovery (GEC-SR) can mitigate this loss, the complexity in computation is prohibitively high, mainly due to a dramatic increase in the matrix size of OTFS. In this context, we propose a low-complexity algorithm that embeds into GEC-SR a quick inversion of the quasi-banded matrices, thus reducing the algorithm's complexity from cubic order to linear order, while keeping the performance at almost the same level.
OTFS, coarse quantization, GEC-SR, matrix inversion, low-complexity.
## I Introduction
Interest in estimating signal parameters from quantized data has increased significantly in recent years [1]. Ultra-wideband applications, such as millimeter-wave communications, require high sampling rates, but conventional analog-to-digital converters (ADCs) are expensive and power-hungry. In cases that are cost and power constrained, the use of high-precision ADCs is not feasible, which makes ADCs with coarse quantization a better choice for systems like 6G [2]. For 6G, a prominent waveform candidate is _orthogonal time frequency space (OTFS)_ [3, 4], a 2D modulation technique that maps the information onto the delay-Doppler (DD) domain. OTFS enjoys excellent robustness in high-speed vehicular scenarios, while orthogonal frequency division multiplexing (OFDM) suffers from disrupted orthogonality among subcarriers due to the high Doppler shift.
Detection of symbols in the delay-Doppler domain is key to OTFS communications. Linear detectors like LMMSE are usually prior-ignorant, i.e., they are unaware of the prior information of the transmitted symbols, and therefore not optimal in the general sense. Non-linear detectors like sphere decoding are optimal in detection accuracy but often suffer from an unaffordable computational complexity. An effective and efficient alternative is to use message passing (MP) for the detection of OTFS symbols: [5] proposed a hybrid message passing detector for fractional OTFS that combines standard MP with approximate MP; [6] adopted Gaussian mixture distributions as the messages; [7] designed a hybrid detection algorithm that combines MP with parallel interference cancellation; [8] detected the signals in both time and DD domains iteratively using a unitary transformation; [9] developed a message passing algorithm that utilized the sparsity of the effective channel; [10] applied expectation propagation (EP) to the detection and significantly reduced its complexity by exploiting the channel structure; [11] proposed a unitary approximate message passing (UAMP) algorithm, addressing the challenge of large channel paths and fractional Doppler shifts effectively and efficiently; [12] circumvented the matrix inversion of vector approximate message passing (VAMP) by an average approximation. These works, however, considered only the ideal case of infinite-precision ADCs. The influence of coarse quantization is not yet accounted for.
This paper explicitly models the coarse and noisy quantization for OTFS communications. We find that a major difference between coarse quantization and the infinite-precision case is that the effective channel is no longer a multiplication of three matrices, i.e., the postprocessing transform, the multi-path fast-fading channel, and the preprocessing transform; instead, a non-linear mapping enters between the postprocessing transform and the remaining two, making it impossible to model them as an integrated whole. Ignoring this difference and directly applying the algorithms above is seen to bring about a noticeable performance loss. To overcome the limitation, we consider a generalized linear model (GLM) [13] that takes in the noisy quantization, the fast-fading channel, and the preprocessing transform, and validate the performance of two efficient solvers, GAMP [13] and GEC-SR [14]. We find that GEC-SR is much more robust to the change of the effective channel; however, the complexity of GEC-SR quickly soars as the matrix size in OTFS squares that of the OFDM counterpart.
In this context, we propose a low-complexity GEC-SR, which utilizes a quick inversion of quasi-banded matrices. The idea of inverting a quasi-banded matrix is not new [10, 15]; however, the channel matrix here is asymmetric (due to the lack of the postprocessing transform), and the matrix to invert is in general not quasi-banded. Interestingly, we find that if we approximate the GEC-SR covariance matrix by a scaled identity matrix, the matrix to invert simply reduces to a quasi-banded one. It is also worth noting that the method of [10] is not applicable to coarse quantization, because [10, Eq. (40)] holds only in the quantization-free case. Finally, we carry out simulations to confirm the effectiveness of the proposed algorithm. To sum up, our contributions are twofold:
* We point out, in the presence of coarse quantization, the effective channel becomes imbalanced, containing only one of two transform matrices, which makes the OTFS modulation unable to circularly shift the transmitted sym |
2309.08208 | HM-Conformer: A Conformer-based audio deepfake detection system with
hierarchical pooling and multi-level classification token aggregation methods | Audio deepfake detection (ADD) is the task of detecting spoofing attacks
generated by text-to-speech or voice conversion systems. Spoofing evidence,
which helps to distinguish between spoofed and bona-fide utterances, might
exist either locally or globally in the input features. To capture these, the
Conformer, which consists of Transformers and CNN, possesses a suitable
structure. However, since the Conformer was designed for sequence-to-sequence
tasks, its direct application to ADD tasks may be sub-optimal. To tackle this
limitation, we propose HM-Conformer by adopting two components: (1)
Hierarchical pooling method progressively reducing the sequence length to
eliminate duplicated information (2) Multi-level classification token
aggregation method utilizing classification tokens to gather information from
different blocks. Owing to these components, HM-Conformer can efficiently
detect spoofing evidence by processing various sequence lengths and aggregating
them. In experimental results on the ASVspoof 2021 Deepfake dataset,
HM-Conformer achieved a 15.71% EER, showing competitive performance compared to
recent systems. | Hyun-seo Shin, Jungwoo Heo, Ju-ho Kim, Chan-yeong Lim, Wonbin Kim, Ha-Jin Yu | 2023-09-15T07:18:30Z | http://arxiv.org/abs/2309.08208v1 | # HM-CONFORMER: A CONFORMER-BASED AUDIO DEEPFAKE DETECTION SYSTEM WITH HIERARCHICAL POOLING AND MULTI-LEVEL CLASSIFICATION TOKEN AGGREGATION METHODS
###### Abstract
Audio deepfake detection (ADD) is the task of detecting spoofing attacks generated by text-to-speech or voice conversion systems. Spoofing evidence, which helps to distinguish between spoofed and bona-fide utterances, might exist either locally or globally in the input features. To capture these, the Conformer, which consists of Transformers and CNN, possesses a suitable structure. However, since the Conformer was designed for sequence-to-sequence tasks, its direct application to ADD tasks may be sub-optimal. To tackle this limitation, we propose HM-Conformer by adopting two components: (1) Hierarchical pooling method progressively reducing the sequence length to eliminate duplicated information (2) Multi-level classification token aggregation method utilizing classification tokens to gather information from different blocks. Owing to these components, HM-Conformer can efficiently detect spoofing evidence by processing various sequence lengths and aggregating them. In experimental results on the ASVspoof 2021 Deepfake dataset, HM-Conformer achieved a 15.71% EER, showing competitive performance compared to recent systems.
Hyun-seo Shin\({}^{*}\), Jungwoo Heo\({}^{*}\), Ju-ho Kim, Chan-yeong Lim, Wonbin Kim, and Ha-Jin Yu\({}^{\dagger}\)+School of Computer Science, University of Seoul
Footnote †: This work was supported by Institute of Information & communications Technology Planning & Evaluation(ITTP) grant funded by the Korea government(MSIT) (No.RS-2023-00263037, Robust deepfake audio detection development against adversarial attacks)
Audio deepfake detection, Anti-spoofing, Conformer, Hierarchical pooling, Multi-level classification token aggregation
## 1 Introduction
Recently, speech generation technologies such as voice conversion (VC) and text-to-speech synthesis (TTS) have become so sophisticated that their outputs cannot be distinguished from those of humans, and they are evolving dramatically [1, 2]. Due to the risk of abusing these technologies in deepfake crimes or spoofing attacks, audio deepfake detection (ADD) has recently become a notable research area. In this flow, continuous efforts are being made to develop various countermeasure (CM) systems against spoofing. Among these efforts, CM systems based on deep neural networks (DNNs) have proven to be particularly effective, demonstrating outstanding performance [3, 4, 5].
Over the last decade, CM systems that extract local features at the frame level and subsequently derive global features by aggregating them to the utterance level have revealed remarkable performance in ADD task. A prime example of this is LCNN-LSTM [3], which processes input features with light convolution neural network (CNN) and then reprocesses the extracted information with long short-term memory layers; this model has established itself as a benchmark in spoofing challenges [6]. More recently, models such as AASIST have emerged, leveraging a graph neural network as a back-end module to interpret the local features extracted by CNN, exhibiting superior results [4]. Furthermore, SE-Rawformer offers innovative approaches to CM systems by processing the CNN output with Transformers that operate on entire sequence lengths of the temporal axis [5].
Liu et al. [5] argue that evidence of voice spoofing may exist at both the local and global levels, with examples including unnatural emphasis and intonation (at the local level) and excessive smoothing (at the global level). This assumption aligns well with the goal of CM systems that extract local features by employing convolutional layers and refine these features at the global level. From this perspective, we consider that the Conformer architecture [7], which combines the Transformer specialized in extracting global features and the CNN specialized in exploring local features, would be well-suited for capturing evidence of voice spoofing. Moreover, differing from traditional methods that explore local features first and then global features, the Conformer simultaneously explores local-global features by fusing the Transformer encoder with the convolution module. The architecture of Conformer can provide a standard for the importance of local information by considering the entire information.
In this paper, we propose HM-Conformer by modifying the Conformer structure through hierarchical pooling and multi-level classification token aggregation (MCA) methods. The vanilla Conformer framework tends to carry duplicated information between frame-level features [8], since it was designed for automatic speech recognition that requires frame-level output (i. e., many-to-many task). On the other hand, in many-to-one scenarios such as classification, we hypothesize that conveying compact features is more advantageous than delivering overlapping features. To reduce duplicated information, HM-Conformer applies a hierarchical pooling method which adds downsampling layers between Conformer blocks. Furthermore, the strategy of utilizing information from various layers is widely recognized for enhancing the performance of classification tasks, including ADD task [9, 10, 11]. To this end, we devised MCA method to aggregate task-related information from various encoder blocks. MCA method employs CLS tokens [12], which are effective in extracting information from the sequence of tokens, at each set of blocks to extract information from multiple encoders. Subsequently, the processed CLS tokens are individually trained through different classifiers and loss functions to extract task-related features.
HM-Conformer trained using the ASVspoof 2019 logical access training dataset achieved remarkable performance on the ASVspoof 2021 Deepfake detection task [6]. It achieved an equal error rate (EER) of 15.71%, outperforming recent frameworks that did not employ ensemble techniques.
## 2 Conformer
Conformer [7] is an architecture proposed in the automatic speech recognition domain and has achieved superior performance compared to Transformer and CNN-based models [13, 14]. We attribute this performance to the fact that the Conformer adopts the advantages of both Transformer and CNNs by having a structure with a convolution module inserted within the Transformer encoder. The Transformer is effective in modeling long-range global features by self-attention mechanism, while CNN is specialized for processing local features. By fusing the two structures, the Conformer is able to capture both global and local features, which is suitable for detecting spoofing evidence scattered at a various range of scales.
As shown in Fig. 1 (a), Conformer consists of a convolutional sub-sampling and linear layer to tokenize the input, and Conformer blocks to process the tokens. The convolution sub-sampling consists of \(n\) 2D-convolution layers with a stride of 2 to adjust the sequence length of the input. Following this, the vectors with flattened channel and frequency axes are processed into tokens \(h_{0}\in\mathbb{R}^{\frac{T}{2^{n}}\times d}\) through the linear layer. The Conformer block has a structure with two feed-forward modules that wrap around a multi-head self-attention (MHSA) and a convolution module in the center. If the output of the \(i\)-th Conformer block is defined as \(h_{i}\), then the Conformer block can be represented by the following Equations:
\[\widetilde{h}_{i-1}=h_{i-1}+\frac{1}{2}FFN\left(h_{i-1}\right), \tag{1}\]
\[h^{\prime}_{i-1}=\widetilde{h}_{i-1}+MHSA\left(\widetilde{h}_{i-1}\right), \tag{2}\]
\[h^{\prime\prime}_{i-1}=h^{\prime}_{i-1}+Conv\left(h^{\prime}_{i-1}\right), \tag{3}\]
\[h_{i}=LayerNorm\left(h^{\prime\prime}_{i-1}+\frac{1}{2}FFN\left(h^{\prime \prime}_{i-1}\right)\right), \tag{4}\]
where \(FFN\), \(MHSA\), and \(Conv\) refer to the feed-forward module, the multi-head self-attention module, and the convolution module, respectively. Due to MHSA and the convolution module, the Conformer block can process global and local features separately at each layer. Note that, unlike vanilla Conformer, we introduce global pooling using the SeqPooling [15] method to generate the final embedding \(e\in\mathbb{R}^{1\times d}\) from the output \(h_{6}\in\mathbb{R}^{\frac{T}{2^{n}}\times d}\) to perform the ADD task. Finally, as shown in Equation (5), the final embedding \(e\) is input into the classifier \(C\) to output a single scalar value \(Score\), and this structure is used as a baseline.
\[Score=C(e)=\sigma\left(eW_{1}^{c}+b^{c}\right)W_{2}^{c}, \tag{5}\]
where \(W_{1}^{c}\in\mathbb{R}^{d\times d/2}\), \(W_{2}^{c}\in\mathbb{R}^{d/2\times 1}\), and \(b^{c}\) denote trainable parameters, and \(\sigma\) is \(Swish\)[16] activation function.
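For concreteness, the block defined by Equations (1)-(4) and the scoring head of Equation (5) can be sketched in PyTorch as follows. This is an illustrative sketch only: the model dimension, number of attention heads, feed-forward expansion factor, and convolution kernel size are assumed values, not the configuration of the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeedForwardModule(nn.Module):
    def __init__(self, d, expansion=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.LayerNorm(d),
            nn.Linear(d, d * expansion),
            nn.SiLU(),                                  # Swish activation
            nn.Linear(d * expansion, d),
        )

    def forward(self, x):
        return self.net(x)


class ConvModule(nn.Module):
    def __init__(self, d, kernel_size=15):
        super().__init__()
        self.norm = nn.LayerNorm(d)
        # depthwise 1D convolution over the time axis
        self.conv = nn.Conv1d(d, d, kernel_size, padding=kernel_size // 2, groups=d)
        self.proj = nn.Linear(d, d)

    def forward(self, x):                               # x: (batch, time, d)
        y = self.norm(x).transpose(1, 2)                # (batch, d, time)
        y = self.conv(y).transpose(1, 2)                # back to (batch, time, d)
        return self.proj(F.silu(y))


class ConformerBlock(nn.Module):
    def __init__(self, d=144, n_heads=4):
        super().__init__()
        self.ffn1, self.ffn2 = FeedForwardModule(d), FeedForwardModule(d)
        self.attn_norm = nn.LayerNorm(d)
        self.mhsa = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.conv = ConvModule(d)
        self.final_norm = nn.LayerNorm(d)

    def forward(self, h):                               # h: (batch, time, d)
        h = h + 0.5 * self.ffn1(h)                      # Eq. (1)
        q = self.attn_norm(h)
        h = h + self.mhsa(q, q, q)[0]                   # Eq. (2)
        h = h + self.conv(h)                            # Eq. (3)
        return self.final_norm(h + 0.5 * self.ffn2(h))  # Eq. (4)


class ScoreHead(nn.Module):
    """Classifier of Eq. (5): Score = sigma(e W1 + b) W2."""
    def __init__(self, d=144):
        super().__init__()
        self.w1 = nn.Linear(d, d // 2)
        self.w2 = nn.Linear(d // 2, 1, bias=False)

    def forward(self, e):
        return self.w2(F.silu(self.w1(e)))


tokens = torch.randn(2, 100, 144)                       # dummy (batch, T/2^n, d)
print(ConformerBlock()(tokens).shape)                   # torch.Size([2, 100, 144])
# mean pooling used here only as a stand-in for SeqPool to show the head's shape
print(ScoreHead()(tokens.mean(dim=1)).shape)            # torch.Size([2, 1])
```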
## 3 Proposed System: HM-Conformer
In this study, we propose HM-Conformer for ADD task, which integrates two proposed methods into the Conformer structure: (1) hierarchical pooling method and (2) MCA method. The following two subsections and Fig. 1 (b), (c) describe the two proposed methods.
### Hierarchical pooling method
To improve the CM system using the Conformer, we noted that ADD task is a binary classification task, whereas Conformer was designed for sequence-to-sequence (seq-to-seq) tasks. In order to reduce the gap between the two tasks, we paid attention to research results that indicate tokens within the Transformer-based structure tend to become more similar to each other as they progress through encoders [8]. Based on this observation, it has been argued in the image processing field that conveying compact information is more advantageous than providing duplicated information in many-to-one tasks such as classification [17]. Inspired by this argument, we propose to apply a hierarchical pooling method that gradually downsamples the output of Conformer blocks to extract compact information for ADD task. By decreasing the sequence length of the tokens, the Conformer block can propagate more condensed features to the subsequent blocks. In addition, the hierarchical pooling method offers one more advantage: it can reduce computational costs.
Fig. 1 (b) illustrates the process of the hierarchical pooling method. First, the tokens \(h_{0}\), output from the convolution sub-sampling and linear layers, are passed to the Conformer block. Then, the outputs of the Conformer block \(h_{i}\) (\(i\in\{2,4\}\)) are processed through the pooling layer according to Equation (6).
\[\hat{h}_{i}=pooling\left(h_{i};\gamma\right), \tag{6}\]
Figure 1: Overview of the Conformer-based architectures. (a) depicts the overall architecture of the Conformer structure and the details of its blocks. (b) and (c) illustrate the architecture with the hierarchical pooling method and the proposed HM-conformer with hierarchical pooling and MCA methods.
where \(\gamma\) means the downsampling rate, which is fixed at 2 in this paper, and if \(h_{i}\in\mathbb{R}^{T^{\prime}\times d}\), then \(\hat{h}_{i}\in\mathbb{R}^{\frac{T^{\prime}}{2\gamma}\times d}\).
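This schedule amounts to inserting a time-axis downsampling step after the 2nd and 4th Conformer blocks, so later blocks see progressively shorter sequences. A minimal sketch, with max pooling as the downsampling operator (one of the better-performing choices in Fig. 2) and stand-in encoder blocks, could look like this:

```python
import torch
import torch.nn as nn

def hierarchical_forward(blocks, h0, pool_after=(2, 4), gamma=2):
    """Run the encoder blocks, downsampling the time axis after selected blocks."""
    pool = nn.MaxPool1d(kernel_size=gamma, stride=gamma)
    h = h0                                            # (batch, T, d)
    for i, block in enumerate(blocks, start=1):
        h = block(h)
        if i in pool_after:                           # Eq. (6): h_hat = pooling(h; gamma)
            h = pool(h.transpose(1, 2)).transpose(1, 2)
    return h

# Stand-in blocks; substitute real Conformer blocks in practice.
blocks = nn.ModuleList([nn.Identity() for _ in range(6)])
h0 = torch.randn(2, 200, 144)
print(hierarchical_forward(blocks, h0).shape)         # torch.Size([2, 50, 144])
```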
### Multi-level classification token aggregation method
The approach of aggregating and processing outputs from various layers, known as multi-level feature aggregation, is known to enhance the performance of classification tasks [9, 18, 19]. From this perspective, utilizing features extracted from multiple Conformer blocks may be beneficial for ADD task. However, lower layers of Transformer-based models are observed to process less task-relevant information [20], making the direct use of outputs from Conformer blocks potentially inefficient. Taking these characteristics into account, we propose MCA method that extracts task-relevant features from multiple blocks.
MCA is a method that extracts task-related information by training the CLS token, a widely used feature extraction technique in Transformer-based models for classification tasks, through auxiliary losses. CLS tokens are learnable vectors inserted at the beginning of a token sequence that serve as an aggregated representation of the sequence through MHSA modules [12]. Then, the aggregated representation can be utilized for classification tasks. Given this consideration, MCA method adds CLS tokens to the input sequence, and each set of Conformer blocks (called stage) has its own classifier and a loss function. Therefore, the lower block can be trained with more strong task-relevant supervision. Furthermore, MCA method can aggregate more discriminative features by adapting CLS tokens for each stage; when applied with the hierarchical pooling method, it becomes feasible to gather even more discriminative features from token sequences with diverse time scales.
Fig. 1 (c) depicts HM-Conformer where we applied the hierarchical pooling and MCA methods. Algorithm 1 shows the process of HM-Conformer. First, three CLS tokens which are randomly initialized vectors (pink dotted boxes in Fig. 1 (c)) are added to the input token sequence \(h_{0}\) as presented in Equation (7).
\[h_{0}=\{cls_{1},cls_{2},cls_{3},t_{1},...,t_{T/2^{n}}\}, \tag{7}\]
where \(cls_{i}\in\mathbb{R}^{d}\) and \(t_{i}\in\mathbb{R}^{d}\) denote the CLS tokens for each stage and the tokens from the input, respectively. These tokens are processed by the Conformer blocks, enabling the CLS tokens to aggregate information from the other tokens. The refined CLS tokens are separated before entering each pooling layer and then used, together with a global-level token (the lowest green box), to produce the final embedding. Each of the CLS tokens, the global-level token, and the final embedding serves as an embedding \(e_{k}\) that is passed through its own classifier, as shown in Algorithm 1. Subsequently, each embedding is processed through its own OC-Softmax [21] loss function \(L_{oc}\) to calculate the final loss \(L\):
\[Score_{k}=C_{k}(e_{k})=\sigma\left(e_{k}W_{1}^{k}+b^{k}\right)W_{2}^{k}, \tag{8}\]
\[L_{oc_{k}}=\frac{1}{N}\sum_{j=1}^{N}\log\left(1+e^{\alpha\left(m_{y_{j}}-\hat{w}_{k}Score_{k}\right)\left(-1\right)^{y_{j}}}\right), \tag{9}\]
\[L=\sum_{k=1}^{5}w_{k}L_{oc_{k}}\left(Score_{k}\right), \tag{10}\]
where \(\hat{w}_{k}\), \(\alpha\), and \(m_{y_{j}}\) denote a trainable single parameter, a scale factor, and margins, respectively. \(y_{j}\) means labels that \(y_{j}=0\) for bona-fide and \(y_{j}=1\) for spoofed, and the hyper-parameter \(w_{k}\) denotes weight of \(L_{oc_{k}}\). Note that during inference, only the \(Score_{5}\) processed from the final embedding \(e_{5}\) is used.
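A rough sketch of this objective is given below: each stage embedding \(e_{k}\) is scored by its own classifier (Eq. 8) and an OC-Softmax-style loss (Eq. 9), and the stage losses are combined with the weights \(w_{k}\) (Eq. 10). The hyper-parameter values follow those quoted in Section 4.2 and Table 2; the surrounding code (classifier shapes, batch construction) is illustrative only.

```python
import torch
import torch.nn as nn

ALPHA, M_BONAFIDE, M_SPOOF = 20.0, 0.9, 0.2      # alpha, m_0, m_1 (Sec. 4.2)
WEIGHTS = [4, 3, 2, 1, 1]                        # w_1..w_5 ratio (best setting, Table 2)

def oc_softmax_loss(score, y, w_hat):
    """y = 0 for bona-fide, 1 for spoof; score is the scalar output of Eq. (8)."""
    y = y.float()
    margin = (1 - y) * M_BONAFIDE + y * M_SPOOF  # m_{y_j}
    sign = 1.0 - 2.0 * y                         # (-1)^{y_j}
    return torch.log1p(torch.exp(ALPHA * (margin - w_hat * score) * sign)).mean()

def mca_loss(embeddings, classifiers, w_hats, y):
    """embeddings: e_1..e_5 (three CLS tokens, global-level token, final embedding)."""
    total = 0.0
    for w, e, clf, w_hat in zip(WEIGHTS, embeddings, classifiers, w_hats):
        score = clf(e).squeeze(-1)               # Eq. (8)
        total = total + w * oc_softmax_loss(score, y, w_hat)   # Eqs. (9)-(10)
    return total

# Toy check with random embeddings and the classifier shape of Eq. (8).
d, batch = 144, 8
classifiers = [nn.Sequential(nn.Linear(d, d // 2), nn.SiLU(),
                             nn.Linear(d // 2, 1, bias=False)) for _ in range(5)]
w_hats = [nn.Parameter(torch.ones(1)) for _ in range(5)]
embeddings = [torch.randn(batch, d) for _ in range(5)]
labels = torch.randint(0, 2, (batch,))
print(mca_loss(embeddings, classifiers, w_hats, labels))
```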
## 4 Experimental Setup
### Datasets and evaluation metric
We used the training and development partitions of the ASVspoof 2019 logical access task datasets for model training. This training dataset consists of 5,128 bona-fide and 45,096 spoof utterances generated by six spoofing attack algorithms. Model evaluation was performed on the evaluation partitions of the ASVspoof 2021 deepfake (DF) task dataset. The DF evaluation dataset comprises 611,829 samples of 100 spoofed and bona-fide utterances combinations. We compared the performance of the models based on the Equal Error Rate (EER), the official metric for the ASVspoof 2021 challenge.
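For reference, the EER can be computed from per-utterance scores as the operating point where the false-acceptance and false-rejection rates coincide. The following is a generic sketch (assuming higher scores indicate bona-fide speech), not the official challenge scoring tool:

```python
import numpy as np

def compute_eer(bonafide_scores, spoof_scores):
    """EER: threshold at which false-acceptance and false-rejection rates meet."""
    best_gap, eer = np.inf, None
    for t in np.sort(np.concatenate([bonafide_scores, spoof_scores])):
        far = np.mean(spoof_scores >= t)         # spoofed accepted as bona-fide
        frr = np.mean(bonafide_scores < t)       # bona-fide rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

rng = np.random.default_rng(0)
print(compute_eer(rng.normal(1.0, 1.0, 1000), rng.normal(-1.0, 1.0, 1000)))
```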
### Implementation details
We employed the OC-Softmax [21] loss function with hyper-parameters \(\alpha=20\), \(m_{0}=0.9\), and \(m_{1}=0.2\). To augment training data, we employed media codec1 augmentation, speed perturbation set to 0.9 and 1.1, SpecAugment [22] for randomly masking 0 to 20 frequency axes, and colored additive noise augmentation with signal-to-noise ratios randomly selected between 10 and 40. For
| **Frameworks** | **EER (%)** |
| --- | --- |
| LFCC-LCNN (Wang et al., 2021 [3]) | 23.48 |
| SE-Rawformer* (Liu et al., 2023 [5]) | 21.65 |
| AASIST (Jung et al., 2022 [4]) | 20.04 |
| SFR-CNN (Yamagishi et al., 2021 [6]) | 19.22 |
| Conformer (baseline) | 18.91 |
| **HM-Conformer** | **15.71** |

Table 1: Comparison of EER (%) performance of the ASVspoof 2021 DF task evaluation in various frameworks. (*: our implementation)
input features, we used 400 frames of the 120-dimensional linear frequency cepstral coefficients (LFCCs), encompassing a window length of 20 ms, a hop size of 10 ms, a 512-point FFT, a linearly spaced triangle filter bank of 40 channels, and delta and delta-delta coefficients. We utilized a batch size of 240 and the Adam [23] optimizer with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). Readers can access our codes at GitHub2.
Footnote 2: [https://github.com/talkingnow/HM-Conformer](https://github.com/talkingnow/HM-Conformer)
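A hypothetical front-end matching the quoted settings could be assembled with torchaudio as sketched below; the 16 kHz sample rate and the use of torchaudio's LFCC transform are assumptions, and the released code may implement the feature extraction differently.

```python
# Hypothetical LFCC front-end: 20 ms window, 10 ms hop, 512-point FFT,
# 40 linear filters, 40 cepstral coefficients plus deltas and
# delta-deltas (= 120 dims), truncated to 400 frames.
import torch
import torchaudio

SAMPLE_RATE = 16000                                          # assumed sample rate
lfcc = torchaudio.transforms.LFCC(
    sample_rate=SAMPLE_RATE,
    n_filter=40,
    n_lfcc=40,
    speckwargs={"n_fft": 512,
                "win_length": int(0.02 * SAMPLE_RATE),
                "hop_length": int(0.01 * SAMPLE_RATE)},
)

def extract_features(waveform):
    base = lfcc(waveform)                                    # (channel, 40, frames)
    delta = torchaudio.functional.compute_deltas(base)
    delta2 = torchaudio.functional.compute_deltas(delta)
    feats = torch.cat([base, delta, delta2], dim=-2)         # (channel, 120, frames)
    return feats[..., :400]                                  # keep 400 frames

waveform = torch.randn(1, SAMPLE_RATE * 4)                   # 4 s dummy audio
print(extract_features(waveform).shape)                      # torch.Size([1, 120, 400])
```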
## 5 Results
### Comparison with recent CM systems
Table 1 compares the performance of HM-Conformer with recently proposed single CM frameworks on the ASVspoof 2021 DF evaluation. The baseline framework (Conformer) achieves an EER of 18.91%, which improves on the other frameworks, thereby validating its potential for ADD. The proposed HM-Conformer achieved an EER of 15.71%, representing an approximately 16% relative improvement over the baseline. These results indicate that we successfully adapted the vanilla Conformer framework for the ADD task using our proposed hierarchical pooling and MCA methods.
### Validation of the hierarchical pooling method
To validate the effectiveness of the hierarchical pooling method, we examined the performance variation when applying different types of pooling layers, as displayed in Fig. 2. In our experiments, all employed pooling strategies yielded superior performance compared to the baseline. These results demonstrate that conveying condensed features is beneficial for addressing the ADD task. Meanwhile, there is one further notable observation in Fig. 2. In previous studies on the ADD task, pooling mechanisms that select the more significant representations from extracted features, such as max-pooling, are often employed in building CM systems and show outstanding performance [3, 4]. Consistent with prior work, max and top-k pooling yielded better performance than the other pooling techniques in our experiments.
### Effectiveness of MCA method
Table 2 shows the results of the Conformer with the MCA method under various settings of the \(w\) ratio. Experiment #1 shows the EER of the baseline, which is the Conformer framework with max pooling as described in subsection 5.2. Compared to the baseline, all experiments with MCA achieved superior performance, as depicted in #2\(\sim\)#6. Based on these results, we conclude that our proposed MCA method improves the Conformer's performance on the ADD task by transmitting appropriate information for detecting spoofing evidence. We also found that increasing the loss weight for the lower layers resulted in better performance than vice versa (#2, #3 vs #5, #6). In the end, by adding MCA with a \(w\) ratio of 4:3:2:1:1 to the baseline, we attained the best-performing HM-Conformer, which shows an EER of 15.71%.
### Ablation study of MCA method
In Table 3, we performed a token ablation experiment on the best HM-Conformer to verify that the information from all the different stages is useful. We observed that the performance of experiments #1\(\sim\)#6, which exclude some elements, decreased compared to experiment #7, which used all elements. These results suggest that all CLS tokens and the global-level token carry information regarding the spoofing evidence to the final embedding and that diverse discriminative information is significant for performance improvement.
## 6 Conclusion
In this study, we propose the HM-Conformer, a spoofing CM system, by modifying the Conformer structure through hierarchical pooling and multi-level classification token aggregation methods. The hierarchical pooling method, which can narrow the gap between seq-to-seq tasks and classification tasks, extracts compressed information suitable for the ADD task by reducing the sequence length of the Conformer block. MCA method enables the model to discern spoofing evidence from diverse sequence lengths at varying time compression levels. We verified that these two methods can enhance the Conformer, resulting in competitive performance in the ADD task when compared to modern frameworks.
| **No.** | \(w_{1}:w_{2}:w_{3}:w_{4}:w_{5}\) | **EER (%)** |
| --- | --- | --- |
| #1 | (baseline) | 17.84 |
| #2 | 1 : 1 : 1 : 1 : 6 | 17.07 |
| #3 | 1 : 1 : 2 : 3 : 4 | 17.03 |
| #4 | 1 : 1 : 1 : 1 : 1 | 16.06 |
| **#5** | **4 : 3 : 2 : 1 : 1** | **15.71** |
| #6 | 6 : 1 : 1 : 1 : 1 | 15.72 |

Table 2: Comparison of EER performance when changing the weights \(w_{k}\) of the loss function.
| **No.** | \(e_{1}\) | \(e_{2}\) | \(e_{3}\) | \(e_{4}\) | **EER (%)** |
| --- | --- | --- | --- | --- | --- |
| #1 | ✓ | | | ✓ | 17.41 |
| #2 | | ✓ | | ✓ | 16.59 |
| #3 | | | ✓ | ✓ | 17.38 |
| #4 | ✓ | | ✓ | ✓ | 16.92 |
| #5 | | ✓ | ✓ | ✓ | 17.74 |
| #6 | ✓ | ✓ | ✓ | | 16.71 |
| #7 | ✓ | ✓ | ✓ | ✓ | 15.71 |

Table 3: Comparison of EER performance for ablation study using the sub-set of \(e_{k}\). \(e_{1},e_{2}\), and \(e_{3}\) denote CLS token from \(k\)-th stage, and \(e_{4}\) means global-level token. (Ratio of loss weights 4:3:2:1)
Figure 2: Comparison of EER (%) performance between various pooling layers applied to the hierarchical pooling method. Convolution pooling was performed using a convolution layer with a kernel size of seven and a stride of four. G-pooling adopted the method of Gao et al. [24].
2309.10447 | Toward Unified Controllable Text Generation via Regular Expression
Instruction | Controllable text generation is a fundamental aspect of natural language
generation, with numerous methods proposed for different constraint types.
However, these approaches often require significant architectural or decoding
modifications, making them challenging to apply to additional constraints or
resolve different constraint combinations. To address this, our paper
introduces Regular Expression Instruction (REI), which utilizes an
instruction-based mechanism to fully exploit regular expressions' advantages to
uniformly model diverse constraints. Specifically, our REI supports all popular
fine-grained controllable generation constraints, i.e., lexical, positional,
and length, as well as their complex combinations, via regular expression-style
instructions. Our method only requires fine-tuning on medium-scale language
models or few-shot, in-context learning on large language models, and requires
no further adjustment when applied to various constraint combinations.
Experiments demonstrate that our straightforward approach yields high success
rates and adaptability to various constraints while maintaining competitiveness
in automatic metrics and outperforming most previous baselines. | Xin Zheng, Hongyu Lin, Xianpei Han, Le Sun | 2023-09-19T09:05:14Z | http://arxiv.org/abs/2309.10447v2 | # Toward Unified Controllable Text Generation via Regular Expression Instruction
###### Abstract
Controllable text generation is a fundamental aspect of natural language generation, with numerous methods proposed for different constraint types. However, these approaches often require significant architectural or decoding modifications, making them challenging to apply to additional constraints or resolve different constraint combinations. To address this, our paper introduces Regular Expression Instruction (REI), which utilizes an instruction-based mechanism to fully exploit regular expressions' advantages to uniformly model diverse constraints. Specifically, our REI supports all popular fine-grained controllable generation constraints, i.e., lexical, positional, and length, as well as their complex combinations, via regular expression-style instructions. Our method only requires fine-tuning on medium-scale language models or few-shot, in-context learning on large language models, and requires no further adjustment when applied to various constraint combinations. Experiments demonstrate that our straightforward approach yields high success rates and adaptability to various constraints while maintaining competitiveness in automatic metrics and outperforming most previous baselines. 1
Footnote 1: Our code and data are available at [https://github.com/MrZhengXin/CTG-Regex-Instruction](https://github.com/MrZhengXin/CTG-Regex-Instruction).
## 1 Introduction
Generating texts according to human requirements has long been a critical challenge in natural language generation Ziegler et al. (2019); Ouyang et al. (2022). With the emergence of large language models, many tasks in natural language processing can be unified and converted into the form of _controllable generation_ Prabhumoye et al. (2020). For example, text classification Apte et al. (1994), cloze test Devlin et al. (2019), and multiple-choice question answering Lai et al. (2017) tasks constrain the output text to be exactly one of the given options; abductive reasoning Bhagavatula et al. (2020) specifies that the position of the output text is between the previous and future contexts; the summarization task Luhn (1957) limits the length of the output; machine translation Bar-Hillel (1960) demands the use of the vocabulary of the target language for text generation.
For controllable text generation, typical fine
Table 1: Example inputs with control expressions for each constraint type (e.g., lexicon & length constraint).
grained control tasks include lexicon Lin et al. (2020), generating position Shen et al. (2020) and length Carlsson et al. (2022). Recently, various approaches have been proposed to satisfy these constraints, which can be categorized into three different paradigms: retraining or refactoring the model Keskar et al. (2019); Zhang et al. (2020); He (2021); Chan et al. (2021); tuning on given data Lester et al. (2021); Stiennon et al. (2020); manually designed post-processing Qin et al. (2020); Meng et al. (2022); Lu et al. (2021, 2022); Wang et al. (2021).
Despite the reasonable performance, current methods for transformer-based language models mainly focus on certain constraints but may not be easily transferred to others, let alone to combinations of constraints. For example, Non-Residual Prompting Carlsson et al. (2022) and A*esque Decoding Lu et al. (2022) only consider lexical and length constraints, but they cannot arbitrarily specify at which position the generated text shall occur; on the other hand, COLD Qin et al. (2022) can generate text given past and future context, but may not add a word-inclusion constraint nor restrict the output length. Moreover, these controlling methods assume that we have access to the probability distribution or even the gradient of the model, but in the case of large language models where we can only obtain the output tokens via an API, these methods may not be available, and thus black-box controlling techniques need further exploration.
To address the above challenges, we proposed instruction-based Regular Expression Instruction (REI), for universal fine-grained controllable generation. Table 1 present a few examples. Our instruction design is inspired by regular expression, which can easily describe mainstream constraints and their combinations. Following Rosenbaum et al. (2022), we use markup language to construct the expression, hoping that model can better distinguish between meta-data (instructions) and data (actual words). We use two popular paradigms, language model fine-tuning, and large language model few-shot, to teach the model to understand the input constraint expression.
Our method has several advantages. First, our constraint expression supports all typical fine-grained controlling task and is powerful enough to describe composite control specifications. Second, our method can be adapted to various scenarios, such as summarization with length constraint, terminology-constrained machine translation, and alternative-ending story infilling. Third, our method is easy to implement and highly transferable to other models since it requires only fine-tuning on medium-size models and no further modification on large language models, and it does not need access to probability distribution or gradient.
Experiments demonstrate that current state-of-the-art language models can understand our controlling language, achieving high success rate while maintaining high automatic evaluation metric score and surpassing most of the strong previous baselines under various constraints. We hope our work can shed light on future works.
## 2 Method
### Instruction Design
The controlling language REI follows the style of regular expression due to its expressiveness. Also, it's easy to evaluate whether the input expression instruction matches the generated text or not. Following Rosenbaum et al. (2022), HTML-like markup language is used, which helps the model learn that they are meaningful meta-data instructions rather than plain symbols, especially when using large language models in-context learning with limited examples and no parameter update. This markup label can also avoid the usage of the escape character.
REI contains several special labels, as shown in Table 1. <expression> and </expression> mark the beginning and the end of the expression and can be put anywhere in the input text, assuming we only generate according to one expression at a time. <mask_i> is equivalent to the regular expression ". *" and similar to the mask token in BART Lewis et al. (2020) and T5 Raffel et al. (2022), where at its position the model shall generate zero or more tokens. <options> and </options> is equivalent to the parentheses "(" and ")" in regular expression, the model shall choose one expression among the group. To make the recognition easier, we use <choice_i> and </choice_i> to wrap each choice. The regular expression notation of length counts at the character level, but in practice, we want to control the output word length. Therefore, we use the <length=> label to denote the constraint of output word count.
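As a small illustration of these labels, the snippet below assembles an REI expression from an ordered keyword list plus an optional word-length constraint and converts it into an ordinary Python regular expression for validity checking; the helper functions are illustrative and are not part of REI itself.

```python
import re

def build_expression(keywords, length=None):
    """Assemble an REI-style expression from ordered keywords (+ optional word count)."""
    parts = ["<mask_0>"]
    for i, kw in enumerate(keywords):
        parts += [kw, f"<mask_{i + 1}>"]
    length_label = f" <length={length}>" if length is not None else ""
    return "<expression> " + " ".join(parts) + length_label + " </expression>"

def to_regex(expression):
    """Translate the expression into a plain regex: each <mask_i> acts like '.*'."""
    body = expression.replace("<expression>", "").replace("</expression>", "")
    body = re.sub(r"<length=\d+>", "", body)
    tokens = body.split()
    pattern = "".join(".*" if t.startswith("<mask_") else re.escape(t) for t in tokens)
    return re.compile(pattern, flags=re.DOTALL)

expr = build_expression(["dance", "performed", "stage"], length=11)
print(expr)
print(bool(to_regex(expr).search("A dance was performed on the big stage today.")))
```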
We avoid the shortcoming of T5 Raffel et al. (2022) span-corruption schema, where the model only generates discontinued spans rather than full
natural sentences Lester et al. (2021). On the other hand, we also overcome the redundancy of BART denoising schema He (2021), where the whole input is generated again, since we only generate the realized expression. Moreover, beyond fill-in-the-blank, we introduce choice-making, which further enriches the expressiveness of our controlling language.
### Training
Fine-tuningWe could automatically construct the training data from the corpus and conduct self-supervised learning. Alternatively, we could also directly convert the input of existing supervised datasets into the form of our controlling language, and use them to fine-tune state-of-the-art models such as FLAN-T5 Chung et al. (2022). The input format is shown in Table 1(a).
We include \(\alpha\)NLG Bhagavatula et al. (2020) and CommonGen Lin et al. (2020), two English controllable generation datasets of position and lexicon constraint. In \(\alpha\)NLG, given the past observation \(O_{1}\) and the future observation \(O_{2}\), the goal is to generate a hypothesis \(h\) that could follow \(O_{1}\) and trigger \(O_{2}\). The regular expression of the constraint is "\(\cdot\) *" since no lexicon constraint is required. In CommonGen, given a set of \(k\) concepts \(C=\{c_{0},c_{1},...,c_{k-1}\}\), the output text shall include those concepts and at the same time be consistent with common sense. While in the original setting, the appearance order of concepts and their word sense change is not provided, and the model shall make these decisions, here in our controlling language, the exact word and order must be given. Otherwise, we cannot construct the corresponding expression. So, we preprocess the original instances and recover the order and word sense of the concepts by the reference text. To help the model generate the concepts sequentially and track how many concepts it has already used, we append the serial number label \((i)\) to every concept \(c_{i}\) on both the input and output sides and remove the labels from the output generation once completed. The regular expression of the constraint is "\(\cdot\) *\(c_{0}\) *\(c_{1}\)... *\(c_{k-1}\) *".
We also leverage these two datasets to teach the model to control the output length by simply adding the length label with the ground truth length. To better track how many words the model itself has already generated, we append the length number label \(\_i\) to every word \(w_{i}\); for example, the sentence "Stephen knocked over a vase while drunk." becomes "Stephen_0 knocked_1 over_2 a_3 vase_4 while_5 drunk_6". Similarly, we remove the length number labels after completion.
Finally, we need to teach the model about choosing grammar. We use \(\alpha\)NLI Bhagavatula et al. (2020) dataset, the task of which is to determine whether \(H_{1}\) or \(H_{2}\) is the more plausible hypothesis given the past and future observations \(O_{1}\) and \(O_{2}\), and the constraint of the regular expression is "\((H_{1}|H_{2})\)".
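A minimal sketch of the target-side labelling described above is given below; the exact label syntax is inferred from the examples in the text and may differ from the released preprocessing code.

```python
def label_concepts(concepts):
    """Append the serial-number label '(i)' to each concept, keeping the order."""
    return [f"{c}({i})" for i, c in enumerate(concepts)]

def label_lengths(sentence):
    """Append the position label '_i' to every word of the target sentence."""
    return " ".join(f"{w}_{i}" for i, w in enumerate(sentence.split()))

def strip_length_labels(labelled):
    """Remove the position labels again once generation is completed."""
    return " ".join(w.rsplit("_", 1)[0] for w in labelled.split())

print(label_concepts(["dance", "performed", "stage"]))
# ['dance(0)', 'performed(1)', 'stage(2)']
print(label_lengths("Stephen knocked over a vase while drunk"))
# 'Stephen_0 knocked_1 over_2 a_3 vase_4 while_5 drunk_6'
print(strip_length_labels(label_lengths("Stephen knocked over a vase while drunk")))
```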
Table: Input with control expression for each task (columns: Task, Input with Control Expression).
In-context LearningFor large language models like GPT-3.5 Brown et al. (2020), where access is typically provided only via an API, we may not be able to apply many traditional controllable generation techniques. However, we can leverage their in-context learning ability to conduct fine-grained constrained generation. More specifically, we leverage the ability to discover and imitate repeated patterns Madaan and Yazdanbakhsh (2022); Min et al. (2022), which is desirable in our case, since unlike other natural language understanding tasks, a specific fine-grained constraint is a well-defined, simple pattern that is easily discoverable and imitable.
Given the input with a control expression, we can select \(k\) instances with the same expression structure as the instruction prompt and send them to the large language model together with the input. Naturally, when evaluating the test set, we can select examples from the training or validation set, or from other instances of the test set when the former are not available. Consistently, we use the same input and output format described before, which saves extra effort on prompt engineering. In addition, we simply use the popular json format " {"input": [INPUT], "output": [OUTPUT]} " for each demonstration instance, and naturally separate the instances with blank lines ("\n\n"). By using json, we can further avoid the need for escape characters if the input text happens to contain metadata-like strings such as "Input" or newlines.
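The following sketch shows one way to assemble such a few-shot prompt, with each demonstration serialized as a JSON object and the query left open for completion; the example texts are invented, and the blank-line separator reflects the description above.

```python
import json

def build_prompt(demonstrations, query_input):
    """Serialize each demo as a JSON object, join with blank lines, and leave the
    query's "output" field open so a completion model fills it in."""
    lines = [json.dumps({"input": x, "output": y}) for x, y in demonstrations]
    lines.append(json.dumps({"input": query_input})[:-1] + ', "output": "')
    return "\n\n".join(lines)

demos = [("<expression> <mask_0> dog <mask_1> frisbee <mask_2> </expression>",
          "A dog catches a frisbee in the park.")]
query = "<expression> <mask_0> cat <mask_1> sofa <mask_2> </expression>"
print(build_prompt(demos, query))
```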
### Inference
We use rejection sampling to generate output text that matches the control expression. Verifying the output is simple, since we can convert the control expression into a regular expression and check validity. Additionally, if the expression contains a length constraint label, we count and compare the number of words in the output text. We try at most \(k\) times to avoid an infinite loop and to save costs when using a large language model API. When using a medium- or small-size language model, to increase generation quality, we can perform beam search first and see whether it generates a valid result on the first try.
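A compact sketch of this inference loop is shown below; `generate` stands for an arbitrary black-box text generator (beam search on the first try, sampling afterwards) and is a placeholder rather than an actual API.

```python
import re

def satisfies(pattern, target_len, text):
    """Check the regex translated from the expression and, if given, the word count."""
    if target_len is not None and len(text.split()) != target_len:
        return False
    return bool(pattern.search(text))

def constrained_generate(generate, expression, pattern, target_len=None, k=512):
    """Rejection sampling: keep the first of at most k candidates that is valid."""
    for trial in range(k):
        candidate = generate(expression, sampling=(trial > 0))  # 1st try: beam search
        if satisfies(pattern, target_len, candidate):
            return candidate
    return None                                   # no valid output within k trials

# Toy run with a fake generator standing in for the language model.
pattern = re.compile(r".*dance.*stage.*", flags=re.DOTALL)
fake_generate = lambda expr, sampling: "They dance together on the stage."
print(constrained_generate(fake_generate, "<expression>...</expression>",
                           pattern, target_len=6))
```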
### Recursive Decoding
Different choices might affect the generated text. For example, consider the case "\(S_{1}S_{2}S_{3}\).\(*\)(\(E_{1}|E_{2}\))", which gives the first three sentences and two alternative endings, and the goal is to choose the correct ending while infilling the fourth sentence at the same time; this case is not included in our fine-tuning data. Instead of directly jumping to the answer with possibly insufficient computation, we can also let the model "think step by step" Kojima et al. (2022): we solve each choice expression first, then compare the completed choices "(\(S_{4}E_{1}|S_{4}^{\prime}E_{2}\))". The generalized decoding procedure is presented in Algorithm 1, which assumes that the options groups are independent of each other and greedily solves them from left to right. We leave the evaluation of expressions with multiple consecutive options Lu et al. (2022) for future work.
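The sketch below gives a rough, simplified rendering of this procedure; `fill_masks` and `score` are placeholders for calls to the underlying language model, and the greedy left-to-right resolution assumes the options groups can be treated independently.

```python
import re

OPTION = re.compile(r"<options>(.*?)</options>", flags=re.DOTALL)
CHOICE = re.compile(r"<choice_\d+>(.*?)</choice_\d+>", flags=re.DOTALL)

def recursive_decode(expression, fill_masks, score):
    """Greedily resolve <options> groups left to right (rough sketch of Alg. 1)."""
    while True:
        group = OPTION.search(expression)
        if group is None:
            return fill_masks(expression)             # no options left: plain infilling
        completed = []
        for choice in CHOICE.findall(group.group(1)):
            candidate = expression[:group.start()] + choice + expression[group.end():]
            completed.append(fill_masks(candidate))   # realize this choice
        expression = max(completed, key=score)        # keep the best-scoring realization

# Toy stand-ins for the language model calls (assumed placeholders).
fake_fill = lambda e: e.replace("<mask_0>", "He went home.")
fake_score = lambda text: -len(text)                  # stand-in for model log-probability
expr = ("S1 S2 S3 <mask_0> <options><choice_0>E1</choice_0>"
        "<choice_1>E2</choice_1></options>")
print(recursive_decode(expr, fake_fill, fake_score))
```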
## 3 Experiment
### Setup
We conduct experiments on 2 Nvidia A100 GPUs, with about 10 total GPU hours locally. For medium-size language model, we use FLAN-T5-xl Chung et al. (2022) with Apache 2.0 license, which has 3B parameters and is fine-tuned on many natural language understanding and generation tasks. We use Huggingface Transformers library Wolf et al. (2020) with Apache-2.0 license for fine-tuning and evaluation. We trained the model for 3 epochs, with a batch size of 16 and learning rate of 3e-5. We set beam size to 4 for beam search and p to 0.95 for top-p sampling. We generate at most \(k=512\) samples if we do not obtain any valid outcome.
For the large language model, we use the text-davinci-003 version of GPT-3 Brown et al. (2020) via the OpenAI API; the 175B model is calibrated with Reinforcement Learning from Human Feedback Stiennon et al. (2020). We feed 8 in-domain examples as the prompt, set the temperature to 0.7, and retry at most \(k=8\) times if the result is not valid. All results are from a single run.
### Lexicon Constraint
#### 3.2.1 Lexicon Constraint Only
SettingWe evaluate our method on the devset of CommonGen Lin et al. (2020), as the reference text of the test set is not publicly available. As mentioned in Section 2.2, we feed the model the oracle concept order and word senses. For automatic metrics we use BLEU-4 Papineni et al. (2002), CIDEr Vedantam et al. (2015), SPICE Anderson et al. (2016), and Coverage (Cov.), which is the average ratio of input concepts that are present in the lemmatized outputs.
ResultsWe compare the performance of our method with other baselines, including the fine-tuning methods BART (Lin et al., 2020) and T5-Large (Lin et al., 2020), the auxiliary guiding model method NADO (Meng et al., 2022), the prompting method NRP (Carlsson et al., 2022), and 8-shot pure natural language instruction (NLI) on GPT-3.5, as shown in Table 2(a).
Given only 8 examples with a clear connection between input and output, GPT-3.5 still shows competitive performance in terms of automatic text metrics and achieves high concept coverage, surpassing all the previous baselines. Compared with natural language instruction, the success rate is very close. With more supervised data to update the model's parameters, FLAN-T5-xl performs significantly better than GPT-3.5 and the other previous baselines in all metrics and successfully satisfies all lexicon constraints.
#### 3.2.2 Lexicon & Length Constraint
As described in Section 2.2, we slightly modify the devset of CommonGen to introduce the additional length constraint and evaluate GPT-3.5 and FLAN-T5. For the metrics, we replace Coverage (Cov.) with Success Rate (SuR.), the average percentage of outputs that match the input expression. On this composite task, the performance of GPT-3.5 degrades dramatically and it struggles to generate valid output, indicating that satisfying multi-concept inclusion and length control at the same time is challenging, especially for few-shot in-context learning. Yet REI still outperforms NLI in terms of success rate, and the "high" n-gram metrics might also indicate poor instruction-following ability on challenging fine-grained constraints, which is consistent with the finding of Zhou et al. (2023). FLAN-T5 has only a minor drop in performance and still maintains a high success rate, since it has been trained on this composite constraint.
### Position Constraint
#### 3.3.1 Position constraint only
SettingWe evaluate our method on the testset of \(\alpha\)NLG (Bhagavatula et al., 2020). The automatic metrics include BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004) and BERTScore (Zhang* et al., 2020). We do not report Success Rate since it's always 100%.
ResultsAs presented in Table 3(a), we compare our method with two unsupervised baselines DeLorean (Qin et al., 2020) and COLD (Qin et al., 2022), non-autoregressive Diffusion-LM (Li et al., 2022) and two fine-tuned methods on 11B T5 (Khashabi et al., 2021), 20B UL2 (Tay et al., 2022) and 8-shot NLI on GPT-3.5.
With few-shot learning, GPT-3.5 outperforms two unsupervised baselines and Diffusion-LM, demonstrating its strong in-context learning ability given only a few infilling examples. Since it's a relatively simple constraint, the performance between REI and NLI is very close. With our careful instruction prompt design and adequate fine-tuning, 3B FLAN-T5 shows stronger performance than 11B T5, and remains competitive compared to 20B UL2.
Table 3: Results on devset of CommonGen. The best models are **bold** within each metric.
#### 3.3.2 Position & Length Constraint
As mentioned in Section 2.2, we slightly modify the \(\alpha\)NLG test set to add the length constraint. We change the BERTScore metric to Success Rate (SuR.). Table 3(b) shows the results. GPT-3.5 manages to imitate both the position and length constraints, showing a relatively high success rate, while under NLI it performs badly. With full-scale supervised learning, FLAN-T5 robustly generates valid output on the test set 100% of the time. Also, in terms of automatic metrics, the output of both models does not degrade dramatically.
#### 3.3.3 Position & Lexicon Constraint
We can also modify the \(\alpha\)NLG test set to add a lexicon constraint, setting the keyword to be the first verb in the reference text. The input format is shown in Table 1(b), and Table 3(c) shows the results. GPT-3.5 still generates valid output nearly all of the time, and the automatic metrics improve compared with the results without the lexicon constraint, since additional gold words are provided and the verb constraint narrows the vast space of possible hypotheses. Also, REI is slightly better than NLI. For FLAN-T5, although it has been trained on the position constraint and the lexicon constraint separately, it has not seen their combination, and yet it still demonstrates strong performance.
#### 3.3.4 Position & Lexicon & Length Constraint
We can further combine all conditions, adding both length and lexicon constraints to the test set of \(\alpha\)NLG. The input format is presented in Table 1(b), and Table 3(d) shows the results. Compositional constraints challenge few-shot GPT-3.5, as it is more difficult to generate output that matches all three requirements, and the success rate drops slightly. Interestingly, NLI obtains a very low success rate. Fully-trained FLAN-T5, however, exhibits robust transfer ability: the three simultaneous constraints are not included in the training data, yet FLAN-T5 still manages to achieve a success rate close to 100%.
#### 3.3.5 Position Constraint & Alternative Endings
On the test set of the Story Cloze Test Mostafazadeh et al. (2016), whose task is to choose between the right ending and the wrong one given the four-sentence context, we additionally mask the fourth sentence and require the model to infill the missing sentence while determining the correct ending. The input format is shown in Table 1(b), and the results are shown in Table 6. We change the Success Rate (SuR.) metric to Accuracy (Acc.), since choosing either ending yields a valid output. For GPT-3.5, we directly construct prompting examples with the initial input and final output, and surprisingly find that GPT-3.5 handles the composite constraint quite well, choosing the right ending with reasonable accuracy. Also, REI comes close to NLI in performance. For FLAN-T5-xl, we use recursive decoding (Section 2.4), and it shows moderate performance, with lower accuracy but higher BLEU / ROUGE compared with GPT-3.5.
### Summarization with length constraint
REI can also easily support abstractive summarization with desired length Kikuchi et al. (2016); Fan et al. (2018), as long as the base model has been trained on the summarization task, which is the case in our choosing models FLAN-T5 Chung et al. (2022) and GPT-3.5 Ouyang et al. (2022). We choose to evaluate on the test set of English headline generation dataset Gigaword Graff et al. (2017),
Table 4: Result on test of \(\alpha\)NLG.
2003), due to its short input and output length. Also, Gigaword is not included in the training set of FLAN-T5 or GPT-3.5. The input format is written in Table 2b. We use ROUGE-L Lin (2004) and Success Rate (SuR.) for metrics.
We compare our methods with two unsupervised unconstrained baselines, SEQ Baziotis et al. (2019) and TED Yang et al. (2020), and the results are shown in Table 7. Both GPT-3.5 and FLAN-T5 exceed the two baselines in ROUGE-L score, showing relatively good text quality. Since the summarization task constrains the semantics of the output more than a pure lexicon constraint (CommonGen) or position constraint (\(\alpha\)NLG), satisfying the length constraint might be more difficult: GPT-3.5 shows a relatively lower success rate, and NLI has the worst success rate. Nevertheless, FLAN-T5 still achieves a 100% success rate. Notice that, even with the limited set of REI training tasks, the model can still generalize to new tasks with this specific format, demonstrating robust transfer ability under supervised learning.
### Terminology-constrained machine transltion
We can also apply REI to machine translation with a terminology constraint Dinu et al. (2019), which is to ensure that the given terminologies \(T=(t_{0},t_{1},...)\) are used in the translation. We only test GPT-3.5 here, due to its superiority in multilingual understanding, while the majority of the output language during pre-training, multi-task learning, and fine-tuning is English. We evaluate on the test sets of Wiktionary and IATE Dinu et al. (2019), two English-German translation datasets, using BLEU-4 Papineni et al. (2002) and Terminology Coverage (Term) as metrics.
We compare our method with several strong baselines, including Constraint decoding Dinu et al. (2019), Train-by-replace Dinu et al. (2019), RePP Sun et al. (2022), TADA Ailem et al. (2021), EDITOR Xu and Carpuat (2021), Levenshtein Transformer Susanto et al. (2020), and 8-shot NLI on GPT-3.5. Due to its vast parameter count, GPT-3.5 outperforms all other baselines in terms of BLEU score. Also, GPT-3.5 achieves a near-100% terminology coverage rate, which is close to the existing upper limit. Finally, REI has slightly higher term coverage than NLI.
### Qualitative Results
Table 8 shows the samples of lexicon & length constraints (Section 3.2.2), position & lexicon & length constraints (Section 3.3.4), position constraint with alternative ending (Section 3.3.5), summarization with length constraint (Section 3.4) and translation with terminology constraint (Section 3.5). Both FLAN-T5 and GPT-3.5 generate valid and fluent sentences. GPT-3.5 also uses more vivid or human-like words like "antihistamines" or the abbreviation "FIA", probably due to its large-scale model size and training corpus.
| **Method** | **BLEU** | **ROUGE** | **Acc.** |
| --- | --- | --- | --- |
| NLI+GPT-3.5, 8 shot | 3.83 | **21.27** | **88.99** |
| REI+GPT-3.5, 8 shot | 3.77 | 20.56 | 88.72 |
| REI+FLAN-T5-xl | **3.87** | 20.9 | 84.61 |

Table 6: Results on Story Cloze Test with positional constraint.

| **Method** | **Wiktionary Term%** | **Wiktionary BLEU** | **IATE Term%** | **IATE BLEU** |
| --- | --- | --- | --- | --- |
| Constraint decoding Dinu et al. (2019) | 99.50 | 25.80 | 82.00 | 25.30 |
| Train-by-replace Dinu et al. (2019) | 93.40 | 26.30 | 94.50 | 26.00 |
| RePP Sun et al. (2022) | 93.67 | 30.52 | 95.41 | 29.38 |
| TADA Ailem et al. (2021) | 96.84 | 26.73 | 98.02 | 27.11 |
| EDITOR Xu and Carpuat (2021) | 99.8 | 29.30 | **100.0** | 28.90 |
| Levenshtein Transformer Susanto et al. (2020) | **100.0** | 31.20 | **100.0** | 30.13 |
| NLI+GPT-3.5, 8-shot | 99.03 | **37.62** | 98.07 | 32.22 |
| REI+GPT-3.5, 8-shot | 99.52 | 34.88 | 99.45 | **35.25** |

Table 5: Results on Wiktionary and IATE.
## 4 Related Work
Tasks of Controllable Text GenerationControllable text generation refers to tasks that generate text according to controlling signals (Prabhumoye et al., 2020). Typically, the output can be constrained at three levels from coarse to fine (Zhang et al., 2022): semantic, structural, and lexical. At the semantic level, the signals include topic (Tang et al., 2019), sentiment (Logeswaran et al., 2018), format (Li et al., 2020), toxicity (Krause et al., 2021), and other abstract attributes. At the structural level, the constraints include key-value data tables (Novikova et al., 2017), syntax trees, and parts-of-speech (Li et al., 2022). At the lexical level, the controlling elements include keywords (Lin et al., 2020), generating position (Shen et al., 2020), and length (Carlsson et al., 2022).
Methods of Controllable Text GenerationCurrent approaches for controllable text generation can be summarized into three main categories (Zhang et al., 2022): retraining or refactoring the model, e.g. CTRL (Keskar et al., 2019), POINTER (Zhang et al., 2020), CMDP (Chan et al., 2021), Constrained BART (He, 2021), CoCon (Chan et al., 2021), PlanGen (Su et al., 2021) and InstructCTG (Zhou et al., 2023); tuning on given data, including model fine-tuning, Prompt Tuning (Lester et al., 2021) and RL Fine-Tuning (Stiennon et al., 2020); and post-processing, which can either design a specific decoding strategy, e.g. Constrained Beam Search (Anderson et al., 2017), DeLorean (Qin et al., 2020), COLD (Qin et al., 2022), NeuroLogic (Lu et al., 2021); or use an auxiliary guiding model, e.g. PPLM (Anderson et al., 2017), GeDI (Krause et al., 2021), FUDGE (Yang and Klein, 2021), CTRLsum (He et al., 2022), Plug-and-Play Content Planning (Liu et al., 2022), NADO (Meng et al., 2022), and MACSum (Zhang et al., 2023).
## 5 Conclusion
We proposed Regular Expression Instruction (REI), a novel instruction-based method that unifies fine-grain lexical-level constrained text generation. Our method is highly adaptable, fitting either language model fine-tuning or large language model in-context learning. Our controlling language can also easily be applied to other related tasks, including story completion while infilling, summarization with length constraint, and machine translation with terminology constraint. Experiments show that our method has a high success rate and outperforms most of the previous strong baselines, demonstrating its effectiveness despite the simplicity. We leave the evaluation and improvement of more complex constraints for future works.
Table 8: Qualitative samples of generated outputs under each constraint type (e.g., CommonGen with a length constraint, \(\alpha\)NLG with lexicon and length constraints, Story Cloze with the position constraint, Gigaword summarization with a length constraint, and terminology-constrained translation).
### Limitations
Our proposed Regular Expression Instruction is serialized and cannot describe a set of keyword constraints whose order is arbitrary; it only expresses a list of keywords in a fixed order. Future work is needed to overcome this limitation, either by approximating the word order or by repeated random sampling. Also, to obtain valid results we use reject sampling, which may require many repeated trials, reducing efficiency and slowing down generation. More efficient mechanisms that need fewer retries are worth investigating. Additionally, given the current trend toward instruction following, more sophisticated zero-shot prompts are worth investigating.
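The reject-sampling step mentioned above can be pictured as a simple retry loop; the sketch below is a hypothetical illustration in Python, where `generate_fn` and the example pattern are placeholders and not part of REI or of any specific model API.

```python
import re

def generate_with_constraint(generate_fn, pattern, max_tries=20):
    """Toy rejection-sampling loop: keep sampling until the generated text
    matches the lexical constraint, expressed here as an ordinary regex."""
    regex = re.compile(pattern, flags=re.DOTALL)
    for attempt in range(1, max_tries + 1):
        text = generate_fn()               # one call to the (hypothetical) generator
        if regex.fullmatch(text):
            return text, attempt           # success and number of trials used
    return None, max_tries                 # constraint not met within the retry budget

# Example constraint: the keywords "dance" and "stage" must appear in this order.
# result, tries = generate_with_constraint(my_model_sample, r".*\bdance\b.*\bstage\b.*")
```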
### Ethics Statement
This work involves no sensitive data and uses several publicly available datasets. This work discusses controllable text generation, which aims for better use of black-box language models and may help reduce problematic biases. We note that the method proposed in this work could be used to generate disinformation or harmful content directly via the controlling language, but such malicious usage can be mitigated by filtering out improper control inputs and stopping harmful content generation.
|
2309.10321 | Markov Chain Monte Carlo for Bayesian Parametric Galaxy Modeling in LSST | We apply Markov Chain Monte Carlo (MCMC) to the problem of parametric galaxy
modeling, estimating posterior distributions of galaxy properties such as
ellipticity and brightness for more than 100,000 images of galaxies taken from
DC2, a simulated telescope survey resembling the upcoming Rubin Observatory
Legacy Survey of Space and Time (LSST). We use a physically informed prior and
apply selection corrections to the likelihood. The resulting posterior samples
enable rigorous probabilistic inference of galaxy model parameters and their
uncertainties. These posteriors are one key ingredient in a fully probabilistic
description of galaxy catalogs, which can ultimately enable a refined Bayesian
estimate of cosmological parameters. We systematically examine the reliability
of the posterior mean as a point estimator of galaxy parameters, and of the
posterior width as a measure of uncertainty, under some common modeling
approximations. We implement the probabilistic modeling and MCMC inference
using the JIF (Joint Image Framework) tool, which we make freely available
online. | James J. Buchanan, Michael D. Schneider, Kerianne Pruett, Robert E. Armstrong | 2023-09-19T05:09:11Z | http://arxiv.org/abs/2309.10321v1 | # Markov Chain Monte Carlo for Bayesian Parametric Galaxy Modeling in LSST
###### Abstract
We apply Markov Chain Monte Carlo (MCMC) to the problem of parametric galaxy modeling, estimating posterior distributions of galaxy properties such as ellipticity and brightness for more than 100,000 images of galaxies taken from DC2, a simulated telescope survey resembling the upcoming Rubin Observatory Legacy Survey of Space and Time (LSST). We use a physically informed prior and apply selection corrections to the likelihood. The resulting posterior samples enable rigorous probabilistic inference of galaxy model parameters and their uncertainties. These posteriors are one key ingredient in a fully probabilistic description of galaxy catalogs, which can ultimately enable a refined Bayesian estimate of cosmological parameters. We systematically examine the reliability of the posterior mean as a point estimator of galaxy parameters, and of the posterior width as a measure of uncertainty, under some common modeling approximations. We implement the probabilistic modeling and MCMC inference using the JIF (Joint Image Framework) tool, which we make freely available online.
## 1 Introduction
Because gravitational lensing depends directly on the overall distribution of matter in a given patch of space, it gives a window into the overall structure and evolution of the universe as a whole (Kilbinger, 2015), enabling constraints on e.g. dark energy (LSST Dark Energy Science Collaboration, 2018). Much of the time the effect of lensing is too subtle to be observed in individual galaxies. Rather, so-called "weak lensing" is statistically inferred by analyzing the correlated pattern of measured shapes of multiple galaxies. Increasing the number of well-measured galaxy shapes is generally expected to improve the statistical strength of weak lensing inferences, as long as systematic errors can be controlled (Mandelbaum, 2018).
The Vera C. Rubin Observatory, under construction, is projected to begin the 10 year Legacy Survey of Space and Time (LSST; Ivezic et al., 2019) in 2024. The LSST will observe an unprecedented number of galaxies throughout a wide and deep volume of space. In order to take complete advantage of this dataset for cosmological inference, we are faced with correspondingly unprecedented demands on the mitigation and characterization of systematic uncertainties in galaxy shape measurements. Standard maximum likelihood estimators of galaxy shapes suffer from numerous biases from sources such as noise (Refregier et al., 2012), pixelation, point-spread function (PSF) distortions (Simon and Schneider, 2017), and potentially centroid estimation errors (Tessore, 2017). In addition to its own irreducible contributions to uncertainty, noise bias interacts with and amplifies the effects of model bias, the inability of a given galaxy model to exactly fit the truth (Kacprzak et al., 2014). These sources of bias must be calibrated away using estimator-specific methods (e.g. Tessore, 2017), which may still leave behind systematic uncertainties that are not always well understood. In any case, a single point estimate of any value, such as galaxy ellipticity, even when accompanied by a confidence interval, fails to reflect all possible information embedded in one's limited data set.
In contrast, a Bayesian forward modeling approach need not be similarly subject to the biases noted above--the noise level, PSF, and pixelization effects, plus many other effects on image rendering, can in principle be forward-modeled and thus naturally accounted for without a separate calibration step. Galaxy shape uncertainties can be described in a Bayesian sense by selecting a parametric family of galaxy light profiles, asserting a prior probability distribution on the profile parameters, and then finding the posterior probability distribution over these parameters for any specific |
2309.16988 | A slime mold inspired local adaptive mechanism for flow networks | In the realm of biological flow networks, the ability to dynamically adjust
to varying demands is paramount. Drawing inspiration from the remarkable
adaptability of Physarum polycephalum, we present a novel physical mechanism
tailored to optimize flow networks. Central to our approach is the principle
that each network component -- specifically, the tubes -- harnesses locally
available information to collectively minimize a global cost function. Our
findings underscore the scalability of this mechanism, making it feasible for
larger, more complex networks. We construct a comprehensive phase diagram,
pinpointing the specific network parameters under which successful adaptation,
or tuning, is realized. There exists a phase boundary in the phase diagram,
revealing a distinct satisfiability-unsatisfiability (SAT-UNSAT) phase
transition delineating successful and unsuccessful adaptation. | Vidyesh Rao Anisetti, Ananth Kandala, J. M. Schwarz | 2023-09-29T05:16:19Z | http://arxiv.org/abs/2309.16988v2 | # A slime mold inspired local adaptive mechanism for flow networks
###### Abstract
In the realm of biological flow networks, the ability to dynamically adjust to varying demands is paramount. Drawing inspiration from the remarkable adaptability of Physarum polycephalum, we present a novel physical mechanism tailored to optimize flow networks. Central to our approach is the principle that each network component--specifically, the tubes-- harnesses locally available information to collectively minimize a global cost function. Our findings underscore the scalability of this mechanism, making it feasible for larger, more complex networks. We construct a comprehensive phase diagram, pinpointing the specific network parameters under which successful adaptation, or tuning, is realized. There exists a phase boundary in the phase diagram, revealing a distinct satisfiability-unsatisfiability (SAT-UNSAT) phase transition delineating successful and unsuccessful adaptation.
## I Introduction
Biological circulatory systems, despite their intricacy, exhibit remarkable adaptability. They adeptly modify various attributes--such as vessel diameter, wall thickness, and the count of micro-vessels--to cater to the evolving metabolic needs of tissues [1; 2]. This adaptability can be perceived as an outcome of an optimization mechanism, where a global cost function is optimized. Intriguingly, this optimization is not typically directed by a central authority, but emerges from local interactions between different aspects of the system. For example, prior research illustrates how vascular flow efficiency [3] can be enhanced by altering attributes, e.g. tube thickness, based on local data such as the flow within a tube.
Given the circulatory system example, as well as others [4; 5], a fundamental inquiry centers on how local interaction rules give rise to adaptive behaviors, and how these behaviors subsequently manifest as optimization algorithms in biological systems [6]. In this manuscript, we introduce a straightforward physical mechanism that potentially allows biological systems to implement such optimization strategies. Moreover, limitations on such optimization algorithms do indeed exist: for instance, a given optimization algorithm may not be feasible over a large swath of parameter space. Here, we explore the limitations on our physical mechanism for adaptation in the context of what is known as a Satisfiability/SAT-Unsatisfiability/UNSAT phase transition [7; 8; 9]. A similar transition was found in the study of limits on the multifunctionality of tunable flow and mechanical networks [10].
The specific question we are interested in is: How does one tune the node pressures of a flow network, by altering its tube thickness, via a physical mechanism that uses only local information? To answer this question, we take inspiration from the adaptive behaviour observed in Physarum polycephalum, or slime mold. This organism, in spite of not having a brain or nervous system, is capable of optimizing its network structure to determine the most efficient path between food sources [4; 11]. Prior research [12] indicates that slime mold possibly utilizes a form of chemical signaling for this purpose. Upon encountering food, it releases a signaling chemical at the location, which is then propagated throughout the organism via advection. This chemical, in turn, induces an expansion in the tubes through which it is flowing. As a result, the tubes that connect food sources through the shortest path undergo more pronounced expansion compared to those on longer routes, leading to an efficient connection between food sources. This behavior exemplifies how biological systems can harness physical processes to optimize for a specific task such as finding food.
Our system is a flow network - a set of nodes connected by tubes. Fluid flows in and out of the network through some of those nodes. This flow creates pressure values at each node, depending upon the architecture of the network and the conductance of the pipes. The task here is to modify the conductance of these pipes such that the pressure values at some 'output' nodes match the desired pressures.
The first thing that we observe is that the pressure at a node depends on the resistance to flow downstream. Therefore, to alter the pressure at a node, we must change the conductance of pipes downstream (Fig. 1). Consider an output node where we want the pressure to decrease. To do so, a chemical is released at that node and carried along by the fluid flow. This chemical interacts with the tube such that, when it is flowing through a tube, it increases the conductance of the pipe by making it thicker. This increase in conductance decreases the
resistance to flow, which in turn decreases the pressure at the output node. Similarly, when we wish to increase the output node pressure, we must release a different kind of chemical which decreases the conductance of the pipes by making it thinner. Through this mechanism, the entire network undergoes adjustments. Each tube, relying on the locally available information--namely, the chemical concentration within it--fine-tunes the pressures at the output nodes to align with the target values. This localized adjustment is facilitated by our introduced method, where the discrepancy at the output nodes is conveyed into the network through a chemical signal, subsequently influencing the network structure.
In what follows, we have assessed the performance of our tuning mechanism across a range of network sizes. The scaling relations observed suggest that our tuning approach remains effective even as the network size increases. Notably, our results suggest a SAT-UNSAT phase transition [13] with a discontinuous jump in the fraction of networks that can be successfully adapted at the transition point along with universal scaling features.
## II The Tuning Process
### The System
We create networks consisting of N nodes and M edges. We first create a Barabasi-Albert network with connection parameter 1 using the Python package NetworkX. This creates a minimally connected network with N nodes and N-1 edges. To reach a total of M edges we then add M-(N-1) additional unique edges.
We select \(j\) boundary nodes, denoted as \(q_{1},q_{2},...,q_{j}\), and apply an external potential \(\mathbf{u}=[u(q_{1}),u(q_{2}),...,u(q_{j})]^{T}\). The resulting response potentials at the remaining nodes, termed interior nodes, are calculated by solving the discrete Laplace equation using the Schur complement method [14]. From these interior nodes, we identify \(k\) output nodes \(p_{1},p_{2},...,p_{k}\). The potentials at these nodes are represented as \(v(p_{1}),v(p_{2}),...,v(p_{k})\). The objective is to adjust this network by altering the conductance of its pipes, aiming to align the output potential vector \(\mathbf{v}=[v(p_{1}),v(p_{2}),...,v(p_{k})]^{T}\) with a target output potential vector \(\mathbf{v_{des}}=[v_{des}(p_{1}),v_{des}(p_{2}),...,v_{des}(p_{k})]^{T}\). For context, the target output potential vector \(\mathbf{v_{des}}\) is derived by introducing a perturbation to the potential at each node, with the perturbation value sourced from a uniform distribution spanning \([-\Delta,+\Delta]\).
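A minimal sketch of this setup is given below, assuming NumPy and NetworkX; the conductance range follows the values quoted later in the Results, and the Dirichlet solve is the Schur-complement step written out explicitly. This is an illustrative reimplementation, not the authors' code.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def build_network(N=50, M=118):
    # Barabasi-Albert backbone with connection parameter 1 (N-1 edges),
    # then extra unique edges until the graph has M edges in total.
    G = nx.barabasi_albert_graph(N, 1, seed=0)
    while G.number_of_edges() < M:
        i, j = rng.integers(0, N, size=2)
        if i != j and not G.has_edge(i, j):
            G.add_edge(i, j)
    for e in G.edges:
        G.edges[e]["w"] = rng.uniform(1e-5, 1.0)   # initial conductances
    return G

def solve_potentials(G, boundary, u_boundary):
    # Dirichlet problem on the weighted graph Laplacian:
    # interior potentials satisfy  L_II v_I = -L_IB u_B.
    nodes = list(G.nodes)
    L = nx.laplacian_matrix(G, nodelist=nodes, weight="w").toarray()
    idx = {n: k for k, n in enumerate(nodes)}
    ii = [idx[n] for n in nodes if n not in boundary]
    bi = [idx[n] for n in boundary]
    v_int = np.linalg.solve(L[np.ix_(ii, ii)], -L[np.ix_(ii, bi)] @ np.asarray(u_boundary))
    v = {nodes[k]: val for k, val in zip(ii, v_int)}
    v.update(dict(zip(boundary, u_boundary)))
    return v
```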
### Implementation of the Tuning Process
1. The input potential vector \(\mathbf{u}\) is applied at boundary nodes. A supervisor checks the output potentials \(\mathbf{v}\) at output nodes and compares them to the vector of desired output potentials \(\mathbf{v_{des}}\).
2. There are two kinds of chemicals, \(s_{+}\) and \(s_{-}\). \(s_{+}\) increases the conductance of the pipe when it is passing through it, and vice versa for \(s_{-}\). We assume that the output nodes release a chemical whose amount is proportional to the difference between the present output potentials and the desired output potentials. At \(t=0\), if for some \(a\in O\) we have \(v(a)\neq v_{des}(a)\), then \[v(a)>v_{des}(a)\Rightarrow s_{+}(a)=\alpha(v(a)-v_{des}(a)) \tag{1}\] \[v(a)<v_{des}(a)\Rightarrow s_{-}(a)=\alpha(v_{des}(a)-v(a)) \tag{2}\] where \(\alpha\) is the factor that controls the chemical response of the node to the difference in potentials. Moreover, \(s_{+}(a)\) denotes the amount of chemical (e.g. the number of molecules) at node \(a\), and \(\mathbf{s}_{+}(t)\) denotes the array of chemical amounts at each node at time \(t\).
3. This chemical is carried by the current in the network. Therefore, in the next time step the chemical flows to the neighbouring nodes of \(a\) that are
Figure 1: _Schematic for an adaptive flow mechanism._ [Arrow 1] To increase the pressure at node A, a specialized chemical (depicted in red) is introduced. This chemical is advectively transported by the fluid flow within the network. The fluid flow direction is indicated by the grey arrows. [Arrow 2] As this chemical traverses the tubes, it interacts with the tube’s structure, causing it to constrict and thereby increasing the flow resistance. This results in an increase in pressure at node A. [Arrows 3 & 4] Conversely, to decrease the pressure at node A, a different chemical is released that dilates the tubes. This dilation reduces flow resistance, leading to a decrease in pressure at node A.
downstream to \(a\)[15]. We denote the set of all such downstream neighbours of \(a\) by \(\mathcal{D}(a)\). Then for all \(b\in\mathcal{D}(a)\): \[s_{+}(b,t+1)=s_{+}(a,t)\times\frac{i(b,a)}{\sum_{x\in\mathcal{D}(a)}i(x,a)}+\left(\text{incoming chemical from other nodes}\right) \tag{3}\] where \(i(x,a)\) represents the current from \(a\) to \(x\). This is how the entire arrays \(\mathbf{s}_{+}\) and \(\mathbf{s}_{-}\) are modified at each time step. Note that all the chemical initially present at \(a\) flows downstream after one time step.
4. Using the above equation, an N \(\times\) N array \(\hat{S}_{+}\) is generated, where each entry \(i,j\) denotes the amount of chemical passing through the pipe \(\{i,j\}\) at step \(t\to t+1\). Let \(\hat{W}\) denote the conductance matrix of the graph, where each entry \(\{i,j\}\) denotes the conductance of that pipe. Then \[\hat{W}(t+1)=\hat{W}(t)+\beta(\hat{S}_{+}-\hat{S}_{-}),\] (5) \(\beta\) controls the response of the pipe to the passing chemical.
5. The new potentials are calculated on the interior vertices using \(\hat{W}(t+1)\). Again, the supervisor checks if \(v(a)=v_{des}(a)\). The chemical takes some time to reach the boundary nodes, where it drains out of the network [16]. The total change in potential due to the chemical released at the output nodes is therefore observed only after some amount of time, so we introduce a time delay \(\tau\) before releasing the chemical once again [17].
6. This process is repeated iteratively.
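As an illustration, the sketch below strings steps 1-5 into one release-and-advect cycle, reusing the `solve_potentials` helper from the earlier sketch. The values of `alpha`, `beta` and the number of advection steps are illustrative, and recomputing the potentials only once per cycle is a simplification of the delayed-release scheme in step 5.

```python
def downstream_currents(G, v, a):
    # i(x, a) = w_{ax} (v(a) - v(x)) for the neighbours x that are downstream of a.
    flows = {}
    for x in G.neighbors(a):
        i = G.edges[a, x]["w"] * (v[a] - v[x])
        if i > 0:
            flows[x] = i
    return flows

def release_and_advect(G, boundary, u_boundary, outputs, v_des,
                       alpha=1.0, beta=1e-3, n_advect=10):
    v = solve_potentials(G, boundary, u_boundary)
    # Step 2: output nodes release s+ or s- in proportion to their error.
    s = {+1: {a: alpha * max(v[a] - v_des[a], 0.0) for a in outputs},
         -1: {a: alpha * max(v_des[a] - v[a], 0.0) for a in outputs}}
    for _ in range(n_advect):
        passed = {+1: {}, -1: {}}              # chemical through each pipe this step
        for sign in (+1, -1):
            moved = {}
            for a, amount in s[sign].items():
                flows = downstream_currents(G, v, a)
                if amount == 0.0 or a in boundary or not flows:
                    continue                   # chemical drains out at the boundary
                tot = sum(flows.values())
                for b, i_ba in flows.items():
                    share = amount * i_ba / tot
                    moved[b] = moved.get(b, 0.0) + share
                    e = tuple(sorted((a, b)))
                    passed[sign][e] = passed[sign].get(e, 0.0) + share
            s[sign] = moved
        # Steps 3-4: s+ dilates pipes (raises conductance), s- constricts them.
        for e in set(passed[+1]) | set(passed[-1]):
            w_new = G.edges[e]["w"] + beta * (passed[+1].get(e, 0.0) - passed[-1].get(e, 0.0))
            G.edges[e]["w"] = max(w_new, 1e-8)  # keep conductances positive
    return solve_potentials(G, boundary, u_boundary)   # step 5: re-evaluate the error
```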
## III Results
We implemented the aforementioned procedure on a network comprising 50 nodes and 118 edges, as depicted in Fig. 2. The conductance values for the flow network were uniformly sampled from the range \([10^{-5},1]\). External pressures were applied to the input nodes (highlighted in green), with each node's external pressure being uniformly sampled from \([-10,10]\). The objective was to adjust the resulting pressure at the output nodes (indicated in red) to align with specified target values. These target pressures were determined by setting \(\Delta=1\). Chemicals were introduced at these nodes at intervals of \(\tau=5\) units, influencing the conductance of downstream pipes. Successful tuning was evident as the error, represented as \(||\mathbf{v}-\mathbf{v_{des}}||\), decreased significantly, and the pressure at the output nodes converged to the desired values.
Fig. 3 presents the \(P_{SAT}\) values across varying parameters: the number of edges (E), output nodes (M), and total nodes (N), with a fixed \(\Delta=0.1\). (While simulations were conducted for N = 50, 100, 150, 200, and 250, Fig. 3 specifically displays results for N = 50, 150, and 250.) The other training parameters remain consistent with those used in Fig. 2. We define \(P_{SAT}\) as the proportion of networks that achieve successful tuning. A tuning process is deemed successful if the ratio of final error to initial error is less than \(10^{-2}\). This fraction was determined for each pixel from 100 training repetitions. A notable surge in \(P_{SAT}\) is observed, transitioning from a 'hard phase'--where tuning is ineffective--to an 'easy phase' where it predominantly succeeds. This phenomenon is further elaborated as a SAT-UNSAT phase transition in Fig. 4. The relevant tuning parameter in such transitions is the clause density \(\alpha\)[18, 13], which represents the ratio of clauses to variables. In our context, it is the ratio of the number of nodes to tune to the number of edges (\(M/E\)). The right column of Fig. 3 illustrates the decline of \(P_{SAT}\) with increasing \(\alpha\). In these plots, curves for constant \(E\) are shown. Increasing \(\alpha\)--achieved by increasing \(M\) while maintaining \(E\)--corresponds to increasing problem hardness. We fit these curves to a sigmoid-like function given by:
\[f(x,a,b)=\frac{1}{2}\left(1-\frac{e^{b\cdot(x-a)}-e^{-b\cdot(x-a)}}{e^{b\cdot( x-a)}+e^{-b\cdot(x-a)}}\right) \tag{6}\]
From these fits, we deduce the critical point \(\alpha_{c}\) and the transition width \(w\). \(\alpha_{c}\) is the \(\alpha\) value at which \(P_{SAT}=0.5\), and \(w\) represents the horizontal span between \(P_{SAT}=0.25\) and \(P_{SAT}=0.75\).
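Since the expression above is a shifted hyperbolic tangent, fitting it and extracting \(\alpha_{c}\) and \(w\) is straightforward; the snippet below is an illustrative sketch using SciPy, not the authors' analysis code (for this functional form the 0.25-0.75 width is \(2\,\mathrm{arctanh}(0.5)/b\)).

```python
import numpy as np
from scipy.optimize import curve_fit

def p_sat_model(alpha, a, b):
    # The sigmoid-like form above: a step from 1 down to 0, centred at alpha = a.
    return 0.5 * (1.0 - np.tanh(b * (alpha - a)))

def critical_point_and_width(alphas, p_sat_values):
    """Return (alpha_c, w): the 0.5 crossing and the 0.25-0.75 horizontal span."""
    (a, b), _ = curve_fit(p_sat_model, alphas, p_sat_values,
                          p0=(np.median(alphas), 10.0))
    return a, 2.0 * np.arctanh(0.5) / b
```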
Figure 2: _The tuning process._ [a,b] The network undergoes modifications due to the tuning process. The color bar illustrates the conductance values both pre and post-training. [c] A plot of error versus time. The error plateaus since tuning of a particular node ‘\(a\)’ is stopped when \(|v(a)-v_{des}(a)|<10^{-6}\). [d] Pressures at the output nodes converge to their target values, represented by similarly colored translucent horizontal lines.
By analyzing how \(\alpha_{c}\) and \(w\) vary with \(E\), we deduce the critical number of nodes \(M_{c}=\alpha_{c}E\) that can be successfully tuned for varying system sizes, specifically for N=50, 100, 150, 200, and 250. The width around this critical value, in terms of the number of nodes for varying system sizes, is given by \(\Delta M_{c}=wE\) (refer to Fig. 4). We observe that the critical number of nodes \(\alpha_{c}E\) scales as a power law with respect to \(E\) with an exponent greater than 1 (\(M_{c}\sim E^{1.03}\)). This indicates that our tuning process is scalable and will remain effective for larger system sizes. To demonstrate that this is a SAT-UNSAT phase transition and not just a crossover, we present two supporting arguments: 1) We observe that \(wE\) scales with \(E\) with an exponent less than 1 (\(wE\sim E^{0.76}\)). This implies that \(w\) scales with \(E\) with a negative exponent, indicating that in the thermodynamic limit as \(E\rightarrow\infty\), the transition width \(w\) vanishes. 2) We note that this transition is universal, as all the transition plots in Fig. 3 (d-f) (and for sizes 100 and 150) collapse onto a universal function upon rescaling by \(\alpha\rightarrow(\alpha-\alpha_{c})/w\).
## IV Discussion
Inspired by Physarum polycephalum, we introduce a physical mechanism to tune flow networks, where each network component uses locally available information to optimise a global cost function. We show that this mechanism is scalable to larger networks and present a phase diagram demonstrating for what values of network parameters successful tuning is observed. Additionally, we showed that this optimization strategy exhibits a SAT-UNSAT phase transition.
In previous work, we employed a similar concept to train physical networks [19], where the gradient information was encoded in a chemical signal diffused throughout the network. While this approach is equivalent to executing gradient descent, its biological plausibility is questionable due to the slow diffusion-based transport of chemical signals. In contrast, our current methodology involves advective transport of these signals so that the adaptation occurs over faster time scales. However, with this approach, we do not anticipate a gradient descent of the MSE, as the chemical signals only modify the conductances downstream.
Given the parallels with particle jamming, which
Figure 3: _Adaptation phase diagram._ (a-c) Depictions of \(P_{SAT}\) for networks comprising 50, 150, and 250 nodes, respectively. The color bar indicates \(P_{SAT}\) values. The x-axis represents the number of output nodes (M), while the y-axis denotes the number of edges (E). (d-f) Illustrations of the decline in \(P_{SAT}\) in relation to an increase in \(\alpha=M/E\). Each curve is representative of a horizontal slice of the 2D phase diagram (displayed on the left) at a constant \(E\) value. The data is fitted to a sigmoid like curve as outlined in Eq. 6.
Figure 4: _Universal scaling near the transition._ a) Illustrates the scaling of the critical number of nodes that can be tuned with the varying number of edges \(E\) for different system sizes. (b) Depicts the scaling of the width around this critical number of nodes with respect to the number of edges \(E\) for varying system sizes. (c) Shows that the transition plots collapse onto one another upon rescaling for \(N=50,100..250\).
encompasses a SAT-UNSAT transition [20], we propose that an analogous tuning mechanism can be established for hard sphere systems. This has the potential to lay the groundwork for methods to embed memory [21] within these systems, and even learning, given recent work reframing the learning process as feedback-based aging in a glassy landscape [22]. Moreover, the potential applications of our findings extend further. One immediate and practical application lies in the domain of urban water supply systems. As cities grow and evolve, their water supply networks face the challenge of dynamically adjusting to meet varying demands. The traditional methods of managing these pressures might not be efficient or responsive enough to cater to the rapid changes in demand. Our proposed mechanism offers a potential solution to this challenge, enabling water supply systems to self-optimize and adapt in real time. Furthermore, the principles elucidated in our study could inspire innovative methodologies for other human-made transport systems. For instance, power grids, which require a delicate balance of supply and demand across vast networks, could benefit from a similar approach. By integrating mechanisms that allow for localized adjustments based on real-time feedback, these systems could become more resilient, efficient, and adaptive.
The authors acknowledge Benjamin Scellier and Karen Alim for helpful discussions. JMS acknowledges financial support from NSF-DMR-2204312.
|
2309.00132 | When a complementarity in the neutrino and the quark mixing meets a
parameter symmetry and its implications to the unitarity | We present a complementarity that addresses relationships among the
parameters in the neutrino and the quark mixing matrix, use it to estimate the
size of the uncertainty among the elements in the matrix and address its
implications to the unitarity of the quark mixing matrix and Wolfenstein
parameterization and the tension in the first row. First, we describe how a
complementarity with a phase being introduced as an extra parameter can be held
in the nine independent schemes of parameterizing the matrix introducing a
discrete parameter symmetry within a certain size of uncertainty and how it can
be related to a combination of sine functions. With that, for the first time,
we describe a method that we can use to constrain the size of the uncertainty
associated with the parameters, not the central values, complementing that
among the diagonal elements in the neutrino mixing matrix. Then we do the same
for the quark sector and discuss its implication in the relation to the size of
the uncertainty among the elements. Seeing that our estimation is larger than
that was reported by running the global fit in the quark sector, our result
could be an indication that we may need to be cautious when addressing the
tension in the first row of the matrix in the quark sector and when running
global fit to constrain the size of the uncertainty, where Wolfenstein
parameterization, one that is not unitarity guaranteed, is used, as opposed to
the combination of the three rotational matrix. Given that the size of the
uncertainty for the individual diagonal element in the second and the third
row, our result also could be an indication that we may need to wait until the
size of uncertainty for the second and the third row goes down further before
addressing the tension. It could be an opening of considering the possibility
of a mixing between the neutrino and the quark sector too. | Jae Jun Kim | 2023-08-31T20:48:49Z | http://arxiv.org/abs/2309.00132v3 | # When a complementarity in the neutrino mixing meets a parameter symmetry and its implications
###### Abstract
We present a complementarity that complements relationships among the elements in the neutrino mixing matrix and address its physical implications. First we show how a complementarity with a phase being introduced as an extra parameter can be held in the nine independent schemes of parameterizing the matrix introducing a discrete parameter symmetry and a combination of sine functions, a part of Jarlskog invariant, within a certain size of uncertainty. Then, for the first time, we show that we can use the uncertainty associated with the complementarity as an empirical constraint complementing that among the diagonal elements in the neutrino mixing matrix. We discuss its physical implication in relation to the size of the uncertainty among the elements in the end.
## 1 Introduction
Are there complementarities that can complement the relationship among the elements in the neutrino mixing matrix? Furthermore, can the relationship that depends on how we parameterize the mixing lead us to one that is independent of the parameterization? If so, are there some physical implications associated with the complementarity? We initiated our study to seek answers to the questions.
Ever since we empirically confirmed that neutrinos do oscillate [7; 8], physicists have been striving to understand the nature of the neutrinos further. Our proposal and study of lepton-quark [1] and self complementarities [4] in the neutrino sector have been part of the effort to uncover hidden layers of the laws of nature. We continue that effort in this study. We present a version of a self complementarity that can be considered an empirical constraint when building a mixing matrix model, investigate its possible origin in a combination of sine functions, and from it further constrain a relationship among the diagonal elements in the unitary mixing matrix. Our goal is not only to study a complementarity, which depends on how we parameterize the matrix, but to go further and relate it to the neutrino mixing matrix, which is independent of how we parameterize it. We use the relationship to address physical implications in the end.
We start with a complementarity studied before. One of the most common complementarities can be written as,
\[\theta_{12}+\theta_{13}\sim\theta_{23} \tag{1}\]
, where the sizes of the three mixing angles are related to each other. Such a relation has been studied under a few different versions of flavor symmetry models too [2; 9; 14].
A challenge we had with Equation 1 was that it does not hold when the matrix is parameterized differently [4]. For instance, when we do the rotations in different orders, we end up with different analytical expressions, as illustrated in Equation 2. When we do so, a complementarity such as Equation 1 holds only in a few of the schemes, not in all of them.
This hindered us from going further downstream to generalize the relationship and come up with something that stays invariant.
Based on the result shown in [4], it is not too difficult to see that having only one or two \(\theta\)s as the parameters in the complementarity does not help much when the goal is to realize one that holds across the parameterization schemes. In other words, no combination of three or fewer \(\theta\)s as parameters led us to a pattern, or complementarity, that can be held in the nine independent schemes. For that reason, as an ansatz, we introduce \(\delta\) as an extra parameter and propose a revised version of a complementarity, \(SC\), as
\[SC\equiv\theta_{12}+\theta_{13}+\theta_{23}+\delta_{13}\sim C \tag{2}\]
, where \(C\) is a constant; it may be expressed modulo \(180^{\circ}\), which happens to work out under the discrete parameter symmetry. Equation 2 was briefly alluded to in [5], where complementarities were studied to calculate the size of the mixing between the active and the sterile neutrino sector, but it has not been tested further since then.
Equation 2 has advantages over Equation 1. First, it takes the parameters in a more democratic manner. It may not need to but doing so gives us an opportunity to address the complementarity under some symmetry that holds independent of which \(\theta\) being taken as a parameter. Second, we can introduce a modulus when expressing the complementarity, which will be addressed later on. In this study, we test the complementarity introducing a discrete parameter symmetry, relate it to the size of the elements in the neutrino mixing matrix and address its physical implications. In particular, we focus on using the uncertainty associated with the complementarity to estimate the size of that in the unitary mixing matrix.
Note that our study is not meant to justify why such a complementarity works out under some flavor mass model, but to identify one that holds under different orderings of the parameterization, calculate the size of the associated uncertainty, and then use that to constrain the uncertainty of the elements in the mixing matrix.
## 2 Complementarity in different schemes of parameterizing the neutrino mixing matrix
As described in [4], the neutrino mixing matrix in the combination of three rotations in \(\theta_{12}\), \(\theta_{23}\) and \(\theta_{13}\) can be expressed in nine different ways. For the standard scheme [9] of writing the matrix, it can be written as,
\[PS1:U_{23}(\theta_{23})U_{13}(\theta_{13},\delta)U_{12}(\theta_{12}) \tag{3}\]
. In addition to the standard scheme, the matrix can be written in eight other schemes by reordering the rotations as,
\[\begin{split} PS2:U_{12}(\theta_{13})U_{23}(\theta_{23},\delta)U_{12 }^{-}(\theta_{12})\\ PS3:U_{23}(\theta_{23})U_{12}(\theta_{12},\delta)U_{23}^{-}(\theta_ {13})\\ PS4:U_{23}(\theta_{23})U_{12}(\theta_{12},\delta)U_{13}^{-}(\theta_ {13})\\ PS5:U_{13}(\theta_{13})U_{23}(\theta_{23},\delta)U_{12}^{-}(\theta_ {12})\\...\\ PS9:U_{13}(\theta_{13})U_{12}(\theta_{12},\delta)U_{23}(\theta_ {23})\end{split} \tag{2}\]
, where \(U^{-}\) stands for an inverse matrix and the parameters are written in the same manner. Given the unitarity of the matrix, once we have the size of the \(\theta\)s and \(\delta\)s in any one of the schemes, including the standard scheme [8], we can calculate the size of the parameters expressed in the other schemes.
Note that we do constrain the size of \(\theta\) to be in the physical region, \(0^{\circ}{<}\theta{<}90^{\circ}\). However, there could be four possible values of \(\delta\) with the same value of the sine function. Taking the sign associated with the Jarlskog invariant in the standard scheme,
\[J=sin\theta_{12}cos\theta_{12}sin\theta_{13}cos^{2}\theta_{13}sin\theta_{23} cos\theta_{23}sin\delta_{13} \tag{3}\]
, as a way to avoid the ambiguity associated with the size of \(\delta\), which eliminates two of the four choices given that \(J\) is negative, and then manually testing the two remaining choices by entering \(\delta\) back into the mixing matrix, we calculate the size of all the parameters in all nine parameterization schemes.
Taking the measured value of the parameters in the standard scheme [6],
\[\theta_{12}=33.8^{\circ},\theta_{23}=48.3^{\circ},\theta_{13}=8.6^{\circ}, \delta_{13}\sim 280.0^{\circ} \tag{4}\]
, for an inverted hierarchy, the result goes as,
\[\begin{array}{cccccc} PS & \theta_{12} & \theta_{23} & \theta_{13} & \delta_{13} & \text{Sum}\\ 1 & 33.82 & 48.30 & 8.61 & 280.00 & 370.73\\ 2 & 32.92 & 48.87 & 11.46 & 273.42 & 366.73\\ 3 & 34.77 & 45.87 & 15.21 & 281.83 & 377.70\\ 4 & 33.38 & 49.22 & 10.32 & 278.50 & 371.44\\ 5 & 36.05 & 47.58 & 12.82 & 268.86 & 383.32\\ 6 & 25.79 & 43.86 & 24.16 & 330.25 & 424.08 \end{array}\]
So, at least for the measured size of the mixing angles, the complementarity calculated in the different schemes agrees to within \(\sim 10^{\circ}\), but only for the first five schemes. In other words, it is hard to realize a relationship that can be written as a function of the elements in the unitary mixing matrix at this point. We need one that holds in all nine schemes, so that we can use the complementarity to address some relationship among the elements in the unitary matrix.
As a resolution, we introduce a discrete parameter symmetry [3]. We take the complementarity as a part of a combination of sine functions and do some approximation.
We start with \(S\), a combination of sine functions,
\[S\equiv sin\theta_{12}sin\theta_{23}sin\theta_{13}sin\delta_{13} \tag{6}\]
, which happens to be a part of \(J\). One reason for choosing this expression to embed our complementarity is that it is invariant under the translations of the parameters allowed by the parameter symmetry. It does not have to be exactly what we have in Equation 6, but we do need an expression that stays invariant when a parameter changes sign or is shifted by \(180^{\circ}\). For instance, when we flip the signs of \(\theta_{13}\) and \(\delta_{13}\) together, the expression remains the same.
When we expand the sine function in the expression and take the first two leading terms and add them together, we end up with,
\[S\sim\frac{F}{A}\cdot[\;B+\theta_{12}+\theta_{13}+\theta_{23}+\delta_{13}+.. \;]\times sc+hc \tag{7}\]
, where \(A\) and \(B\) are numerical coefficients, \(sc\) is a sign conjugate of the expression in the bracket and \(hc\) denotes higher order terms. \(F\) can be written as,
\[F=\theta_{12}\cdot\theta_{13}\cdot\theta_{23}\cdot\delta_{13} \tag{8}\]
. For our convenience when calculating the size of the uncertainty later, we may define, \(AC\), the term in the bracket,
\[AC=[\;B+\theta_{12}+\theta_{13}+\theta_{23}+\delta_{13}+..\;]\times sc \tag{9}\]
. The next higher order terms in \(AC\), while keeping the complementarity, are of the order of
\[hc\sim\frac{1}{40}\;\cdot\;\theta^{3}\sim 5\% \tag{10}\]
of the linear order term, \(\theta\), before doing the full expansion, even when \(\theta\sim 90^{\circ}\). For that reason, when estimating the size of the uncertainty for \(AC\) is a main goal, we may not include the higher order terms. However, we may need to include them for other studies later.
Equations 7 and 9 are where we see the complementarity in Equation 2 appear as a part of the expression. What we can do with the complementarity is to apply a translation, i.e. a change of sign, to the parameters under a discrete parameter symmetry [3], as a way to realize the complementarity being held for all the schemes.
In other words, we want to use Equation 2 as a part of a more general expression that stays invariant under a symmetry, such as the discrete parameter symmetry, and that can be related to some elements in the unitary matrix. If \(S\) can be expressed as a function of the elements, and the components in \(S\) can be constrained further based on what we have as a complementarity, we can use the complementarity to constrain the size of the uncertainty associated with the elements in \(U\). This is doable because the combination of sine functions stays invariant under a translation of \(\theta\) when it is accompanied by that of \(\delta\) [3], modulo \(180^{\circ}\). Changing signs of parameters in \(SC\) in Equation 2 does not change the overall value of \(S\) in Equation 6.
With that, for those where the complementarity does not hold, the bottom four in Equation 2, we apply the translation to some parameters. The symmetry in essence is about changing the sign of a \(\theta\) accompanied by that of a \(\delta\), when we do not consider the exchange of mass terms [3]. We end up with,
\[PS: \theta_{12}: \theta_{23}: \theta_{13}: \delta_{13}: Sum\] \[1: 33.82 48.30 8.61 280.00 370.73\] \[2: 32.92 48.87 11.46 273.42 366.73\] \[3: 34.77 45.87 15.21 281.83 377.70\] \[4: 33.38 49.22 10.32 278.50 371.44\] \[5: 36.05 47.58 12.82 268.86 383.32 \tag{11}\] \[6: [25.79] 43.86 [24.16] 209.75 203.68\] \[7: 56.95 61.72 48.96 204.72 372.37\] \[8: 45.26 [39.21] 31.89 337.12 374.97\] \[9: 23.39 53.54 [26.49] 328.75 379.20\]
, where the numbers in brackets indicate parameters translated under the symmetry. For instance, in \(PS6\), instead of adding \(\theta_{12}\) as a part of the expression, we subtract it.
Under the discrete parameter symmetry, the complementarity, as a part of \(S\), the combination of the sine functions in Jarlskog invariant, can be held within \(\sim 10^{\circ}\), to the first order. In short, as long as we do the mixing in three \(\theta\)s and one \(\delta\), the values for individual parameters can change but the sum, \(SC\), can stay within the size. We use that to address a relationship about \(U\), the elements in the unitary mixing matrix.
Note that it is not to show that the complementarity stays exact, but to take the size of the uncertainty and use that in estimating that in other quantities such as the elements in \(U\).
## 3 Constraining the size of the uncertainty associated with a few elements in the unitary mixing matrix
Coming back to Equation 2, we are aware that it cannot hold exactly, nor can it be directly expressed as a function of some elements in the neutrino mixing matrix, since it varies depending on how we parameterize it.
However, the combination of sine functions, Equation 6, can be. There we take advantage of Equation 2 being a part of it, and that is the essence of our study.
Interestingly, it was shown how the expression can be written differently depending on the order of the rotation [20]. For the standard scheme, \(U_{123}\), which is \(PS1\) in our case and where we do the rotation in 1, 2 and 3 in order, \(S\) can be written as,
\[S_{1}=J\cdot\frac{1}{U_{1}}\cdot\frac{1}{U_{3}} \tag{10}\]
, where \(J\) is the Jarlskog invariant as we know it and \(U_{1}\) and \(U_{3}\) are two of the three diagonal elements in the unitary mixing matrix. The elements in the unitary mixing matrix can be written as,
\[U=\begin{pmatrix} U_{1}:0.8214 & NDE & NDE\\ NDE & U_{2}:0.5453 & NDE\\ NDE & NDE & U_{3}:0.6577 \end{pmatrix} \tag{11}\]
, where \(U\) is the neutrino mixing matrix, the numerical sizes of the diagonal elements are written explicitly, and \(NDE\) stands for the non-diagonal elements of the matrix.
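The diagonal magnitudes quoted above can be cross-checked directly from the standard-scheme parameters listed in the PS1 row earlier; the short Python sketch below assumes the conventional ordering \(U_{23}U_{13}(\delta)U_{12}\) and is only meant as a numerical check, not as part of the analysis.

```python
import numpy as np

def mixing_matrix(th12, th23, th13, delta):
    """Standard (PS1) parameterization U23 . U13(delta) . U12; angles in degrees."""
    t12, t23, t13, d = np.radians([th12, th23, th13, delta])
    c12, s12 = np.cos(t12), np.sin(t12)
    c23, s23 = np.cos(t23), np.sin(t23)
    c13, s13 = np.cos(t13), np.sin(t13)
    U12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
    U13 = np.array([[c13, 0, s13 * np.exp(-1j * d)], [0, 1, 0],
                    [-s13 * np.exp(1j * d), 0, c13]], dtype=complex)
    U23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
    return U23 @ U13 @ U12

U = mixing_matrix(33.82, 48.30, 8.61, 280.0)
print(np.round(np.abs(np.diag(U)), 4))   # ~ [0.8214, 0.5453, 0.6577], matching the values above
```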
For the remaining five ways of parameterizing the matrix, where the rotations are taken in a different permutation, it is a matter of expressing them using a different set of elements.
Depending on the rotation, six different permutations are possible, and \(S\) can be written as,
\[S_{2}=J\cdot\frac{1}{U_{1}}\cdot\frac{1}{U_{2}} \tag{12}\]
, and the rest can be done in the same manner,
\[S_{3}=J\cdot\frac{1}{U_{2}}\cdot\frac{1}{U_{3}}\;,\;S_{4}=J\cdot \frac{1}{U_{1}}\cdot\frac{1}{U_{2}} \tag{13}\] \[S_{5}=J\cdot\frac{1}{U_{2}}\cdot\frac{1}{U_{3}}\;,\;S_{6}=J \cdot\frac{1}{U_{1}}\cdot\frac{1}{U_{3}} \tag{14}\]
. Because Equation 6 can vary depending on the order of the rotation, the expressions for different schemes need not be the same. In other words, \(S\) can take different sizes.
However, as shown in Equation 12, 13 and 14, they can be represented by two out of three diagonal elements in the unitary mixing matrix with \(J\). There, taking the ratio of the two \(S\)s, we can use the complementarity studied to reduce or constrain the size of the uncertainty related to the elements in the unitary matrix. That is the focus in our study and that is going to be parameterization-independent since all the elements are expressed as a function of \(U\).
We show a case for the ratio of \(S_{1}\) and \(S_{2}\) and the rest can be done in a similar manner. It can be reduced down to the ratio of the diagonal elements in the matrix as,
\[R=\frac{S_{1}}{S_{2}}=\frac{U_{2}}{U_{3}} \tag{15}\]
, and we can do the same for other cases. We can use Equation 2 as an empirical constraint for the size of the uncertainty associated with \(U_{2}\) and \(U_{3}\). The relative uncertainty can be expressed as,
\[\Delta^{2}=\sum_{X}(\Delta X)^{2}\cdot\frac{1}{X^{2}} \tag{10}\]
, where \(\Delta^{2}\) represents a square of the size of the relative uncertainty, \(\Delta X\) represents the uncertainty associated with \(X\), and \(X\) represents the components in \(S\). \(X\) in our case are,
\[X=\theta_{12},\theta_{13},\theta_{23},\delta_{13},AC \tag{11}\]
. In Equation 11, the first four components are a part of \(F\) in Equation 10 and the last one is the complementarity in the expansion of \(S\), which is \(AC\).
Taking the size of the uncertainty for \(\theta\)s and \(\delta\),
\[\Delta\theta_{12}\sim 1^{\circ},\Delta\theta_{23}\sim 1^{\circ},\Delta\theta_{13 }\sim 0.1^{\circ},\Delta\delta_{13}\sim 30^{\circ} \tag{12}\]
, at the \(1\sigma\) confidence level, and that for,
\[\Delta SC\sim 10^{\circ} \tag{13}\]
, which is based on our study of the complementarity as shown in Equation 2, we calculate the size of the relative uncertainty for one of two \(S\)s to be,
\[\Delta_{1}^{2}\sim\frac{1}{35^{2}}+\frac{1}{45^{2}}+\frac{1}{100^{2}}+\frac{ 1}{7^{2}}+\frac{1}{20^{2}} \tag{14}\]
. Then we end up with for the size of the relative uncertainty for one of \(S\) as,
\[\Delta S\cdot\frac{1}{S}\sim 0.156\sim 16\% \tag{15}\]
. However, when the size is calculated for \(S_{1}\) in Equation 14, the component in the uncertainty calculation for \(S_{2}\) can be reduced further, due to the variations in the size of \(SC\) in Equation 10. It does not need to be same as \(\Delta_{1}\), given the result of Equation 10.
In other words, for \(S_{2}\) in Equation 10, we have a different, smaller size for some components in the calculation of the overall relative uncertainty. For instance, the component for \(\delta_{13}\) can be constrained further thanks to the complementarity that we studied, at least within the scenario of ordering the rotations of the matrix differently.
\[\Delta\delta_{13}\sim\Delta SC+\Delta\theta_{12}+\Delta\theta_{23}+\Delta \theta_{13}\sim 10^{\circ} \tag{16}\]
. So the relative uncertainty for \(\delta_{13}\), which is the dominant uncertainty in the expression, can be reduced by a factor of \(\sim\) 3 for one of the two \(S\)s in Equation 10. This is possible because the budget for \(S_{1}\) already covers the general case.
With that, the size of the relative uncertainty for \(S_{2}\) in Equation 10 can be,
\[\Delta_{2}^{2}\sim\frac{1}{35^{2}}+\frac{1}{45^{2}}+\frac{1}{100^{2}}+\frac{ 1}{18^{2}} \tag{17}\]
, where the fourth component, which is for \(\delta_{13}\), is reduced due to Equation 13 and the last one, which is for \(SC\), is canceled out in the first order. Then the relative uncertainty for \(S_{2}\) in Equation 14 is,
\[\Delta S\cdot\frac{1}{S}\sim 0.067\sim 7\% \tag{15}\]
. Taking that into account, we calculate the size of the total relative uncertainty for the ratio of any combination of \(U\)s to be in the order of,
\[\Delta R\cdot\frac{1}{R}\sim 17\% \tag{16}\]
, as opposed to its being \(\sim 23\%\) when the complementarity in Equation 2 is not taken into account, or even larger when the complementarity is not considered at all. The point here is that the uncertainty associated with the ratio of the two elements in the unitary matrix is constrained, as long as we have expressed the elements in the mixing matrix as a function of three \(\theta\)s and one \(\delta\). Such an approach can be taken for the quark sector too. Identifying the relation between what we have in this study and its counterpart in the quark sector can certainly be one of our future studies [18].
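The percentages above follow from adding the quoted relative uncertainties in quadrature; a few lines of Python reproduce the arithmetic (the entries mirror the uncertainty budgets written above, with the \(\delta_{13}\) term tightened from 1/7 to 1/18 for the second combination).

```python
import numpy as np

budget_S1 = [1/35, 1/45, 1/100, 1/7, 1/20]   # theta12, theta23, theta13, delta13, SC
budget_S2 = [1/35, 1/45, 1/100, 1/18]        # delta13 constrained by the complementarity

rel = lambda terms: float(np.sqrt(np.sum(np.square(terms))))
d1, d2 = rel(budget_S1), rel(budget_S2)
print(f"Delta S1 / S1 ~ {d1:.3f}")            # ~ 0.16
print(f"Delta S2 / S2 ~ {d2:.3f}")            # ~ 0.07
print(f"Delta R  / R  ~ {np.hypot(d1, d2):.3f}")   # ~ 0.17, vs ~ 0.22 if both carried the full budget
```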
## 4 Discussion
In short, by introducing \(\delta\) as an extra parameter in the self complementarity, we can come up with a relationship among the diagonal elements in the unitary neutrino mixing matrix and use it to constrain the uncertainty associated with the ratio of the diagonal elements in the unitary mixing matrix. A common complementarity such as Equation 1 can depend on how we parameterize the mixing matrix, simply by ordering the rotations differently. However, a revised version such as Equation 2 can be taken into account within an analytical expression such as Equation 6, from which a relationship among the elements of the unitary matrix can be written; we can then utilize the complementarity as an empirical constraint for the relative uncertainty associated with it, as described in this study.
Our study indicates that we may need to be cautious when reporting the size of the uncertainty among the diagonal elements. For instance, given the measured uncertainty for \(U_{1}\), we may use it to constrain that of \(U_{2}\) or \(U_{3}\) further, independent of how we parameterize the matrix. The size of the uncertainty reported in [6] is based on what has been measured in the standard scheme of parameterizing the matrix, together with the unitarity of the matrix. For \(U_{1}\), \(\delta\) is not a part of the element in the standard scheme, but it can be when the matrix is parameterized by ordering the rotations differently, as in Equation 2. The same holds for the other diagonal elements. With all that taken into account, the smallness of the uncertainty for \(U_{1}\) in the standard scheme needs to be taken into account when addressing that of the other two diagonal elements.
As for our future study, we may take a revised version of the lepton-quark complementarity [1] and come up with a relationship among the elements of the unitary mixing matrices in the quark and the neutrino sector [18]. Because the size of the uncertainty associated with the elements is smaller in the quark sector, such a study may allow us to constrain the size of the elements in the neutrino mixing matrix even further, or give us some idea that
we did not think of before. In addition, we may initiate some computational study of the revised version of the complementarity to see how large the variation is when the size of the mixing parameter varies. It is of our interest to use the method in this study but with other models that were described in [16, 17] too.
The author truly thanks his family for all their support. The author also thanks those for providing many thoughtful comments a while ago.
|
2302.14235 | Ntuple Wizard: An Application to Access Large-Scale Open Data from LHCb | Making the large data sets collected at the Large Hadron Collider (LHC)
accessible to the world is a considerable challenge because of both the
complexity and the volume of data. This paper presents the Ntuple Wizard, an
application that leverages the existing computing infrastructure available to
the LHCb collaboration in order to enable third-party users to request specific
data. An intuitive web interface allows the discovery of accessible data sets
and guides the user through the process of specifying a configuration-based
request. The application allows for fine-grained control of the level of access
granted to the public. | Christine A. Aidala, Christopher Burr, Marco Cattaneo, Dillon S. Fitzgerald, Adam Morris, Sebastian Neubert, Donijor Tropmann | 2023-02-28T01:34:54Z | http://arxiv.org/abs/2302.14235v2 | # Ntuple Wizard: an application to access large-scale open data from LHCb
###### Abstract
Making the large data sets collected at the Large Hadron Collider (LHC) accessible to the world is a considerable challenge because of both the complexity and the volume of data. This paper presents the Ntuple Wizard, an application that leverages the existing computing infrastructure available to the LHCb collaboration in order to enable third-party users to request specific data. An intuitive web interface allows the discovery of accessible data sets and guides the user through the process of specifying a configuration-based request. The application allows for fine-grained control of the level of access granted to the public.
**Keywords:** Open Data, Open Access, LHCb, LHC, CERN, HEP, Outreach
## 1 Introduction
In an increasingly diverse research landscape, management and curation of public data are becoming critical components of transdisciplinary science. Keys to the realization of an open research ecosystem that adds scientific value have been identified in the FAIR principles of scientific data management and stewardship [1]. Making data Findable, Accessible, Interoperable, and Reusable, however, requires a considerable amount of tooling and infrastructure.
A common problem, which is acute for data in high-energy physics but increasingly an issue in other fields as well, is the sheer size of data sets stored in custom file formats. For large-scale experimental facilities, such as the LHC at the European Organization for Nuclear Research (CERN), the data sets are so large that even access by the directly involved scientists has to be centrally managed. As an example, the LHCb data collected in the years 2011-12, corresponding to \(\sim 3\) fb\({}^{-1}\) of proton-proton collisions amount to a volume of 900 TB. This volume only refers to the already preprocessed data available to members of the collaboration and scheduled for release to the public. For the purpose of processing these data, extensive computing infrastructure has been set up by the countries participating in this type of research [2]. Replicating such an infrastructure to allow the public to handle the data would not
only require dedicated expert knowledge, but it would also duplicate existing facilities. On the other hand, any individual research conducted on a typical LHC data set will often only make use of a tiny portion of the full data, filtered and selected according to the requirements of the respective research question. It is therefore natural to provide the public with FAIR access to those highly selective subsamples.
In the following, an application is presented that exposes a data query service to allow the public to request sub-samples of data collected and published by the LHCb experiment. The samples are delivered as ROOT Ntuples [3], a data format that requires no special LHCb-specific software to read and for which converters to other standard file formats exist. We call the application the Ntuple Wizard.
The application interface guides users with basic knowledge in particle physics through the process of discovering the available data and formulating a useful query. The queries can be processed by the existing data production infrastructure, and results will be delivered through the CERN Open Data Portal [4]. By splitting the data request into the construction of a data query and subsequent processing of the query on the internal infrastructure, the LHCb collaboration retains fine-grained control over access to the data. Crucially this system protects the compute infrastructure from attacks by malicious code injection.
### Accessible open data
In 2020, the LHC experiments at CERN adopted a new Open Data Policy [5], the scope of which expanded in 2022 to an Open Science Policy [6]. These documents define the commitments of CERN to make the data collected at the LHC, at several levels of complexity, publicly available [7]:
_Level 1_ Published results -- this can include tables and figures but also preprocessed Ntuples or binned and unbinned fit likelihood functions.
_Level 2_ Outreach and education -- usually in the form of highly preprocessed Ntuples.
_Level 3_ Reconstructed data -- these data have been preprocessed to derive physics objects, such as charged particle candidates, photons, or particle jets. Reconstructed data may or may not be corrected for detector effects, such as efficiency and resolution.
_Level 4_ Raw data - the basic quantities recorded by the experimental instruments.
Both Level 1 and 2 data are considered to be highly processed, abstracted, and manageable using commonly available computers. Level 4 raw data will not be made available due to practical reasons concerning data size but also detector-specific information needed for the interpretation of these data. This leaves Level 3 data as the most versatile and basic data set which will be publicly accessible.
All LHC experiments have long and intricate data reconstruction pipelines, which yield several intermediate output data formats. During a pipeline, the raw data are converted to physical objects such as charged particle trajectories, jets, and vertices. Furthermore, the raw data are classified and filtered to obtain samples enriched in interesting signatures.
Figure 1 shows an overview of the data processing pipeline in LHCb as it has been used during LHC data-taking Runs 1 and 2 (2011-18). The various steps of the pipeline are outlined further in the LHCb computing technical design reports [8, 9]. Level 3 data have been defined as the output of the _stripping_ step. The stripping consists of a large number of selection algorithms called _lines_, which are designed to filter the data and sort events into several collections, which are called _streams_. Streams are defined according to common physics signatures and aim to collect selections with significant overlaps into a common set of files, to reduce duplication of the data. The LHCb data organization is discussed in more detail in Appendix A, including a list of streams available in Runs 1 and 2.
The stripping selections are based on the concept of physics _candidates_. A candidate refers to a set of data matching a particular physics signature. In most cases, this signature will be a particular particle decay, such as for example \(B^{+}\to\bar{D}^{0}\pi^{+}\) with the subsequent decay \(\bar{D}^{0}\to K^{+}\pi^{-}\), where \(B,D,K,\) and \(\pi\) mesons are the lightest hadrons containing \(b,c,s,\) and \(u/d\) quarks respectively. Such cascading decays are represented as tree-like data structures, where the nodes represent (intermediate) particles and the edges indicate a parent-child relationship in the
decay. These data structures are referred to as _decay trees_. The root particle of the decay tree (the \(B^{+}\) in our example) is called its _head_. Stripping selections attempt to find sets of physics objects in the reconstructed LHCb data, which match the desired decay tree and any additional criteria that might be applied to distinguish the intended signal process from background. Typical selection criteria include kinematic variables, vertex and track reconstruction qualities, and particle identification variables. Some stripping lines rely on multivariate classifiers to combine several observables into a single powerful selection criterion. The output of this procedure is collections of decay candidates specified by their particular decay trees in a custom LHCb-specific data format.
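To make the decay-tree picture concrete, the small sketch below (purely illustrative, not part of any LHCb software) represents the example candidate \(B^{+}\to\bar{D}^{0}\pi^{+}\), \(\bar{D}^{0}\to K^{+}\pi^{-}\) as a tree whose head is the \(B^{+}\) and whose edges encode the parent-child relationships described above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecayNode:
    """One particle in a decay tree; children are its decay products."""
    name: str
    children: List["DecayNode"] = field(default_factory=list)

# B+ -> D~0 pi+, with the subsequent decay D~0 -> K+ pi-
head = DecayNode("B+", [
    DecayNode("D~0", [DecayNode("K+"), DecayNode("pi-")]),
    DecayNode("pi+"),
])

def walk(node):
    """Yield every particle in the tree, head first."""
    yield node.name
    for child in node.children:
        yield from walk(child)

print(list(walk(head)))  # ['B+', 'D~0', 'K+', 'pi-', 'pi+']
```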
It is important to note that candidates are distinct from the concept of _events_ in the LHCb data processing. An event is defined during the data acquisition and refers to a particular time window in which collisions can occur. Several proton-proton collisions can happen during this time window, and in principle, it can happen that several candidates for a particular decay are identified for a single collision. In such cases, relevant quantities (related to vertex reconstruction and flight distances) can be computed for every primary vertex (e.g. collision point) in the event.
In order to convert these data into a framework-independent format, a useful concept is the aforementioned Ntuple. The idea of a Ntuple is simple: each candidate is described by a tuple of variables, i.e. physical observables of interest measured on the particular candidate, or referring to the global event in which the candidate was found. A data set consists of \(N\) such tuples, much like a simple CSV file. Ntuples are saved in ROOT files [3] and only basic data types are allowed for the variables. As a small complication, in some instances the variables can be arrays of basic data types. In such cases, the Ntuple Wizard provides the necessary documentation for their interpretation.
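Because an Ntuple is just a flat table stored in a ROOT file, it can be read with generic tools outside the LHCb software stack. The snippet below is a minimal sketch using the third-party uproot package (one option among several, not prescribed by the Wizard); the file name, tree path, and branch names are hypothetical placeholders for whatever a given production delivers.

```python
import uproot  # generic ROOT I/O in pure Python

# Hypothetical names: a real production documents its own file, tree, and branches.
with uproot.open("Btree.root") as rootfile:
    tree = rootfile["Btree/DecayTree"]                   # one entry per candidate
    table = tree.arrays(["B_M", "B_PT"], library="pd")   # selected branches -> pandas

print(table.head())
```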
### Principle of Ntuple creation and the Ntuple Wizard
Both the stripping as well as the Ntuple-making step in Fig. 1 are handled by DaVinci [8, 9, 10], an LHCb application for event selection and data analysis using the Gaudi framework [8, 9, 11]. DaVinci is configured via Python scripts and used to process entire data sets with batch processing. Both the Python configuration as well as the batch production system are intentionally hidden from users of the Ntuple Wizard for security reasons.
The DaVinci application provides access to a number of algorithms that can be combined in sequence for event selection and processing. In order to produce a Ntuple the user has to specify which variables should appear in the output data. This Ntuple configuration is handled by an algorithm named _DecayTreeTuple_, in which variables are registered through the use of so-called _TupleTools_ and _LoKi functors_. A large collection of those tools and functors are available for the user to choose from. In general, a TupleTool will add a set of variables to the Ntuple, while a LoKi functor usually computes a single number. The _LoKi::Hybrid::TupleTool_ can be used to write the output of functors into the tuple. Functors can be combined with standard arithmetic and logic operations, providing a flexible and powerful system to compute derived quantities. A list of important available tools is presented in Appendix B.

Figure 1: LHCb data flow in Runs 1 and 2. The output of the stripping step will be made public through the CERN Open Data Portal [4].
Figure 2 shows an overview of the Ntuple Wizard architecture, the core functionality of which is the configuration of DaVinci. The metadata and documentation describing the available data, pre-selections, as well as available selection operations are generated from the original provenance traces of the data and the stripping selection code. The web interface presents this metadata and documentation to the user in a pedagogical way that facilitates data discovery and formulation of the query. The query to the data has two principal parts: Data set discovery and Ntuple configuration. First, the application allows the user to select from the available predefined stripping selections, data-taking periods, and conditions. In the second step, the user defines what quantities should be computed and written to the output Ntuple. Standard tools for the computation of typical quantities, such as kinematic variables, particle identification (PID) variables, etc., are available. The query formulated by the user is stored in a set of configuration files. These files can be converted into a Python configuration compatible with the internal LHCb Analysis Productions system [12]. This conversion and the final submission of the query to the compute infrastructure are handled through an LHCb Analysis Productions manager.
### Security considerations
Accepting arbitrary external code to run on the LHCb computing resources has obvious unacceptable security risks. Therefore, the Ntuple Wizard is designed to generate the configuration in a pure data-structure format. As shown in Figure 2, the configuration of the query is captured in YAML files, which can be downloaded and submitted to an LHCb Analysis Productions manager for further processing.
## 2 Metadata and documentation acquisition
In order to facilitate the core functionality of the Ntuple Wizard -- namely data set discovery (Sec. 4) and algorithm configuration (Sec. 5) -- metadata and documentation from several sources are required. In particular, the application needs to know what types of decays can be queried and what tools are available to compute derived quantities of interest about these candidates.
Since these metadata are unchanging, and providing direct access to the various sources requires authentication and introduces more points of failure, the metadata are collated and served as static files over HTTP. No additional access to the LHCb code or database is needed by the Ntuple Wizard once it has been deployed.
The sources of metadata can be grouped into two coarse categories: the LHCb software stack and the LHCb database. Metadata are acquired from the LHCb software stack in two ways. The first is from the Gaudi Python interface; particularly under the DaVinci application environment. Metadata about the configuration interface of each TupleTool are extracted from DaVinci. Details of the stripping lines, including the chain of selection algorithms that define them, are extracted from the DaVinci versions used in the corresponding stripping campaigns.
The process of building decay candidates in a stripping line often involves a combination of many algorithms from the LHCb selection framework, which combine particles, impose selection requirements, perform PID substitution, and build final-state particle candidates from trajectories of charged particles and calorimeter clusters. The algorithms can be related to each other through their input and output locations. The full list of decays (including all sub-decays) must be inferred by traversing the 'dependency tree' of the selection algorithms. This is performed using custom code during metadata acquisition.
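As a purely illustrative picture of that traversal (this is not the actual metadata-acquisition code), one can think of each algorithm as a node that declares its input locations, and of the full candidate as the result of walking those dependencies recursively:

```python
# Toy registry: output location -> (algorithm name, input locations).
algorithms = {
    "Phys/StdPions": ("StdPions",       []),
    "Phys/StdKaons": ("StdKaons",       []),
    "Phys/D02KPi":   ("D02KPiCombiner", ["Phys/StdKaons", "Phys/StdPions"]),
    "Phys/B2D0Pi":   ("B2D0PiCombiner", ["Phys/D02KPi", "Phys/StdPions"]),
}

def dependency_chain(location, seen=None):
    """Depth-first walk of the selection algorithms feeding a given location."""
    seen = [] if seen is None else seen
    _, inputs = algorithms[location]
    for inp in inputs:
        if inp not in seen:
            dependency_chain(inp, seen)
    if location not in seen:
        seen.append(location)
    return seen

print(dependency_chain("Phys/B2D0Pi"))
# ['Phys/StdKaons', 'Phys/StdPions', 'Phys/D02KPi', 'Phys/B2D0Pi']
```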
The second, more indirect way is from the LHCb Doxygen pages, which themselves are generated from the source code of the LHCb software stack. The latest Doxygen pages for Run 1 or Run 2 DaVinci versions are used to extract the documentation for each TupleTool and LoKi functor. A campaign to improve the Doxygen documentation at its source was undertaken during the development of the Ntuple Wizard.
The LHCb database provides metadata about the centrally managed data sets, which is necessary to configure the Ntupling productions as explained above. In order not to duplicate effort,
a common code base is employed to extract metadata from the LHCb database for both the Ntuple Wizard and the CERN Open Data Portal.
## 3 User interface
The user interface consists of a sequence of dialogues that guide the user through the configuration steps. This is designed as a client-side dynamic web page that reads metadata acquired at deployment time to serve as static files (see Sec. 2).
Since users of LHCb open data do not, in general, have access to the same support network of experienced users and developers enjoyed by LHCb collaboration members, a key design element of the Wizard is to provide the necessary documentation for a novice user to complete each step of the configuration process.
The existing documentation of DaVinci [8, 9, 10] is fragmented across several sources (Twiki [13], the Starterkit [14], Doxygen [15] and the source code itself), so where possible, the Wizard pulls text from each of these disparate sources and renders it in the relevant context within the user interface.
There are two main steps to formulate a query using the Ntuple Wizard: Dataset discovery and Ntuple configuration. These steps are explained in the following.
## 4 Dataset discovery and production configuration
The available data contain a wide range of stripping selections, data-taking periods, and running conditions. The **Production configuration** dialogue of the Ntuple Wizard guides the user through the selection of the desired subsets. The interface allows the selection of several decays to be processed simultaneously as part of one query. For each decay, a separate Ntuple will be produced.
### Discovering available candidate decays
In the **Decay search** dialogue, the Ntuple Wizard presents a list of all decays selected by the stripping, accompanied by decay descriptors in LoKi and LaTeX formats, information about which stripping lines build them, as well as 'tags' that can be used to filter different types of decays. Decays are searchable through various filters, including the identity or properties of the parent particle and decay products, whether the candidates are built by a specific stripping line, and the aforementioned tags. An example of the decay search is shown in Figure 3. The selected candidate of interest is highlighted in blue, and the collection was narrowed down from the list of all possible decays by using the filters and tags at the top of the page. The 'none of' option of the tags drop-down menu is chosen by default, indicating that decays with the displayed tags are hidden from the list of selectable decays. The tags 'charge-violating' and 'undefined-unstable', corresponding to decays that violate charge conservation and decays that contain unstable particles without defined decays respectively, are hidden by default. If the user wishes to instead isolate decays that meet the criteria of a given tag, a different option can be selected from the 'tags' drop-down menu. It is possible to select several decays for further processing at this stage.

Figure 2: Architecture of the Ntuple Wizard.
### Stripping line and data set selection
Once a decay is selected by the user, all corresponding stripping lines and data sets from the various running periods are listed, and the desired combination(s) can be selected. The case can arise where the same stripping line shows up in multiple stripping versions within the same dataset (stream, running year, and magnet polarity). These are rendered as separate options in the dataset selection drop-down menu of the Ntuple Wizard. For a given decay, it is recommended to choose only one dataset for each magnet polarity within a given running year, and to use the most recent stripping version in the case of duplicates. The data organization of LHCb is elaborated on in Appendix A, including a table of running years, as well as corresponding collision energies and stripping versions.
Links to documentation about each stripping line including selection algorithms that went into building the decay candidates are displayed to the user to guide them in choosing the most suitable stripping line and data stream for their physics interest. Figure 4 shows an example of the production configuration page, where an available stripping line and data set have been chosen from lists of all possibilities corresponding to the selected decay channel. The blue question mark button contains links to the aforementioned stripping documentation. At this point, the query is specified up to deciding what information to write into the Ntuple.
## 5 Ntuple configuration
The **DecayTreeTuple configuration** dialogue is designed to guide the user through customization of the quantities written to the Ntuple for the selected candidates. For each decay, a separate DecayTreeTuple has to be configured. Care should be taken to name the Ntuples appropriately. The Ntuple Wizard requires a unique name for each Ntuple.
Selected decay trees are visually represented as graphs, where each physics object (e.g. particle) is represented by a node as shown in the screenshots in Figure 5. The user can interact with this graph by selecting one or multiple nodes at a time and determining which TupleTools will be added to each node, which in turn determines which quantities are saved to the Ntuple. A list is rendered on screen depending on the selected node(s), each element of which corresponds to a selected TupleTool, with buttons for configuring and removing the tool. The TupleTool configuration interface includes links to relevant documentation about the tool, including lists of quantities written by the tool where available. Each node in the graph comes with the standard set of TupleTools for LHCb analyses, but more will often be needed depending on the particular physics interests of the user. Furthermore, any added tool will come with the standard configuration, which can be further modified if the user desires. A custom set of standard LoKi variables and functions of these variables can also be saved to the Ntuple for each node, using the _Loki::Hybrid::TupleTool_. Appendix B contains a brief description of the standard set of TupleTools included with each node on the graph, as well as other useful TupleTools for physics analysis. Figure 5 shows an example of the configurable graph corresponding to the selected candidate shown in Figures 3 and 4, as well as a list of TupleTools corresponding to the entire decay candidate (top), and particular nodes selected on the graph (bottom). It can be seen from the figure that nodes can also be selected through the categories shown below the graph and that TupleTools can be added, removed, or configured for each node or grouping of nodes.
Figure 6 shows an example of the user interface for configuring TupleTools, with the particular example showing _TupleToolTISTOS_, which saves trigger information to the Ntuple. It can be seen at the bottom how relevant information is provided.
### Configuration output
Figure 7 shows an example of the output YAML file used to configure the DecayTreeTuple algorithm that was populated via configurations captured in Figs. 5 - 6, where the tools, groups and branches keys are shown specifying which TupleTools and therefore which information will be saved to the Ntuple. The top-level key tools contains a list of TupleTool configurations, from which the parsing functions create and configure TupleTool algorithms attached to the DecayTreeTuple itself, which will thus write either particle-level information about the decay or event-level information, depending on the class of the TupleTool. The keys branches and groups themselves contain lists of dictionaries whose keys specify particles and have their own tools lists which are used similarly to attach TupleTool algorithms to the specified particle(s) in the decay tree. Note that groups differs from branches in that it specifies multiple particles to be looped over and have identically configured TupleTool algorithms attached.
Figure 4: Example of the data set selection and production configuration step of the Ntuple Wizard.
Figure 3: Example of the decay candidate search function of the Ntuple Wizard.
Figure 5: Example of an interactive graph used to configure DecayTreeTuple, with selected TupleTools displayed for both the entire candidate (top) and selected nodes (bottom).
Figure 6: Example of the configuration interface of a TupleTool within the Ntuple Wizard, (in particular, _TupleToolTISTOS_ for saving trigger information), including links to relevant documentation at the bottom of the modal.
Figure 7: Output of Btree.yaml, the data file used to configure the DecayTreeTuple algorithm.
```yaml
D_0:
  particle: D^0
  tools: []
Kplus:
  particle: K+
  tools: []
piminus:
  particle: pi-
  tools: []
piplus:
  particle: pi+
  tools: []
groups:
  Kplus,piminus:
    particles:
      - K+
      - pi-
    tools:
      - TupleToolTISTOS:
          ExtraName: ''
          Verbose: false
          MaxPV: 100
          VerboseL0: false
          VerboseHlt1: false
          VerboseHlt2: false
          VerboseStripping: false
          FillL0: true
          FillHlt1: true
          FillHlt2: true
          FillStripping: false
          TriggerList: []
          Hlt1TriggerTisTosName: Hlt1TriggerTisTos
          Hlt2TriggerTisTosName: Hlt2TriggerTisTos
          L0TriggerTisTosName: L0TriggerTisTos
          PIDList: []
          TopParticleOnly: false
          Hlt1Phys: >-
            Hlt1(?!ODIN)(?!L0)(?!Lumi)(?!Tell1)(?!MB)(?!NZS)(?!Velo)(?!BeamGas)(?!Incident).*Decision
          Hlt2Phys: >-
            Hlt2(?!Forward)(?!DebugEvent)(?!Express)(?!Lumi)(?!Transparent)(?!PassThrough).*Decision
          TIS: true
          TOS: true
          TUS: false
          TPS: false
name: DecayTreeTuple/Btree
```
### Future developments
It is planned to extend the current functionality of the Ntuple Wizard by including the ability to create custom candidates from standard collections of LHCb particles. Another important planned addition is the ability to configure custom jet reconstruction. Ideally, support will be included for the full set of algorithms available in DaVinci for data analysis and event/candidate selection as resources allow.
As of Run 3, which started in 2022, the majority of the filtering and preselection of the data will be done in real time within the LHCb high-level trigger (HLT). In this architecture, the data will be fully reconstructed online and the final preselection algorithms will run in the HLT. Offline preselections will be feasible for a subset of the events. In both cases the output will have the same level of abstraction as the output of the stripping, allowing for a relatively simple adaptation of the Ntuple Wizard once the Run 3 data are made public.
## 6 Request submission and execution
Once the candidate(s) of interest, data set(s), and information to be saved in the Ntuple(s) are specified, and a name and email address have been provided for the production, a ZIP format file containing all relevant output files for the data query can be downloaded (as shown in Figure 8) and submitted to an LHCb Analysis Productions manager.
Requests for Ntuple creation are handled using the Analysis Productions package. The files describing a new request are committed to a repository hosted on the CERN GitLab [16], and a merge request is created once they are ready for review. The Continuous Integration feature of GitLab is used to submit test productions to LHCbDIRAC [8, 9], which automatically processes a small fraction of the data when the remote repository is updated.
Once the request is submitted, it is handled by the LHCbDIRAC production system. A production defines how a dataset is to be processed, and LHCbDIRAC will launch and manage computing jobs until the dataset is fully processed. Productions are defined in 'steps' that specify which application to run and which configuration files to read, and may be chained together such that the output of the first step is the input to the second, etc. The info.yaml file produced by the Ntuple Wizard defines one production per dataset, each consisting of a single DaVinci step.
Within the production jobs, DaVinci is configured by functions defined in an external Python module according to the YAML files produced by the Ntuple Wizard. The data structure configured in Section 5 and displayed in Figure 7 is traversed, and the configurable properties of the DecayTreeTuple algorithm are assigned the corresponding values.
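The following sketch gives a flavor of what such a traversal could look like; it only prints which TupleTool would be attached where instead of calling the real DaVinci configuration functions, and it assumes the layout of the Figure 7 excerpt, in which branches and groups map branch names to their settings.

```python
import yaml  # PyYAML

with open("Btree.yaml") as f:          # the file shown in Figure 7
    cfg = yaml.safe_load(f)

print("Ntuple name:", cfg.get("name"))

def tool_name(entry):
    """A tool entry is either a bare name or a one-key mapping {name: properties}."""
    return entry if isinstance(entry, str) else next(iter(entry))

for entry in cfg.get("tools") or []:                    # tools on the whole candidate
    print("attach", tool_name(entry), "to the DecayTreeTuple")

for branch, settings in (cfg.get("branches") or {}).items():
    for entry in settings.get("tools") or []:
        print("attach", tool_name(entry), "to branch", branch)

for group, settings in (cfg.get("groups") or {}).items():
    for entry in settings.get("tools") or []:
        print("attach", tool_name(entry), "to particles", settings.get("particles"))
```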
After the Analysis Production jobs are complete, the produced Ntuples will be delivered to the CERN Open Data Portal for retrieval.
## 7 Summary
Providing public access to the large data sets at the LHC is a significant technical challenge, but it is becoming increasingly important for the longevity of high-energy physics in order to optimize acquired knowledge from the collected data. The volume and complexity of the data collected at LHCb make providing direct access to reconstructed (Level 3) data suitable for physics research difficult, motivating the design of the Ntuple Wizard, where users can submit queries to obtain skimmed data samples (Ntuples) of the reconstructed data suitable for their physics interests. The Ntuple Wizard is a web-based application that intuitively guides the user through specifying a query, from discovering a data set from a physics candidate (e.g. decay) of interest, to configuring the information to be saved in the output Ntuple. The output of the Ntuple Wizard is a pure data structure (YAML) format, which is to be submitted to an LHCb Analysis Productions manager so it can be parsed internally to provide the necessary Python scripts needed to configure the DaVinci application. The Ntuples will ultimately be delivered to the CERN Open Data Portal for retrieval.
## Appendix A LHCb Data Organization
Table 1 shows a list of running years, including corresponding collision energies and stripping versions available in the Ntuple Wizard.
LHCb data streams come in two formats, Data Summary Tape (DST) files, which contain the full event information, and micro Data Summary Tape (MDST) files, which only contain the physics objects directly matched in at least one stripping selection. In MDST streams, the rest of the information in the events, apart from a few global event properties, is discarded. Table 2 shows a list of streams from Run 1 and Run 2, with the DST vs MDST format indicated in the stream name.
## Appendix B List of useful TupleTools
### Default TupleTools
A set of TupleTools is included by default for all Ntuple configuration files produced by the Ntuple Wizard. Tools can be removed by the user if desired, but are standard tools used in LHCb analyses, and are recommended to keep for physics analyses. Given the flexible data structure of the Ntuple, it is easy to produce a reduced data structure with a subset of variables at a later stage in data processing, while still maintaining the full set of variables in the original Ntuple.
* _TupleToolANNPID_ -- A tool used to add artificial neural network particle identification information about the physics candidate to the Ntuple.
* _TupleToolEventInfo_ -- A tool used to add event and run information to the Ntuple.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Running Year & \(\sqrt{s}\) (TeV) & Stripping Versions \\ \hline \multicolumn{3}{|c|}{Run 1} \\ \hline
2011 & 7 & s21r1, s21r1p1, s21r1p2 \\
2012 & 8 & s21, s21r0p1, s21r0p2 \\ \hline \multicolumn{3}{|c|}{Run 2} \\ \hline
2015 & 13 & s24r2 \\
2016 & 13 & s28r2, s28r2p1 \\
2017 & 13 & s29r2, s29r2p1, s29r2p2 \\
2018 & 13 & s34, s34r0p1, s34r0p2 \\ \hline \end{tabular}
\end{table}
Table 1: Table of running years, including collision energy (\(\sqrt{s}\)) and relevant stripping versions available in the Ntuple Wizard.
\begin{table}
\begin{tabular}{|c|} \hline Stream \\ \hline BHADRON.MDST \\ BHADRONCOMPLETEEVENT.DST \\ CALIBRATION.DST \\ CHARM.MDST \\ CHARMCOMPLETEEVENT.DST \\ CHARMTOBESWUM.DST \\ DIMUON.DST \\ EW.DST \\ LEPTONIC.MDST \\ MINBIAS.DST \\ PID.MDST \\ RADIATIVE.DST \\ SEMILEPTONIC.DST \\ \hline \end{tabular}
\end{table}
Table 2: Table of data streams from Runs 1 and 2 available through the Ntuple Wizard.
Figure 8: Example of downloading the output files of the Ntuple Wizard after the query is fully specified.
* _TupleToolGeometry_ -- A tool used to add information about physics candidate geometry and event geometry to the Ntuple.
* _TupleToolKinematic_ -- A tool used to add kinematic information about the physics candidate to the Ntuple.
* _TupleToolPid_ -- A tool used to add particle identification information about the physics candidate to the Ntuple, with information beyond that in _TupleToolANNPID_, including which PID detector subsystems were used in the probability calculations.
### Other useful TupleTools
* _TupleToolTISTOS_ -- A tool that saves the trigger TIS/TOS (Trigger independent of Signal/Trigger on Signal) decisions for each particle to the Ntuple.
* _LoKi::Hybrid::TupleTool_ -- A tool that allows the user to add LoKi variables, or expressions of these variables known as LoKi functors, to the Ntuple. An illustrative configuration is sketched below.
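The fragment below, written as a plain Python dictionary, shows the kind of Variables mapping a user might request through the Wizard for this tool; the branch names on the left are free choices, and the functor expressions on the right are only examples of standard LoKi kinematic functors combined with simple arithmetic.

```python
# Illustrative only: each key becomes an extra branch in the Ntuple, each value
# is a LoKi functor expression evaluated on the selected particle.
loki_variables = {
    "LOKI_PT":     "PT",                        # transverse momentum
    "LOKI_MASS":   "M",                         # invariant mass
    "LOKI_PT_SUM": "CHILD(PT,1) + CHILD(PT,2)", # arithmetic combination of functors
}
print(loki_variables)
```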
## Acknowledgements
We thank our colleagues at LHCb for providing the necessary data and software, and within the Data Processing & Analysis (DPA) project for their incredibly valuable discussions. We additionally would like to thank Jose Marco Arias for systematic testing of the web interface. All authors acknowledge support from CERN. In addition, C.A.A. and D.S.F. acknowledge support from the National Science Foundation under Award Number 2012926, and A.M. and S.N. acknowledge support from the DFG Grant NE 2185/1-1.
|
2309.16750 | Memory in Plain Sight: Surveying the Uncanny Resemblances of Associative
Memories and Diffusion Models | The generative process of Diffusion Models (DMs) has recently set
state-of-the-art on many AI generation benchmarks. Though the generative
process is traditionally understood as an "iterative denoiser", there is no
universally accepted language to describe it. We introduce a novel perspective
to describe DMs using the mathematical language of memory retrieval from the
field of energy-based Associative Memories (AMs), making efforts to keep our
presentation approachable to newcomers to both of these fields. Unifying these
two fields provides insight that DMs can be seen as a particular kind of AM
where Lyapunov stability guarantees are bypassed by intelligently engineering
the dynamics (i.e., the noise and step size schedules) of the denoising
process. Finally, we present a growing body of evidence that records DMs
exhibiting empirical behavior we would expect from AMs, and conclude by
discussing research opportunities that are revealed by understanding DMs as a
form of energy-based memory. | Benjamin Hoover, Hendrik Strobelt, Dmitry Krotov, Judy Hoffman, Zsolt Kira, Duen Horng Chau | 2023-09-28T17:57:09Z | http://arxiv.org/abs/2309.16750v2 | # Memory in Plain Sight
###### Abstract
Diffusion Models (DMs) have recently set state-of-the-art on many generation benchmarks. However, there are myriad ways to describe them mathematically, which makes it difficult to develop a simple understanding of how they work. In this survey, we provide a concise overview of DMs from the perspective of dynamical systems and Ordinary Differential Equations (ODEs) which exposes a mathematical connection to the highly related yet often overlooked class of energy-based models, called Associative Memories (AMs). Energy-based AMs are a theoretical framework that behave much like denoising DMs, but they enable us to _directly compute_ a Lyapunov energy function on which we can perform gradient descent to denoise data. We then summarize the 40 year history of energy-based AMs, beginning with the original Hopfield Network, and discuss new research directions for AMs and DMs that are revealed by characterizing the extent of their similarities and differences.
## 1 Introduction
Diffusion Models [1; 2; 3; 4] (DMs) have rapidly become the most performant class of generative models on images [5; 6; 7; 8; 9]. Recent examples of well known DMs include Imagen [7] (backed by Google), DALLE-2 [6] (backed by OpenAI), Midjourney [10] (an independent research lab), and Stable Diffusion [8] (backed by StabilityAI). The strong adoption of DMs by large companies, startups, and academia continues to lower the computational barriers to these models and push their capabilities to domains like videos [11], molecules [12], audio [13; 14; 15], and even text [16].
On each of these domains, a trained DM will use a neural network to iteratively denoise a noisy image (or other data point) until there is no noise remaining. Each denoising step aims to increase the probability (specifically, the _log-probability_) that the noisy data looks like a sample from the real data distribution. An example log-probability landscape with two peaks of "real-looking data" (located at the top right and bottom left of the landscape) is shown in the left half of Figure 1, where each denoising step pushes the initial noisy image (blue dot) towards regions of higher log-probability.
The overall goal of a DM is to repeatedly denoise an image until no noise remains. That is:
**Goal of Diffusion Models**: _Given a corrupted representation of some data, recreate the original uncorrupted data._
However, this is not how diffusion processes behave in non-equilibrium thermodynamics [1]. Consider an example of diffusion in nature, where a droplet of red food dye is dropped into a glass of hot water. The millions of tiny red particles in the droplet rapidly "diffuse" through the water away from their initial concentration until the resultant water is a uniformly pale pink. We can say that diffusion is then an inherently _information-destroying_ process in forward time; one which maximizes the entropy of a distribution of particles.
If diffusion is information-destroying, then training a neural network to _reverse_ this process must be information-adding. This is the essence of DMs as proposed by [1, 2]. The _forward process_ of DMs iteratively adds noise to some image (or other data), whereas the _reverse process_ is trained to remove the noise introduced at each step of the forward process in an attempt to recreate the original image. The forward process on its own is un-parameterized and non-interesting, only requiring that the added noise intelligently explores all important regions of the log-probability landscape; the computational power of Diffusion Models lies entirely in its reverse process.
### Diffusion's Unseen Connection to Associative Memories
Associative Memory (AM) is a theory for the computational operation of brains that originated in the field of psychology in the late 19th century [17]. We are all familiar with this phenomenon. Consider walking into a kitchen and smelling a freshly baked apple pie. The smell could elicit strong feelings of nostalgia and festivities at grandma's, which surfaces memories of the names and faces of family gathered for the holidays. The smell (query) retrieved a set of feelings, names, and faces (values) from your brain (the associative memory).
Formally, AMs are dynamical systems that are concerned with the storage and retrieval of data (a.k.a. signals) called _memories_[18]. These memories live at local minima of an energy landscape that also includes _all possible corruptions of those memories_. From the Boltzmann distribution (Eq. 1), we know that energy can be understood as a negative and unscaled log-probability, where local peaks in the original probability distribution correspond exactly to local minima in the energy. An example energy landscape with two memories representing "real-looking data" (valleys located at the top right and bottom left of the landscape) is shown on the right side of Figure 1. Memories are retrieved (thus reconstructing our initial signal) by descending the energy according to the equation in the center of Figure 1.

Figure 1: Comparing the emphases of Diffusion Models and Associative Memories tasked with learning the same energy (negative log-probability) landscape, represented with both contours and gradient arrows. Diffusion Models (left) train a score function (depicted as orange arrows) to model the gradient of the energy. The noisy starting signal (depicted as a blue circle) becomes less corrupted by following these gradients in the reverse denoising process. Associative Memories (right) instead learn a smooth energy function, depicted as contours. The “memory retrieval dynamics” is the process by which a fixed point is retrieved by following the energy gradient from the initial signal. _This process is mathematically equivalent to the objective of the reverse denoising process of Diffusion Models_. Memory retrieval dynamics always converge to fixed points (there are two in the figure, the top right and lower left) where the energy is at a local minimum. This guarantee does not exist for Diffusion Models.
The goal of Associative Memories as stated above is identical to that of DMs:
**Goal of Associative Memories**: _Given a corrupted representation of some data, recreate the original uncorrupted data._
Yet, though Diffusion Models (or score-based models in general) have been related to Markovian VAEs [19, 4, 1], Normalizing Flows [20], Neural ODEs [21], and Energy Based Models [2, 22], an explicit connection to Associative Memories has not been acknowledged. Such a connection would contribute to a growing body of literature that seeks to use modern AI techniques to unravel the mysteries of memory and cognition [23, 24, 25, 26, 27, 28, 29].
The AMs discussed in this paper are compelling architectures to study for more reasons than their original biological inspiration. All AMs define a _Lyapunov function_[18] (i.e., energy function with guaranteed stable equilibrium points), a characteristic of many physical dynamical systems that is strikingly absent in the formulation of DMs. This feature allows AMs to be well behaved for any input and for all time. It also allows us to formalize the _memory capacity_ of a given architecture: i.e., how many stable equilibrium points (memories) can we compress into a given choice of the architecture? This leads to architectures that are incredibly parameter efficient. For example, the entire energy landscape depicted in Figure 1 is simulated using an AM with four parameters: a single matrix containing the locations \(\mathbf{x}^{\star}\in\mathbb{R}^{2}\) of the two fixed points. We discuss the memory capacity of AMs in § 4.3.
### Our Contributions
This survey explores a here-to-fore unseen connection between Diffusion Models and Associative Memories (specifically, Hopfield Networks and their derivatives), straddling a unique overlap between traditional and modern AI research that is rarely bridged. Through this survey we aim to:
* **Raise awareness** about AMs by exploring their uncanny mathematical similarities to DMs. AMs are a theoretical framework that behave much like DMs, but they enable us to design architectures that _directly compute_ a negative log-probability (energy) on which we can perform gradient descent to denoise data (retrieve memories). We isolate the fundamental differences between DMs and AMs (e.g., AM architectures satisfy Lyapunov stability criteria that DMs do not) and provide evidence for how these differences are mitigated through the design and usage of DMs.
* **Provide an approachable overview** of both DMs and AMs from the perspective of dynamical systems and Ordinary Differential Equations (ODEs). The dynamical equations of each model allow us to describe both processes using terminology related to particle motion taken from undergraduate physics courses.
* **Propose future research directions** for both DMs and AMs that is enabled by acknowledging their uncanny resemblance. We additionally identify similarities that AMs have to other modern architectures like Transformers [30, 31, 32]. We believe these similarities provide evidence that the field of AI is converging to models that strongly resemble AMs, escalating the urgency to understand Associative Memories as an eminent paradigm for computation in AI.
### A Language of Particles, Energies, and Forces
Different notations are used to describe the dynamical processes of DMs and AMs. We have tried to unify notation throughout this survey, describing the time-evolving state \(\mathbf{x}^{t}\) of our "data point" using the convention that denoising/memory retrieval happens in forward-time \(t\), and preferring to "minimize the energy" rather than "maximize the log-probability". This choice of language comes from the literature of AMs, allowing us to describe the reconstruction/retrieval dynamics using analogies taken from physics.
**Particle**: A _data point_ (e.g., an image from some data set). In DMs, we choose to describe a particle by its time-varying _position_\(\mathbf{x}^{t}\). In the case of colored images, the position is a high dimensional tensor of shape \(\mathbb{R}^{C\times H\times W}\) (\(C\)-number of channels, \(H\)-height, \(W\)-width).
**Energy**: The _distribution_ of our data that we model, exactly equal to the negative log-likelihood of a data point. This scalar-valued function must be defined for every possible position \(\mathbf{x}\) of our particle and be bounded from below. The update equations of both DMs and AMs seek to minimize this energy, though only AMs compute the energy itself.
**Force**: In physics, force is defined as the negative gradient of the energy: \(\mathbf{F}(\mathbf{x})\triangleq-\nabla_{\mathbf{x}}E(\mathbf{x})\). In data science, _force_ is equal to the _score_ discussed in § 3.1 (the gradient of the log-likelihood). The force is a tensor of the same shape as our particle's position \(\mathbf{x}\) pointing in the direction of steepest energy descent.
Thus, data can be understood as particles described by their _positions_ rolling around a hilly _energy landscape_ according to some observed _force_. A _memory_ is a local minimum (valley) on the energy landscape into which the particle will eventually settle. We find that this language improves intuition for both DMs and AMs.
### Related Surveys
The popularity of DMs has resulted in surveys that focus on the methods and diverse applications of DMs [33; 2; 34] alongside tutorial-style guides that seek to gently explain them [2; 35; 36]. In all cases, these surveys/guides make no connection to AMs. For an exhaustive reference on DM literature, [37] has collected a (still growing) list of \(>\)\(600\) diffusion papers.
Other surveys cover AMs and their applications [38; 39; 40; 41]. However, we are aware of only two efforts to identify AM-like behavior in modern networks [42; 43], and these efforts serve more to acknowledge high-level similarities between recurrent networks and AMs. Certainly, no existing surveys have explored the depth of similarity between DMs and AMs.
## 2 Mathematical Notations
In this survey we deviate from notations typically used for DMs and AMs. To minimize visual clutter, we prefer tensor notation over Einstein notation, representing scalars in non-bolded font (e.g., energies \(E\) or time \(t\)), and tensors (e.g., vectors and matrices) in bold (e.g., states \(\mathbf{x}\) or weights \(\mathbf{W}\)). These distinctions also apply to scalar-valued and tensor-valued functions (e.g., energy \(E(\cdot)\) vs. activations \(\mathbf{g}(\cdot)\)). A collection of learnable parameters is expressed through the generic variable \(\mathbf{\theta}\). Gradients are expressed using "nabla" notation of a scalar valued function, where \(\nabla_{\mathbf{x}}(\cdot)\) will be a tensor of the same shape as vector \(\mathbf{x}\). \(\mathbf{g}^{\intercal}\) represents the transpose, whereas \(\mathbf{g}^{T}\) represents the tensor \(\mathbf{g}\) occurring at time \(T\). See Appendix A for an overview of all notation used in this survey.
## 3 Diffusion Models
### Diffusion Models are Score-Based Models
As generative models, Diffusion Models seek to synthesize new data by sampling from some learned probability distribution \(p\) of our training data. Other generative models, like Variational Auto-Encoders (VAEs) [44; 45; 46] and Normalizing Flows [47; 48; 49], use their parameters \(\mathbf{\theta}\) to directly model the data's likelihood or probability density function (p.d.f.) \(p_{\mathbf{\theta}}\). Mathematically, any p.d.f. can be expressed as the following equation:
\[p_{\mathbf{\theta}}(\mathbf{x})=\frac{e^{-E_{\mathbf{\theta}}(\mathbf{x})}}{Z_{\mathbf{ \theta}}} \tag{1}\]
where \(Z_{\mathbf{\theta}}\), known as the partition function, is a normalizing constant that enforces \(\int p_{\mathbf{\theta}}(\mathbf{x})d\mathbf{x}=1\), and \(E_{\mathbf{\theta}}(\mathbf{x})\) is the **energy function**, also known as an "unnormalized probability function". Instead of modeling the p.d.f. itself, DMs use their parameters to model the **score function** \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x})\) of the distribution [21; 50; 51], and as such are considered a class of _score-based models_. The score
function is defined as the gradient of the log-likelihood itself, or equivalently, the negative gradient of the energy function (as the normalizing constant \(Z_{\mathbf{\theta}}\) does not depend on \(\mathbf{x}\)) [2]:
\[\mathbf{F}_{\mathbf{\theta}}(\mathbf{x})=\nabla_{\mathbf{x}}\log p_{\mathbf{\theta}}(\mathbf{x})=-\nabla_{\mathbf{x}}E_{\mathbf{\theta}}(\mathbf{x})-\nabla_{\mathbf{x}}\log Z_{\mathbf{\theta}}=-\nabla_{\mathbf{x}}E_{\mathbf{\theta}}(\mathbf{x}) \tag{2}\]
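As a toy numerical illustration of Eq. 1 and Eq. 2 (a minimal sketch, not taken from any DM implementation), the snippet below defines a two-well energy in two dimensions and evaluates its score as the negative energy gradient, which is exactly the quantity a score network is trained to approximate.

```python
import numpy as np

modes = np.array([[1.0, 1.0], [-1.0, -1.0]])   # two peaks of "real-looking data"
width = 0.5

def energy(x):
    """Negative log of an (unnormalized) two-Gaussian mixture."""
    d2 = ((x - modes) ** 2).sum(axis=-1)        # squared distance to each mode
    return -np.log(np.exp(-d2 / (2 * width**2)).sum())

def score(x, eps=1e-5):
    """Score = -grad E, estimated here with central finite differences."""
    g = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        g[i] = -(energy(x + step) - energy(x - step)) / (2 * eps)
    return g

x = np.array([0.6, 0.2])
print(energy(x), score(x))   # the score points toward the nearby mode at (1, 1)
```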
Figure 1 depicts the score function as vectors pointing to peaks in the log probability (equivalently, local minima in the energy function). In practice, we often think of the score function as predicting the noise we need to remove from \(\mathbf{x}^{t}\), where adding the estimated score \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x}^{t})\) in Eq. 3 is the same as removing the predicted noise. Thus, given a neural network trained to predict the score, the process of generating data using discrete score-based models can be construed as an iterative procedure that follows the approximate score function (energy gradient) for some fixed number of steps \(T\). The final state \(\mathbf{x}^{T}\) is declared to be a local peak (minimum) of the log-likelihood (energy) and should now look like a sample drawn from the original distribution \(p\).
\[\mathbf{x}^{t+1}=\mathbf{x}^{t}+\alpha\mathbf{F}_{\mathbf{\theta}}(\mathbf{x}^{t }),\ \ \ \ \ t=0,...,T-1 \tag{3}\]
where \(\alpha\in\mathbb{R}\) is a step size in the direction of \(\mathbf{F}_{\mathbf{\theta}}\). Note that Eq. 3 is described using the convention that _time progresses forward when reconstructing the data_. However, the literature around DMs describes _time in the reconstruction process as going backwards_, denoting \(\mathbf{x}^{T}\) to refer to the sample drawn from pure noise and \(\mathbf{x}^{0}\) to refer to the final reconstructed sample drawn from \(p\)[1; 2; 4; 19; 21]. Eq. 4 rewrites Eq. 3 using the variable \(s\triangleq T-t\) to represent the reverse-time convention used in most DM papers (shown in Figure 2).
\[\mathbf{x}^{s-1}=\mathbf{x}^{s}+\alpha\mathbf{F}_{\mathbf{\theta}}(\mathbf{x}^{s }),\ \ \ \ \ s=T,...,1 \tag{4}\]
Using score-based models is then conceptually very simple: they seek to maximize (minimize) the log-likelihood (energy) of an initial sample by following the score \(\mathbf{F}_{\mathbf{\theta}}\) (energy gradient). In practice, many of the functions above are additionally conditioned on the time \(s\); i.e., the p.d.f. can be expressed as \(p_{\mathbf{\theta}}(\mathbf{x};s)\), the score function can be expressed as \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x};s)\), and the step size \(\alpha\) can be expressed as \(\alpha(s)\). We are also free to condition the score function however we desire; e.g., controllable image generation using DMs express the tokens \(\mathbf{C}\) of language prompts as a conditioning variable on the score function \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x};t,\mathbf{C})\)[8; 10; 52].
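A self-contained sketch of the iterative update of Eq. 3 is shown below. For illustration it uses the closed-form score of a single Gaussian centered at a known mode in place of a trained network, and the step size and number of steps are arbitrary choices.

```python
import numpy as np

mu, width = np.array([1.0, 1.0]), 0.5

def score(x):
    """Analytic score of N(mu, width^2 I): grad log p(x) = -(x - mu) / width^2."""
    return -(x - mu) / width**2

x = np.random.randn(2)        # x^0: start from pure noise
alpha, T = 0.02, 300          # illustrative step size and number of steps
for t in range(T):
    x = x + alpha * score(x)  # Eq. 3

print(x)                      # converges toward the mode at (1, 1)
```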
### Diffusion Models Cleverly Train the Score
Though simple in concept, DMs require several tricks to train. How do you train the "score" of a dataset? Several techniques to train the score-based models have been proposed, with the most popular being the technique of _denoising score-matching_[53; 2; 54; 55; 56; 4]. To train with denoising score-matching, samples \(\mathbf{x}\) from our original data distribution \(p\) are repeatedly perturbed with small amounts of noise \(\mathbf{\eta}(s)\). We then train our score-based model \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x}^{s+1})\) to remove the noise added at the previous step. If the added noise is small enough, we can guarantee that our optimal score function (parameterized by optimal weights \(\mathbf{\theta}^{*}\)) approximates the score of the true data distribution: i.e., \(\mathbf{F}_{\mathbf{\theta}^{*}}(\mathbf{x})\approx\nabla_{\mathbf{x}}\log p( \mathbf{x})\) and \(\mathbf{F}_{\mathbf{\theta}^{*}}(\mathbf{x}^{s+1})\approx-\mathbf{\eta}(s)\). This process of noisifying a signal is known as the _forward process_, which we also call the _corruption process_.
The original DM proposed by [1] trained their model using a forward process consisting of \(T=1000\) noise-adding steps. Unfortunately, to sample from DMs each step in the forward process must be paired with a step in the _reconstruction_ or _reverse process_, which likewise required \(T=1000\) steps/applications of the score network \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x}^{s})\). [2] improved the computational efficiency by introducing several techniques into the forward process, including a form of annealed Langevin dynamics where larger noises are added the further you are from the original data distribution, controlled by a variance scheduler \(\sigma(s)\in\mathbb{R}\). They then introduce Noise Conditional Score Networks (NCSNs) to condition the score network \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x};\sigma(s))\) on the amount of noise \(\sigma(s)\) to remove. Because the \(\sigma(s)\) has a 1:1 correspondence to a particular time step \(s\), NCSNs can also be written as \(\mathbf{F}_{\mathbf{\theta}}(\mathbf{x};s)\).
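The sketch below illustrates denoising score matching with a noise-conditional network in a few lines of PyTorch (the framework choice and all architectural details are placeholders, not taken from the cited works): clean toy samples are perturbed at a randomly chosen noise scale, and a small network conditioned on that scale is trained to match the score of the perturbation, which is proportional to the negative of the added noise.

```python
import torch
import torch.nn as nn

data = 1.0 + 0.1 * torch.randn(1024, 2)                  # toy "clean" samples near (1, 1)
sigmas = torch.tensor([0.05, 0.1, 0.2, 0.5, 1.0])        # annealed noise scales

net = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = data[torch.randint(len(data), (128,))]
    s = sigmas[torch.randint(len(sigmas), (128, 1))]     # per-sample noise scale
    eta = torch.randn_like(x)
    x_noisy = x + s * eta                                # forward (corruption) step
    pred = net(torch.cat([x_noisy, s], dim=1))           # noise-conditional score net
    target = -eta / s                                    # score of the perturbation kernel
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```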
### Diffusion Models are Continuous Neural ODEs
The original DM [1] and NCSN [2] relied on a fixed number of discrete steps in the forward and reverse process and could not work without Langevin dynamics (stochastic descent down the energy
function during reconstruction). [21] introduced a _probability flow Ordinary Differential Equation_ (PF-ODE) formulation for DMs to unify and extend previous approaches, which [34, 57, 58] claim represents the culmination of DM theory for several reasons.
1. PF-ODEs show that Langevin dynamics are optional for making DMs work, a phenomenon also observed by [59]. Deterministic forward and reverse processes perform just as well as those formulated stochastically.
2. PF-ODEs operate in continuous time and do not require a fixed number of discrete time-steps or noise scales. This makes it possible to use black-box ODE solvers during the reverse-process enabling faster generation than previously possible.
3. PF-ODEs enable exact log-likelihood calculations, drawing connections to normalizing flows [47, 48] and neural ODEs [60]. This perspective proffers DMs as a new technique to solve inverse problems.
4. The latent encodings for each step of PF-ODEs are both meaningful and uniquely identifiable. Given a sufficiently performant model trained on sufficient data, we can manipulate these latent representations to perform tasks the model was not originally trained to perform (e.g., inpainting, outpainting, interpolation, temperature scaling, etc.).
The importance of PF-ODEs to the modern understanding of DMs cannot be overstated and warrants sufficient background. Consider the standard form of a generic ODE under constrained time \(s\):
\[\frac{d\mathbf{x}}{ds}=\boldsymbol{\mu}(\mathbf{x};s),\hskip 14.226378pts\in[T,0] \tag{5}\]
where \(\boldsymbol{\mu}(\mathbf{x};s)\) is an arbitrary _drift function_ that represents some deterministic change in position of particle \(\mathbf{x}^{s}\) at time \(s\). Diffusion models need to further corrupt the input \(\mathbf{x}\), so we add to Eq. 5 an infinitesimal amount of noise \(\frac{d\mathbf{w}}{ds}\) scaled by some real-valued _diffusion coefficient_\(\sigma(s)\). The forward process of PF-ODEs is now called an Ito Stochastic Differential Equation (SDE) [21].
\[\frac{d\mathbf{x}}{ds}=\boldsymbol{\mu}(\mathbf{x};s)+\sigma(s)\frac{d\mathbf{ w}}{ds} \tag{6}\]
[58] argues that the equation above can be further simplified without any loss in performance by assuming a constant drift function \(\boldsymbol{\mu}(\mathbf{x};s)=0\), a convention adapted by [57] to set SOTA one-step generation with DMs. This convention simplifies Eq. 6 to make the forward process depend only on a noise scale and the infinitesimal random noise shown in Eq. 7.
\[\frac{d\mathbf{x}}{ds}=\sigma(s)\frac{d\mathbf{w}}{ds}. \tag{7}\]
The reverse process now depends only on the noise scale \(\sigma\) and the score \(\mathbf{F}_{\boldsymbol{\theta}}\)[21, 57]
\[\frac{d\mathbf{x}}{ds}=-\frac{1}{2}\sigma^{2}(s)\mathbf{F}_{\boldsymbol{ \theta}}(\mathbf{x};s)\,. \tag{8}\]
We have written the strange "reverse time" convention of DMs using time variable \(s\triangleq T-t\). Eq. 9 rewrites Eq. 8 using forward time \(t\) and collects the noise scale into a real-valued time-variable \(\tau(t)\triangleq\frac{2}{\sigma^{2}(t)}\) to control the rate of change.
\[\tau(t)\frac{d\mathbf{x}}{dt}=\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x};t )\,,\hskip 14.226378ptt\in[0,T] \tag{9}\]
PF-ODEs unify previous theories of DMs by simplifying the assumptions needed to make DMs work. At the same time, the continuous dynamics of PF-ODEs exposes a strong mathematical connection to Associative Memories that was difficult to see before.
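In code, the deterministic reverse process of Eq. 8 is just the numerical integration of an ODE whose right-hand side is the score scaled by the noise schedule. The sketch below is self-contained because it uses the closed-form score of a Gaussian toy data distribution in place of a trained network \(\mathbf{F}_{\boldsymbol{\theta}}\); the noise schedule, step count, and simple Euler integrator are illustrative choices, whereas a practical DM would typically hand Eq. 8 to a black-box ODE solver.

```python
import numpy as np

mu, v0 = np.array([1.0, 1.0]), 0.1**2        # toy data distribution N(mu, v0 * I)

def sigma(s):                                 # illustrative diffusion coefficient
    return 0.1 + 0.9 * s

def added_variance(s):                        # integral of sigma(u)^2 from 0 to s
    return ((0.1 + 0.9 * s) ** 3 - 0.1 ** 3) / (3 * 0.9)

def score(x, s):
    """Closed-form score of the noised marginal N(mu, (v0 + added_variance(s)) I)."""
    return -(x - mu) / (v0 + added_variance(s))

T, steps = 1.0, 1000
ds = T / steps
x = mu + np.sqrt(v0 + added_variance(T)) * np.random.randn(2)   # sample at s = T
for i in range(steps):                          # integrate s from T down to 0
    s = T - i * ds
    dx_ds = -0.5 * sigma(s) ** 2 * score(x, s)  # Eq. 8
    x = x - ds * dx_ds                          # Euler step toward s = 0

print(x)   # approximately a draw from the data distribution N(mu, v0 * I)
```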
## 4 Associative Memories
An Associative Memory (AM) is a dynamical system concerned with the storage and retrieval of signals (data). In AMs all signals exist on an _energy landscape_; in general, the more corrupted a signal the higher its energy value. Uncorrupted signals or _memories_ live at local minima of the system's energy or _Lyapunov_ function. The process of _memory retrieval_ is the dynamic process of descending the energy to a fixed point or _memory_ as shown in Figure 1, a phenomenon that is guaranteed to occur by constraints placed on the architecture (see § 4.1).
The following subsections describing AMs can get quite technical and rely on modeling techniques used regularly in physics (e.g., Lagrangian functions, Lyapunov functions, Legendre transforms [18, 61]) but often unfamiliar to experts in modern AI. We thus deem it important to summarize the core points of AMs before diving into them and their history.
AMs are a class of neural architectures that defines an energy function \(E_{\boldsymbol{\theta}}(\mathbf{x})\in\mathbb{R}\), where \(\boldsymbol{\theta}\) is a set of learnable parameters and \(\mathbf{x}\) represents a data point. In other words, an AM is designed to compute a scalar energy on every possible input signal \(\mathbf{x}\). Given some initial input \(\mathbf{x}^{0}\) at time \(t=0\) (this could be any kind of signal, including pure noise), we want to minimize the energy (by descending the gradient) to a fixed point that represents a memory learned from the original data distribution.
\[\tau\frac{d\mathbf{x}}{dt}=-\nabla_{\mathbf{x}}E_{\boldsymbol{\theta}}( \mathbf{x})\,,\hskip 14.226378ptt>0 \tag{10}\]
Here, \(\tau\) is a time constant that governs how quickly the dynamics evolve. This equation can of course be discretized as
\[\mathbf{x}^{t+1}=\mathbf{x}^{t}-\frac{dt}{\tau}\nabla_{\mathbf{x}}E_{ \boldsymbol{\theta}}(\mathbf{x}^{t}) \tag{11}\]
and treated as a neural network that is recurrent through time. The energy of AMs is constructed in such a way that the dynamics will eventually converge to a fixed point \(\mathbf{x}^{\star}\) at some \(t=T\). That is,

\[\frac{d\mathbf{x}^{\star}}{dt}=0\quad\mathrm{and}\quad\mathbf{x}^{\star}=\mathbf{x}^{\star}-\frac{dt}{\tau}\nabla_{\mathbf{x}}E_{\boldsymbol{\theta}}(\mathbf{x}^{\star})\qquad\forall t>T\,.\]

Figure 2: Diffusion Models through the lens of PF-ODEs, adapted from [2]. The forward (corrupting) process takes a true data point at time \(s=0\) and corrupts the data into pure noise at time \(s=T\) using an SDE. The reverse (reconstructing) process undoes the corruption using the score of the distribution. Both equations are shown without drift terms as in Eq. 7 and Eq. 8.
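A minimal sketch of this recurrent energy descent is shown below. The particular energy used here, a log-sum-exp attraction to two stored memories plus a quadratic term, is only one illustrative choice of Lyapunov function; the gradient is estimated numerically so the snippet stays short.

```python
import numpy as np

memories = np.array([[1.0, 1.0], [-1.0, -1.0]])   # the stored fixed points
beta = 4.0                                         # sharpness of the energy wells

def energy(x):
    """Illustrative AM energy with a local minimum near each stored memory."""
    return -np.log(np.exp(beta * memories @ x).sum()) / beta + 0.5 * x @ x

def grad(x, eps=1e-5):
    g = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        g[i] = (energy(x + step) - energy(x - step)) / (2 * eps)
    return g

x = np.array([0.4, -0.1])      # a corrupted signal
dt, tau = 0.1, 1.0
for t in range(200):           # Eq. 11: descend the energy until convergence
    x = x - (dt / tau) * grad(x)

print(x)                       # retrieved memory, very close to (1, 1)
```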
### The TinyBrain Sandbox for Building Associative Memories
AMs are constrained to use only neural architectures that are themselves a Lyapunov function: that is, these architectures must compute a scalar energy and be fixed point attractors. In this section we discuss the architectural constraints needed to ensure a Lyapunov function on AMs, following the abstractions of [62].
AMs were originally inspired by neurons and synapses within the brain [64]. As such, it is useful to consider a simple model we call the TinyBrain model1 as depicted in Figure 3. The TinyBrain model consists of two dynamic variables called _neuron layers_ connected to each other by a single synaptic weight matrix \(\mathbf{W}\) (relationship between variables). Each neuron layer (dynamic variable) can be described by an _internal state_ \(\mathbf{x}\in\mathbb{R}^{N}\) (which one can think of as the _membrane voltage_ for each of the \(N\) neurons in that layer) and an _axonal state_ \(\mathbf{g}\in\mathbb{R}^{N}\) (which is analogous to the _firing rate_ of each neuron). We call the axonal state the _activations_; it is constrained to be the gradient of a scalar, convex Lagrangian function \(\mathcal{L}:\mathbb{R}^{N}\mapsto\mathbb{R}\) we choose for that layer; that is, \(\mathbf{g}=\nabla_{\mathbf{x}}\mathcal{L}\).
Footnote 1: Our use of the term “brain” does not claim resemblance to real brains or brain-like behavior; instead, the term encapsulates a useful thought experiment around the usage of the terms “neurons” and “synapses”.
The _Legendre Transform_ of the Lagrangian defines the energy \(E_{\nu}^{\mathrm{layer}}(\mathbf{g}_{\nu},\mathbf{x}_{\nu})\in\mathbb{R}\) of a neuron layer \(\nu\), shown in Eq. 12[18, 61, 62]. If all neuron-layers of a system have a Lagrangian, a synaptic energy can be easily found such that the entire system has a defined energy.
\[E_{\nu}^{\mathrm{layer}}=\mathbf{g}_{\nu}^{\intercal}\mathbf{x}_{\nu}- \mathcal{L}_{\nu}(\mathbf{x}_{\nu}) \tag{12}\]
Figure 3: The TinyBrain sandbox for understanding Associative Memories is a fully connected bipartite graph (structurally similar to the Restricted Boltzmann Machine [63]). Both feature neurons and memory neurons have states \(\mathbf{x}_{f}\) and \(\mathbf{x}_{m}\) respectively that evolve in time; these states have corresponding activations \(\mathbf{g}_{f}\) and \(\mathbf{g}_{m}\). The energy of the synapse is minimized when the memory activations \(\mathbf{g}_{m}\) perfectly align with the feature activations \(\mathbf{g}_{f}\) according to the learned parameters \(\mathbf{W}\). The energy for the entire system is defined by the energies of each individual component, and the internal states of each neuron layer evolve by descending the global energy's gradient.
For historical reasons, we call one neuron layer in our TinyBrain the "feature neurons" (fully described by the internal state \(\mathbf{x}_{f}\) and Lagrangian \(\mathcal{L}_{f}\) of the features) and the other the "memory neurons" (fully described by the internal state \(\mathbf{x}_{m}\) and Lagrangian \(\mathcal{L}_{m}\) of the memories). These layers are connected to each other via a synaptic weight matrix \(\mathbf{W}\in\mathbb{R}^{N_{m}\times N_{f}}\), creating a synaptic energy \(E^{\mathrm{synapse}}(\mathbf{g}_{f},\mathbf{g}_{m};\mathbf{W})\in\mathbb{R}\) defined as
\[E^{\mathrm{synapse}}=-\mathbf{g}_{m}^{\intercal}\mathbf{W}\mathbf{g}_{f}\,. \tag{13}\]
We now have defined all the _component energies_ necessary to write the total energy \(E^{\mathrm{system}}\) for TinyBrain as in Eq. 14. Note that Eq. 14 is a simple summation of the energies of each component of the TinyBrain: two layer energies and one synaptic energy. This perspective of modular energies for understanding AMs was originally proposed by [62].
\[E^{\mathrm{system}}=E_{f}^{\mathrm{layer}}+E_{m}^{\mathrm{layer}}+E^{ \mathrm{synapse}} \tag{14}\]
The hidden states of our neurons \(\mathbf{x}_{f}\) and \(\mathbf{x}_{m}\) evolve in time according to the general update rule:
\[\begin{cases}\tau_{f}\frac{d\mathbf{x}_{f}}{dt}&=-\nabla_{\mathbf{g}_{f}}E^{ \mathrm{system}}\\ \tau_{m}\frac{d\mathbf{x}_{m}}{dt}&=-\nabla_{\mathbf{g}_{m}}E^{ \mathrm{system}}\,.\end{cases} \tag{15}\]
Eq. 15 reduces to the manually derived update equations for the feature and memory layers presented in Eq. 16[18].
\[\begin{cases}\tau_{f}\frac{d\mathbf{x}_{f}}{dt}&=\mathbf{W}^{\intercal} \mathbf{g}_{m}-\mathbf{x}_{f}\\ \tau_{m}\frac{d\mathbf{x}_{m}}{dt}&=\mathbf{W}\mathbf{g}_{f}-\mathbf{x}_{m} \end{cases} \tag{16}\]
where \(\mathbf{W}^{\intercal}\mathbf{g}_{m}\) and \(\mathbf{W}\mathbf{g}_{f}\) represent the _input currents_ to feature neurons and memory neurons respectively, and \(-\mathbf{x}_{f}\) and \(-\mathbf{x}_{m}\) represent exponential decay (this implies that the internal state \(\mathbf{x}\) will exponentially decay to \(\mathbf{0}\) in the absence of input current). Note that the synaptic matrix \(\mathbf{W}\) plays a symmetric role: the same weights that modulate the current from feature neurons \(\rightarrow\) memory neurons are used to modulate the current from memory neurons \(\rightarrow\) feature neurons. This "symmetric weight constraint" present in AMs does not exist in real neurons and synapses [18, 65].
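To make Eqs. 12–16 concrete, the sketch below instantiates the TinyBrain with one common choice of Lagrangians (a softplus sum for the feature layer, so \(\mathbf{g}_{f}=\mathrm{sigmoid}(\mathbf{x}_{f})\), and a quadratic for the memory layer, so \(\mathbf{g}_{m}=\mathbf{x}_{m}\)) and takes Euler steps of the coupled dynamics. These particular Lagrangian choices are assumptions made only for illustration.

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Lagrangians; the activations are their gradients: g = grad L (the setup behind Eq. 12).
L_f = lambda x: np.sum(np.log1p(np.exp(x)))   # softplus sum  ->  g_f = sigmoid(x_f)
L_m = lambda x: 0.5 * np.sum(x ** 2)          # quadratic     ->  g_m = x_m

def system_energy(x_f, x_m, W):
    g_f, g_m = sigmoid(x_f), x_m
    E_layer_f = g_f @ x_f - L_f(x_f)          # Eq. 12, feature layer
    E_layer_m = g_m @ x_m - L_m(x_m)          # Eq. 12, memory layer
    E_synapse = -g_m @ W @ g_f                # Eq. 13
    return E_layer_f + E_layer_m + E_synapse  # Eq. 14

def euler_step(x_f, x_m, W, dt=0.1, tau_f=1.0, tau_m=1.0):
    g_f, g_m = sigmoid(x_f), x_m
    x_f = x_f + (dt / tau_f) * (W.T @ g_m - x_f)   # Eq. 16, feature neurons
    x_m = x_m + (dt / tau_m) * (W @ g_f - x_m)     # Eq. 16, memory neurons
    return x_f, x_m

# Toy usage: run the coupled dynamics; the system energy is non-increasing
# along the trajectory (up to discretisation error) and settles at a fixed point.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16)) / 4.0                 # N_m = 8 memory, N_f = 16 feature neurons
x_f, x_m = rng.normal(size=16), rng.normal(size=8)
for _ in range(200):
    x_f, x_m = euler_step(x_f, x_m, W)
print(system_energy(x_f, x_m, W))
```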
Krotov and Hopfield identified [18] that this toy model is mathematically equivalent to the original Hopfield Network [64, 66] under certain choices of the activation function \(\mathbf{g}\); hence, it is through the lens of the abstraction presented in this section that we explore the long history of AMs, beginning with the famous Hopfield Network.
### Hopfield Networks are the First Energy-Based Associative Memory
John Hopfield formalized the dynamic retrieval process of Associative Memory in the 80s [64, 66] in an energy-based model that became famously known as the _Hopfield Network_ (HN). Unlike previous models for associative memory (see § 4.5), HNs performed the task of memory retrieval through gradient descent of an energy function. A HN resembles in form a single-layer McCulloch & Pitts perceptron [67], with input feature neurons \(\mathbf{x}_{f}\) and a single weight matrix; however, unlike perceptron-based models, the HN operated continuously in time, repeatedly updating the inputs \(\mathbf{x}_{f}^{t}\) over time to minimize some energy.
Hopfield described the AM energy dynamics for both binary [64] and graded (continuous) [66] neurons. Like the TinyBrain model, a continuous HN consists of \(N_{f}\) feature neurons connected via a synaptic weight matrix \(\mathbf{W}\in\mathbb{R}^{N_{m}\times N_{f}}\) to \(N_{m}\) memory neurons. Feature neurons \(\mathbf{x}_{f}\) have corresponding activations \(\mathbf{g}_{f}=\mathrm{sigmoid}(\mathbf{x}_{f})\), whereas memory neurons \(\mathbf{x}_{m}\) have a linear activation function \(\mathbf{g}_{m}=\mathbf{x}_{m}\). We call this configuration of TinyBrain the _Classical Hopfield Network_ (CHN).
Hopfield never considered the time evolution of the memory neurons in his model, instead assuming that \(\mathbf{x}_{m}^{t}=W\mathbf{g}_{f}^{t}\) at any point in time (i.e., that the state of the memory neurons was instantaneously defined by the activations of the feature neurons). This simplification of the two-layer TinyBrain did not change the fixed points of the AM dynamics and allowed him to consider the time-evolution for only the feature neuron layer. The simplified energy function and update rules for the CHN are shown in Eq. 17 and Eq. 18 respectively, where \(\mathcal{L}_{f}(\mathbf{x}_{f})\triangleq\sum_{N_{f}}\log\left(1+\exp( \mathbf{x}_{f})\right)\) is the Lagrangian of the sigmoid activation function and \((\cdot)^{2}\) operates elementwise.
\[E^{\mathrm{system}}(\mathbf{g}_{f},\mathbf{x}_{f}) =-\frac{1}{2}\sum_{N_{m}}\left(\mathbf{W}\mathbf{g}_{f}\right)^{ 2}+\mathbf{g}_{f}^{\intercal}\mathbf{x}_{f}-\mathcal{L}_{f}(\mathbf{x}_{f}) \tag{17}\] \[\tau\frac{d\mathbf{x}_{f}}{dt} =\mathbf{W}^{\intercal}(\mathbf{W}\mathbf{g}_{f})-\mathbf{x}_{f}\,. \tag{18}\]
A full derivation is included in [18].
### Solving the Small Memory Capacity of Hopfield Networks
The CHN unfortunately suffered from a tiny memory capacity that scaled linearly with the number of input features \(N_{f}\); specifically, the maximum storage capacity of the classical Hopfield Network was discovered to be \(\sim\)\(0.14N_{f}\) by [68; 64; 69]. Consider the problem of building an associative memory on the \(60\mathrm{k}\) images in MNIST [70], where each image can be represented as a binary vector with \(N_{f}=784\) features. If the patterns were random, one could only reliably store a maximum of \(0.14(784)\approx 110\) images using the classical Hopfield paradigm, **no matter how many memories \(N_{m}\) you add to the synaptic matrix \(\mathbf{W}\)**. Given that MNIST images have strong correlations between their pixel intensities, this bound is even lower.
A breakthrough in the capacity of Hopfield Networks was proposed by Krotov & Hopfield over 30 years later [67]. Their new network, called the Dense Associative Memory (DAM), enabled the energy dynamics of the CHN to store a super-linear number of memories. The core idea was to use a rapidly growing non-linear activation function \(\mathbf{g}_{m}(\cdot)\) on the memory neurons. For instance, choosing \(\mathbf{g}_{m}(\cdot)\) to be higher orders of (optionally rectified) polynomials allowed much greater memory storage capacity than the CHN. Extending this idea to exponential functions \(\mathbf{g}_{m}=\exp(\mathbf{x}_{m})\) can even lead to exponential storage capacity [71]. The intuition is that the "spikier" the activation function \(\mathbf{g}_{m}\) (i.e., the faster it grows in the region around \(\mathbf{x}_{m}\)), the more memories the network can store and retrieve.
Recall that a CHN could reliably store a maximum of \(110\) MNIST images. Using the exponential DAM, one can increase the number of stored memories up to \(N_{m}\sim\exp(784/2)\), assuming no correlations, _and still reliably retrieve each individual memory_. This marked difference has led to DAMs being branded as the "Modern Hopfield Network" (MHN) [31] and opened the door for a new frontier in AMs [43].
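As an illustration of this capacity gain, the sketch below uses a softmax-based retrieval update in the spirit of the exponential DAM / MHN line of work [31, 71]. The specific update \(\mathbf{x}\leftarrow\mathrm{sign}(\mathbf{W}^{\intercal}\mathrm{softmax}(\beta\mathbf{W}\mathbf{x}))\) and all constants are our assumed instantiation, not the unique formulation; nonetheless it recovers one of 1,000 random patterns in only 64 dimensions, far beyond the \(\sim\)\(0.14N_{f}\) limit of the CHN.

```python
import numpy as np

rng = np.random.default_rng(0)
N_f, N_m, beta = 64, 1000, 0.5                       # 1000 patterns >> 0.14 * 64 ~ 9
W = rng.choice([-1.0, 1.0], size=(N_m, N_f))         # rows of W are the stored patterns

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def dam_retrieve(x, n_steps=3):
    """Exponential ("spiky") retrieval: the softmax makes the stored pattern
    closest to x dominate the weighted sum after very few updates."""
    for _ in range(n_steps):
        x = np.sign(W.T @ softmax(beta * (W @ x)))
    return x

target = W[0].copy()
probe = target.copy()
flip = rng.choice(N_f, size=10, replace=False)       # corrupt 10 of the 64 bits
probe[flip] *= -1
print(np.array_equal(dam_retrieve(probe), target))   # True despite storing far more than 0.14 * N_f patterns
```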
### Adding Hierarchy to Shallow Associative Memories
The CHN and DAM have another fundamental limitation: like the TinyBrain, both only have a single weight matrix \(\mathbf{W}\) and thus cannot learn hierarchical abstractions that simplify the task of memorizing complex signals. This is not a limitation of Deep Learning networks today, which can be seen as a stack of distinct learnable functions ("layers") each processing the output of a previous layer. These architectures are inherently _hierarchical_; that is, deeper layers operate on higher levels of abstraction output by previous layers. A classic example of this occurs in deep Convolutional Neural Networks (CNNs), where neurons in earlier layers (i.e., layers closer to the image) detect simple shapes, and those in deeper layers respond to more complex objects (see Figure 4).
However, Eq. 12 makes it easy to define a valid energy for any neuron layer of an AM. Why can't these systems be extended beyond the TinyBrain in § 4.1 to include more interacting layers? This is the realization of [61; 62], who generalize the theoretical abstraction of AMs in § 4.1 to connect arbitrary numbers of neuron layers via arbitrary synaptic relationships that can resemble the convolutional, pooling, or even attention operations in modern architectures. This version of AMs is known as a Hierarchical Associative Memory (HAM).
The HAM has given AMs a theoretical maturity and flexibility to rival the representational capacity of any existing neural architecture while guaranteeing stable attractor points. However, the HAM is still a very young architecture, having been proposed in 2021, and has yet to establish itself as a viable alternative to traditional Deep Learning architectures in practice.
### Other models for Associative Memory
The term "associative memory" has become a catch-all term for many different types of Content Addressable Memories (CAMs), including models like Sparse Distributed Memories [73, 74], Memory Networks [75], Key-Value Memories [76, 77], Hopfield Networks [64, 66], and Boltzmann Machines [78, 79, 63]. Even the popular attention mechanism in Transformers [30] is itself a differentiable form of associative memory [31] where tokens act as queries to retrieve values stored by the other tokens. In this paper we have considered a particular class of AMs: systems with a defined Lyapunov function. That is, a CAM must have both a tractable energy function and guaranteed stable states to be considered in this survey. The paradigmatic examples of this class of AMs are of course the Hopfield Network (HN) [64, 66] and its modern counterparts [67, 61], which have been discussed in earlier sections.
## 5 An Uncanny Resemblance between Associative Memories and Diffusion Models
Associative Memories are different from Diffusion Models in that they are not primarily understood as generative models. This said, the memory retrieval process can easily be construed as a "generation process" that samples from some original distribution. This process can be visualized as follows. We first pick a point at random on our energy landscape by initializing a data point \(\mathbf{x}^{0}\) to random noise. We then descend the energy (as in Eq. 10) until the system settles in the nearest local minimum \(\mathbf{x}^{\star}\): this retrieved memory is our generated sample. If desired, small amounts of noise can be added to each update step (a process called _Langevin dynamics_) that can help improve the diversity of generations and jar the dynamics out of undesirable local minima (a technique that is used regularly during generation in DMs [57, 3, 4, 1]). This realization makes it possible to directly compare AMs to DMs.
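A minimal sketch of this generation-by-retrieval loop is given below, with the energy gradient left as a placeholder (here a double well, standing in for a learned \(\nabla_{\mathbf{x}}E_{\boldsymbol{\theta}}\)) and an optional Langevin-style noise injection at each step.

```python
import numpy as np

def generate(grad_E, dim, n_steps=500, dt=0.05, tau=1.0, noise_scale=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim)                          # x^0: start from pure noise
    for _ in range(n_steps):
        x = x - (dt / tau) * grad_E(x)                # descend the energy (Eq. 10)
        x = x + noise_scale * rng.normal(size=dim)    # optional Langevin-style jitter
    return x                                          # the retrieved memory is the generated sample

# Placeholder energy gradient (double well); a learned grad E_theta would be used in practice.
sample = generate(lambda x: x * (x ** 2 - 1.0), dim=2)
```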
| | **Diffusion** | **Associative Memory** |
| --- | --- | --- |
| Parameterizes the... | Score function \(\mathbf{F}_{\boldsymbol{\theta}}\) | Energy function \(E_{\boldsymbol{\theta}}\) |
| Continuous Update | \(\tau\frac{d\mathbf{x}}{dt}=\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x})\) | \(\tau\frac{d\mathbf{x}}{dt}=-\nabla_{\mathbf{x}}E_{\boldsymbol{\theta}}(\mathbf{x})\) |
| Discrete Update | \(\mathbf{x}^{t+1}=\mathbf{x}^{t}+\alpha\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x}^{t})\) | \(\mathbf{x}^{t+1}=\mathbf{x}^{t}-\alpha\nabla_{\mathbf{x}}E_{\boldsymbol{\theta}}(\mathbf{x}^{t})\) |
| Valid Time Domain | \(t\in[0,T]\) | \(t\geq 0\) |
| Fixed Point Attractor? | No\({}^{*}\) | Yes |
| Tractable Energy? | No\({}^{*}\) | Yes |
| Undoes Corruption of... | Noise it was trained on\({}^{*}\) | Any kind |

Table 1: Summarizing the similarities and differences between Diffusion Models and Associative Memory. Fields marked with a \({}^{*}\) indicate caveats. See § 5.1 and § 5.2 for details.
Figure 4: An explanation of hierarchical representations as learned in images, adapted from [72]. The top row shows the filters learned by the second layer in a convolutional deep belief network, whereas the bottom row the filters of the third layer. The deeper filters are able to recognize more complex objects and are thus considered more “abstract”.
We tabulate the similarities and differences between DMs and AMs in Table 1, providing additional details in § 5.2.
### Characterizing the Similarities
There are several reasons to be skeptical of significant overlap between generative models like DMs and AMs. As presented, DMs only compute gradients of energies and not energies themselves; thus they have no Lyapunov function guaranteeing stability and cannot operate continuously in time. However, it is clear that both DMs and AMs have very similar goals as shown in Eq. 9 and Eq. 10. We summarize the similarities between these two data modeling approaches as follows:
* **Both model the energy.** DMs learn a parameterized score function \(\mathbf{F}_{\boldsymbol{\theta}}\) to approximate the gradient of some true energy function \(E\) at every point \(\mathbf{x}\) in the energy landscape such that \(\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x})\approx-\nabla_{\mathbf{x}}E( \mathbf{x})\). In AMs, this energy function is explicit and is defined by using architectures that directly model the energy \(E_{\boldsymbol{\theta}}\) such that \(E_{\boldsymbol{\theta}}(\mathbf{x})\approx E(\mathbf{x})\).
* **Both generate samples by descending the predicted gradient of the energy.** A DM will directly output the estimated score \(\mathbf{F}_{\boldsymbol{\theta}}\approx-\nabla_{\mathbf{x}}E(\mathbf{x})\), whereas an AM will directly output a smooth energy \(E_{\boldsymbol{\theta}}(\mathbf{x})\) on which the gradient \(-\nabla_{\mathbf{x}}E_{\boldsymbol{\theta}}(\mathbf{x})\) can be directly calculated and descended. The discrete (continuous) update rules of both are identical up to a step size \(\alpha\) (time constant \(\tau\)).
* **Both converge to a solution that lies in the neighborhood of a local energy minimum.** In DMs, this behavior is a consequence of the manner in which they are trained: the final output \(\mathbf{x}^{T}\) exists in a region such that a small perturbation in any direction would increase its energy. In AMs, this statement is a requirement of the Lyapunov function; if the dynamics progress for a sufficiently long time, we are guaranteed to reach a true fixed point \(\mathbf{x}^{\star}\) that lies at a local energy minimum.
### Reconciling the Differences
DMs and AMs are certainly not equivalent methods. However, we have discovered evidence that the theoretical differences between DMs and AMs are not so significant in practice. We present rebuttals to potential objections (which we call "misconceptions") that Diffusion Models do not simulate Associative Memories below.
**Misconception 1: Diffusion models are not fixed point attractors.** Though the dynamics of DMs have no theoretical guarantees of fixed point attractors, we notice that the design of DMs seems to intelligently engineer the behavior of fixed-point attractors like AMs without constraining the architecture to represent a Lyapunov function. We identify two fundamental tricks used by DMs that help approximate stable dynamics:
**Trick 1**: DMs explicitly halt their reconstruction process at time \(t=T\) (i.e., requiring \(\mathbf{x}^{\star}=\mathbf{x}^{T}\)) and are thus only defined for time \(t\in[0,T]\). \(\mathbf{x}^{T}\) then represents a _contrived fixed point_ because no further operations change it. We can say that \(\mathbf{x}^{t\neq T}\) corresponds to a data point with some corruption and \(\mathbf{x}^{T}\) corresponds to a _memory_ in the language of AMs.
**Trick 2**: We know that \(\mathbf{x}^{T}\) approximates a local energy minimum because of the _noise annealing_ trick introduced by [2] and used by all subsequent DMs. In the corruption process, points in the data distribution are perturbed with gradually increasing amounts of noise, implying that smaller noise is added at earlier points in time. This leads to a robust approximation of the true energy gradient localized around each training point, where the original data point lies at the minimum.
We additionally see evidence of DMs storing "memories" that are actual training points. [80] showed that one can retrieve training data almost exactly from publicly available DMs by descending an energy conditioned on prompts from the training dataset. It seems that this behavior is particularly evident for images considered outliers and for images that appear many times. Viewing DMs as AMs, this behavior is not surprising, as the whole function of AMs is to retrieve data (or close approximations to the data) that has been seen before.
Tricks 1 & 2 also mean that a DM is inescapably bound to a knowledge of the current time \(t\). The time \(t\) defines not only how much noise the model should expect to remove from a given
signal in a single step, but also how much total noise it should expect to remove from a given signal. Given a signal with an unknown quantity of noise, a user must either make a "best guess" for the time \(t\) corresponding to this amount of noise, or restart the dynamics at time \(t=0\) which will cause the model to make large jumps around the energy landscape and likely land it in a different energy minimum. Currently, AMs have no such dependence between corruption levels and time \(t\) and can run continuously in time.
**Misconception 2: Diffusion models can only undo Gaussian noise.**: In order for a DM to behave like an AM, it must be able to undo any kind of corruption (e.g., inpainting, blur, pixelation, etc.), not just the white- or Gaussian-noise associated with Brownian motion as originally formulated in [1, 2]. [81, 82] showed that the performance of DMs can actually improve when considering other types of noisy corruption in the forward process. However, it also seems that DMs can learn to reverse any kind of corrupting process. [59] empirically show that DMs can be trained to invert arbitrary image corruptions that generate samples just as well as those trained to invert Gaussian noise. Because DMs can be trained to undo any kind of corruption, they can exhibit behavior identical to that of Associative Memory which focuses on the "fixed points" of the energy landscape rather than the kind of denoising/de-corrupting steps required to get there.
**Misconception 3: Unconstrained Diffusion Models work with any architecture.**: One advantage of DMs over AMs is that they are "unconstrained" and can use any neural network architecture to approximate the score function; in other words, the architecture is not required to be the gradient of an actual scalar energy. The only requirement is that the neural network chosen to approximate the score must be _isomorphic_ such that the function's output is the same shape as its input (i.e., \(\mathbf{F}_{\boldsymbol{\theta}}:\mathbb{R}^{d}\mapsto\mathbb{R}^{d}\)). However, not all isomorphic architectures are created equal and only select architectures are used in practice. Both standard feedforward networks [51] and vanilla Transformers have struggled to generate quality samples using diffusion [83, 84]. Most applications of DMs use some modification of the U-Net architecture [85] originally used by [4], though the original DM paper [1] used shallow MLPs, and recent work [83] has shown how to engineer vision Transformers [86] to achieve a similar reconstruction quality as U-Nets on images.
**Misconception 4: Diffusion Models with explicit energy don't work.**: Though DMs characterize an energy landscape by modeling its gradient everywhere, they do not inherently have a concept of the energy value itself. However, [21] showed that one can actually compute an exact energy for DMs using the instantaneous change of variables formula from [87], with the caveat that this equation is expensive to compute; estimations over direct calculations of the energy computation are preferred in practice [20].
Another approach for enforcing an energy on DMs is to choose an architecture that parameterizes an actual energy function, whose gradient is the score function. Ref. [51] researched exactly this, exploring whether a generic learnable function \(\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{x};t)\) that is constrained to be the true gradient of a parameterized energy function as in Eq. 19 is able to attain sample quality results similar to those of unconstrained networks.
\[E_{\boldsymbol{\theta}}(\mathbf{x};t)=\frac{1}{2\sigma(t)}||\mathbf{x}- \mathbf{f}_{\boldsymbol{\theta}}(\mathbf{x};t)||^{2} \tag{19}\]
The score \(\mathbf{F}_{\boldsymbol{\theta}}\) of this energy can be written by computing the analytical gradient
\[\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x};t)=\frac{1}{\sigma(t)}(\mathbf{x }-\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{x};t))\nabla_{\mathbf{x}}\mathbf{f} _{\boldsymbol{\theta}}(\mathbf{x};t)-\frac{1}{\sigma(t)}(\mathbf{x}-\mathbf{f }_{\boldsymbol{\theta}}(\mathbf{x};t))\,. \tag{20}\]
Ref. [51] notes that the second term of the equation is the standard DM, while the first term involving \(\nabla_{\mathbf{x}}\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{x};t)\) is new and helps guarantee that \(\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x};t)\) is a conservative vector field. They also showed that constraining the score function to be the gradient of the energy in Eq. 19 does not hurt the generation performance on the CIFAR dataset [88] and provides hope that AMs with constrained energy can match the performance of unconstrained DMs.
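In practice, the constrained parameterization of Eq. 19 is convenient to realize with automatic differentiation: one defines the scalar energy and lets autograd produce the conservative score of Eq. 20, rather than hand-coding both terms. The tiny MLP standing in for \(\mathbf{f}_{\boldsymbol{\theta}}\) and the noise schedule below are assumptions made for illustration, not the architecture used in [51].

```python
import torch
import torch.nn as nn

class EnergyParameterizedScore(nn.Module):
    """Score defined as the exact (negative) gradient of the scalar energy in Eq. 19,
    E(x; t) = ||x - f_theta(x; t)||^2 / (2 sigma(t)), so the score is conservative by construction."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim + 1, hidden), nn.SiLU(), nn.Linear(hidden, dim))

    def energy(self, x, t, sigma):
        f_out = self.f(torch.cat([x, t[:, None]], dim=-1))
        return ((x - f_out) ** 2).sum(dim=-1) / (2.0 * sigma(t))

    def score(self, x, t, sigma):
        x = x.detach().requires_grad_(True)
        E = self.energy(x, t, sigma).sum()
        # autograd produces both terms of Eq. 20, including the grad f_theta term
        return -torch.autograd.grad(E, x, create_graph=self.training)[0]

# Usage sketch with an assumed noise schedule.
sigma = lambda t: 0.1 + 0.9 * t
model = EnergyParameterizedScore(dim=2)
x, t = torch.randn(16, 2), torch.rand(16)
F_theta = model.score(x, t, sigma)        # same shape as x, i.e., an isomorphic map
```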
## 6 Conclusions & Open Challenges
Diffusion Models and Associative Memories have remarkable similarities when presented using a unified mathematical notation: both aim to minimize some energy (maximize some log-probability)
by following its gradient (score). The final solution of both approaches represents some sort of _memory_ that lies in a local minimum (maximum) of the energy (log-probability). However, these approaches are certainly not identical, as is evidenced by different validity constraints on architecture choices and time domains. The training philosophy behind each approach is also different: Diffusion Models assume that the energy function is intractable and fixate on training the gradient of the energy (using known perturbations of training data as the objective), while AMs focus on learning the fixed points of a tractable energy.
### Directions for Associative Memories
Associative Memories have not gained nearly the traction of Diffusion Models in AI applications. Many researchers in the field focus on the TinyBrain architecture, trying to improve its theoretical memory capacity [89; 90] or apply related models to modern problems [91]. Other researchers are integrating the memory-retrieval capabilities of AMs into existing feed-forward architectures [31; 92; 93]; in doing so they discard the idea of global optimization on the energy. In part, these other research directions exist because no pure AM has shown impressive performance on large data until [32] introduced the Energy Transformer, though even this AM does not yet show significant performance gain over traditional methods.
The empirical success of Diffusion Models across many domains should provide hope that modern AM architectures [61; 62] can achieve performance parity on similar tasks. Constrained Diffusion Models show no worse performance than unconstrained models [51], and the introduction of HAMs allow AMs to be built that resemble U-Nets [62]. Even the training pipeline of a Diffusion Model can be mapped directly to an AM paradigm if we condition the energy function and optimization procedure on the time: from a starting image \(\mathbf{x}^{*}\) in the training set, repeatedly add varying amounts of random noise and train the score function \(-\nabla_{\mathbf{x}}E_{\theta}(\mathbf{x}^{t};t)\) to predict the added noise. Once trained, Diffusion Models have even shown strong steerability - the ability for a user to modify a trained Diffusion Model to complete tasks it was not trained to perform [1; 94; 95; 96; 52; 97; 98; 99; 100]. We can expect similar levels of controllability from a fully trained AM.
### Directions for Diffusion Models
Diffusion Models should find the theoretical framework of the Lyapunov function from AMs compelling, as it defines systems around fixed-point attractors that seem to already exist in Diffusion Models. Perhaps the score function \(\mathbf{F}_{\theta}\) learned by Diffusion Models has already learned to behave similarly? If the score function is shown to be a conservative vector field as in [51], perhaps the constraint of finite time in Diffusion Models is unnecessary and Diffusion Models can behave well in continuous time \(t>T\). Viewing Diffusion Models as fundamentally fixed-point attractors like AMs, it also becomes possible to theoretically characterize its memory capacity. Finally, recent research has focused on optimizing the sampling speed of Diffusion Models by improving the scheduling step in the update rule [57; 101; 102; 103]. By viewing this sampling procedure as ordinary gradient descent (as is the case in AMs), smart gradient optimization techniques already used in Deep Learning like ADAM [104] and L-BFGS [105] become available.
### Beyond Diffusion: Transformers Resemble Associative Memories
Diffusion Models are not the only method in modern Deep Learning to show similarities to AMs. In fact, recent literature shows increasing connections between the operations of AMs and deep architectures, e.g. feed-forward networks [93], convolutional neural networks [61], Transformer architecture [31], and optimization processes of Ordinary Differential Equations (ODEs) [106; 107; 108; 109]. In 2020 [31] discovered that the attention mechanism of Transformers strongly resembled a single-step update of a Dense Associative Memory [67] under the \(\operatorname{softmax}(\cdot)\) activation function, which exhibits similar properties to the power and \(\exp(\cdot)\) functions studied in [67; 71]. However, it is incorrect to call their contrived energy function an AM as it is integrated as a conventional feed-forward layer in the standard Transformer block and applied only using a single gradient descent step. Of particular interest to the scope of this paper is the following question: if Transformers have strong resemblances to ODEs and the attention operation is similar to a single step update of a DAM, what are the differences between an AM with desirable attractor dynamics and a Transformer?
A recent work by [32] explores this question, deriving a new "Energy Transformer" block that strongly resembles the conventional Transformer block, but whose fundamental operation outputs an energy. This energy satisfies all the desired properties of an AM, which allows us to interpret the forward pass through a stack of Transformer blocks as gradient descent down the block's energy.
### Scaling Laws from the Perspective of Associative Memory
The performance of Transformers on language is famously characterized by the "scaling laws", which claim that a model's performance will improve as a power-law with model size, dataset size, and the amount of compute used for training [110]. We expect similar behaviors to hold for Diffusion Models, though a similar study has not yet been conducted. However, the "scaling laws" are empirical only, and there is little theory to justify why a model's performance would continue to grow with the size. AMs offer one possible theory by characterizing large-model performance as one of _memory capacity_ (see § 4.3). In the world of AMs, this scaling behavior makes intuitive sense: more parameters means more possible memories that can be stored; more data means more meaningful local minima in the energy; and more compute means the model can descend further down the energy, making it easier to distinguish between nearby fixed points (alternatively, more compute can imply that longer training allows the model to distribute the fixed points in the energy more efficiently, allowing for greater memory capacity). These hypotheses for understanding the scaling law come from intuitively understanding large models as AMs, but this is still an open research question.
Both Transformers and Diffusion Models are ubiquitous choices for foundation models [111] in Deep Learning today, and both strongly resemble Associative Memories. We believe that the trajectory of AI research would benefit by interpreting the problem of unsupervised learning on large, unstructured data from the perspective of Associative Memory.
### Closing Remarks
Very few researchers will observe the rapid advances of AI today and notice a trend towards the dynamical processes of Associative Memories first established by John Hopfield in the 1980s. However, many of the theoretical guarantees of Associative Memories are captured in the design of increasingly popular Diffusion Models that have proven themselves fixtures for many applications of generative modeling. This survey represents a first step towards a more comprehensive understanding of the connections between Diffusion Models and Associative Memories. We hope that our work inspires further research into these exciting fields and that it helps to foster a new generation of AI systems that are capable of unlocking the secrets of memory and perception.
|
2304.08592 | Improving Scene Text Recognition for Character-Level Long-Tailed
Distribution | Despite the recent remarkable improvements in scene text recognition (STR),
the majority of the studies focused mainly on the English language, which only
includes few number of characters. However, STR models show a large performance
degradation on languages with a numerous number of characters (e.g., Chinese
and Korean), especially on characters that rarely appear due to the long-tailed
distribution of characters in such languages. To address such an issue, we
conducted an empirical analysis using synthetic datasets with different
character-level distributions (e.g., balanced and long-tailed distributions).
While increasing a substantial number of tail classes without considering the
context helps the model to correctly recognize characters individually,
training with such a synthetic dataset interferes the model with learning the
contextual information (i.e., relation among characters), which is also
important for predicting the whole word. Based on this motivation, we propose a
novel Context-Aware and Free Experts Network (CAFE-Net) using two experts: 1)
context-aware expert learns the contextual representation trained with a
long-tailed dataset composed of common words used in everyday life and 2)
context-free expert focuses on correctly predicting individual characters by
utilizing a dataset with a balanced number of characters. By training two
experts to focus on learning contextual and visual representations,
respectively, we propose a novel confidence ensemble method to compensate the
limitation of each expert. Through the experiments, we demonstrate that
CAFE-Net improves the STR performance on languages containing numerous number
of characters. Moreover, we show that CAFE-Net is easily applicable to various
STR models. | Sunghyun Park, Sunghyo Chung, Jungsoo Lee, Jaegul Choo | 2023-03-31T06:11:33Z | http://arxiv.org/abs/2304.08592v1 | # Improving Scene Text Recognition for Character-Level Long-Tailed Distribution
###### Abstract
Despite the recent remarkable improvements in scene text recognition (STR), the majority of the studies focused mainly on the English language, which includes only a small number of characters. However, STR models show a large performance degradation on languages with a large number of characters (e.g., Chinese and Korean), especially on characters that rarely appear due to the long-tailed distribution of characters in such languages. To address such an issue, we conducted an empirical analysis using synthetic datasets with different character-level distributions (e.g., balanced and long-tailed distributions). While increasing a substantial number of tail classes without considering the context helps the model to correctly recognize characters individually, training with such a synthetic dataset interferes with the model's learning of the contextual information (i.e., relation among characters), which is also important for predicting the whole word. Based on this motivation, we propose a novel Context-Aware and Free Experts Network (CAFE-Net) using two experts: 1) the context-aware expert learns the contextual representation, trained with a long-tailed dataset composed of common words used in everyday life, and 2) the context-free expert focuses on correctly predicting individual characters by utilizing a dataset with a balanced number of characters. By training the two experts to focus on learning contextual and visual representations, respectively, we propose a novel confidence ensemble method to compensate for the limitations of each expert. Through the experiments, we demonstrate that CAFE-Net improves the STR performance on languages containing a large number of characters. Moreover, we show that CAFE-Net is easily applicable to various STR models.
## 1 Introduction
Recent studies on scene text recognition (STR) models have shown remarkable performance. As the most commonly spoken language worldwide, the English language has been the main focus of the existing STR studies [38, 3, 4, 41]. However, achieving high performance on other languages with the existing models is a non-trivial task, especially when the languages have numerous characters (_e.g.,_ letters, numbers, and symbols), unlike English. More specifically, English has only 26 letters, while Asian languages like Chinese and Korean have thousands of letters.
There exist a few studies that try to improve STR performance on languages other than English [5, 18]. However, they overlook the fact that languages with a large number of characters exhibit a _long-tailed distribution at the character level_. Due to the character-level long-tailed distribution, the model mainly focuses on learning the head characters (_i.e.,_ those which frequently appear when forming words) while focusing less on learning the tail characters (_i.e.,_ those which rarely appear in words). This leads to significant performance degradation on the tail classes, a commonly observed phenomenon in existing long-tailed recognition studies [21, 36], as shown in Fig. 1.
Although synthetic datasets such as SynthText [13] are often utilized in STR, constructing synthetic datasets with a balanced number of characters is challenging. To be more specific, to alleviate the performance degradation due to the long-tailed distribution, existing image classification studies generally proliferate the data samples of the tail classes when constructing a balanced set of classes. In STR, however, increasing the number of words including the tail characters also increases the number of the head characters when they are included in the same word. While generating words only including tail classes is one straightforward solution, those words generally do not include the contexts people use in their everyday life since tail classes are rarely used in common words. Such an issue makes it challenging to construct a synthetic dataset for STR that can improve the performance on the tail characters, especially when the characters show a long-tailed distribution.
This paper is the first work to address the STR task in terms of the _character-level long-tailed distribution_. Such a long-tailed distribution of the characters causes a significant performance drop on tail characters in STR. We investigate the character-level long-tailed distribution by constructing two synthetic datasets having different character-level distributions: (1) one created from common words to preserve the context (_i.e.,_ WikiSynth) and (2) the other with randomly combined characters, which has a balanced distribution but lacks such contextual information (_i.e.,_ RandomSynth). While training with WikiSynth encourages the model to learn contextual information, the model fails to predict the tail classes correctly due to the long-tailed distribution of characters. In contrast, using RandomSynth helps to correctly predict characters individually by focusing on the visual representation and enhances the performance on tail classes significantly, but such training interferes with the model's learning of the contextual information.
Based on the findings, we propose a Context-Aware and Free Experts Network (CAFE-Net), a _simple yet effective_ approach, which utilizes the confidence score for aggregating experts handling different character-level distributions. At a high level, we train two experts separately: (1) a context-aware expert that focuses on learning the contextual representation using a dataset including characters with a long-tailed distribution and (2) a context-free expert that learns the visual representation by utilizing a dataset with a balanced number of characters. Additionally, we propose a new evaluation metric termed 'character-level (char) F1 score', which is more suitable than existing word-level evaluation metrics (_e.g.,_ accuracy) for character-level analysis. Extensive experiments demonstrate that CAFE-Net significantly outperforms the existing methods in predicting the tail characters, while improving the performance on predicting whole words for languages containing a large number of characters. Furthermore, we demonstrate the applicability of CAFE-Net by combining it with various STR models.
The main contributions of our work are as follows:
* To the best of our knowledge, this is the first work to handle the STR model in terms of the languages with character-level long-tailed distributions.
* To take care of learning both contextual and visual information, we propose a novel CAFE-Net using context-aware and context-free experts, which separately handles different character-level distributions.
* We demonstrate the superior performance of our method and its applicability through experiments.
## 2 Related Work
**Scene Text Recognition.** A recent study [3] proposes a widely used STR framework composed of four stages by analyzing the performance gains of each module in a model. Leveraging such a well-performing framework, the majority of studies in STR mainly focused on English [32, 46]. Several studies [5, 18] propose unified methods to handle multiple languages. However, such existing multilingual STR approaches do not consider the characteristics of each language (_e.g.,_ long-tailed distribution of characters). Another recent work tackled the vocabulary reliance problem [41] at the word level, which mitigates the poor generalization on images with words not included in the vocabulary of a training dataset. In contrast to the previous STR studies, to the best of our knowledge, this is the first work to address the _character level_ long-tailed distribution in STR.
**Long-tailed Recognition.** There exist numerous datasets, which have long-tailed distributions in the real world. Previous studies addressing the long-tailed distribution focused on improving loss functions [28, 9], augmenting the data
samples of the tail classes [7, 27], and adjusting the logits [21, 26, 30, 36]. Recent studies proposed using multiple experts specialized for correctly predicting under a certain label distribution [49, 44, 48]. Such a design enables to handle different label distributions within a single model. Inspired by such a design, we train two different experts specialized to learn contextual and visual representation, respectively, by taking account of the characteristic of STR.
## 3 Motivation
**Overview.** This section investigates the impact of the character-level long-tailed distribution in STR. We first describe several synthetic datasets, which are generated by shifting the character-level distribution (_e.g._, varying from long-tailed datasets to balanced datasets) in Section 3.1. Moreover, we introduce the character-level F1 score in Section 3.2. Next, we show the effectiveness of each synthetic dataset and analyze them in Section 3.3. We use the TRBA model [3], a representative STR framework, for the experiments in this section. The details of the STR framework we use are described in the supplementary.
### Synthetic Data
As widely used in the previous studies of STR [38, 3, 45, 11, 2], we utilize synthetic data for training. We use Korean and Chinese for the languages, which include the long-tailed distributions at the character level. We construct the training datasets for each language by following SynthText [13], which is one of the synthetic datasets generated by using a large corpus and diverse backgrounds. We generate new synthetic datasets for the study by shifting the character-level distribution as shown in Fig. 2.
**WikiSynth (WS)** This dataset utilizes the Wikipedia text corpus. The wiki corpus is split into word units using a tokenizer for each language. The maximum word length is set to 25. The number of samples in the training and test sets for Chinese and Korean are 5,000,000 and 10,000, respectively. Since WS is generated from common words, it has a long-tailed distribution at the character level, which is generally observed in languages with a large number of characters.
**RandomSynth (RS)** In contrast to WS, RS is a character-level balanced dataset, where words are generated by randomly combining characters. Since RS samples the characters uniformly, the dataset does not consider the context, so it does not contain the words generally used in the real world. RS contains the same number of images as WS for a fair comparison. As previous studies in long-tailed recognition [6] evaluate the models with the balanced test set, we use RS as the character-level balanced test set in STR.
**CombinedSynth (CS)** WS and RS each have their own limitations. To be more specific, models trained with WS fail to learn _few_ characters, while training with RS interferes with the model's learning of the contextual information between characters. A viable option for solving these problems is to mix WS and RS. CS is composed of WS and RS with an equal number of images from each dataset to compensate for the limitations of each dataset.
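For illustration, the snippet below sketches how WS-style and RS-style word labels could be sampled before rendering; the toy corpus, alphabet, and length limit are placeholders (assumptions), and the actual datasets are rendered as images following SynthText [13].

```python
import random

def sample_ws_word(corpus_words):
    """WikiSynth-style label: a real word keeps context but inherits the long tail."""
    return random.choice(corpus_words)

def sample_rs_word(alphabet, max_len=25):
    """RandomSynth-style label: uniformly sampled characters, balanced but context-free."""
    length = random.randint(1, max_len)
    return "".join(random.choice(alphabet) for _ in range(length))

# Toy usage with a placeholder corpus; the real corpora are Korean/Chinese Wikipedia.
corpus_words = ["학교", "컴퓨터", "한국어"]
alphabet = sorted(set("".join(corpus_words)))
ws_label, rs_label = sample_ws_word(corpus_words), sample_rs_word(alphabet)
```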
### Character-Level F1 Score
Accuracy is a widely used evaluation metric, which evaluates whether a model correctly outputs all the characters in a given word. Since the accuracy only considers the performance of STR at the word level, we propose a novel evaluation metric termed 'char F1 score' to evaluate the performance on the character level. When obtaining the char F1 score, we 1) perform the sequence alignment of ground truth and predicted characters, 2) compute the F1 score per character, and 3) average these scores. We report the F1 score in addition to the accuracy since it is more suitable than accuracy when evaluating models with an imbalanced number of data samples. The details of char F1 score are described in the supplementary.
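Since the exact procedure is deferred to the supplementary, the snippet below is only an assumed approximation of the three steps above: it aligns ground truth and prediction with a generic sequence matcher, accumulates per-character counts, and macro-averages the per-character F1 scores.

```python
import difflib
from collections import defaultdict

def char_f1(gt_pred_pairs):
    """Macro-averaged per-character F1 over aligned (ground truth, prediction) pairs."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for gt, pred in gt_pred_pairs:
        # 1) character-level alignment of ground truth and prediction
        for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, gt, pred).get_opcodes():
            if tag == "equal":
                for c in gt[i1:i2]:
                    tp[c] += 1
            else:                                   # replace / delete / insert
                for c in gt[i1:i2]:
                    fn[c] += 1                      # ground-truth characters that were missed
                for c in pred[j1:j2]:
                    fp[c] += 1                      # predicted characters that are wrong
    # 2) F1 score per character class, 3) averaged over classes
    scores = []
    for c in set(tp) | set(fp) | set(fn):
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

# Usage: char_f1([("세종대왕", "세종대왕"), ("한라산", "한나산")])
```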
Since we address the long-tailed distribution of characters, we categorize the characters into three groups. For simplicity, we denote \(n_{i}\) as the number of training samples
Figure 3: (a) Accuracy on \(\textbf{Real}_{Easy}\) of models trained with WS, CS, and RS individually using Korean. Since RS and CS include randomly combined characters, the models trained with RS and CS exhibit lower accuracy compared to the model trained on WS. (b) Char F1 score on \(\textbf{Real}_{Hard}\). We observe that training models with RS and CS improves the recognition performance on individual characters.
Figure 2: Character-level distribution of WikiSynth (WS), RandomSynth (RS), and CombinedSynth (CS). Unlike WS, both RS and CS include a sufficient number of characters for all classes.
including the \(i^{\text{th}}\) character in a given dataset. The characters are categorized according to \(n_{i}\): 1) _many_ (i.e., \(n_{i}\geq 1500\)), 2) _medium_ (i.e., \(100\leq n_{i}<1500\)), and 3) _few_ (i.e., \(n_{i}<100\)). Straightforwardly, char F1 scores of _few_ characters are much lower than those of _many_ characters when training models with WS, as shown in Fig. 3 (b).
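The grouping rule can be written directly from the thresholds above, with \(n_{i}\) counted over the training labels:

```python
from collections import Counter

def categorize_characters(train_labels):
    """Bin characters into many / medium / few by their training-set frequency n_i."""
    counts = Counter(c for word in train_labels for c in word)
    groups = {"many": set(), "medium": set(), "few": set()}
    for char, n in counts.items():
        if n >= 1500:
            groups["many"].add(char)
        elif n >= 100:
            groups["medium"].add(char)
        else:
            groups["few"].add(char)
    return groups
```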
### Tradeoff between Context-Free and Context-Aware Learning
We use the AI Hub dataset [1], a publicly available Korean dataset, as the Korean test set, noted as '**Real**'. Additionally, we divide the **Real** dataset into two types of test sets: 1) a test set without _few_ characters (_i.e.,_ **Real\({}_{Easy}\)**) and 2) a test set including _few_ characters (_i.e._, **Real\({}_{Hard}\)**). The details of the experimental setup are described in the supplementary.
We evaluate the models on **Real\({}_{Easy}\)** and **Real\({}_{Hard}\)** by individually training them with WS, RS, and CS using Korean. Note that the model trained with WS primarily relies on contextual information for making predictions, whereas the one trained with RS mainly uses visual information while lacking contextual information. We observe a tradeoff between using WS and RS as the training set. Fig. 3 (a) demonstrates that training with WS improves the _accuracy_ on **Real\({}_{Easy}\)** compared to training with CS or RS. On the other hand, Fig. 3 (b) shows that training with CS or RS improves the _char F1 score_ for all _many_, _medium_, and _few_ characters when evaluated on **Real\({}_{Hard}\)**, compared to training with WS.
Through the experiments, we found that a model focused on learning visual information without contexts (_i.e.,_ trained with RS or CS) can correctly predict individual characters, which is important for improving the performance of long-tailed recognition, especially for _few_ characters. However, the model focusing on learning the contextual information (_i.e.,_ trained with WS) shows improved accuracy even with a low char F1 score. This indicates that capturing the contextual information is crucial for correctly predicting all characters of a given word, especially for frequently appearing words. Without such an understanding of the contextual information, models show limited accuracy even with a high char F1 score. Therefore, to improve the recognition of both individual characters and whole words, we need to enhance both visual and contextual representations.
## 4 Method
**Overview.** Based on the empirical analysis, we propose a Context-Aware and Free Experts Network termed 'CAFE-Net'. Different from previous STR methods, we utilize two types of training datasets, which have different label distributions (_e.g.,_ WS and RS). As described in Fig. 4, our model consists of two main experts: (1) context-aware expert trained with WS to focus on the contextual representation via utilizing an external language model; (2) context-free expert trained with a balanced number of characters (_i.e.,_ RS) to improve the performance on _few_ characters. By dividing the roles of two experts, it is possible to improve the performance on _few_ characters while understanding the contextual information.
Different from the existing STR methods, we utilize two synthetic datasets (_i.e._, WS and RS) separately during training. Let \(\{x_{ca},y_{ca}\}\sim\mathcal{D}_{ca}\) and \(\{x_{cf},y_{cf}\}\sim\mathcal{D}_{cf}\) denote training images and labels sampled for training the context-aware expert and the context-free expert, respectively. In specific, we utilize WS and RS for \(\mathcal{D}_{ca}\) and \(\mathcal{D}_{cf}\), respectively. In the following, we illustrate the details of our method and its objective functions.
**Feature Extractor.**\(x_{ca}\) and \(x_{cf}\) are fed into the feature extractor to acquire the context-aware and context-free feature representations \(f_{ca}\) and \(f_{cf}\), respectively. In our framework, two experts share the same feature extractor. Sharing weights largely reduces the computational complexity in the inference phase. For the feature extractor, various model architectures can be utilized such as ResNet [15] and vision transformer (ViT) encoder [2].
**Context-Free Expert.** Given the feature representation \(f_{cf}\) that is extracted from \(x_{cf}\), the context-free expert produces the output feature \(h_{cf}=\{h_{cf}^{(1)},\ldots,h_{cf}^{(T)}\}\) of the corresponding words \(\hat{y}_{cf}=\{\hat{y}_{cf}^{(1)},\ldots,\hat{y}_{cf}^{(T)}\}\). Here, \(T\) denotes the maximum length of the word. Due to the balanced number of characters, the context-free expert predicts _few_ characters correctly more often than the context-aware expert. This is mainly due to the fact that the random sequences of characters, devoid of semantic meaning, make the context-free expert prioritize learning the visual representation over the contextual representation.
**Context-Aware Expert.** Different from the context-free expert, the context-aware expert is trained with \(\mathcal{D}_{ca}\) to focus on learning the contextual information, which is essential to predict the whole words accurately. Inspired by recent context-aware STR methods [45, 11], we leverage an external language model to capture semantic information to assist STR. Specifically, with the feature representations \(f_{ca}\) and \(f_{cf}\), the context-aware expert produces the output feature. Then, an external language model refines the output of the context-aware expert. Finally, the outputs of the context-aware expert and the language model are fused to produce the final output feature. In summary, the context-aware expert with the external language model produces the final output feature \(h_{ca}=\{h_{ca}^{(1)},\ldots,h_{ca}^{(T)}\}\) of the corresponding words \(\hat{y}_{ca}=\{\hat{y}_{ca}^{(1)},\ldots,\hat{y}_{ca}^{(T)}\}\).
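The skeleton below illustrates only the structural idea of the two experts sharing one feature extractor; the backbone, decoders, external language model, and fusion are injected placeholders (assumptions for illustration), not the concrete CAFE-Net modules, whose components follow the base STR model.

```python
import torch
import torch.nn as nn

class TwoExpertSkeleton(nn.Module):
    """Structural sketch only: backbone, decoders, language model, and fusion are
    injected placeholders, not the concrete modules of CAFE-Net."""

    def __init__(self, backbone, cf_decoder, ca_decoder, language_model, fuse):
        super().__init__()
        self.backbone = backbone              # shared feature extractor
        self.cf_decoder = cf_decoder          # context-free expert head (trained on RS)
        self.ca_decoder = ca_decoder          # context-aware expert head (trained on WS)
        self.language_model = language_model  # external LM refining the context-aware output
        self.fuse = fuse                      # fusion of visual and linguistic features

    def forward(self, x_cf=None, x_ca=None):
        out = {}
        if x_cf is not None:
            out["cf"] = self.cf_decoder(self.backbone(x_cf))
        if x_ca is not None:
            h = self.ca_decoder(self.backbone(x_ca))
            out["ca"] = self.fuse(h, self.language_model(h))
        return out

# Toy instantiation with dummy modules, only to show the data flow.
dummy = TwoExpertSkeleton(
    backbone=nn.Flatten(),
    cf_decoder=nn.LazyLinear(10),
    ca_decoder=nn.LazyLinear(10),
    language_model=nn.Identity(),
    fuse=lambda visual, linguistic: (visual + linguistic) / 2,
)
outputs = dummy(x_cf=torch.randn(2, 3, 32, 100), x_ca=torch.randn(2, 3, 32, 100))
```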
**Objective Functions.** The context-free and context-aware experts are trained by the same objective function that minimizes negative log-likelihood of the conditional probability
of word label \(y_{cf}\). Formally, loss function \(\mathcal{L}\) is as follows:
\[\mathcal{L}=-\frac{1}{T}\sum_{t=1}^{T}\log p(y^{t}|h^{t}), \tag{1}\]
where \(y^{t}\) is the \(t\)-th ground truth character.
**Confidence Ensemble.** During inference, we aggregate the outputs of two experts. The output probability of each expert is defined as:
\[p(\hat{y})=\prod_{t=1}^{l}p(\hat{y}^{t}|\hat{y}^{<t}), \tag{2}\]
where \(\hat{y}^{<t}=y^{1}\cdots y^{t-1}\) and \(l\) is the length of the predicted word. Specifically, we ignore the \(pad\) token and only consider the characters preceding the \(eos\) token, where the \(eos\) token indicates the end of the word.
To ensemble the outputs of two experts, we leverage the maximum softmax probability, which represents the probability \(p(\hat{y}^{t}|\hat{y}^{<t})\) of the predicted character. The confidence score score\((\hat{y})\) of each expert is calculated based on the maximum softmax probability of the characters as follows:
\[\text{score}(\hat{y})=\frac{1}{l}\sum_{t=1}^{l}\log(\max(p(\hat{y}^{t}|\hat{y }^{<t}))), \tag{3}\]
where we apply length normalization, i.e., we normalize the score using the length \(l\) of the predicted word. Since the probabilities \(p(\hat{y}^{t})\) are all less than one, multiplying many such values would bias the confidence score toward shorter words; to address this issue, we divide the confidence score by the word length \(l\). We denote the confidence scores of the context-aware expert and context-free expert as \(\text{score}(\hat{y}_{ca})\) and \(\text{score}(\hat{y}_{cf})\), respectively. Between \(\text{score}(\hat{y}_{ca})\) and \(\text{score}(\hat{y}_{cf})\), we select the output with the higher confidence score. Then, the final predicted word is computed by taking the highest-probability character at each time step \(t\). Intuitively, since the maximum softmax probabilities of the two experts vary depending on the characters, CAFE-Net is capable of selecting the word prediction properly during inference by utilizing the confidence scores obtained from the two experts.
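A sketch of the resulting inference rule is given below: for each expert, truncate the prediction at the \(eos\) token, average the log of the per-step maximum softmax probability as in Eq. 3, and keep the word prediction of the more confident expert. The token-index conventions (e.g., which class index is \(eos\)) are assumptions.

```python
import numpy as np

def length_normalized_confidence(step_probs, eos_id):
    """step_probs: (T, num_classes) softmax outputs of one expert for one image.
    Returns (confidence score of Eq. 3, predicted character ids before the eos token)."""
    preds = step_probs.argmax(axis=1)
    eos_pos = np.where(preds == eos_id)[0]
    l = int(eos_pos[0]) if len(eos_pos) else len(preds)   # keep only characters before eos
    if l == 0:
        return -np.inf, preds[:0]
    max_p = step_probs[np.arange(l), preds[:l]]           # max softmax probability per step
    return float(np.log(max_p).mean()), preds[:l]         # (1/l) * sum_t log max p

def confidence_ensemble(probs_ca, probs_cf, eos_id):
    score_ca, pred_ca = length_normalized_confidence(probs_ca, eos_id)
    score_cf, pred_cf = length_normalized_confidence(probs_cf, eos_id)
    return pred_ca if score_ca >= score_cf else pred_cf   # keep the more confident expert

# Toy usage: random "softmax" outputs over 5 classes, with class 4 assumed to be eos.
rng = np.random.default_rng(0)
def fake():
    z = rng.random((25, 5))
    return z / z.sum(axis=1, keepdims=True)
final_prediction = confidence_ensemble(fake(), fake(), eos_id=4)
```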
**Applicability of CAFE-Net.** Our proposed method provides a practical solution for addressing character-level long-tailed distribution in various STR models. In the supplementary, we describe how to integrate our method with representative STR models such as CNN-based models [3, 34] and ViT-based models [2]. While ensembling or utilizing multiple experts has been widely explored in other fields [25, 44, 49, 48], we want to emphasize that we shed light on _how_ to utilize ensembling in the character-level long-tailed STR. Notably, the key difference between character-level long-tailed STR and previous studies is that STR includes both vision and language modalities, where the model requires both visual and contextual information to predict the whole words. Due to this fact, simply adopting previous ensembling methods may not be directly applicable in STR. To solve such an issue, we first discover a crucial finding and propose a _simple yet effective_ method based on our finding.
## 5 Experiments
**Experimental Setup.** Since our training data for Chinese and Korean are synthetic, we additionally utilize the ICDAR 2017 MLT dataset (MLT17) [33], a real-world dataset, for each language to reduce the domain gap with real-world data. We filter the images of MLT17 containing Chinese and Korean for each language. We evaluate the models using accuracy, a widely used evaluation metric in STR.
We evaluate the performance of the models on large-scale real-world datasets.
Figure 4: Overview of CAFE-Net. CAFE-Net utilizes two different experts: a context-aware expert and a context-free expert. Context-aware expert is trained with \(\mathcal{D}_{ca}\) (_e.g._, WS) to focus on learning the contextual information. On the other hand, context-free expert focuses on learning recognizing individual characters, which is trained with \(\mathcal{D}_{cf}\) (_e.g._, RS), a balanced dataset with images including randomly sequenced characters. As evidenced by visualization of the maximum softmax probabilities of two experts, it is clear that the experts have different certainty depending on the characters. Based on this characteristic, CAFE-Net select the prediction with the higher confidence score from the two experts during the inference.
We utilize real-world datasets as test sets, denoted '**Real**'. Specifically, the AI Hub dataset [1] and the ICDAR 2019 ReCTS dataset [39] are publicly available real-world Korean and Chinese datasets, respectively. The AI Hub dataset, a real-world Korean STR dataset, includes 151,105 cropped images of street signs, traffic signs, and brands. ReCTS, a real-world Chinese STR dataset, contains 77,709 cropped images of Chinese signboards in street views with diverse backgrounds and fonts, and is a widely used benchmark dataset in the STR field. We choose these two datasets for evaluation since they contain a sufficient number of tail characters. The details of preprocessing the real-world datasets are described in the supplementary.
We assess the performance of the models using the synthetic test datasets (_e.g._, WS\({}_{test}\) and RS\({}_{test}\)) in addition to the real-world datasets. WS\({}_{test}\) is an imbalanced test set built from a real-world corpus, which contains common words. In contrast, RS\({}_{test}\) is a balanced test set but fails to preserve the contexts. Since WS\({}_{test}\) maintains the contexts, accuracy is an important evaluation metric for it, as accuracy requires a model to predict all characters of a given word correctly. However, WS\({}_{test}\) does not contain a sufficient number of _few_ characters. On the other hand, RS\({}_{test}\) is balanced at the character level, so the char F1 score is a more meaningful evaluation metric than accuracy. Therefore, we measure only accuracy for WS\({}_{test}\) and only char F1 score for RS\({}_{test}\), which are collectively referred to as '**Synth\({}_{test}\)**' in our experiments.
**Effectiveness of CAFE-Net.** We implement four models for the experiments: (i) CNN-based STR models: TRBA [3] and TextAdaIN [34], and (ii) ViT-based STR models: ViTSTR+Linear and ViTSTR+Attn [2]. Table 1 demonstrates that integrating CAFE-Net consistently improves accuracy across the evaluation datasets for both Korean and Chinese, except for TextAdaIN+Ours on Chinese **Real\({}_{Hard}\)**. We want to emphasize that our method leads to a large performance improvement compared to utilizing only a long-tailed dataset (_e.g._, WS), which is widely used in the STR field. These results demonstrate that appropriately addressing the character-level long-tailed distribution can enhance overall performance for languages with a large number of characters. Notably, our method achieves consistent performance improvements regardless of the model architecture, demonstrating its wide applicability.
**Comparison with Baselines.** A myriad of methods for handling long-tailed datasets [28, 21, 17, 36] have been introduced in recent years. Since we tackle the long-tailed distribution of characters in STR, we compare our proposed method with existing long-tailed recognition approaches. For the long-tailed recognition approaches, we adopt simple techniques that can be applied to the STR model: (1) Softmax: the model is trained with the standard cross-entropy loss, (2) Focal loss [28]: relatively easy classes (_i.e._, _many_ characters) are de-emphasized, (3) \(\tau\)-Normalization [21]: the weights of the classifier are normalized with the hyper-parameter \(\tau\), (4) PC Softmax [17]: the logits are modified based on the label distribution during inference, (5) Balanced Softmax [36]: the output logits are adjusted using the training label distribution. In this experiment, we apply the baselines to the TRBA model [3]. The implementation details of the baselines and our method are described in the supplementary. For a fair comparison with our method, we train the TRBA [3] model using CS.
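As a rough illustration, the following sketch shows the standard forms of these logit and weight adjustments as described in the cited works; it is not the authors' implementation, and the function names, tensor shapes, and the use of PyTorch are our assumptions.

```python
import torch
import torch.nn.functional as F

def balanced_softmax_loss(logits: torch.Tensor, targets: torch.Tensor,
                          class_counts: torch.Tensor) -> torch.Tensor:
    # Balanced Softmax [36]: shift the logits by the log class prior during training.
    prior = class_counts.float() / class_counts.sum()
    return F.cross_entropy(logits + prior.log(), targets)

def pc_softmax_logits(logits: torch.Tensor, class_counts: torch.Tensor) -> torch.Tensor:
    # PC Softmax [17]: compensate the logits with the training label prior at inference.
    prior = class_counts.float() / class_counts.sum()
    return logits - prior.log()

def tau_normalize(classifier_weight: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    # tau-Normalization [21]: rescale each class weight vector by its norm raised to tau.
    norms = classifier_weight.norm(dim=1, keepdim=True)
    return classifier_weight / norms.pow(tau)
```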
Table 2 summarizes the performance of the baselines and our method. The results demonstrate that our method significantly outperforms the baselines in accuracy, while showing comparable performance in char F1 score.
\begin{table}
\begin{tabular}{c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{**Method.**} & **Train** & \multicolumn{4}{c|}{_Korean_} & \multicolumn{4}{c}{_Chinese_} \\ & **Data** & **Real** & **Real\({}_{Easy}\)** & **Real\({}_{Hard}\)** & **Synth\({}_{test}\)** & **Real** & **Real\({}_{Easy}\)** & **Real\({}_{Hard}\)** & **Synth\({}_{test}\)** \\ \hline \hline \multicolumn{2}{c}{**CNN-based**} & & & & & & & & \\ \hline \multirow{2}{*}{TRBA} & WS & 78.25 & 79.43 & 34.14 & 87.47 & 39.19 & 42.99 & 9.25 & 83.23 \\ & CS & 77.43 & 77.87 & 61.25 & 86.37 & 41.83 & 41.72 & 42.71 & 83.31 \\ \cline{2-10} & CS & **81.35** & **81.75** & **66.68** & **88.93** & **47.67** & **48.09** & **44.34** & **86.22** \\ \hline \multirow{2}{*}{TextAdaIN} & WS & 80.35 & 81.57 & 34.54 & 86.97 & 41.33 & 45.78 & 6.30 & 81.73 \\ & CS & 80.43 & 80.80 & 66.60 & 85.82 & 45.76 & 45.82 & **45.30** & 81.80 \\ \cline{2-10} & CS & **82.34** & **82.75** & **66.85** & **88.88** & **47.21** & **47.85** & 42.15 & **85.83** \\ \hline \multicolumn{2}{c}{**ViT-based**} & & & & & & & & \\ \hline \multirow{2}{*}{ViTSTR + Linear} & WS & 80.92 & 81.74 & 50.46 & 90.57 & 44.14 & 46.12 & 28.51 & 89.81 \\ & CS & 81.82 & 82.19 & 68.07 & 90.93 & 49.15 & 48.84 & 51.63 & 90.51 \\ \cline{2-10} & CS & **82.78** & **83.14** & **69.16** & **92.09** & **51.37** & **51.10** & **53.47** & **91.09** \\ \hline \multirow{2}{*}{ViTSTR + Attn} & WS & 83.39 & 84.05 & 58.90 & 91.24 & 48.22 & 50.39 & 31.17 & 89.35 \\ \cline{2-10} & CS & 83.56 & 83.91 & 70.38 & 90.82 & 50.94 & 51.14 & 49.34 & 89.27 \\ \cline{1-1} & CS & **85.39** & **85.75** & **72.17** & **91.66** & **55.21** & **55.45** & **53.38** & **91.24** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy on Korean and Chinese datasets. _2nd_ column indicates training synthetic datasets. Applying CAFE-Net consistently improves the performance on various evaluation datasets: **Real**, **Real\({}_{Easy}\)**, **Real\({}_{Hard}\)** and **Synth\({}_{test}\)**.
While \(\tau\)-norm [43] generally achieves the best char F1 score, it shows degraded performance in accuracy. Such a result shows that the model fails to learn the contextual information, even with an improved char F1 score. CAFE-Net, however, shows a comparable char F1 score (visual representation) while achieving the best accuracy (contextual representation). This result demonstrates the motivation of our work, which is to improve both contextual and visual representation for enhancing performance on STR with languages that include a large number of characters.
**Analysis on Confidence Score.** To better understand why the confidence ensemble is able to appropriately select an expert, we study the confidence score qualitatively and quantitatively. Fig. 5 shows the prediction and the maximum softmax probability of each expert on several samples. Since the context-free expert focuses on the visual representation, it mispredicts visually confusing characters (Fig. 5 left column). In contrast, we observe that the context-aware expert incorrectly predicts _few_ characters as _many_ characters by resorting to the context when making predictions (Fig. 5 right column). We observe that each expert outputs a low maximum
\begin{table}
\begin{tabular}{c c c|c c c c c|c} \hline \hline
**Lang.** & **Test Data** & **Metric** & Softmax & Focal & \(\tau\)-norm & PC-Softmax & Bal-Softmax & Ours \\ \hline \multirow{6}{*}{_Kr_} & \multirow{2}{*}{**Real**} & Acc & 77.43 & 77.10 & 77.59 & 77.51 & 78.37 & **81.35** \\
\end{table}
Table 2: Comparison with long-tailed recognition baselines and our method trained on CS, where we employ TRBA [3] for STR framework. Char F1 scores represent the scores of _few_ / _medium_ / _many_ characters, respectively. Our method achieves the state-of-the-art STR performance on languages including a long-tailed distribution of characters.
Figure 5: The left and right columns show correctly predicted samples of the context-aware and context-free experts, respectively. For each plot under each image, the x-axis and y-axis indicate the character sequence and the maximum softmax probability of each character, respectively.
Figure 6: Visualization of the relation between the characters and the output probability of each expert on **Real**. The x-axis and y-axis indicate the characters sorted by the number of samples and the averaged probability of each character, respectively. The red and blue dots indicate the probabilities of the context-aware and context-free experts, respectively.
softmax probability on confusing samples (_e.g._, visually confusing characters for the context-free expert, and _few_ characters for the context-aware expert). Our confidence ensemble makes it possible to filter out such low-confidence predictions of one expert and select the high-confidence prediction of the other expert, improving the overall STR performance.
Fig. 6 visualizes the averaged prediction probability at the ground-truth character. We observe that the context-aware expert (red) achieves a higher prediction probability on _many_ characters than the context-free expert (blue). On the other hand, the context-free expert shows a higher prediction probability by a large margin on _few_ characters compared to the context-aware expert. This visualization demonstrates that the confidence ensemble enables the two experts to compensate for each other's limitations.
**Expert Selection Ratio.** We analyze the relation between the proportion of samples allocated to each expert and the character category ratio in the test sets. Interestingly, we discover that the ratio of predictions selected by the context-aware expert in a dataset is proportional to the ratio of _many_ characters in that dataset, as shown in Fig. 7. In summary, these results indicate that the context-free expert tends to predict the instances containing _few_ or _medium_ characters, whereas the context-aware expert more frequently predicts the remaining instances, which include only _many_ characters. We also report the accuracy of each expert in the supplementary.
**Effectiveness of Confidence Ensemble.** In Table 3, we show that careful consideration of how to ensemble the two different experts is important. We observe that our method, a word-level confidence ensemble, outperforms the character-level confidence ensemble, which aggregates the outputs at the character level using the maximum softmax probability. The main reason is that the word-level ensemble is more robust than the character-level ensemble when misalignment occurs between the words predicted by the two experts. While ensembling may be a straightforward and widely used approach, accounting for this property of scene text recognition is important. We want to emphasize that our method reflects this characteristic well and improves STR performance.
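For concreteness, the character-level variant compared in Table 3 could look like the following sketch; again this is an assumption on our part rather than the authors' code, and it presumes the two experts' outputs are already aligned to the same length, which is exactly the failure mode the word-level ensemble avoids.

```python
import torch

def char_level_ensemble(probs_ca: torch.Tensor, probs_cf: torch.Tensor) -> torch.Tensor:
    """Character-level ensemble baseline: at each time step, keep the character
    from whichever expert has the higher maximum softmax probability.
    Both inputs are assumed to be aligned, with shape (l, num_classes)."""
    max_ca, arg_ca = probs_ca.max(dim=-1)
    max_cf, arg_cf = probs_cf.max(dim=-1)
    return torch.where(max_ca >= max_cf, arg_ca, arg_cf)
```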
**Computational Cost.** Fig. 8 summarizes the accuracy on the **Real** dataset and the computational costs (_e.g._, FLOPs and the number of parameters). While applying our method consistently improves performance regardless of the model architecture, we observe that it requires only a negligible amount of additional computational cost. The main reason is that we only require an additional classifier, which accounts for a negligible number of weight parameters. For example, applying our method to ViTSTR [2] increases FLOPs by about 1% and the number of parameters by 3\(\sim\)7%.
## 6 Conclusions
This paper investigates the character-level long-tailed distribution in STR, which has previously been overlooked. Our empirical analysis indicates that improving both contextual and visual representation is crucial for improving STR on languages whose characters follow a long-tailed distribution. Based on this finding, we propose the Context-Aware and Free Experts Network (CAFE-Net), which trains two different experts to focus on learning contextual
Figure 8: Comparison of accuracy and computational costs. We use the **Real** dataset for the analysis.
\begin{table}
\begin{tabular}{c c|c c c} \hline \hline
**Lang.** & **Method** & **Real** & **Real\({}_{Easy}\)** & **Real\({}_{Hard}\)** \\ \hline \multirow{2}{*}{_Kr_} & Char-level & 80.62 & 81.20 & 59.07 \\ & Word-level & **81.35** & **81.75** & **66.68** \\ \hline \multirow{2}{*}{_Cn_} & Char-level & 45.36 & 47.42 & 29.14 \\ & Word-level & **47.67** & **48.09** & **44.34** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study on ensemble technique using **Real** Korean and Chinese datasets. TRBA model [3] is utilized.
Figure 7: (a) We observe that CAFE-Net selects more predictions from the context-aware branch for WS\({}_{test}\) and **Real\({}_{Easy}\)**, while selecting more predictions from the context-free branch for **Real\({}_{Hard}\)** and RS\({}_{test}\). CA and CF indicate the context-aware branch and the context-free branch, respectively. (b) We show the proportion of _many_, _medium_, and _few_ characters in each test set.
information and visual representation, respectively. To aggregate the two different experts, we propose the confidence ensemble to improve STR performance on all of the _many_, _medium_, and _few_ characters. Extensive experiments show that we achieve state-of-the-art performance on languages with long-tailed character distributions. We believe that our work will inspire future researchers to improve STR on languages with numerous characters, which is relatively under-explored compared to STR on English.
|
2309.07885 | Generating Sets and Algebraic Properties of Pure Mapping Class Groups of
Infinite Graphs | We completely classify the locally finite, infinite graphs with pure mapping
class groups admitting a coarsely bounded generating set. We also study
algebraic properties of the pure mapping class group: We establish a semidirect
product decomposition, compute first integral cohomology, and classify when
they satisfy residual finiteness and the Tits alternative. These results
provide a framework and some initial steps towards quasi-isometric and
algebraic rigidity of these groups. | George Domat, Hannah Hoganson, Sanghoon Kwak | 2023-09-14T17:31:35Z | http://arxiv.org/abs/2309.07885v1 | # Generating sets and algebraic properties of pure mapping class groups of infinite graphs
###### Abstract.
We completely classify the locally finite, infinite graphs with pure mapping class groups admitting a coarsely bounded generating set. We also study algebraic properties of the pure mapping class group: We establish a semidirect product decomposition, compute first integral cohomology, and classify when they satisfy residual finiteness and the Tits alternative. These results provide a framework and some initial steps towards quasi-isometric and algebraic rigidity of these groups.
## 1. Introduction
A recent surge of interest in mapping class groups of infinite-type surfaces has prompted the emergence of a "big" analog of \(\mathrm{Out}(F_{n})\) as well. Algom-Kfir-Bestvina [1] propose that the appropriate analog is the group of self proper homotopy equivalences up to proper homotopy of a locally finite, infinite graph.
One main difficulty of studying these "big" groups is that the classical approach of geometric group theory is not applicable. In particular, the mapping class groups of infinite-type surfaces and those of locally finite, infinite graphs are generally not finitely generated, and not even compactly generated. Fortunately, they are still Polish groups (separable and completely metrizable topological groups), to which Rosendal provides a generalized geometric group theoretic approach. The role of a finite or compact generating set is replaced with a _coarsely bounded_ (CB) generating set. For example, a group that admits a coarsely bounded generating set has a well-defined quasi-isometry type [13, Theorem 1.2 and Proposition 2.72], and a coarsely bounded group is quasi-isometric to a point. A group with a coarsely bounded neighborhood around the identity is said to be _locally coarsely bounded_, which is equivalent to having a well-defined _coarse equivalence_ type, and is necessary to have a coarsely bounded generating set. Using this framework, Mann-Rafi [12] gave a classification of (tame) surfaces whose mapping class groups are coarsely bounded, locally coarsely bounded, and generated by a coarsely bounded set. This established a first step toward studying the coarse geometry of mapping class groups of infinite-type surfaces. Recently, Thomas Hill [10] gave a complete classification of surfaces that have _pure_ mapping class groups with the aforementioned coarse geometric properties, without the tameness condition.
In the authors' previous work [11], we gave a complete classification of graphs with coarsely bounded, and locally coarsely bounded, pure mapping class groups, the subgroup of the mapping class group fixing the ends of the graph pointwise. In this paper, we provide the complete classification of infinite graphs with CB-generated pure mapping class groups, fulfilling our goal to provide a foundation for studying the coarse geometry of these groups. In the following statement, \(E\) refers to the space of ends of the graph \(\Gamma\) and \(E_{\ell}\) is the subset of ends accumulated by loops.
**Theorem A**.: _Let \(\Gamma\) be a locally finite, infinite graph. Then its pure mapping class group, \(\mathrm{PMap}(\Gamma)\), is CB generated if and only if either \(\Gamma\) is a tree, or satisfies both:_
1. \(\Gamma\) _has finitely many ends accumulated by loops, and_
2. there is no accumulation point in \(E\setminus E_{\ell}\)._
**Remark 1.1**.: Alternatively, we have a constructive description: \(\mathrm{PMap}(\Gamma)\) is CB generated if and only if \(\Gamma\) can be written (not necessarily uniquely) as a finite wedge sum of the four graphs from Figure 1.
Table 1 illustrates the complete classification of graphs with CB, locally CB, and CB-generated pure mapping class group. Observe the trend that \(\mathrm{PMap}(\Gamma)\) admits more complicated coarse geometric properties when \(\Gamma\) has more complicated geometry.
The main tool used to prove Theorem A is the following semidirect decomposition of the pure mapping class group (reminiscent of [1, Corollary 4] in the surface setting):
**Theorem B**.: _Let \(\Gamma\) be a locally finite graph. Let \(\alpha=\max\{0,|E_{\ell}(\Gamma)|-1\}\) for \(|E_{\ell}(\Gamma)|<\infty\) and \(\alpha=\aleph_{0}\) otherwise. Then we have the following short exact sequence,_
\[1\longrightarrow\overline{\mathrm{PMap}_{c}(\Gamma)}\longrightarrow\mathrm{ PMap}(\Gamma)\longrightarrow\mathbb{Z}^{\alpha}\longrightarrow 1\]
_which splits. In particular, we have \(\mathrm{PMap}(\Gamma)=\overline{\mathrm{PMap}_{c}(\Gamma)}\rtimes\mathbb{Z}^{\alpha}\)._
Here, \(\overline{\mathrm{PMap}_{c}(\Gamma)}\) is the closure of the group of compactly supported mapping classes and \(\mathbb{Z}^{\alpha}\) is generated by commuting loop shifts.
As a corollary, we compute the rank of the first integral cohomology of \(\mathrm{PMap}(\Gamma)\). This allows us to see that the number of ends accumulated by loops of a graph \(\Gamma\) is an algebraic invariant of \(\mathrm{PMap}(\Gamma)\).
**Corollary C**.: _For every locally finite, infinite graph \(\Gamma\),_
\[\mathrm{rk}\left(H^{1}(\mathrm{PMap}(\Gamma);\mathbb{Z})\right)=\begin{cases}0& \text{if }|E_{\ell}|\leq 1,\\ n-1&\text{if }2\leq|E_{\ell}|=n<\infty,\\ \aleph_{0}&\text{otherwise}.\end{cases}\]
We also show that \(\mathrm{PMap}(\Gamma)\) distinguishes graphs of finite rank from graphs of infinite rank. Recall a group is _residually finite_ if it can be embedded into a direct product of finite groups.
**Theorem D**.: \(\mathrm{PMap}(\Gamma)\) _is residually finite if and only if \(\Gamma\) has finite rank._
Figure 1. Every graph with a CB-generated pure mapping class group can be written as a finite wedge sum of these four graphs. From left to right these are: a single loop, a single ray, a Loch Ness monster graph, and a Millipede monster graph.
A group satisfies the _Tits Alternative_ if every subgroup is either virtually solvable or contains a nonabelian free group. Interestingly, the graphs for which \(\mathrm{PMap}(\Gamma)\) is residually finite are exactly those for which it satisfies the Tits Alternative.
**Theorem E**.: \(\mathrm{PMap}(\Gamma)\) _satisfies the Tits Alternative if and only if \(\Gamma\) has finite rank._
These three results are steps towards determining when the isomorphism type of \(\mathrm{PMap}(\Gamma)\) determines the graph \(\Gamma\), as in the surface case [1].
If \(\Gamma\) is the infinite rank graph with a single end (the Loch Ness monster graph) and \(\Gamma^{\prime}\) is the wedge sum of \(\Gamma\) with a single ray, then the groups \(\mathrm{PMap}(\Gamma)\) and \(\mathrm{PMap}(\Gamma^{\prime})\) inject into \(\mathrm{Out}(F_{\infty})\) and \(\mathrm{Aut}(F_{\infty})\), respectively, by [1, Theorem 3.1 and Lemma 3.2]. Thus we immediately get the following corollary. We note that one can instead prove this directly, e.g. see [3].
**Corollary F**.: _For \(F_{\infty}\), the free group on a countably infinite set, \(\mathrm{Aut}(F_{\infty})\) and \(\mathrm{Out}(F_{\infty})\) are not residually finite and do not satisfy the Tits alternative._
**Comparison with Surfaces.** The statement of Theorem B is exactly the same as for pure mapping class groups of surfaces, seen in Aramayona-Patel-Vlamis [1]. Although the proof we give is similar in spirit as well, we have to make use of different tools. In [1] the authors make use of the _homology of separating curves_ on a surface and build an isomorphism between the first cohomology of the pure mapping class group and this homology group. For graphs, we do not have any curves to take advantage of. Instead we use partitions of the space of ends accumulated by loops. In order to make
\begin{table}
Table 1. The coarse geometric type of \(\mathrm{PMap}(\Gamma)\) (CB, CB generated, locally CB, or not locally CB), organized by the rank of \(\Gamma\) (finite rank \(r=0\) or \(r\in[1,\infty)\); infinite rank with \(|E_{\ell}|=:n\) equal to \(1\), in \([2,\infty)\), or \(\infty\)) and by \(t\), the number of components of \(\Gamma\setminus\Gamma_{c}\) with infinite end spaces (\(t=0\), \(t\in[1,\infty)\), or \(t=\infty\)); the entries follow from Theorem 2.16 and Theorem A.
\end{table}
this precise and give this an algebraic structure we make use of the group of locally constant integral functions on \(E_{\ell}(\Gamma)\), i.e., the zeroth Cech cohomology of \(E_{\ell}(\Gamma)\), denoted as \(\mathring{C}(E_{\ell}(\Gamma))\). On a surface, any separating simple closed curve determines a partition of the end space. We can use this to show that the first cohomology groups of pure mapping class groups of graphs and surfaces are often in fact _naturally_ isomorphic. This also gives a slightly alternate proof of the main results in [1].
**Corollary G**.: _Let \(S\) be an infinite-type surface of genus at least one and \(\Gamma\) a locally finite, infinite graph. If \(E_{g}(S)\) is homeomorphic to \(E_{\ell}(\Gamma)\), then both \(H^{1}(\operatorname{PMap}(S);\mathbb{Z})\) and \(H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\) are isomorphic to \(\mathring{C}(E_{g}(S))\cong\mathring{C}(E_{\ell}(\Gamma))\)._
### Rigidity Questions
The results above fit naturally into a general question about (pure) mapping class groups of infinite graphs. Namely: "How much does the group \(\operatorname{PMap}(\Gamma)\) determine the graph \(\Gamma\)?" One can obtain more concrete questions by considering certain types of rigidity. We will focus on _algebraic_ and _quasi-isometric_ rigidity. In the finite-type setting, mapping class groups of surfaces and \(\operatorname{Out}(F_{n})\) are known to exhibit strong rigidity properties. Various results starting with Ivanov [14] (see also [14, 15, 16, 17]) establish strong forms of algebraic rigidity and Behrstock-Kleiner-Minsky-Mosher [1] establish quasi-isometric rigidity for \(\operatorname{Map}(S)\). For \(\operatorname{Out}(F_{n})\) we also have strong forms of algebraic rigidity from the work of Farb-Handel [13] building on [15, 16] (see also [12]). Quasi-isometric rigidity for \(\operatorname{Out}(F_{n})\) is still unknown.
For infinite-type surfaces, the work of Bavard-Dowdall-Rafi [1] established a strong form of algebraic rigidity a la Ivanov (see also [10]). The question of quasi-isometric rigidity is still open, but Mann-Rafi [16] give a classification of which mapping class groups of tame infinite-type surfaces have a well-defined quasi-isometry type and which of those are trivial. This allows one to begin to distinguish between some of the mapping class groups (see also [11]).
One can ask the same rigidity questions for infinite graphs. The picture becomes less clear than in the surface case. In particular, trees fail to have algebraic rigidity for the _pure_ mapping class group, as they all have trivial pure mapping class group. Failure is also present for the full mapping class group. Let \(T\) be the regular trivalent tree and let \(T^{\prime}\) be the wedge sum of \(T\) with a single ray. Note that \(E(T)=\mathcal{C}\), a Cantor set, and \(E(T^{\prime})=\mathcal{C}\sqcup\{*\}\), a Cantor set together with a single isolated point. Now we have that \(\operatorname{Map}(T)=\operatorname{Homeo}(\mathcal{C})\) and \(\operatorname{Map}(T^{\prime})=\operatorname{Homeo}(\mathcal{C}\sqcup\{*\})\). However, these two groups are isomorphic, as any homeomorphism fixes the extra end \(*\) of \(T^{\prime}\). There are even more complicated examples of this failure of algebraic rigidity for mapping class groups of trees that come from work on Boolean algebras by McKenzie [14] answering a rigidity conjecture of Monk [15].
The results in this paper allow one to ask several natural rigidity questions for the pure mapping class groups of infinite graphs. We will restrict to some nice classes of graphs in order to state concrete questions. All of the following families of graphs are CB generated by Theorem A and hence have a well-defined quasi-isometry type. Let \(\Gamma_{n}\) denote the graph with exactly \(n\) ends, each of which is accumulated by loops.
**Question 1.2**.: Let \(n,m\geq 2\). If \(\operatorname{PMap}(\Gamma_{n})\) is quasi-isometric to \(\operatorname{PMap}(\Gamma_{m})\), then does \(n=m\)?
By Corollary C we do know that \(\operatorname{PMap}(\Gamma_{n})\) is algebraically isomorphic to \(\operatorname{PMap}(\Gamma_{m})\) if and only if \(n=m\). We can also use the fact that \(\operatorname{PMap}(\Gamma_{1})\) is CB to see that \(\operatorname{PMap}(\Gamma_{1})\) is not quasi-isometric to \(\operatorname{PMap}(\Gamma_{n})\) for any \(n\neq 1\). However, the general question is still open. In the authors' previous work [10], we computed asymptotic dimension for
all of these groups. However, it is infinite for \(n>1\). Therefore, in order to answer this question one would need to study and/or develop other "big" quasi-isometry invariants.
Instead of comparing the effect of changing the number of ends accumulated by loops, one could ask the same question for rays. Namely, let \(\Gamma_{n,r}\) denote the graph with \(n\) ends accumulated by loops and \(r\) rays. We start by asking for distinguishing features of "no ray" versus "one ray."
**Question 1.3**.: Is \(\operatorname{PMap}(\Gamma_{n,0})\) quasi-isometric to \(\operatorname{PMap}(\Gamma_{n,1})\)?
In fact, here we do not even know algebraic rigidity.
**Question 1.4**.: Is \(\operatorname{PMap}(\Gamma_{n,0})\) isomorphic to \(\operatorname{PMap}(\Gamma_{n,1})\)?
The other large family of graphs with CB-generated pure mapping class groups are the finite-type ones. Let \(\Omega_{n,r}\) denote the graph of finite rank \(n\) with \(r<\infty\) rays attached. We know that no \(\operatorname{PMap}(\Omega_{n,r})\) is isomorphic to any \(\operatorname{PMap}(\Gamma_{m})\) by using either residual finiteness Theorem D or the Tits alternative Theorem E. We do not know if any of them are quasi-isometric. Note that \(\operatorname{PMap}(\Omega_{n,r})\) is always finitely generated, but this does not preclude it from being quasi-isometric to an uncountable group.
**Question 1.5**.: Is \(\operatorname{PMap}(\Omega_{m,r})\) ever quasi-isometric to \(\operatorname{PMap}(\Gamma_{n})\), for \(m,r,n>1\)?
### Outline
In Section 2, we give background on mapping class groups of infinite graphs, examples of elements in the pure mapping class group, and coarse geometry of groups. In Section 3, we prove our semidirect product decomposition, Theorem B. We also prove Corollary C in Section 3.5. By exploiting the semidirect decomposition of \(\operatorname{PMap}(\Gamma)\), we prove the CB-generation classification, Theorem A, in Section 4. In Section 5 and Section 6, we finish by proving the residual finiteness characterization Theorem D and Tits alternative characterization Theorem E.
### Acknowledgments
Thank you to Mladen Bestvina for providing an idea of the proof for Lemma 3.14 and suggestion to use the zeroth Cech cohomology to prove Lemma 3.18. We also thank Priyam Patel for helpful conversations toward Section 5 and Theorem D, along with answering questions regarding [20] and [1]. Also we thank Camille Horbez for clarifying the proof of Fact 6.6.
The first author was supported in part by the Fields Institute for Research in Mathematical Sciences, NSF DMS-1745670, and NSF DMS-2303262. The second author was supported by NSF DMS-2303365. The third author acknowledges the support from the University of Utah Graduate Research Fellowship.
###### Contents
* 2 Preliminaries
* 2.1 Mapping class groups of infinite graphs
* 2.2 Elements of \(\operatorname{PMap}(\Gamma)\)
* 2.2.1 Loop swaps
* 2.2.2 Word maps
* 2.2.3 Loop shifts
* 2.3 Coarse geometry of groups
* 3 Semidirect product structure and cohomology
* 3.1 The case \(|E_{\ell}|\leq 1\)
* 3.2 Flux maps
* 3.3 Flux zero maps
* 3.4 Space of flux maps
## 2. Preliminaries
### Mapping class groups of infinite graphs
Let \(\Gamma\) be a locally finite, infinite graph. Informally, an _end_ of a graph is a way to travel to infinity in the graph. The space of ends (or, the end space), denoted by \(E(\Gamma)\), is defined as:
\[E(\Gamma)=\varprojlim_{K\subset\Gamma}\pi_{0}(\Gamma\setminus K),\]
where \(K\) runs over compact sets of \(\Gamma\) in the inverse limit. Then each element of \(E(\Gamma)\) is called an **end** of \(\Gamma\). An end \(e\) of \(\Gamma\) is said to be **accumulated by loops** if the sequence of complementary components in \(\Gamma\) corresponding to \(e\) only consists of infinite rank graphs. Colloquially, if one continues to see loops along the way to \(e\). We denote by \(E_{\ell}(\Gamma)\) the set of ends of \(\Gamma\) accumulated by loops. Note \(E_{\ell}(\Gamma)\) is a closed subset of \(E(\Gamma)\), and \(E(\Gamma)\) can be realized as a closed subset of a Cantor set (hence so is \(E_{\ell}(\Gamma)\)). We say that the **characteristic triple** of \(\Gamma\) is the triple \((r(\Gamma),E(\Gamma),E_{\ell}(\Gamma))\), where \(r(\Gamma)\in\mathbb{Z}_{\geq 0}\cup\{\infty\}\) is the rank of \(\pi_{1}(\Gamma)\).
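For example, a single ray \([0,\infty)\) has characteristic triple \((0,\{*\},\emptyset)\): removing any ball about the origin leaves exactly one unbounded component, which contains no loops. On the other hand, the graph obtained from the real line by attaching a loop at every integer point (the ladder graph \(\Lambda\) of Section 2.2.3) has characteristic triple \((\infty,\{e_{-},e_{+}\},\{e_{-},e_{+}\})\): removing any ball about the origin leaves exactly two unbounded components, each of infinite rank.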
Now we define the mapping class group of a locally finite, infinite graph \(\Gamma\). Recall that a map is **proper** if the pre-image of every compact set is compact.
**Definition 2.1**.: [1] The **mapping class group** of \(\Gamma\), denoted \(\operatorname{Map}(\Gamma)\), is the group of proper homotopy classes of proper homotopy equivalences of \(\Gamma\). The **pure mapping class group**, denoted \(\operatorname{PMap}(\Gamma)\), is the closed subgroup consisting of maps that fix the ends of \(\Gamma\) pointwise. More precisely, it is the kernel of the action of \(\operatorname{Map}(\Gamma)\) on the end space \((E(\Gamma),E_{\ell}(\Gamma))\) by homeomorphisms, hence fitting into the following short exact sequence:
\[1\longrightarrow\operatorname{PMap}(\Gamma)\longrightarrow\operatorname{ Map}(\Gamma)\longrightarrow\operatorname{Homeo}(E,E_{\ell})\longrightarrow 1\]
When \(E(\Gamma)\setminus E_{\ell}(\Gamma)\) is nonempty and compact, we can further decompose \(\operatorname{PMap}(\Gamma)\) into subgroups of _core maps_ and of _ray maps_. To state the result, we need to introduce a few concepts.
**Definition 2.2**.: Let \(\Gamma\) be a locally finite, infinite graph. Denote by \(\Gamma_{c}\) the **core graph** of \(\Gamma\), the smallest connected subgraph of \(\Gamma\) that contains all immersed loops in \(\Gamma\). When \(E(\Gamma)\setminus E_{\ell}(\Gamma)\) is nonempty, pick \(e_{0}\in E(\Gamma)\setminus E_{\ell}(\Gamma)\) and denote by \(\Gamma_{c}^{*}\) the subgraph consisting of \(\Gamma_{c}\) and a choice of embedded ray in \(\Gamma\) limiting to \(e_{0}\) such that the ray intersects \(\Gamma_{c}\) in exactly one point.
Define \(\pi_{1}(\Gamma_{c}^{*},e_{0})\) to be the set of proper homotopy equivalence classes of lines in \(\Gamma_{c}^{*}\), both ends of which limit to \(e_{0}\). We endow it with a group structure by concatenation. This group is naturally isomorphic to \(\pi_{1}(\Gamma_{c}^{*},p)\) for any choice of basepoint \(p\in\Gamma_{c}^{*}\). Finally, define \(\mathcal{R}\) as the group of maps \(h:E(\Gamma)\rightarrow\pi_{1}(\Gamma_{c}^{*},e_{0})\) such that
1. \(h(e_{0})=1\), and
2. \(h\) is locally constant,
where the group operation is the pointwise multiplication in \(\pi_{1}(\Gamma_{c}^{*},e_{0})\).
We have the following decomposition of \(\mathrm{PMap}(\Gamma)\):
**Proposition 2.3** ([1, Corollary 3.9]).: _Let \(\Gamma\) be a locally finite, infinite graph with \(E(\Gamma)\setminus E_{\ell}(\Gamma)\) nonempty and compact. Then_
\[\mathrm{PMap}(\Gamma)\cong\mathcal{R}\rtimes\mathrm{PMap}(\Gamma_{c}^{*}).\]
_In particular, when \(\Gamma\) has finite rank \(n\geq 0\) and finitely many ends, say \(|E(\Gamma)|=e\), then_
\[\mathrm{PMap}(\Gamma)\cong\begin{cases}\mathrm{Out}(F_{n}),&\text{if $e=0$,}\\ F_{n}^{e-1}\rtimes\mathrm{Aut}(F_{n}),&\text{if $e\geq 1$.}\end{cases}\]
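For instance, a wedge of \(n\) circles with a single ray attached has \(e=1\), so the proposition gives \(\mathrm{PMap}(\Gamma)\cong\mathrm{Aut}(F_{n})\), while attaching a second ray gives \(e=2\) and \(\mathrm{PMap}(\Gamma)\cong F_{n}\rtimes\mathrm{Aut}(F_{n})\).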
**Remark 2.4**.: Any time \(K\) is a connected, compact subgraph of a locally finite, infinite graph \(\Gamma\), we use \(\mathrm{PMap}(K)\) to refer to the group of proper homotopy equivalences of \(K\) that fix \(\partial K\) pointwise up to proper homotopy fixing \(\partial K\). This group is naturally isomorphic to the group \(\mathrm{PMap}(\tilde{K})\) where \(\tilde{K}\) is the graph \(K\) together with a ray glued to each point in \(\partial K\). Applying the above proposition we see that \(\mathrm{PMap}(K)\) is always of the form \(F_{n}^{e-1}\rtimes\mathrm{Aut}(F_{n})\) for some \(n\) and \(e\) because \(K\) is always a proper subset of \(\Gamma\), so \(\partial K\) is nonempty.
The pure mapping class group \(\mathrm{PMap}(\Gamma)\) records the internal symmetries of \(\Gamma\). Contractible graphs (trees) have no internal symmetries. This follows from the work of Ayala-Dominguez-Marquez-Quintero [1]. They give a proper homotopy equivalence classification of locally finite, infinite graphs.
**Theorem 2.5** ([1, Theorem 2.7]).: _Let \(\Gamma\) and \(\Gamma^{\prime}\) be two locally finite graphs of the same rank. A homeomorphism of end spaces \((E(\Gamma),E_{\ell}(\Gamma))\to(E(\Gamma^{\prime}),E_{\ell}(\Gamma^{\prime}))\) extends to a proper homotopy equivalence \(\Gamma\to\Gamma^{\prime}\). If \(\Gamma\) and \(\Gamma^{\prime}\) are trees, then this extension is unique up to proper homotopy._
The second statement of Theorem 2.5 implies the following.
**Proposition 2.6**.: _Let \(\Gamma\) be a locally finite, infinite graph with \(\pi_{1}(\Gamma)=1\). Then \(\mathrm{PMap}(\Gamma)=1\)._
In [1] the authors endow \(\mathrm{Map}(\Gamma)\) with the compact-open topology and show that this gives \(\mathrm{Map}(\Gamma)\), and hence \(\mathrm{PMap}(\Gamma)\), the structure of a Polish group. A neighborhood basis about the identity for the topology is given by sets of the form
\[\mathcal{V}_{K} :=\{[f]\in\mathrm{Map}(\Gamma)|\ \exists f^{\prime}\in[f]\ \text{s.t.}\ f^{\prime}|_{K}=\mathrm{id}\,\text{ and}\] \[f^{\prime}\ \text{preserves the complementary components of $K$ setwise}\}\]
where \(K\) is a compact subset of \(\Gamma\).
Recall the **support** of a continuous map \(\phi:X\to X\) is the closure of the set of points \(x\in X\) such that \(\phi(x)\neq x\). The group of compactly supported mapping classes, denoted by \(\mathrm{PMap}_{c}(\Gamma)\), is the subgroup of \(\mathrm{PMap}(\Gamma)\) consisting of classes that have a compactly supported representative. Its closure in this topology is denoted by \(\overline{\mathrm{PMap}_{c}(\Gamma)}\) and it is a closed (hence Polish) subgroup of \(\mathrm{PMap}(\Gamma)\).
As proper homotopy equivalences are not necessarily injective, unlike homeomorphisms, we need the following alternate notion of support for a proper homotopy equivalence.
**Definition 2.7**.: We say that \([f]\in\operatorname{Map}(\Gamma)\) is **totally supported** on \(K\subset\Gamma\) if there is a representative \(f^{\prime}\in[f]\) so that \(f^{\prime}(K)=K\) and \(f^{\prime}|_{\Gamma\setminus K}=\operatorname{id}\).
To see how a proper homotopy equivalence can have different support and total support, consider a rose graph with two loops labeled by \(a_{1}\) and \(a_{2}\). Then a (proper) homotopy equivalence mapping \(a_{1}\) to \(a_{1}a_{2}\) which is the identity elsewhere is supported on \(a_{1}\), but not totally supported on \(a_{1}\). It is totally supported on \(a_{1}\cup a_{2}\). This is in contrast with homeomorphisms on surfaces, where \(f\) is supported on \(K\) if and only if \(f\) is totally supported on \(K\).
As mapping class groups of graphs are independent of the proper homotopy equivalence representative of the graph, it is often useful to consider a 'standard' representative within a proper homotopy equivalence class of graphs.
**Definition 2.8**.: A locally finite graph, \(\Gamma\), is in **standard form** if \(\Gamma\) is a tree with loops attached at some of the vertices. We endow \(\Gamma\) with the path metric that assigns each edge length \(1\).
We also give special names to specific graphs that we will reference often.
**Definition 2.9**.: The **Loch Ness Monster** graph is the graph with characteristic triple \((\infty,\ \{*\},\ \{*\})\). The **Millipede Monster** graph is the graph with characteristic triple \((\infty,\ \{0\}\cup\{\frac{1}{n}\mid n\in\mathbb{Z}_{>0}\},\ \{0\})\). A **monster** graph refers to either one of these.
### Elements of \(\operatorname{PMap}(\Gamma)\)
Here we give a brief treatment of elements of \(\operatorname{PMap}(\Gamma)\). For more detailed definitions with examples, see [1, Section 3].
#### 2.2.1. Loop swaps
A loop swap is an order \(2\) proper homotopy equivalence induced from a transposition automorphism of a free group. It is totally supported on a compact set. More precisely, we define it as follows.
**Definition 2.10**.: Let \(\Gamma\) be a locally finite graph in standard form with \(\operatorname{rk}\Gamma\geq 2\). Let \(A\) and \(B\) be disjoint finite subsets of loops such that \(|A|=|B|\). Then the **loop swap**\(\mathcal{L}(A,B)\) is a proper homotopy equivalence induced from the group isomorphism on \(\pi_{1}(\Gamma)\) swapping the free factors corresponding to \(A\) and \(B\).
More concretely, pick a basepoint \(p\in\Gamma\) and collapse each maximal tree of the subgraphs corresponding to \(A\) and \(B\) in \(\pi_{1}(\Gamma,p)\). This results in two roses of \(|A|=|B|\) petals. Then swap the two roses, followed by blowing up each rose to the original subgraph. Define \(\mathcal{L}(A,B)\) as the composition of these three maps. Note \(\mathcal{L}(A,B)\in\operatorname{PMap}_{c}(\Gamma)\).
As mentioned above, loop swaps of a graph correspond to the transposition free group automorphisms, which are part of a generating set for \(\operatorname{Aut}(F_{n})\) (see Section 4).
#### 2.2.2. Word maps
Next, we turn to word maps, which are the most diverse kind of mapping classes among the three kinds of maps introduced in this section.
**Definition 2.11**.: Let \(\Gamma\) be a locally finite graph with \(\operatorname{rk}\Gamma\geq 1\), with a base point \(p\in\Gamma\). Let \(w\in\pi_{1}(\Gamma,p)\) and \(I\) be an interval in an edge of \(\Gamma\). Then the **word map**\(\varphi_{(w,I)}\) is a proper homotopy equivalence that maps \(I\) to a path in \(\Gamma\) determined by \(w\in\pi_{1}(\Gamma,p)\) and is the identity outside of \(I\).
See [1, Section 3.3] for a careful construction of these maps. Note \(\varphi_{(w,I)}\) is supported on \(I\), but in general not _totally_ supported on \(I\). Rather, it is totally supported on the compact set that is the union of \(I\) with a path in \(\Gamma\) determined by \(w\in\pi_{1}(\Gamma,p)\).
The following two properties of word maps will be important in Section 4.
**Lemma 2.12** ([14, Lemma 3.5]).: _If \(I\) is contained in an edge of \(\Gamma\setminus\Gamma_{c}\) and \(w_{1},w_{2}\) are elements in \(\pi_{1}(\Gamma,p)\), then_
\[[\varphi_{(w_{1},I)}\circ\varphi_{(w_{2},I)}]=[\varphi_{(w_{1}w_{2},I)}]\]
_in \(\operatorname{PMap}(\Gamma)\)._
**Lemma 2.13** ([14, Lemma 3.10]).: _Let \(I\) be an interval of \(\Gamma\) which is outside of \(\Gamma_{c}\), and \(\psi\in\operatorname{PMap}(\Gamma)\) be totally supported on a compact subgraph of \(\Gamma_{c}\). Then_
\[\psi\circ[\varphi_{(w,I)}]\circ\psi^{-1}=[\varphi_{(\psi_{*}(w),I)}].\]
In particular, we can use Lemma 2.13 when \(\psi\) is a loop swap.
#### 2.2.3. Loop shifts
Loop shifts are to graphs as handle shifts, introduced in Patel-Vlamis [11], are to surfaces. We first define a loop shift on the standard form of the graph \(\Lambda\), the graph with characteristic triple \((\infty,\{e_{-},e_{+}\},\{e_{-},e_{+}\})\). (See Definition 2.8.) Embed \(\Lambda\) in \(\mathbb{R}^{2}\) by identifying the maximal tree with the \(x\)-axis such that \(e_{\pm}\) is identified with \(\pm\infty\) of the \(x\)-axis, and each vertex is identified with an integer point in the \(x\)-axis. Identify the loops with the circles \(\{(x-n)^{2}+(y-\frac{1}{4})^{2}=\frac{1}{16}\}_{n\in\mathbb{Z}}\). Note these circles are tangent to the integer points \(\{(n,0)\}_{n\in\mathbb{Z}}\), thus representing the loops in \(\Lambda\). Now define the **primitive loop shift**\(h\) on \(\Lambda\) as the horizontal translation \(x\mapsto x+1\). One can also omit some loops from \(\Lambda\) and define the loop shift to avoid those loops. For a more general definition, see [14, Section 3.4].
**Definition 2.14**.: Now we define the loop shift on a locally finite, infinite graph \(\Gamma\) with \(|E_{\ell}|\geq 2\). Pick two distinct ends \(e_{-},e_{+}\in E_{\ell}(\Gamma)\) accumulated by loops. By considering a standard form of \(\Gamma\), we can find an embedded ladder graph \(\Lambda\) in \(\Gamma\) such that \(e_{\pm}\) is identified with \(e_{\pm}\) of \(\Lambda\), respectively. Now define the **primitive loop shift**\(h\) on \(\Gamma\) associated to \((e_{-},e_{+})\) as the proper homotopy equivalence induced from the primitive loop shift on the embedded ladder graph \(\Lambda\). For the rest of the graph, define \(h\) to be the identity outside of the \(\frac{1}{2}\)-neighborhood of \(\Lambda\) and interpolate between the shift and the identity on the \(\frac{1}{2}\)-neighborhood.
Finally, a proper homotopy equivalence \(f\) on \(\Gamma\) is a **loop shift** if \(f=h^{n}\) for some primitive loop shift \(h\) and \(n\in\mathbb{Z}\setminus\{0\}\).
### Coarse geometry of groups
**Definition 2.15**.: Let \(A\) be a subset of a topological group \(G\). Then \(A\) is **coarsely bounded (CB)** in \(G\) if for every continuous isometric action of \(G\) on a metric space, every orbit is bounded.
We say a group is **CB-generated** if it has an algebraic generating set that is CB. Similarly, a group is **locally CB** if it admits a CB neighborhood of the identity. In Section 4, we will construct a CB-generating set for the pure mapping class groups of certain graphs, proving the if direction of Theorem A. On the other hand, we have previously classified which graphs have CB or locally CB pure mapping class groups:
**Theorem 2.16** ([14, Theorem A, D]).: _Let \(\Gamma\) be a locally finite, infinite graph. Then its pure mapping class group \(\operatorname{PMap}(\Gamma)\) is coarsely bounded if and only if one of the following holds:_
* \(\Gamma\) _has rank zero, or_
* \(\Gamma\) _has rank one, and has one end, or_
* \(\Gamma\) _is a monster graph with finitely many rays attached._
_Moreover, \(\operatorname{PMap}(\Gamma)\) is locally coarsely bounded if and only if one of the following holds:_
* \(\Gamma\) _has finite rank, or_
* \(\Gamma\) _satisfies both:_ 1. \(|E_{\ell}(\Gamma)|<\infty\)_, and_ 2. _only finitely many components of_ \(\Gamma\setminus\Gamma_{c}\) _have infinite end spaces._
**Remark 2.17**.: Mirroring the constructive description in Remark 1.1 of the CB-generated \(\mathrm{PMap}(\Gamma)\) classification, we can alternatively characterize the locally CB condition as: \(\mathrm{PMap}(\Gamma)\) is locally CB if and only if \(\Gamma\) can be written as a finite wedge sum of single loops, monster graphs, and trees.
After confirming that a group is CB-generated, the Rosendal framework enables the exploration of the group through the lens of coarse geometry.
**Theorem 2.18**.: _[_13_, Theorem 1.2, Proposition 2.72]_ _Let \(G\) be a CB-generated Polish group. Then \(G\) has a well-defined quasi-isometry type. Namely, any two CB-generating sets for \(G\) give rise to quasi-isometric word metrics on \(G\)._
## 3. Semidirect product structure and cohomology
In this section, we prove:
**Theorem 3.1** (Theorem B, revisited).: _Let \(\Gamma\) be a locally finite graph. Let \(\alpha=\max\{0,|E_{\ell}(\Gamma)|-1\}\) for \(|E_{\ell}(\Gamma)|<\infty\) and \(\alpha=\aleph_{0}\) otherwise. Then we have the following short exact sequence,_
\[1\longrightarrow\overline{\mathrm{PMap}_{c}(\Gamma)}\longrightarrow\mathrm{ PMap}(\Gamma)\longrightarrow\mathbb{Z}^{\alpha}\longrightarrow 1\]
_which splits. In particular, we have \(\mathrm{PMap}(\Gamma)=\overline{\mathrm{PMap}_{c}(\Gamma)}\rtimes\mathbb{Z}^{\alpha}\)._
The map to \(\mathbb{Z}^{\alpha}\) is defined using _flux maps_, which were first defined for locally finite, infinite graphs in [1]. We quickly treat the case when the graph has at most one end accumulated by loops in Section 3.1. Then in Section 3.2, we recap the necessary definitions for flux maps and further expand on their properties. In Section 3.3, we characterize \(\overline{\mathrm{PMap}_{c}(\Gamma)}\) as the common kernel of all flux maps (Theorem 3.11), which provides the left side of the desired splitting short exact sequence. Then in Section 3.4, we construct the other side of the short exact sequence by finding a section, proving Theorem B. This requires us to study the space of flux maps, which is done in the same subsection. As an application, in Section 3.5 we compute the first integral cohomology of \(\mathrm{PMap}(\Gamma)\). Finally, we show the same approach could have been applied to infinite-type surfaces in Section 3.6 to recover the surface version of Theorem B by Aramayona-Patel-Vlamis [1], by showing that there is a natural isomorphism between the first cohomology of the pure mapping class groups of infinite-type surfaces and infinite graphs.
### The case \(|E_{\ell}|\leq 1\)
**Proposition 3.2**.: _Let \(\Gamma\) be a locally finite, infinite graph with \(|E_{\ell}|\leq 1\). Then \(\mathrm{PMap}(\Gamma)=\overline{\mathrm{PMap}_{c}(\Gamma)}\). Furthermore, if \(|E_{\ell}|=0\), then \(\mathrm{PMap}(\Gamma)=\mathrm{PMap}_{c}(\Gamma)\)._
Proof.: The case when \(|E_{\ell}(\Gamma)|=1\) is the result of [1, Corollary 4.5]. Now we assume \(|E_{\ell}(\Gamma)|=0\), i.e., \(\Gamma\) has finite rank.
Let \(f\in\mathrm{PMap}(\Gamma)\). Because \(f\) is proper, \(f^{-1}(\Gamma_{c})\) is compact. Thus, there is some connected compact set \(K\) such that \(\Gamma_{c}\cup f^{-1}(\Gamma_{c})\subset K\). Now \(f|_{\Gamma\setminus K}\) is a proper homotopy equivalence between two contractible sets and thus \(f\) can be homotoped to be totally supported on \(K\). Hence, we conclude \(f\in\mathrm{PMap}_{c}(\Gamma)\)
### Flux maps
We begin the case when \(|E_{\ell}|\geq 2\), where the flux maps come onto the scene. Here we recap the definitions and properties of flux maps developed in [10, Section 7].
Let \(\Gamma\) be a locally finite, infinite graph with \(|E_{\ell}|\geq 2\). For each nonempty, proper, clopen subset \(\mathcal{E}\) of \(E_{\ell}\), we will construct a flux map \(\Phi_{\mathcal{E}}\), which will evaluate to \(1\) for every primitive loop shift that goes from an end in \(E_{\ell}\setminus\mathcal{E}\) to an end in \(\mathcal{E}\). We fix such a subset \(\mathcal{E}\) for this discussion.
After potentially applying a proper homotopy equivalence, we can put \(\Gamma\) into a standard form so that there is a maximal tree \(T\) and a choice of \(x_{0}\) in \(T\) such that \(\Gamma\setminus\{x_{0}\}\) defines a partition of the ends that is compatible with the partition \(\mathcal{E}\sqcup(E_{\ell}\setminus\mathcal{E})\) of \(E_{\ell}\). That is, the components of \(\Gamma\setminus\{x_{0}\}\) determine a partition \(E=\bigsqcup_{i=1}^{m}\mathcal{F}_{i}\) so that we can rewrite as \(\mathcal{E}=\bigsqcup_{i=1}^{k}(\mathcal{F}_{i}\cap E_{\ell})\) and \(E_{\ell}\setminus\mathcal{E}=\bigsqcup_{i=k+1}^{m}(\mathcal{F}_{i}\cap E_{ \ell})\).
Now we group the components of \(\Gamma\setminus\{x_{0}\}\) by the set \(\mathcal{E}\). Let \(\Gamma_{+}\) and \(\Gamma_{-}\) be the unions of the closures of the components of \(\Gamma\setminus\{x_{0}\}\) so that \(E_{\ell}(\Gamma_{+})=\mathcal{E}\) and \(E_{\ell}(\Gamma_{-})=E_{\ell}\setminus\mathcal{E}\). More precisely, \(\Gamma_{+}\) is exactly the union of the complementary components of \(x_{0}\) with end spaces corresponding to \(\mathcal{F}_{1},\dots,\mathcal{F}_{k}\) together with adding back in \(x_{0}\). Similarly, \(\Gamma_{-}\) is the union of the components corresponding to \(\mathcal{F}_{k+1},\dots,\mathcal{F}_{m}\), together with \(x_{0}\). Finally, let \(T_{-}\) be the maximal tree of \(\Gamma_{-}\) contained in \(T\). Define for each \(n\in\mathbb{Z}\):
\[\Gamma_{n}:=\begin{cases}\overline{\Gamma_{-}\cup B_{n}(x_{0})}&\text{ if }n\geq 0,\\ (\Gamma_{-}\setminus B_{|n|}(x_{0}))\cup T_{-}&\text{ if }n<0,\end{cases}\]
where \(B_{r}(x_{0})\) denotes the open metric ball of radius \(r\) about \(x_{0}\). See [10, Section 7.2] for more details and pictures of the \(\Gamma_{n}\)'s.
Recall that a subgroup \(A\) of a group \(G\) is a **free factor** if there exists another subgroup \(P\) such that \(G=A*P\). Given a free factor \(A\) of \(B\), we define the **corank** of \(A\) in \(B\), denoted by \(\operatorname{cork}(B,A)\), as the rank of \(B/\langle\!\langle A\rangle\!\rangle\), the quotient of \(B\) by the normal closure of \(A\). For the \(\Gamma_{n}\) defined above we write \(A_{n}=\pi_{1}(\Gamma_{n},x_{0})\), the free factor determined by the subgraph \(\Gamma_{n}\).
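For instance, if \(B=F(a,b,c,d)\) is the free group on \(\{a,b,c,d\}\) and \(A=\langle a,b\rangle\), then \(B=A*\langle c,d\rangle\), so \(A\) is a free factor of \(B\) with \(\operatorname{cork}(B,A)=\operatorname{rk}\left(B/\langle\!\langle A\rangle\!\rangle\right)=\operatorname{rk}F(c,d)=2\).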
Denote by \(\operatorname{PPHE}(\Gamma)\) the group of proper homotopy equivalences on \(\Gamma\) that fix the ends of \(\Gamma\) pointwise and fix the basepoint \(x_{0}\), i.e., the group of _pure_ proper homotopy equivalences. Any pure mapping class can be properly homotoped to fix a point, hence every pure mapping class has a representative in \(\operatorname{PPHE}(\Gamma)\). Note a proper homotopy equivalence on \(\Gamma\) induces an isomorphism on the level of fundamental group. Hence, with our choice of basepoint \(x_{0}\in\Gamma\), for each element \(f\in\operatorname{PPHE}(\Gamma)\), we denote by \(f_{*}\) the induced map on \(\pi_{1}(\Gamma,x_{0})\).
**Definition 3.3** ([10, Definition 7.9]).: Given \(f\in\operatorname{PPHE}(\Gamma)\), we say that a pair of integers, \((m,n)\), with \(m>n\), is **admissible** for \(f\) if
1. \(A_{n}\) and \(f_{*}(A_{n})\) are free factors of \(A_{m}\), and
2. both \(\operatorname{cork}(A_{m},A_{n})\) and \(\operatorname{cork}(A_{m},f_{*}(A_{n}))\) are finite.
In [10, Corollary 7.8], we showed that for every \(f\in\operatorname{PPHE}(\Gamma)\) and \(n\in\mathbb{Z}\), there exists \(m\in\mathbb{Z}\) such that \(m>n\) and \((m,n)\) is admissible for \(f\). Hence, we can define:
**Definition 3.4**.: For a map \(f\in\operatorname{PPHE}(\Gamma)\) and an admissible pair \((m,n)\) for \(f\), we let
\[\phi_{m,n}(f):=\operatorname{cork}(A_{m},A_{n})-\operatorname{cork}(A_{m},f_{*} (A_{n})).\]
Call such a \(\phi_{m,n}\) a **PPHE-flux map**.
**Lemma 3.5** ([10, Lemma 7.10]).: _The PPHE-flux of a map \(f\in\operatorname{PPHE}(\Gamma)\) is independent of the choice of admissible pair \((m,n)\). That is, if \((m,n)\) and \((m^{\prime},n^{\prime})\) are two admissible pairs for the map \(f\in\operatorname{PPHE}(\Gamma)\) then \(\phi_{m,n}(f)=\phi_{m^{\prime},n^{\prime}}(f)\)._
Furthermore:
**Proposition 3.6** ([13, Proposition 7.11 and Lemma 7.12]).: _The PPHE-flux maps are homomorphisms. Moreover, for any nonempty proper clopen subset \(\mathcal{E}\) of \(E_{\ell}\), if \(f,g\in\mathrm{PPHE}(\Gamma)\) are properly homotopic, then \(\phi_{\mathcal{E}}(f)=\phi_{\mathcal{E}}(g)\)._
Hence, the PPHE-flux map factors through \(\mathrm{PMap}(\Gamma)\), so we can define the flux map on \(\mathrm{PMap}(\Gamma)\) as follows.
**Definition 3.7**.: For each nonempty proper clopen subset \(\mathcal{E}\) of \(E_{\ell}\), we define the **flux map** as:
\[\Phi_{\mathcal{E}}:\mathrm{PMap}(\Gamma)\to\mathbb{Z},\qquad f\mapsto\phi_{\mathcal{E}}(f),\]
which is a well-defined homomorphism by Proposition 3.6.
This independence of the choice of admissible pairs further implies the independence of the choice of the basepoint \(x_{0}\).
**Lemma 3.8** (Independence to choice of \(x_{0}\)).: _For a nonempty proper clopen subset \(\mathcal{E}\) of \(E_{\ell}\), let \(x_{0}\) and \(x^{\prime}_{0}\) be two different points that realize the partition \(E_{\ell}=\mathcal{E}\sqcup(E_{\ell}\setminus\mathcal{E})\). Say \(\phi_{\mathcal{E}}\) and \(\phi^{\prime}_{\mathcal{E}}\) are the flux maps constructed from \(x_{0}\) and \(x^{\prime}_{0}\) respectively, with the same orientation; \(E_{\ell}(\Gamma_{+})=E_{\ell}(\Gamma^{\prime}_{+})=\mathcal{E}\). Then \(\phi_{\mathcal{E}}=\phi^{\prime}_{\mathcal{E}}\)._
Proof.: Note \(x_{0}\) and \(x^{\prime}_{0}\) together cut \(\Gamma\) into three parts (not necessarily connected), where two of them are of infinite rank and realize \(\mathcal{E}\) and \(E_{\ell}\setminus\mathcal{E}\) respectively, and the middle part is of finite rank (but not necessarily compact), and we call it \(M\).
Let \(\{\Gamma_{n}\}\) and \(\{\Gamma^{\prime}_{n}\}\) be the chains of graphs used to define \(\phi_{\mathcal{E}}\) and \(\phi^{\prime}_{\mathcal{E}}\) respectively. Then since \(\phi_{\mathcal{E}}\) and \(\phi^{\prime}_{\mathcal{E}}\) are in the same direction, there exists \(k\in\mathbb{Z}\) such that \(A_{n+k}=A^{\prime}_{n}\) for all \(n\in\mathbb{Z}\). To be precise, this holds for \(k\) such that \(\Gamma_{k}\) and \(\Gamma^{\prime}_{0}\) have the same core graph. Now, given \(f\in\mathrm{PMap}(\Gamma)\) and an admissible pair \((m,n)\) for \(f\) at \(x_{0}\), the pair \((m-k,n-k)\) is admissible for \(f\) at \(x^{\prime}_{0}\). Then
\[(\phi_{\mathcal{E}})_{m,n}(f) =\mathrm{cork}(A_{m},A_{n})-\mathrm{cork}(A_{m},f_{*}(A_{n}))\] \[=\mathrm{cork}(A^{\prime}_{m-k},A^{\prime}_{n-k})-\mathrm{cork}(A^ {\prime}_{m-k},f_{*}(A^{\prime}_{n-k}))=(\phi^{\prime}_{\mathcal{E}})_{m-k,n- k}(f).\]
All in all, the independence of the choice of admissible pairs by Lemma 3.5 proves that \(\phi_{\mathcal{E}}(f)=\phi^{\prime}_{\mathcal{E}}(f)\). Since \(f\) was chosen arbitrarily, this concludes the proof.
Therefore, for each nonempty proper clopen subset \(\mathcal{E}\) of \(E_{\ell}\), we can write the resulting flux map as \(\phi_{\mathcal{E}}\) without specifying \(x_{0}\).
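As an illustration (this example and its bookkeeping are ours, included only to make the definition concrete), let \(\Gamma\) be the biinfinite chain of loops: a line with vertex set \(\mathbb{Z}\) and one loop attached at each vertex, so that \(E_{\ell}(\Gamma)\) consists of the two ends \(\pm\infty\). Take \(\mathcal{E}=\{+\infty\}\), and arrange the construction above so that \(A_{n}\) is the free factor generated by the loops at the vertices \(\leq n\). If \(\ell\) is the primitive loop shift pushing each loop one vertex toward \(+\infty\) (it fixes the underlying line, hence both ends and \(x_{0}\)), then \(\ell_{*}(A_{n})=A_{n+1}\), so for any admissible pair \((m,n)\),

\[\Phi_{\mathcal{E}}(\ell)=\mathrm{cork}(A_{m},A_{n})-\mathrm{cork}(A_{m},\ell_{*}(A_{n}))=(m-n)-(m-n-1)=1,\]

consistent with the description of \(\Phi_{\mathcal{E}}\) as detecting loop shifts that go from \(E_{\ell}\setminus\mathcal{E}\) into \(\mathcal{E}\).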
We end this subsection by exploring basic properties of flux maps, to be used in subsequent subsections. Note that flux maps inherit the group operation from \(\mathrm{Hom}(\mathrm{PMap}(\Gamma),\mathbb{Z})\), namely pointwise addition.
**Proposition 3.9**.: _Let \(\mathcal{E}\subset E_{\ell}\) be a nonempty proper clopen subset of \(E_{\ell}\), where \(|E_{\ell}|\geq 2\). Let \(A,B\) and \(B^{\prime}\) be nonempty proper clopen subsets of \(E_{\ell}\), such that \(A\) and \(B\) are disjoint, and \(B\) is a proper subset of \(B^{\prime}\). Then the following hold:_
1. \(\Phi_{\mathcal{E}^{c}}=-\Phi_{\mathcal{E}}\)_._
2. \(\Phi_{A\sqcup B}=\Phi_{A}+\Phi_{B}\)_._
3. \(\Phi_{B^{\prime}\setminus B}=\Phi_{B^{\prime}}-\Phi_{B}\)_._
Proof.: We first note that (iii) follows from (ii), noting that \(B^{\prime}\setminus B\) and \(B\) are disjoint. Hence, it suffices to prove (i) and (ii).
1. Let \(f\in\mathrm{PPHE}(\Gamma)\) and \(\mathcal{E}\subset E_{\ell}\) be a nonempty proper clopen subset. Choose \(g\in\mathrm{PPHE}(\Gamma)\) to be a proper homotopy inverse of \(f\). Take \(\Gamma_{L}\) and \(\Gamma_{R}\) with \(\Gamma_{L}\subset\Gamma_{R}\) to be an admissible pair of graphs for \(f\) and \(g\) with respect to \(\mathcal{E}\). Fixing \(\Gamma_{L}\), we can enlarge \(\Gamma_{R}\) so that \((\Gamma\setminus\Gamma_{L},\Gamma\setminus\Gamma_{R})\) is an admissible pair for \(f\) with respect to \(\mathcal{E}^{c}\). Note \((\Gamma_{R},\Gamma_{L})\) is still an admissible pair of graphs for \(f\) with respect to \(\mathcal{E}\). In summary, we have:
   * \(f(\Gamma_{L})\subset\Gamma_{R},\quad g(\Gamma_{L})\subset\Gamma_{R}\),
   * \(f(\Gamma\setminus\Gamma_{R})\subset\Gamma\setminus\Gamma_{L}\),
   * \(\mathrm{cork}(\pi_{1}(\Gamma_{R}),\pi_{1}(\Gamma_{L}))<\infty,\quad\mathrm{cork}(\pi_{1}(\Gamma_{R}),f_{*}(\pi_{1}(\Gamma_{L})))<\infty\),
   * \(\mathrm{cork}(\pi_{1}(\Gamma_{R}),g_{*}(\pi_{1}(\Gamma_{L})))<\infty\),
   * \(\mathrm{cork}(\pi_{1}(\Gamma\setminus\Gamma_{L}),\pi_{1}(\Gamma\setminus\Gamma_{R}))<\infty,\quad\mathrm{cork}(\pi_{1}(\Gamma\setminus\Gamma_{L}),f_{*}(\pi_{1}(\Gamma\setminus\Gamma_{R})))<\infty\).

   Because \(f_{*}\) is a \(\pi_{1}\)-isomorphism, we have the following three different free factor decompositions of \(\pi_{1}(\Gamma)\): \[\pi_{1}(\Gamma)=f_{*}(\pi_{1}(\Gamma_{R}))*f_{*}(\pi_{1}(\Gamma\setminus\Gamma_{R})),\] \[\pi_{1}(\Gamma)=\pi_{1}(\Gamma_{R})*\pi_{1}(\Gamma\setminus\Gamma_{R}),\text{ and}\] \[\pi_{1}(\Gamma)=\pi_{1}(\Gamma_{L})*\pi_{1}(\Gamma\setminus\Gamma_{L}).\] We also have the free factor decompositions \[f_{*}(\pi_{1}(\Gamma_{R}))=\pi_{1}(\Gamma_{L})*B,\text{ and}\] \[\pi_{1}(\Gamma\setminus\Gamma_{L})=f_{*}(\pi_{1}(\Gamma\setminus\Gamma_{R}))*C,\] for some free factors \(B\) and \(C\) of \(\pi_{1}(\Gamma)\). Putting together these decompositions, we get: \[\pi_{1}(\Gamma)=\pi_{1}(\Gamma_{L})*B*f_{*}(\pi_{1}(\Gamma\setminus\Gamma_{R}))\] \[\pi_{1}(\Gamma)=\pi_{1}(\Gamma_{L})*f_{*}(\pi_{1}(\Gamma\setminus\Gamma_{R}))*C.\] Therefore, we have \(\mathrm{rk}(B)=\mathrm{rk}(C)\). Translating these equalities, we compute: \[\Phi_{\mathcal{E}^{c}}(f)=\mathrm{cork}(\pi_{1}(\Gamma\setminus\Gamma_{L}),\pi_{1}(\Gamma\setminus\Gamma_{R}))-\mathrm{cork}(\pi_{1}(\Gamma\setminus\Gamma_{L}),f_{*}(\pi_{1}(\Gamma\setminus\Gamma_{R})))\] \[=\mathrm{cork}(\pi_{1}(\Gamma_{R}),\pi_{1}(\Gamma_{L}))-\mathrm{cork}(f_{*}(\pi_{1}(\Gamma_{R})),\pi_{1}(\Gamma_{L}))\] \[=\mathrm{cork}(\pi_{1}(\Gamma_{R}),\pi_{1}(\Gamma_{L}))-\mathrm{cork}(\pi_{1}(\Gamma_{R}),g_{*}(\pi_{1}(\Gamma_{L})))\] \[=\Phi_{\mathcal{E}}(g)=-\Phi_{\mathcal{E}}(f),\] where the last equation follows from the fact that \(g\) is a proper inverse of \(f\) and \(\Phi_{\mathcal{E}}\) is a homomorphism.
2. Let \(f\in\mathrm{PPHE}(\Gamma)\). Choose an \(x_{0}\) that determines a partition that is compatible with both \(A^{c}\) and \(B^{c}\) as in the beginning of this section. Then there exist admissible pairs \((\Gamma_{R_{A^{c}}},\Gamma_{L_{A^{c}}})\) and \((\Gamma_{R_{B^{c}}},\Gamma_{L_{B^{c}}})\) of \(f\) with respect to \(A^{c}\) and \(B^{c}\) respectively. By taking small enough \(\Gamma_{L_{A^{c}}}\) and \(\Gamma_{L_{B^{c}}}\), we can ensure that \(\Gamma_{R_{A^{c}}}\) and \(\Gamma_{R_{B^{c}}}\) have contractible intersection in \(\Gamma\); See Figure 2. Then we observe that \((\Gamma_{R_{A^{c}}}\cup\Gamma_{R_{B^{c}}},\Gamma_{L_{A^{c}}}\cup\Gamma_{L_{B^{ c}}})\) is an admissible pair for \(f\) with respect to \(A^{c}\cap B^{c}=(A\sqcup B)^{c}\) (still with the basepoint \(x_{0}\)). We then have a free decomposition \[\pi_{1}(\Gamma_{R_{A^{c}}}\cup\Gamma_{R_{B^{c}}},x_{0})\cong\pi_{1}(\Gamma_{R_ {A^{c}}},x_{0})*\pi_{1}(\Gamma_{R_{B^{c}}},x_{0}),\]
and the same for \(\pi_{1}(\Gamma_{L_{A^{c}}}\cup\Gamma_{L_{B^{c}}},x_{0})\). Finally, we compute
\[\Phi_{(A\sqcup B)^{c}}(f) =\operatorname{cork}\left(A_{R_{A^{c}}}*A_{R_{B^{c}}},A_{L_{A^{c}}}*A_{L_{B^{c}}}\right)-\operatorname{cork}\left(A_{R_{A^{c}}}*A_{R_{B^{c}}},f_{*}(A_{L_{A^{c}}}*A_{L_{B^{c}}})\right)\] \[=(\operatorname{cork}(A_{R_{A^{c}}},A_{L_{A^{c}}})+\operatorname{cork}(A_{R_{B^{c}}},A_{L_{B^{c}}}))\] \[\qquad-(\operatorname{cork}(A_{R_{A^{c}}},f_{*}(A_{L_{A^{c}}}))+\operatorname{cork}(A_{R_{B^{c}}},f_{*}(A_{L_{B^{c}}})))\] \[=(\operatorname{cork}(A_{R_{A^{c}}},A_{L_{A^{c}}})-\operatorname{cork}(A_{R_{A^{c}}},f_{*}(A_{L_{A^{c}}})))\] \[\qquad+(\operatorname{cork}(A_{R_{B^{c}}},A_{L_{B^{c}}})-\operatorname{cork}(A_{R_{B^{c}}},f_{*}(A_{L_{B^{c}}})))\] \[=\Phi_{A^{c}}(f)+\Phi_{B^{c}}(f).\]
Finally we apply Part (i) to see that
\[\Phi_{A\sqcup B}=-\Phi_{(A\sqcup B)^{c}}=-\Phi_{A^{c}}-\Phi_{B^{c}}=\Phi_{A}+ \Phi_{B}.\qed\]
**Remark 3.10**.: We remark that by Proposition 3.9 (i) and Proposition 3.9 (ii), we can even formally define the flux map with respect to the empty set or the whole set \(E_{\ell}\):
\[\Phi_{\emptyset}:=\Phi_{A}-\Phi_{A}\equiv 0,\qquad\Phi_{E_{\ell}}:=\Phi_{A}+ \Phi_{A^{c}}\equiv 0.\]
This allows us to define a flux map for any clopen \(\mathcal{E}\subset E\) by \(\Phi_{\mathcal{E}}=\Phi_{\mathcal{E}\cap E_{\ell}}\).
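As a quick illustration (ours), suppose \(E_{\ell}=\{e_{1},e_{2},e_{3}\}\) consists of three ends accumulated by loops. Then Proposition 3.9 gives

\[\Phi_{\{e_{1},e_{2}\}}=\Phi_{\{e_{1}\}}+\Phi_{\{e_{2}\}}=-\Phi_{\{e_{3}\}},\]

so every flux map is a \(\mathbb{Z}\)-linear combination of, say, \(\Phi_{\{e_{1}\}}\) and \(\Phi_{\{e_{2}\}}\).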
### Flux zero maps
In this section we will prove the following characterization of flux zero maps.
**Theorem 3.11**.: _Let \(\Gamma\) be a locally finite, infinite graph with \(|E_{\ell}(\Gamma)|\geq 2\), and \(f\in\operatorname{\mathrm{PMap}}(\Gamma)\). Then \(f\in\overline{\operatorname{\mathrm{PMap}}_{c}(\Gamma)}\) if and only if \(\Phi_{\mathcal{E}}(f)=0\) for every clopen subset \(\mathcal{E}\) of \(E(\Gamma)\)._
We have proved the forward direction already in a previous paper.
Figure 2. Illustration of the choices of subgraphs for the proof of Proposition 3.9 (ii). Here the paths from \(x_{0}\) to each subgraph are omitted. We can choose pairs of graphs \((\Gamma_{R_{A^{c}}},\Gamma_{L_{A^{c}}})\) and \((\Gamma_{R_{B^{c}}},\Gamma_{L_{B^{c}}})\) such that the graphs from different pairs have contractible intersections.
**Proposition 3.12** ([14, Proposition 7.13]).: _If \(f\in\overline{\mathrm{PMap}_{c}(\Gamma)}\), then \(\Phi_{\mathcal{E}}(f)=0\) for every clopen subset \(\mathcal{E}\) of \(E(\Gamma)\)._
We will first assume that \(\Gamma\) is a core graph, i.e., \(E_{\ell}(\Gamma)=E(\Gamma)\). For brevity, we will temporarily drop the subscript \(\ell\) for \(E_{\ell}\) while we work under this assumption. To leverage the algebraic information (flux \(0\)) to obtain topological information (homotopy equivalence), we need the following fact:
**Lemma 3.13** ([14, Proposition 1B.9]).: _Let \(X\) be a connected CW complex and let \(Y\) be \(K(G,1)\). Then every homomorphism \(\pi_{1}(X,x_{0})\to\pi_{1}(Y,y_{0})\) is induced by a continuous map \((X,x_{0})\to(Y,y_{0})\) that is unique up to homotopy fixing \(x_{0}\)._
Recall that a graph is a \(K(F,1)\) for \(F\) a free group (the fundamental group of the graph). Now we prove a preliminary lemma to construct a compact approximation of a proper homotopy equivalence.
**Lemma 3.14**.: _Let \(\mathcal{E}\subset E(\Gamma)\) be a nonempty proper clopen subset and \(f\in\mathrm{PMap}(\Gamma)\). If \(\Phi_{\mathcal{E}}(f)=0\), then given any compact \(K\subset\Gamma\), there exists \(\psi\in\mathrm{PMap}(\Gamma)\) such that_
1. **(Compact approximation)**__\(\psi f^{-1}\in\mathcal{V}_{K}\)_,_
2. **(Truncation)** _there exist disjoint subgraphs_ \(\Gamma_{\mathcal{E}}\)_, and_ \(\Gamma_{\mathcal{E}^{c}}\) _of_ \(\Gamma\) _with end spaces_ \(\mathcal{E}\) _and_ \(\mathcal{E}^{c}\) _respectively, such that_ \(\psi|_{\Gamma_{\mathcal{E}}}=\mathrm{id}\) _and_ \(\psi|_{\Gamma_{\mathcal{E}^{c}}}=f|_{\Gamma_{\mathcal{E}^{c}}}\)_, and_
3. **(Same flux)**__\(\Phi_{\eta}(\psi)=\Phi_{\eta}(f)\) _for every clopen subset_ \(\eta\subset E(\Gamma)\setminus\mathcal{E}\)_._
Proof.: Let \(\{\Gamma_{n}\}_{n\in\mathbb{Z}}\) be as in the definition of \(\Phi_{\mathcal{E}}\), for some choice of basepoint \(x_{0}\). Now, given \(f\in\mathrm{PMap}(\Gamma)\) and any \(n\) there is some \(m_{n}>n\) that makes \((m_{n},n)\) into an admissible pair for \(f\). See Figure 3.
Since \(\Phi_{\mathcal{E}}(f)=0\) we have
\[\mathrm{cork}(\pi_{1}(\Gamma_{m_{n}},x_{0}),\pi_{1}(\Gamma_{n},x_{0}))=\mathrm{cork}(\pi_{1}(\Gamma_{m_{n}},f(x_{0})),f_{*}(\pi_{1}(\Gamma_{n},x_{0})))\tag{$*$}\]
for each \(n\in\mathbb{Z}\). This allows us to define an isomorphism \(\Psi_{n}:\pi_{1}(\Gamma,x_{0})\to\pi_{1}(\Gamma,f(x_{0}))\) for each \(n\). Here we use the notation \(G\left\backslash\!\right\rangle\!H\) to denote the complementary free factor of
\(H\) in \(G\). Define
\[\Psi_{n}=\begin{cases}\operatorname{Id}&\text{on }\pi_{1}(\Gamma,x_{0})\setminus \text{$\pi_{1}(\Gamma_{m_{n}},x_{0})$},\\ \sigma_{n}&\text{on }\pi_{1}(\Gamma_{m_{n}},x_{0})\setminus\text{$\pi_{1}( \Gamma_{n},x_{0})$},\\ f_{*}&\text{on }\pi_{1}(\Gamma_{n},x_{0}),\end{cases}\]
where \(\sigma_{n}:\pi_{1}(\Gamma_{m_{n}},x_{0})\setminus\text{$\pi_{1}(\Gamma_{n},x_ {0})$}\to\pi_{1}(\Gamma_{m_{n}},f(x_{0}))\setminus\text{$f_{*}(\pi_{1}(\Gamma _{n},x_{0}))$}\) is any isomorphism. Such \(\sigma_{n}\) is guaranteed to exist by \((*)\).
Now by Lemma 3.13, for each \(n\) there exists a homotopy equivalence \(\psi_{n}:(\Gamma,x_{0})\to(\Gamma,f(x_{0}))\) such that
\[\psi_{n}=\begin{cases}\operatorname{Id}&\text{on }\Gamma\setminus\Gamma_{m_{n}},\\ f&\text{on }\Gamma_{n}.\end{cases}\]
Also note \(\psi_{n}\) is a _proper_ homotopy equivalence, as it can be defined in pieces as proper maps. Further, \(\psi_{n}\) fixes the ends of \(\Gamma\), because \(f\) does and \(\Gamma_{m_{n}}\setminus\Gamma_{n}\) is compact. One can similarly define its proper homotopy inverse. Hence, for each \(n\) we have \([\psi_{n}]\in\operatorname{PMap}(\Gamma)\).
The subgraphs \(\{\Gamma_{n}\}_{n\in\mathbb{Z}}\) form an exhaustion of \(\Gamma\), so \(\psi_{n}\to f\) in \(\operatorname{PMap}(\Gamma)\). Therefore, for a compact \(K\subset\Gamma\), there exists an \(n^{\prime}\in\mathbb{Z}\) such that \(\psi_{n^{\prime}}f^{-1}\in\mathcal{V}_{K}\). Take \(\psi=\psi_{n^{\prime}}\) and set \(\Gamma_{\mathcal{E}^{c}}=\Gamma_{n^{\prime}}\) and \(\Gamma_{\mathcal{E}}=\overline{\Gamma\setminus\Gamma_{m_{n^{\prime}}}}\). This gives (i) and (ii) by construction.
We now check that (iii) follows from (ii). Let \(\eta\) be a clopen subset of \(E(\Gamma)\) that is disjoint from \(\mathcal{E}\). We will actually check that \(\Phi_{\eta^{c}}(\psi)=\Phi_{\eta^{c}}(f)\). This will imply (iii) by Proposition 3.9 (i).
Note \(\eta\subset\mathcal{E}^{c}\). Now let \(\Gamma_{m}\) be a subgraph from the definition of \(\Phi_{\eta^{c}}\) so that \(\Gamma_{m}\subset\Gamma_{\mathcal{E}^{c}}\). Then there exists \(n<m\) such that \((m,n)\) is admissible for \(\psi\) with respect to the flux map \(\Phi_{\eta^{c}}\). Since \(f=\psi\) on \(\Gamma_{n}\subset\Gamma_{m}\subset\Gamma_{\mathcal{E}^{c}}\) by (ii), we see that \(\Phi_{\eta^{c}}(\psi)=\Phi_{\eta^{c}}(f)\) with the admissible pair of graphs \((\Gamma_{m},\Gamma_{n})\).
**Remark 3.15**.: The reader may wonder why in the proof above we chose to define this sequence of maps and argue via convergence in place of constructing the map \(\psi\) by hand as in [1]. While it is not too difficult to construct a \(\psi\) so that \(\psi f^{-1}\) is the identity on a given compact \(K\), it is significantly more finicky to guarantee that \(\psi f^{-1}\) preserves the complementary components of \(K\). The convergence argument given above allows us to avoid the messy details of this.
**Proposition 3.16**.: _Let \(\Gamma\) be a locally finite, infinite graph with \(E(\Gamma)=E_{\ell}(\Gamma)\), \(|E(\Gamma)|\geq 2\), and \(f\in\operatorname{PMap}(\Gamma)\). If \(\Phi_{\mathcal{E}}(f)=0\) for every clopen subset \(\mathcal{E}\) of \(E(\Gamma)\), then \(f\in\overline{\operatorname{PMap}_{c}(\Gamma)}\)._
Proof.: Assume \(f\in\operatorname{PMap}(\Gamma)\) has \(\Phi_{\mathcal{E}}(f)=0\) for every nonempty proper clopen subset \(\mathcal{E}\) of the end space \(E(\Gamma)\). Given any compact \(K\subset\Gamma\) we will find \(\psi\in\operatorname{PMap}_{c}(\Gamma)\) such that \(\psi f^{-1}\in\mathcal{V}_{K}\).
Without loss of generality we may enlarge \(K\) so that it is connected, has at least two complementary components, and every complementary component of \(K\) is infinite. Then the complement of \(K\) induces a partition of the ends. Write
\[\mathcal{P}_{K}=\mathcal{E}_{1}\sqcup\ldots\sqcup\mathcal{E}_{n}\]
for this partition.
Apply Lemma 3.14 to \(f\) using \(\mathcal{E}_{1}\) to obtain \(\psi_{1}\). Note that by (iii) we still have \(\Phi_{\mathcal{E}_{2}}(\psi_{1})=\Phi_{\mathcal{E}_{2}}(f)=0\). Thus we can apply the lemma again to \(\psi_{1}\) using \(\mathcal{E}_{2}\) to obtain a \(\psi_{2}\). Continue this process recursively to obtain \(\psi_{n}\).
Now, by (i) of Lemma 3.14, there exist \(v_{1},\dots,v_{n}\in\mathcal{V}_{K}\) such that
\[\psi_{i}=\begin{cases}v_{i}\psi_{i-1}&\text{for $1<i\leq n$},\\ v_{1}f&\text{for $i=1$}.\end{cases}\]
Putting these together gives \(\psi_{n}f^{-1}=v_{n}v_{n-1}\cdots v_{1}\in\mathcal{V}_{K}\) as \(\mathcal{V}_{K}\) is a subgroup.
It remains to check that \(\psi_{n}\in\operatorname{PMap}_{c}(\Gamma)\). However, by (ii), we have that \(\psi_{n}\) is equal to the identity on \(\bigcup_{i=1}^{n}\Gamma_{\mathcal{E}_{i}}\). This exactly covers all of the ends of \(\Gamma\) as \(\mathcal{P}_{K}\) was a partition of the ends. Therefore we see that \(\psi_{n}\) is supported on \(\overline{\bigcap_{i=1}^{n}\Gamma\setminus\Gamma_{\mathcal{E}_{i}}}\), a compact set. Taking \(\psi=\psi_{n}\) gives the desired compact approximation of \(f\).
Finally, since the \(K\) above was taken to be arbitrary, starting with a compact exhaustion of \(\Gamma\) we can apply the above to obtain a sequence of compactly supported maps that converge to \(f\).
Now we turn to the case where \(\Gamma\) is not necessarily a core graph.
Proof of Theorem 3.11.: The forward direction follows from Proposition 3.12.
For the backward direction, we first homotope \(f\) so that it fixes the vertices of \(\Gamma\). Then we see that we can write \(f=f_{T}f_{c}\) where \(f_{T}\) has support on \(\Gamma\setminus\Gamma_{c}\) and \(f_{c}\) has support on \(\Gamma_{c}\).
We can see that \(f_{T}\in\overline{\operatorname{PMap}_{c}(\Gamma)}\). Indeed, enumerate the components of \(\Gamma\setminus\Gamma_{c}\) as \(\{R_{i}\}_{i\in I}\), where each \(R_{i}\) is a tree and \(I\) is either finite or \(I=\mathbb{N}\). Then we can decompose \(f_{T}=\prod_{i\in I}f_{i}\), where each \(f_{i}\) has compact support on \(R_{i}\): since \(f_{T}\) is proper, the full pre-image of the cutpoint \(\overline{R_{i}}\cap\Gamma_{c}\) is compact, and \(f_{i}\) can be homotoped to have support contained in the convex hull of this pre-image. Furthermore, the \(f_{i}\) pairwise commute, as each \(f_{i}\) can be homotoped so that it is totally supported away from the support of every other \(f_{j}\). Thus, we see that \(f_{T}\in\overline{\operatorname{PMap}_{c}(\Gamma)}\), as it is realized as the limit of the partial products of the \(f_{i}\).
This also shows that given any flux map \(\Phi_{\mathcal{E}}\) we must have that \(\Phi_{\mathcal{E}}(f_{T})=0\), again by Proposition 3.12. Therefore, given an \(\mathcal{E}\) with \(\Phi_{\mathcal{E}}(f)=0\) we must have that \(\Phi_{\mathcal{E}}(f_{c})=0\) as \(\Phi_{\mathcal{E}}\) is a homomorphism. We can then apply Proposition 3.16 to conclude the desired result.
### Space of flux maps
Before we can prove Theorem B we need to endow the set of flux maps with an algebraic structure. In the surface case, [1] could utilize the first integral (co)homology of separating curves on the surface to give structure to the flux maps they defined. Here we will be using the group of locally constant \(\mathbb{Z}\)-valued functions on \(E_{\ell}(\Gamma)\) in place of the homology of separating curves. We remark that this is really the zeroth Čech cohomology of \(E_{\ell}(\Gamma)\) with coefficients in the constant sheaf \(\mathbb{Z}\). In Section 3.6 we observe that this perspective also works in the surface case.
For a topological space \(X\), we denote by \(\check{C}(X)\) the group of locally constant \(\mathbb{Z}\)-valued functions on \(X\). The group operation is given by addition of functions. We let \(\hat{C}(X)=\check{C}(X)/\mathbb{Z}\), the quotient obtained by identifying the constant functions with zero. We will now give a collection of some facts about \(\hat{C}(E)\) when \(E\) is a compact, totally disconnected, and metrizable space (i.e. a closed subset of a Cantor set).
We identify the Cantor set, \(\mathcal{C}=2^{\mathbb{N}}=\{0,1\}^{\mathbb{N}}\), with the set of infinite binary sequences. A countable basis of clopen sets for the topology is then given by the cylinder sets
\[C_{a_{1}\cdots a_{k}}:=\{(x_{n})\in 2^{\mathbb{N}}\ |\ x_{i}=a_{i},\ i=1, \dots,k\}\]
where \(a_{1}\cdots a_{k}\) is some finite binary sequence of length \(k\). Say such a cylinder set has **width**\(k\). For \(E\) a closed subset of the Cantor set \(\mathcal{C}\), a **cylinder set** of \(E\) is the intersection
of a cylinder set for \(\mathcal{C}\) with \(E\), i.e., a set of the form \(C_{w}\cap E\) where \(w\in 2^{k}\) for some \(k\geq 0\). The standard tree model for the Cantor set is the usual rooted binary tree, and for an arbitrary closed subset \(E\subset\mathcal{C}\) we take the subtree with the end space \(E\). Given a subset, \(A\), of a topological space we let \(\chi_{A}\) denote the indicator function on \(A\).
**Theorem 3.17** (Countable Basis for \(\hat{C}(E)\)).: _Let \(E\) be a compact, totally disconnected, and metrizable space. There exists a countable collection \(\mathcal{A}=\{A_{i}\}_{i\in I}\) of cylinder sets of \(E\) so that_
1. _Any cylinder set_ \(C\) _of_ \(E\) _that is not in_ \(\mathcal{A}\) _can be written as_ \(C=A_{0}\setminus(A_{1}\sqcup\dots\sqcup A_{n})\) _for some_ \(A_{0}\in\mathcal{A}\)_, and some_ \(A_{j}\in\mathcal{A}\)_, with_ \(A_{j}\subset A_{0}\) _and_ \(A_{j}\cap A_{k}=\emptyset\) _for all distinct_ \(j,k\in\{1,\dots,n\}\)_,_
2. \(\mathcal{B}=\{\chi_{A_{i}}\}_{i\in I}\) _is a free basis for_ \(\hat{C}(E)\)_. In particular,_ \(\hat{C}(E)=\oplus_{i\in I}\mathbb{Z}\)_, and_
3. _for_ \(T\) _the standard tree model of the end space_ \((E,\emptyset)\)_, there exists an injective map_ \(\iota:\mathcal{A}\to T\) _so that_ \(\iota\) _maps into the interior of edges and_ \(\iota(\mathcal{A})\) _cuts the graph into a collection of one ended graphs._
Proof.: Note that if \(|E|=n<\infty\) then the result is immediate by taking \(\mathcal{A}\) to be the collection of all individual ends except one. Hence, we will assume that \(E\) is infinite.
We first prove the result for \(E=\mathcal{C}\) the Cantor set. We define \(\mathcal{A}^{\prime}\) to be the set of all cylinder sets consisting of cylinders of the form \(C_{a_{1}\cdots a_{k-1}0}\) together with the whole space \(\mathcal{C}\). That is,
\[\mathcal{A}^{\prime}=\{\mathcal{C},C_{0},C_{00},C_{10},C_{000},C_{100},C_{010 },C_{110},\dots\}\]
We claim that \(\{\chi_{A}\}_{A\in\mathcal{A}^{\prime}}\) forms a free basis for \(\check{C}(\mathcal{C})\). We first have
**Claim**.: _For every \(f\in\check{C}(\mathcal{C})\), there exist finitely many disjoint clopen subsets \(B_{1},\dots,B_{n}\) and integers \(b_{1},\dots,b_{n}\) such that_
\[f=\sum_{j=1}^{n}b_{j}\chi_{B_{j}}.\]
Proof.: Suppose \(f\) is a locally constant function on \(\mathcal{C}\) with _infinitely many_ distinct \(\mathbb{Z}\)-values \(b_{1},b_{2},\dots\). Then \(\{f^{-1}(b_{j})\}_{j=1}^{\infty}\) forms a clopen cover of \(\mathcal{C}\) which does not have a finite subcover, contradicting the compactness of \(\mathcal{C}\). Therefore, \(f\) can assume at most finitely different values in \(\mathbb{Z}\), and taking \(B_{j}=f^{-1}(b_{j})\) proves the claim.
Thus we can check that \(\{\chi_{A}\}_{A\in\mathcal{A}^{\prime}}\) generates \(\check{C}(\mathcal{C})\) by verifying that for an arbitrary clopen set \(B\) of \(\mathcal{C}\), we can write \(\chi_{B}\) as a finite linear combination of elements from \(\{\chi_{A}\}_{A\in\mathcal{A}^{\prime}}\). Since the cylinder sets form a clopen basis for the topology, we only need to check when \(B\) is a cylinder set. Take \(B=C_{a_{1}\cdots a_{k}}\) for some \(k>0\) and \(a_{1}\cdots a_{k}\in 2^{k}\). Then we have either \(B\in\mathcal{A}^{\prime}\) or \(a_{k}=1\). Supposing the latter, let
\[m=\begin{cases}0&\text{if }a_{1}=\dots=a_{k}=1,\\ \max\{j|a_{j}=0\}&\text{otherwise}\end{cases}\]
Then we can write
\[\chi_{B}=\chi_{C_{a_{1}\cdots a_{k}}}=\chi_{C_{a_{1}\cdots a_{m}}}-\left(\sum _{j=m}^{k-1}\chi_{C_{a_{1}\cdots a_{j}0}}\right),\]
where we take \(a_{1}\cdots a_{m}\) as an empty sequence when \(m=0\). Thus we see that \(\{\chi_{A}\}_{A\in\mathcal{A}^{\prime}}\) generates \(\check{C}(\mathcal{C})\). This also shows that property (1) holds.
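For example (a check we include only for illustration), take \(B=C_{11}\), so that \(k=2\), \(a_{1}a_{2}=11\) and \(m=0\). The identity above reads

\[\chi_{C_{11}}=\chi_{\mathcal{C}}-\chi_{C_{0}}-\chi_{C_{10}},\]

which simply records the disjoint decomposition \(\mathcal{C}=C_{0}\sqcup C_{10}\sqcup C_{11}\).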
Next we verify that the set \(\mathcal{B}^{\prime}:=\{\chi_{A}\}_{A\in\mathcal{A}^{\prime}}\) is linearly independent. Suppose
\[0=\sum_{j=1}^{n}a_{j}\chi_{A_{j}},\]
for some distinct \(A_{1},\ldots,A_{n}\in\mathcal{A}^{\prime}\). We will proceed by induction on \(n\). The case when \(n=1\) is straightforward. Now let \(n>1\) and without loss of generality we can assume that \(A_{n}\) is of minimal width. Let \(w\) be the word defining \(A_{n}\), i.e. \(A_{n}=C_{w}\). Note that \(w\) may be the empty word (when \(A_{n}=\mathcal{C}\)). Consider the sequence \(w\bar{1}\) consisting of the starting word \(w\) followed by the constant infinite sequence of \(1\)s. Then by minimality of \(w\), we have
\[0=\sum_{j=1}^{n}a_{j}\chi_{A_{j}}(w\bar{1})=a_{n}.\]
Therefore, we have \(0=\sum_{j=1}^{n}a_{j}\chi_{A_{j}}=\sum_{j=1}^{n-1}a_{j}\chi_{A_{j}}\), so by induction on \(n\) we see that \(a_{j}=0\) for all \(j\). Thus we see that \(\mathcal{B}^{\prime}\) is a free basis for \(\check{C}(\mathcal{C})\). Taking \(\mathcal{A}:=\mathcal{A}^{\prime}\setminus\{\mathcal{C}\}=\{C_{0},C_{00},C_{10},C_{000},C_{100},C_{010},C_{110},\ldots\}\), the free basis \(\mathcal{B}^{\prime}\) for \(\check{C}(\mathcal{C})\) descends (allowing for a slight abuse of notation) to a free basis \(\mathcal{B}:=\{\chi_{A}\}_{A\in\mathcal{A}}\) for \(\hat{C}(\mathcal{C})\), proving (2).
Finally, we can define \(\iota:\mathcal{A}\to T\) by using the labels on each of the cylinder sets to map each cylinder set to the midpoint of its corresponding edge in the standard binary tree model of the Cantor set. See Figure 4 for a picture of the map. The components of \(T\setminus\iota(\mathcal{A})\) each contain exactly one end of \(T\).
Now to go from the Cantor set to a general infinite end space we identify \(E\) with a subspace of \(\mathcal{C}\) and take \(\mathcal{A}=\{C_{0}\cap E,C_{00}\cap E,C_{10}\cap E,\ldots\}\), deleting empty or duplicated sets if necessary. Then the set \(\{\chi_{A}\}_{A\in\mathcal{A}}\) will still determine a free basis for \(\hat{C}(E)\).
Apply this theorem to \(E_{\ell}(\Gamma)\) in order to obtain the set \(\mathcal{A}=\{A_{i}\}_{i\in I}\). We now define the homomorphism
\[\Pi:\operatorname{PMap}(\Gamma) \to\prod_{i\in I}\mathbb{Z}\] \[f \mapsto(\Phi_{A_{i}}(f))_{i\in I}.\]
Figure 4. The image of the map \(\iota:\mathcal{A}\to T\) is given in blue.
We will check that this map is surjective and has kernel exactly \(\overline{\operatorname{PMap}_{c}(\Gamma)}\), i.e. it forms the following short exact sequence:
\[1\longrightarrow\overline{\operatorname{PMap}_{c}(\Gamma)}\longrightarrow \operatorname{PMap}(\Gamma)\stackrel{{\Pi}}{{\longrightarrow}}\prod_{ i\in I}\mathbb{Z}\longrightarrow 1.\]
**Lemma 3.18**.: _Let \(\mathcal{E}\) be a clopen subset of \(E(\Gamma)\) so that \(\mathcal{E}\cap E_{\ell}(\Gamma)\) is a proper nontrivial subset. If \(f\in\operatorname{PMap}(\Gamma)\) satisfies \(\Phi_{A}(f)=0\) for all \(A\in\mathcal{A}\), then \(\Phi_{\mathcal{E}}(f)=0\) as well._
Proof.: We first note that \(\mathcal{E}\) can be written as a disjoint union of finitely many cylinder sets. Thus, by Proposition 3.9 (ii) it suffices to check when \(\mathcal{E}\) is a cylinder set \(C\) of \(E(\Gamma)\). Assume that \(f\in\operatorname{PMap}(\Gamma)\) satisfies \(\Phi_{A_{i}}(f)=0\) for all \(i\in I\). Then \(C\cap E_{\ell}(\Gamma)\) is again a cylinder set of \(E_{\ell}\). Applying property (1) of Theorem 3.17 we have either \(C\in\mathcal{A}\), or \(C=A_{0}\setminus(\bigsqcup_{j=1}^{n}A_{j})\) for some \(A_{0}\in\mathcal{A}\) and \(A_{j}\in\mathcal{A}\). If \(C\in\mathcal{A}\), then we conclude \(\Phi_{C}(f)=0\). For the other case, we can apply Proposition 3.9 (ii) and Proposition 3.9 (iii) to write
\[\Phi_{C}(f)=\Phi_{A_{0}}(f)-\sum_{j=1}^{n}\Phi_{A_{j}}(f)=0-0=0.\]
**Corollary 3.19**.: _For \(\Gamma\) and \(\Pi\) as above, \(\ker(\Pi)=\overline{\operatorname{PMap}_{c}(\Gamma)}\)._
Proof.: The forward direction of Theorem 3.11 implies \(\ker(\Pi)\supset\overline{\operatorname{PMap}_{c}(\Gamma)}\). On the other hand, Lemma 3.18 together with the backward direction of Theorem 3.11 imply the other containment \(\ker(\Pi)\subset\overline{\operatorname{PMap}_{c}(\Gamma)}\).
Next, we will build a section to show \(\Pi\) is surjective, and more importantly, this sequence splits. This gives us our desired semidirect product decomposition in Theorem B.
**Proposition 3.20**.: _There exists an injective homomorphism \(\hat{\iota}:\prod_{i\in I}\mathbb{Z}\to\operatorname{PMap}(\Gamma)\) so that \(\Pi\circ\hat{\iota}\) is the identity on \(\prod_{i\in I}\mathbb{Z}\)._
Proof.: Let \(T\) be the maximal tree of the graph \(\Gamma_{c}\) in standard form. Note that the end space of \(T\) is homeomorphic to \(E_{\ell}(\Gamma)\) and let \(\mathcal{A}=\{A_{i}\}_{i\in I}\) be the set obtained from (2) of Theorem 3.17 applied to the set \(E_{\ell}(\Gamma)\) and \(\iota:\mathcal{A}\to T\) be the map given by property (3) of Theorem 3.17. The closure in \(\Gamma_{c}\) of every complementary component of \(\iota(\mathcal{A})\) is a one-ended subgraph with infinite rank. Call one such component \(\Gamma^{\prime}\). It has at most a countably infinite number of half edges coming from the points of \(\iota(\mathcal{A})\). Now we will modify \(\Gamma^{\prime}\) via a proper homotopy equivalence that fixes \(\partial\Gamma^{\prime}\) so that the new graph has a "grid of loops" above \(\partial\Gamma^{\prime}\). See Figure 5 for how this replacement is done. Such a replacement by a proper homotopy equivalence is possible by the classification of infinite graphs.
After replacing each component of \(\Gamma_{c}\setminus\iota(\mathcal{A})\) we obtain a new graph that is proper homotopy equivalent to the original \(\Gamma_{c}\). We can also extend this proper homotopy equivalence to the entire graph \(\Gamma\), as our proper homotopy equivalence fixes the boundary points of each complementary component of \(\iota(\mathcal{A})\). Now for each \(i\), there are exactly two complementary components whose closures in \(\Gamma_{c}\) contain \(\iota(A_{i})\). Let \(\ell_{i}\in\operatorname{PMap}(\Gamma)\) be the loop shift supported on the two columns of loops sitting above \(\iota(A_{i})\) in these components. Orient the loop shift so that it is shifting towards the end in \(A_{i}\).
Note that each \(\ell_{i}\) has total support disjoint from each other \(\ell_{j}\) so that \(\ell_{i}\ell_{j}=\ell_{j}\ell_{i}\) for all \(i,j\in I\). Therefore, \(\prod_{i\in I}\langle\ell_{i}\rangle<\operatorname{PMap}(\Gamma)\), and we can define the homomorphism
\(\hat{\iota}:\prod_{i\in I}\mathbb{Z}\rightarrow\mathrm{PMap}(\Gamma)\) by
\[\hat{\iota}\left((n_{i})_{i\in I}\right):=\prod_{i\in I}\ell_{i}^{n_{i}}.\]
Figure 5. The new replacement graphs for each component of \(T\setminus\iota(\mathcal{A})\). The top picture shows the case when a component has infinitely many cut points and the bottom for finitely many. Note that above each cut point one sees a “column of loops” within the grid.

It remains to check that \(\Pi\circ\hat{\iota}\) is the identity on \(\prod_{i\in I}\mathbb{Z}\). By the construction of the loop shifts, \(\ell_{i}\) crosses exactly one of the clopen subsets in \(\mathcal{A}\), namely \(A_{i}\). Therefore, we have
\[\Phi_{A_{j}}(\ell_{i})=\delta_{ij}:=\begin{cases}1&\text{if }i=j,\\ 0&\text{if }i\neq j.\end{cases}\]
Now, given any tuple \((n_{i})_{i\in I}\in\prod_{i\in I}\mathbb{Z}\) we compute
\[(\Pi\circ\hat{\iota})\left((n_{i})_{i\in I}\right)=\Pi\left(\prod_{i\in I}\ell _{i}^{n_{i}}\right)=\left(\Phi_{A_{j}}\left(\prod_{i\in I}\ell_{i}^{n_{i}} \right)\right)_{j\in I}=(n_{i})_{i\in I}.\qed\]
Proof of Theorem B.: Corollary 3.19 and Proposition 3.20 above give the desired splitting short exact sequence \(1\longrightarrow\overline{\mathrm{PMap}_{c}(\Gamma)}\longrightarrow\mathrm{ PMap}(\Gamma)\longrightarrow\mathbb{Z}^{\alpha}\longrightarrow 1\), with \(\alpha=|\mathcal{A}|\).
### The rank of integral cohomology
As pointed out in Remark 3.10, we define \(\Phi_{\emptyset}=\Phi_{E_{\ell}}\equiv 0\).
**Lemma 3.21**.: _Let \(\{A_{i}\}_{i\in I}\) be a collection of clopen subsets of \(E_{\ell}(\Gamma)\) such that \(\mathcal{B}=\{\chi_{A_{i}}\}_{i\in I}\) is a free basis for \(\hat{C}(E_{\ell})\) as in Theorem 3.17. Then the map_
\[\Theta:\hat{C}(E_{\ell}(\Gamma))\longrightarrow H^{1}(\mathrm{PMap}(\Gamma);\mathbb{Z}),\qquad\sum_{i\in I}n_{i}\chi_{A_{i}}\longmapsto\sum_{i\in I}n_{i}\Phi_{A_{i}}\]
_is a well-defined injective homomorphism._
Proof.: Since \(\mathcal{B}\) is a free basis for \(\hat{C}(E_{\ell})\), the map \(\chi_{A_{i}}\mapsto\Phi_{A_{i}}\) on \(\mathcal{B}\) extends to a well-defined homomorphism on the whole group \(\hat{C}(E_{\ell})\). To see \(\Theta\) is injective, suppose \(\Theta(\sum_{i}n_{i}\chi_{A_{i}})=\sum_{i}n_{i}\Phi_{A_{i}}=0\) for \(\chi_{A_{i}}\in\mathcal{B}\). Then for each \(j\) that arises as an index of the summation, we evaluate the sum at the loop shift \(\ell_{j}\) constructed in the proof of Proposition 3.20:
\[0=\sum_{i}n_{i}\Phi_{A_{i}}(\ell_{j})=n_{j}\Phi_{A_{j}}(\ell_{j})=n_{j},\]
which implies that \(\sum_{i}n_{i}\chi_{A_{i}}\equiv 0\), concluding that \(\Theta\) is injective.
Here we collect relevant results on the first homology of the pure mapping class group of graphs of rank \(n\) with \(s\) rays.
**Fact 3.22** ([13, Theorem 1.1]).: \(H_{1}(\mathrm{Aut}(F_{n});\mathbb{Q})=0\) _for all \(n\geq 1\)._
**Fact 3.23** ([13, Section 4]).: _For \(n\geq 3\) and \(s\geq 1\),_
\[H_{1}(F_{n}^{s-1}\rtimes\mathrm{Aut}(F_{n});\mathbb{Z})\cong H_{1}(F_{n}^{s} \rtimes\mathrm{Aut}(F_{n});\mathbb{Z}).\]
_This still holds for \(n=1,2\) if \(s\geq 2\)._
**Proposition 3.24**.: \(H^{1}(\mathrm{PMap}_{c}(\Gamma);\mathbb{Z})=0\) _for every locally finite, infinite graph \(\Gamma\)._
Proof.: Let \(\{\Gamma_{k}\}\) be a compact exhaustion of \(\Gamma\). Then \(\mathrm{PMap}_{c}(\Gamma)\) is a direct limit of \(\mathrm{PMap}(\Gamma_{k})\)'s, each of which is isomorphic to \(F_{n_{k}}^{e_{k}}\rtimes\mathrm{Aut}(F_{n_{k}})\) for some \(e_{k}\geq 0\) and \(n_{k}\geq 1\) (Recall Remark 2.4). Since the direct limit commutes with \(H^{1}(-;\mathbb{Z})\equiv\mathrm{Hom}(-,\mathbb{Z})\), it suffices to show that groups of the form \(F_{n}^{e}\rtimes\mathrm{Aut}(F_{n})\) have trivial first cohomology. We first show \(H^{1}(\mathrm{Aut}(F_{n});\mathbb{Z})=0\). By the universal coefficient theorem for cohomology,
\[0\longrightarrow\mathrm{Ext}\left(H_{0}(\mathrm{Aut}(F_{n});\mathbb{Z}),\mathbb{Z}\right)\longrightarrow H^{1}(\mathrm{Aut}(F_{n});\mathbb{Z})\longrightarrow\mathrm{Hom}(H_{1}(\mathrm{Aut}(F_{n});\mathbb{Z}),\mathbb{Z})\longrightarrow 0,\]
where \(\mathrm{Ext}\left(H_{0}(\mathrm{Aut}(F_{n});\mathbb{Z}),\mathbb{Z}\right)=0\) as \(H_{0}(\mathrm{Aut}(F_{n});\mathbb{Z})\cong\mathbb{Z}\) is free. Moreover, \(H_{1}(\mathrm{Aut}(F_{n});\mathbb{Z})\) is finitely generated, and \(H_{1}(\mathrm{Aut}(F_{n});\mathbb{Q})=0\) by Fact 3.22, so \(H_{1}(\mathrm{Aut}(F_{n});\mathbb{Z})\) is torsion and \(\mathrm{Hom}(H_{1}(\mathrm{Aut}(F_{n});\mathbb{Z}),\mathbb{Z})=0\). It follows that \(H^{1}(\mathrm{Aut}(F_{n});\mathbb{Z})=0\).
On the other hand, repeatedly applying Fact 3.23 together with the universal coefficient theorem for homology shows that for \(n\geq 3\),
\[H_{1}(F_{n}^{s}\rtimes\operatorname{Aut}(F_{n});\mathbb{Q})=H_{1}(F_{n}^{s-1} \rtimes\operatorname{Aut}(F_{n});\mathbb{Q})=\ldots=H_{1}(\operatorname{Aut}( F_{n});\mathbb{Q})=0.\]
The last equality comes from Fact 3.22. Since \(F_{n}^{s}\rtimes\operatorname{Aut}(F_{n})\) is finitely generated, the same universal coefficient argument as above then gives \(H^{1}(F_{n}^{s}\rtimes\operatorname{Aut}(F_{n});\mathbb{Z})=0\). For \(n=1,2\), the argument is the same, except we reduce the problem of showing \(H^{1}(F_{n}^{s-1}\rtimes\operatorname{Aut}(F_{n});\mathbb{Z})=0\) to checking \(H_{1}(F_{n}\rtimes\operatorname{Aut}(F_{n});\mathbb{Q})=0\). One can check \(\mathbb{Z}\rtimes\mathbb{Z}_{2}\) and \(F_{2}\rtimes\operatorname{Aut}(F_{2})\) have finite abelianizations to conclude this. (See e.g. [1, Corollary 2] for a finite presentation of \(\operatorname{Aut}(F_{2})\).) This completes the proof of \(H^{1}(\operatorname{PMap}_{c}(\Gamma);\mathbb{Z})=0\).
**Theorem 3.25**.: _The map \(\Theta\) in Lemma 3.21 is an isomorphism._
Proof.: We only need to check the surjectivity of \(\Theta\). Pick \(\phi\in H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})=\operatorname{Hom}( \operatorname{PMap}(\Gamma),\mathbb{Z})\). By Proposition 3.24, we have \(\phi(\operatorname{PMap}_{c}(\Gamma))=\{0\}\). By Dudley's automatic continuity [1], \(\phi\) is continuous, so \(\phi(\overline{\operatorname{PMap}_{c}(\Gamma)})=\{0\}\). Recall the semidirect product decomposition \(\operatorname{PMap}(\Gamma)\cong\overline{\operatorname{PMap}_{c}(\Gamma)}\rtimes L\) from Theorem B, where \(L\cong\prod_{i\in I}\langle\ell_{i}\rangle\), the product of commuting loop shifts. Furthermore, these loop shifts are dual to the collection of \(\{\Phi_{A_{i}}\}_{i\in I}\subset H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\) so that \(\Phi_{A_{j}}(\ell_{i})=\delta_{ij}\). Since \(\phi\) is zero on the \(\overline{\operatorname{PMap}_{c}(\Gamma)}\)-factor, it follows that \(\phi\) is completely determined by its value on \(L\). Note also that \(L\cong\prod_{i\in I}\mathbb{Z}\) so that \(H^{1}(L;\mathbb{Z})\cong\oplus_{i\in I}\mathbb{Z}\) where a basis for \(H^{1}(L;\mathbb{Z})\) is given exactly by the set \(\{\Phi_{A_{i}}\}_{i\in I}\), as in Theorem 3.17(2). Hence, \(\phi=\phi|_{L}\in H^{1}(L;\mathbb{Z})\) can be described by a finite linear combination of \(\Phi_{A_{i}}\)'s. Such a finite linear combination is the image of a finite linear combination of \(\chi_{A_{i}}\) under \(\Theta\), so \(\Theta\) is surjective.
**Corollary 3.26** (Corollary C, revisited).: _For every locally finite, infinite graph \(\Gamma\),_
\[\operatorname{rk}\big{(}H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\big{)}= \begin{cases}0&\text{if }|E_{\ell}|\leq 1\\ n-1&\text{if }2\leq|E_{\ell}|=n<\infty\\ \aleph_{0}&\text{otherwise}.\end{cases}\]
Proof.: This follows from the isomorphism \(\Theta:\hat{C}(E_{\ell}(\Gamma))\cong H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\) in Theorem 3.25.
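For instance (our illustration), if \(\Gamma\) is the biinfinite chain of loops (a line with one loop attached at each integer vertex), then \(|E_{\ell}|=2\) and the corollary gives

\[H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\cong\mathbb{Z},\]

generated by the flux map \(\Phi_{\{+\infty\}}\), which is dual to the primitive loop shift \(\ell\) pushing the loops toward \(+\infty\), in the sense that \(\Phi_{\{+\infty\}}(\ell)=1\).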
### Relation to surfaces
Aramayona-Patel-Vlamis in [1] obtain a result similar to Theorem 3.25 in the infinite-type _surface_ case using the homology of separating curves in place of \(\hat{C}(E_{\ell}(\Gamma))\). Here we show that these approaches can be unified, as they each rely solely on the subspace of ends accumulated by loops or genus. Let \(S\) be an infinite-type surface and let \(\hat{S}\) be the surface obtained from \(S\) by forgetting the planar ends of \(S\). Let \(H_{1}^{\text{sep}}(\hat{S};\mathbb{Z})\) be the subgroup of \(H_{1}(\hat{S};\mathbb{Z})\) generated by homology classes that have separating simple closed curves of \(\hat{S}\) as representatives. Note that when \(S\) has only planar ends, \(H_{1}^{\text{sep}}(\hat{S};\mathbb{Z})\) is trivial.
**Theorem 3.27** ([1, Theorem 4] for genus \(\geq 2\), [1, Theorem 1.1] for genus \(1\)).: _Let \(S\) be an infinite-type surface of genus at least one. Then_
\[H^{1}(\operatorname{PMap}(S);\mathbb{Z})\cong H_{1}^{\text{sep}}(\hat{S}; \mathbb{Z}).\]
Let \(E_{g}(S)\) denote the space of ends of \(S\) accumulated by genus (i.e., the non-planar ends).
**Proposition 3.28**.: _Let \(S\) be an infinite-type surface. Then_
\[H_{1}^{\text{sep}}(\hat{S};\mathbb{Z})\cong\hat{C}(E_{g}(S)).\]
Proof.: We first note that by definition, \(E_{g}(S)=E(\hat{S})\). Let \(v\in H^{sep}_{1}(\hat{S};\mathbb{Z})\) be a primitive element, i.e. \(v\) has a representative \(\gamma\) that is an oriented and separating simple closed curve. Now \(v\) determines a partition of \(E(\hat{S})\) into two clopen subsets, \(v^{+}\), those ends to the right of \(\gamma\), and \(v^{-}\), those ends to the left of \(\gamma\). Note that these are proper subsets if and only if \(v\neq 0\) if and only if \(\chi_{v^{+}}\neq 0\) in \(\hat{C}(E)\). Define
\[\Xi(v):=\chi_{v^{+}}\in\hat{C}(E),\]
for each nonzero primitive element \(v\) of \(H^{sep}_{1}(\hat{S};\mathbb{Z})\). This linearly extends to define an isomorphism \(\Xi:H^{sep}_{1}(\hat{S};\mathbb{Z})\xrightarrow{\sim}\hat{C}(E_{g}(S))\).
**Corollary 3.29**.: _Let \(S\) be an infinite-type surface of genus at least one and \(\Gamma\) be a locally finite, infinite graph. If \(E_{g}(S)\) is homeomorphic to \(E_{\ell}(\Gamma)\), then there is a natural isomorphism between \(H^{1}(\operatorname{PMap}(S);\mathbb{Z})\) and \(H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\)._
Proof.: We first note that if \(E_{g}(S)\) is empty (i.e. \(S\) has finite genus), then \(H^{1}(\operatorname{PMap}(S);\mathbb{Z})\) is trivial by [1, Theorem 1] and [1, Theorem 1.1]. Similarly, if \(E_{\ell}(\Gamma)\) is empty, then \(H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\) is trivial by Proposition 3.2 and Proposition 3.24.
Otherwise, the isomorphism is obtained by composing the maps from Theorem 3.25, Theorem 3.27, and Proposition 3.28:
\[H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\stackrel{{\Theta}}{ {\cong}}\hat{C}(E_{\ell}(\Gamma))\cong\hat{C}(E_{g}(S))\stackrel{{ \Xi}}{{\cong}}H^{sep}_{1}(\hat{S};\mathbb{Z})\cong H^{1}( \operatorname{PMap}(S);\mathbb{Z}).\]
## 4. CB generation classification
As an application of Theorem 3.11 and Theorem B, in this section we obtain Theorem A, the classification of the locally finite, infinite graphs whose pure mapping class groups admit CB generating sets. We restate the theorem here for convenience.
**Theorem 4.1** (Theorem A, revisited).: _Let \(\Gamma\) be a locally finite, infinite graph. Then \(\operatorname{PMap}(\Gamma)\) is CB generated if and only if either \(\Gamma\) is a tree, or satisfies the following:_
1. \(\Gamma\) _has finitely many ends accumulated by loops, and_
2. _there is no accumulation point in_ \(E\setminus E_{\ell}\)_._
The only if direction of Theorem A comes from [1]:
**Proposition 4.2** ([1, Theorem 6.1]).: _Let \(\Gamma\) be a locally finite, infinite graph. If \(\operatorname{rk}(\Gamma)>0\) and \(E\setminus E_{\ell}\) has an accumulation point, then \(\operatorname{PMap}(\Gamma)\) is not CB-generated._
**Proposition 4.3** ([1, Theorem 8.2]).: _Let \(\Gamma\) be a locally finite, infinite graph. If \(\Gamma\) has infinitely many ends accumulated by loops, then \(\operatorname{PMap}(\Gamma)\) is not locally CB. In particular, \(\operatorname{PMap}(\Gamma)\) is not CB-generated._
Now we show that those conditions are also sufficient for CB-generation. First, recall by Proposition 2.6 that when \(\Gamma\) is a tree, \(\operatorname{PMap}(\Gamma)\) is the trivial group. We proceed to show that conditions (1) and (2) imply that \(\operatorname{PMap}(\Gamma)\) is CB generated. We start with the case where \(\Gamma\) has finite rank and satisfies Condition (2):
**Proposition 4.4**.: _Let \(\Gamma\) be a locally finite, infinite graph. If \(\Gamma\) has finite rank with no accumulation point in \(E\), then \(\operatorname{PMap}(\Gamma)\) is finitely generated._
Proof.: Note in this case \(E_{\ell}\) is the empty set, so having no accumulation point in \(E\setminus E_{\ell}\) is equivalent to having a finite end space. Hence \(\operatorname{PMap}(\Gamma)\) is isomorphic to one of \(\operatorname{Out}(F_{n}),\operatorname{Aut}(F_{n})\) or \(F_{n}^{e}\rtimes\operatorname{Aut}(F_{n})\) for some \(e=|E|-1\geq 1\), all of which are finitely generated, concluding the proof.
Now assume \(\Gamma\) has infinite rank but finitely many ends accumulated by loops with no accumulation point in \(E\setminus E_{\ell}\). As in Remark 1.1, \(\Gamma\) can be realized as a finite wedge sum of rays, Loch Ness monster graphs (infinite rank graph with end space \((E,E_{\ell})\cong(\{*\},\{*\})\)), and Millipede monster graphs (infinite rank graph with end space \((E,E_{\ell})\cong(\mathbb{N}\cup\{\infty\},\{\infty\})\)). Then \(\Gamma\) is characterized by the triple \((r,l,m)\), where \(r\) is the number of ray summands, \(l\) is the number of Loch Ness monster summands, and \(m\) is the number of Millipede monster summands. Then \(\Gamma\) is as in Figure 6. Note that this triple is not unique; in fact, if \(m>0\) then we do not need to keep track of \(r\), as any additional ray can simply be moved via a proper homotopy into a Millipede monster summand. However, in order to avoid a case-by-case analysis we prove that \(\operatorname{PMap}(\Gamma)\) is CB-generated for _any_ triple \((r,l,m)\). Note that we already know by Theorem 2.16 that both the Loch Ness monster graph, \((0,1,0)\), and the Millipede monster graph, \((0,0,1)\), have CB and thus CB-generated pure mapping class groups. Therefore we will ignore these two graphs throughout this section.
The foundation for our choice of CB-generating set for \(\operatorname{PMap}(\Gamma)\) will be the set \(\mathcal{V}_{K}\), where \(K\) is the wedge point as in Figure 6. Recall that an appropriate choice of a compact set \(K\) provides a CB neighborhood of the identity certifying that \(\operatorname{PMap}(\Gamma)\) is locally CB.
Figure 6. The graphs \(\Gamma\) that we prove have a CB-generating pure mapping class group. Each such \(\Gamma\) has a single wedge point \(K\) and \(\Gamma\setminus K\) has \(r\) ray components, \(l\) Loch Ness monster components, and \(m\) Millipede monster components.
**Proposition 4.5** ([4, Proposition 8.3]).: _Let \(\Gamma\) be a locally finite, infinite graph with finitely many ends accumulated by loops. Then \(\operatorname{PMap}(\Gamma)\) is locally CB if and only if \(\Gamma\setminus\Gamma_{c}\) has only finitely many components whose ends space is infinite. Moreover, for any choice of connected compact subgraph \(K\) whose complementary components are either trees or monster graphs, \(\mathcal{V}_{K}\) is a CB neighborhood of the identity in \(\operatorname{PMap}(\Gamma)\)._
We remark that the moreover statement is absent in [4, Proposition 8.3]; however, it can be deduced readily from the proof. We thus have that our choice of \(\mathcal{V}_{K}\) is CB. This is the starting point for our CB generating set; we now describe how to choose the remaining elements.
Enumerate each of the ray summands of \(\Gamma\) as \(R_{1},\ldots,R_{r}\), the Loch Ness monster summands as \(L_{1},\ldots,L_{l}\), and the Millipede monster summands as \(M_{1},\ldots,M_{m}\) (skip the enumeration if there are no summands of a given type). We also sequentially label the loops in \(L_{i}\) by \(a_{i,j}\) where \(a_{i,1}\) is the loop closest to \(K\). We similarly label the loops in \(M_{i}\) by \(b_{i,j}\). For each \(R_{i}\) let \(I_{i}\) be an interval in the interior of \(R_{i}\). Then we have the following finite collection of word maps:
\[W:=\{\phi_{(a_{1,1},I_{i})}\}_{i=1}^{r}.\]
If \(l=0\) then we use \(W:=\{\phi_{(b_{1,1},I_{i})}\}_{i=1}^{r}\) instead. Note we cannot have \(l=m=0\) as \(\Gamma\) has infinite rank. If \(r=0\), we set \(W:=\emptyset\).
Next, we have the following finite collection of loop swaps:
\[B:= \{\alpha_{ij}:=\text{swaps }a_{i,1}\leftrightarrow a_{j,1}\ |\ 1 \leq i<j\leq l\}\] \[\cup\{\beta_{ij}:=\text{swaps }b_{i,1}\leftrightarrow b_{j,1}\ |\ 1 \leq i<j\leq m\}\] \[\cup\{\gamma_{ij}:=\text{swaps }a_{i,1}\leftrightarrow b_{j,1}\ |\ 1 \leq i\leq l,\ 1\leq j\leq m\}.\]
In words, \(B\) is the collection of all loop swaps between loops that are adjacent to \(K\).
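For concreteness (a count we add only for illustration), in the configuration of Figure 7, where \(l=2\) and \(m=3\), the set \(B\) consists of

\[\binom{2}{2}+\binom{3}{2}+2\cdot 3=1+3+6=10\]

loop swaps.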
Finally, we need a finite collection of loop shifts. The graph \(\Gamma\) has only finitely many ends accumulated by loops, so by Corollary C, \(H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\) has finite rank. Let \(H\) be a finite collection of primitive loop shifts dual to a finite basis of \(H^{1}(\operatorname{PMap}(\Gamma);\mathbb{Z})\).
We claim that the set
\[\mathcal{S}:=\mathcal{V}_{K}\cup W\cup B\cup H\]
is a CB generating set for \(\operatorname{PMap}(\Gamma)\). Note that \(\mathcal{S}\) is CB since \(\mathcal{V}_{K}\) is CB by Proposition 4.5 and each of \(W,B\), and \(H\) is simply a finite set. Thus we only need to verify that \(\mathcal{S}\) is a generating set for \(\operatorname{PMap}(\Gamma)\). We will first check that \(\mathcal{S}\) generates \(\operatorname{PMap}_{c}(\Gamma)\).
**Lemma 4.6**.: _If \(K^{\prime}\subset\Gamma\) is any connected compact subset of \(\Gamma\), then \(\operatorname{PMap}(K^{\prime})\subset\langle\mathcal{S}\rangle\)._
Before we give the proof of this lemma we review finite generating sets for \(\operatorname{Aut}(F_{n})\). Let \(F_{n}\) be a free group of rank \(n\), and denote by \(a_{1},\ldots,a_{n}\) its free generators. In 1924, Nielsen [10] gave a finite presentation of \(\operatorname{Aut}(F_{n})\), with the generating set \(\{\tau_{i}\}_{i=1}^{n}\cup\{\sigma_{ij},\lambda_{ij},\rho_{ij}\}_{1\leq i\neq j\leq n}\), where:
\[\tau_{i}=\begin{cases}a_{i}\mapsto a_{i}^{-1},\\ a_{j}\mapsto a_{j}\qquad\text{for }j\neq i,\end{cases}\qquad\sigma_{ij}=\begin{cases}a_{i}\leftrightarrow a_{j},\\ a_{k}\mapsto a_{k}\qquad\text{for }k\neq i,j,\end{cases}\]
\[\lambda_{ij}=\begin{cases}a_{i}\mapsto a_{j}a_{i},\\ a_{k}\mapsto a_{k}\qquad\text{for }k\neq i,\end{cases}\qquad\rho_{ij}=\begin{cases}a_{i}\mapsto a_{i}a_{j},\\ a_{k}\mapsto a_{k}\qquad\text{for }k\neq i.\end{cases}\]
We call \(\tau_{i}\) a **flip**, \(\sigma_{ij}\) a **transposition**, and \(\lambda_{ij},\rho_{ij}\)**left/right Nielsen automorphisms** respectively. In fact, Armstrong-Forrest-Vogtmann [1, Theorem 1] reduced this generating set to consist only of involutions:
\[\{\tau_{i}\}_{i=1}^{n}\cup\{\sigma_{i,i+1}\}_{i=1}^{n-1}\cup\{\tau_{2}\lambda_{12}\}.\tag{$\dagger$}\]
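For illustration (this verification is ours), the Nielsen generators are recovered from the involutions in \((\dagger)\) as follows. Since \(\tau_{2}\) and \(\tau_{2}\lambda_{12}\) are involutions, \(\lambda_{12}=\tau_{2}(\tau_{2}\lambda_{12})\). Moreover, applying \(\tau_{1}\), then \(\lambda_{12}^{-1}\), then \(\tau_{1}\) sends

\[a_{1}\longmapsto a_{1}^{-1}\longmapsto a_{1}^{-1}a_{2}\longmapsto a_{1}a_{2}\]

and fixes \(a_{2}\), so \(\rho_{12}=\tau_{1}\lambda_{12}^{-1}\tau_{1}\). The remaining \(\lambda_{ij}\) and \(\rho_{ij}\) are conjugates of \(\lambda_{12}\) and \(\rho_{12}\) by permutation automorphisms, which are products of the transpositions \(\sigma_{i,i+1}\).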
Proof of Lemma 4.6.: Let \(K^{\prime}\) be a connected compact subset of \(\Gamma\). Without loss of generality, we can increase the size of \(K^{\prime}\) so that it is as in Figure 7. In particular, \(K^{\prime}\) satisfies the following:
* \(K^{\prime}\) contains at least two loops of \(L_{1}\) (or \(M_{1}\) if \(l=0\)),
* \(K^{\prime}\) contains at least one loop from every monster summand,
* the vertices in \(K^{\prime}\) are contained in its interior,
* every component of \(\Gamma\setminus K^{\prime}\) is infinite,
* \(K^{\prime}\) is connected and contains the wedge point \(K\).
Note that the last two properties imply that \(K^{\prime}\) contains a subsegment of every ray summand \(R_{i}\).
By Proposition 2.3 and Remark 2.4, we have that \(\operatorname{PMap}(K^{\prime})\cong F_{m}^{k}\rtimes\operatorname{Aut}(F_{m})\) for some \(k>0\) and \(m=\operatorname{rk}(K^{\prime})\). We first check that \(\langle\mathcal{S}\rangle\) contains an Armstrong-Forrest-Vogtmann generating set for \(\operatorname{Aut}(F_{m})\). Relabel the loops of \(K^{\prime}\) by \(a_{1},\dots,a_{m}\) in the following manner. The loop of \(L_{1}\) closest to \(K\) is labeled \(a_{1}\), the next loop in \(L_{1}\) is \(a_{2}\), continued until the loops of \(L_{1}\) are exhausted. Then the next loop, say \(a_{j+1}\), is the first loop on \(L_{2}\) etc., until all of the loops in all of the \(L_{i}\) are exhausted. Finally, continue relabeling by \(a_{\bullet}\)'s the loops in \(M_{1}\) through \(M_{m}\), in the same process. Note that when \(l=0\), then \(a_{1}\) and \(a_{2}\) are contained in \(M_{1}\).
Figure 7. Illustration of \(K^{\prime}\) in \(\Gamma\). Here we have \(l=2,m=3,r=3\) and \(\operatorname{PMap}(K^{\prime})\cong F_{9}^{11}\rtimes\operatorname{Aut}(F_{9})\). We may assume \(K^{\prime}\) contains: at least one loop from every monster summand, at least two loops from one of the monster summands, and initial segments of the ray summands, as well as \(K\). If needed, we can further enlarge \(K^{\prime}\) so that its vertices lie in its interior and it contains only entire loops.
Note that we immediately have \(\tau_{1},\ldots,\tau_{m},\lambda_{12}\in\mathcal{V}_{K}\subset\mathcal{S}\). Therefore it remains to check that \(\sigma_{i,i+1}\in\langle\mathcal{S}\rangle\) for all \(i=1,\ldots,m-1\). Each such \(\sigma_{i,i+1}\) either swaps two adjacent loops in a single component of \(K^{\prime}\setminus K\) or swaps the last loop in a component of \(K^{\prime}\setminus K\) with the first loop in the next component. In the former case we already have that \(\sigma_{i,i+1}\in\mathcal{V}_{K}\). For the latter case, let \(a_{t}\) be the first loop in the component of \(K^{\prime}\setminus K\) containing \(a_{i}\). Then consider the loop swap \(\sigma_{i,t}\) that swaps \(a_{i}\) with \(a_{t}\) (note those two loops could coincide, and then \(\sigma_{i,t}\) is the identity) and let \(\sigma_{t,i+1}\) be the loop swap that swaps \(a_{t}\) with \(a_{i+1}\), which is the first loop in the component of \(K^{\prime}\setminus K\) containing \(a_{i+1}\). Then we have that \(\sigma_{i,t}\in\mathcal{V}_{K}\), \(\sigma_{t,i+1}\in B\) and \(\sigma_{i,i+1}=\sigma_{i,t}\sigma_{t,i+1}\sigma_{i,t}\in\langle\mathcal{S}\rangle\). Thus we see that every Armstrong-Forrest-Vogtmann generator for the \(\operatorname{Aut}(F_{m})\) subgroup of \(\operatorname{PMap}(K^{\prime})\cong F_{m}^{k}\rtimes\operatorname{Aut}(F_{m})\) is contained in \(\langle\mathcal{S}\rangle\).
Finally we need to be able to obtain each of the \(k\) factors of \(F_{m}\) in \(\operatorname{PMap}(K^{\prime})\). Each \(F_{m}\) subgroup can be identified with the subgroup generated by the word maps on an interval adjacent to the boundary of \(K^{\prime}\). Recall by Proposition 2.3 there are \(k+1\) such boundary adjacent intervals, say \(I_{1},\ldots,I_{k+1}\). Since we have already generated the \(\operatorname{Aut}(F_{m})\) subgroup of \(\operatorname{PMap}(K^{\prime})\) with \(\mathcal{S}\) and we can change the word of the word map using Lemma 2.12 and Lemma 2.13, it suffices to show that a _single_ word map on each interval \(I_{j}\) that maps onto a generator of \(F_{m}\) is in \(\langle\mathcal{S}\rangle\). However, we even have one such word map in \(\mathcal{S}\) already. Indeed, if \(I_{j}\) is contained in some ray then we have already added a corresponding word map to \(W\). Otherwise, if \(I_{j}\) is contained in some monster summand, then there is an appropriate word map already in \(\mathcal{V}_{K}\) obtained by mapping \(I_{j}\) over the first loop of that summand. We can thus conclude that \(\operatorname{PMap}(K^{\prime})\cong F_{m}^{k}\rtimes\operatorname{Aut}(F_{m})\) is contained in \(\langle\mathcal{S}\rangle\).
We are now ready to prove Theorem A. Note that in the above lemma we never made use of the loop shifts in \(H\). They will now be used to push an arbitrary mapping class into the closure of the compactly supported mapping classes.
Proof of Theorem A.: As discussed in the beginning of the section, the only if direction comes from Proposition 4.2 and Proposition 4.3. Now we prove the if direction. When \(\Gamma\) is a tree, we have \(\operatorname{PMap}(\Gamma)=1\) by Proposition 2.6. If \(\Gamma\) has finite rank, \(\operatorname{PMap}(\Gamma)\) is finitely generated by Proposition 4.4. Also if \(\Gamma\) is either the Loch Ness monster or Millipede monster graph, then by Theorem 2.16, \(\operatorname{PMap}(\Gamma)\) is CB. Hence we may assume \(1\leq|E_{\ell}|<\infty\), there is no accumulation point in \(E\setminus E_{\ell}\), and \(\Gamma\) is neither the Loch Ness monster nor the Millipede monster graphs.
Let \(\mathcal{S}\) be as defined above; \(\mathcal{S}=\mathcal{V}_{K}\cup W\cup B\cup H\). We will show that \(\mathcal{S}\) generates \(\operatorname{PMap}(\Gamma)\). Let \(f\in\operatorname{PMap}(\Gamma)\). If \(|E_{\ell}|=1\), then \(\operatorname{PMap}(\Gamma)=\overline{\operatorname{PMap}_{c}(\Gamma)}\) by Theorem B, so we obtain \(f\in\overline{\operatorname{PMap}_{c}(\Gamma)}\). Otherwise, if \(|E_{\ell}|\geq 2\), then by postcomposing \(f\) with primitive loop shifts in \(H\), we may assume the flux of \(f\) is zero with respect to any \(2\)-partition of \(E_{\ell}\). By Theorem 3.11, we can assume \(f\in\overline{\operatorname{PMap}_{c}(\Gamma)}\) for this case as well.
Then there exists a compact set \(K^{\prime}\) containing \(K\), and \(g\in\operatorname{PMap}_{c}(\Gamma)\) such that \(g\) is totally supported in \(K^{\prime}\) and \(fg^{-1}\in\mathcal{V}_{K}\). Therefore, it suffices to show that \(g\) is contained in the group generated by \(\mathcal{S}\). Since \(g\) is totally supported in \(K^{\prime}\), the map \(g\) can be identified with an element in \(\operatorname{PMap}(K^{\prime})\), which is contained in \(\langle\mathcal{S}\rangle\) by Lemma 4.6. This concludes the proof that \(\operatorname{PMap}(\Gamma)\) is generated by \(\mathcal{S}\). Finally, \(\mathcal{S}\) is CB as it is the union of three finite sets \(W,B\), and \(H\) and the set \(\mathcal{V}_{K}\), which is CB by Proposition 4.5.
## 5. Residual finiteness
In this section, we prove Theorem D:
**Theorem 5.1** (Theorem D, revisited).: \(\operatorname{PMap}(\Gamma)\) _is residually finite if and only if \(\Gamma\) has a finite rank._
### Forgetful map
Throughout this section, we let \(\Gamma\) be a locally finite, infinite graph with no ends accumulated by loops. That is, \(E_{\ell}(\Gamma)=\emptyset\) but \(E(\Gamma)\neq\emptyset\). Fix an end \(\alpha_{0}\in E(\Gamma)\). Define \(E_{<\infty}(\Gamma)\) as the collection of finite subsets of \(E(\Gamma)\) containing \(\alpha_{0}\):
\[E_{<\infty}(\Gamma)=\{\mathcal{E}\subset E(\Gamma):\alpha_{0}\in\mathcal{E}, \text{ and }|\mathcal{E}|<\infty\}.\]
For each \(\mathcal{E}\in E_{<\infty}(\Gamma)\), we define the graph \(\Gamma_{\mathcal{E}}\) as a subgraph of \(\Gamma\) such that:
* \(\operatorname{rk}\Gamma_{\mathcal{E}}=\operatorname{rk}\Gamma\), and
* \(E(\Gamma_{\mathcal{E}})=\mathcal{E}\).
Note \(\Gamma_{\mathcal{E}}\) is properly homotopic to the core graph \(\Gamma_{c}\) of \(\Gamma\) with \(|\mathcal{E}|\) rays attached.
Now we use Proposition 2.3: \(\operatorname{PMap}(\Gamma)\cong\mathcal{R}\rtimes\operatorname{PMap}(\Gamma _{c}^{*})\) if \(E(\Gamma)\setminus E_{\ell}(\Gamma)\) is nonempty and compact. In our case \(\Gamma\) is of infinite type and has no ends accumulated by loops, so \(E(\Gamma)\setminus E_{\ell}(\Gamma)=E(\Gamma)\) is automatically nonempty and compact. Now we denote by \(\mathcal{R}_{\mathcal{E}}\) the \(\mathcal{R}\) subgroup for \(\Gamma_{\mathcal{E}}\). Then we have a map \(\rho_{\mathcal{E}}:\mathcal{R}\to\mathcal{R}_{\mathcal{E}}\) by'restricting' the domain to \(E(\Gamma_{\mathcal{E}})\). Namely, given a locally constant map \([f:E(\Gamma)\to\pi_{1}(\Gamma,\alpha_{0})]\in\mathcal{R}\), we define \(\rho_{\mathcal{E}}(f):=f|_{E(\Gamma_{\mathcal{E}})}\), where we note that \(f|_{E(\Gamma_{\mathcal{E}})}:E(\Gamma_{\mathcal{E}})\to\pi_{1}(\Gamma,\alpha_ {0})=\pi_{1}(\Gamma_{\mathcal{E}},\alpha_{0})\) is a locally constant map to \(\pi_{1}(\Gamma_{\mathcal{E}},\alpha_{0})\), so \(\rho_{\mathcal{E}}(f)\in\mathcal{R}_{\mathcal{E}}\).
**Lemma 5.2**.: _The restriction map \(\rho_{\mathcal{E}}:\mathcal{R}\to\mathcal{R}_{\mathcal{E}}\) is a group homomorphism._
Proof.: Recall the group operation in \(\mathcal{R}\) is the pointwise multiplication in \(\pi_{1}(\Gamma,\alpha_{0})\). Hence the restriction on \(f\cdot g\) for \(f,g\in\mathcal{R}\) is just the product of their restrictions:
\[\rho_{\mathcal{E}}(f\cdot g)=(f\cdot g)|_{E(\Gamma_{\mathcal{E}})}=f|_{E( \Gamma_{\mathcal{E}})}\cdot g|_{E(\Gamma_{\mathcal{E}})}=\rho_{\mathcal{E}}(f )\cdot\rho_{\mathcal{E}}(g).\qed\]
Observe \(\Gamma_{c}^{*}=(\Gamma_{\mathcal{E}})_{c}^{*}\). Hence, from the lemma we can build a homomorphism \(\mathcal{F}_{\mathcal{E}}\) from \(\operatorname{PMap}(\Gamma)\) to \(\operatorname{PMap}(\Gamma_{\mathcal{E}})\) by letting \(\mathcal{F}_{\mathcal{E}}=\rho_{\mathcal{E}}\times\operatorname{Id}\) on the decomposition \(\operatorname{PMap}(\Gamma)=\mathcal{R}\rtimes\operatorname{PMap}(\Gamma_{c}^{*})\) given in Proposition 2.3:
\[\mathcal{F}_{\mathcal{E}}:\operatorname{PMap}(\Gamma)\to\operatorname{PMap}( \Gamma_{\mathcal{E}}),\]
which we will call the **forgetful homomorphism** to \(\mathcal{E}\subset E(\Gamma)\).
### Finite rank if and only if residually finite
Now we prove Theorem D. The following lemma is used for the only if direction of the proof. Denote by \(\operatorname{SAut}(F_{n})\) the unique index \(2\) subgroup of \(\operatorname{Aut}(F_{n})\).
**Fact 5.3** ([1, Theorem 9.1]).: _There exists a strictly increasing sequence of integers \(\{a_{n}\}_{n\geq 3}\) such that for \(n\geq 3\), every nontrivial finite quotient of \(\operatorname{SAut}(F_{n})\) has cardinality of at least \(a_{n}\)._
Proof of Theorem D.: Suppose that \(\Gamma\) has finite rank \(n\). If \(\Gamma\) has no ends then \(\operatorname{PMap}(\Gamma)\) is isomorphic to \(\operatorname{Out}(F_{n})\), which is residually finite by [10]. If \(\Gamma\) has only one end, then \(\operatorname{PMap}(\Gamma)\) is isomorphic to \(\operatorname{Aut}(F_{n})\), which is residually finite by [1]. If \(\Gamma\) has finitely many ends, then \(\operatorname{PMap}(\Gamma)\cong F_{n}^{|E|-1}\rtimes\operatorname{Aut}(F_{n})\) which is again residually finite as both factors are residually finite and \(F_{n}^{|E|-1}\) is finitely generated [15, Theorem 1].
Now we assume \(\Gamma\) has finite rank and infinitely many ends. The proof is similar to the proof for infinite-type surfaces [14, Proposition 4.6]. Let \(f\in\operatorname{PMap}(\Gamma)\) be a nontrivial element. Since \(\Gamma\) is of finite rank and \(f\) is proper, it follows that \((\Gamma\setminus\Gamma_{c})\cap\operatorname{supp}(f)\) is compact. In particular, there exists some finite set \(\mathcal{E}\subset E\) such that \((\Gamma_{\mathcal{E}}\setminus\Gamma_{c})\cap\operatorname{supp}(f)\) is still nonempty. This implies that the forgetful map \(\mathcal{F}_{\mathcal{E}}:\operatorname{PMap}(\Gamma)\to\operatorname{PMap}(\Gamma_{\mathcal{E}})\) sends \(f\) to a nontrivial element \(\mathcal{F}_{\mathcal{E}}(f)\in\operatorname{PMap}(\Gamma_{\mathcal{E}})\). However, \(\Gamma_{\mathcal{E}}\) has finite end space, so \(\operatorname{PMap}(\Gamma_{\mathcal{E}})\) is residually finite by the previous paragraph. Therefore, there
exists a homomorphism \(\psi:\mathrm{PMap}(\Gamma_{\mathcal{E}})\to F\) for some finite group \(F\) so that \(\psi(\mathcal{F}_{\mathcal{E}}(f))\) is nontrivial. Thus \(\mathrm{PMap}(\Gamma)\) is residually finite.
Conversely, let \(\Gamma\) have infinite rank and assume it is in standard form. Let \(\{\Gamma_{k}\}\) be a compact exhaustion of \(\Gamma\) by connected subgraphs. Then there exist non-decreasing sequences \(\{n_{k}\},\{e_{k}\}\) such that \(\mathrm{PMap}(\Gamma_{k})\cong F_{n_{k}}^{e_{k}}\rtimes\mathrm{Aut}(F_{n_{k}})\). Here \(e_{k}+1\) is the number of boundaries of \(\Gamma_{k}\) (i.e. the size of \(\overline{\Gamma\setminus\Gamma_{k}}\cap\Gamma_{k}\)), and \(n_{k}\) is the rank of \(\Gamma_{k}\). As \(\Gamma\) has infinite rank, we have \(\lim_{k\to\infty}n_{k}=\infty.\) Also, note \(\mathrm{Aut}(F_{n_{k}})\) has the unique index \(2\) subgroup \(\mathrm{SAut}(F_{n_{k}})\) for each \(k\), whose isomorphic copy in \(\mathrm{PMap}(\Gamma_{k})\) we denote by \(G_{k}\). The group \(\mathrm{Aut}(F_{n_{k}})\) can be identified with the subgroup of mapping classes totally supported on the core graph of \(\Gamma_{k}\), and \(G_{k}\cong\mathrm{SAut}(F_{n_{k}})\) with the set of those mapping classes that preserve orientation. Since the core graph of \(\Gamma_{k}\) is contained in the core graph of \(\Gamma_{k+1}\), and orientation preserving mapping classes on \(\Gamma_{k}\) are orientation preserving on \(\Gamma_{k+1}\), it follows that we have the inclusion \(G_{k}\hookrightarrow G_{k+1}\). Hence the direct limit \(\mathrm{SAut}_{\infty}(\Gamma):=\varinjlim G_{k}\) is a well-defined subgroup of \(\mathrm{PMap}(\Gamma)\).
We claim that \(\mathrm{SAut}_{\infty}(\Gamma)\) has no finite quotients. To do this, suppose \(H\) is a proper normal subgroup of \(\mathrm{SAut}_{\infty}(\Gamma)\) with finite index \(r\geq 2\). Then as \(H\) is a proper subgroup of \(\mathrm{SAut}_{\infty}(\Gamma)\) and \(\varinjlim G_{k}=\mathrm{SAut}_{\infty}(\Gamma)\), it follows that there exists some \(k_{0}\) such that whenever \(k\geq k_{0}\), \(G_{k}\) is not contained in \(H\). Hence \(H\cap G_{k}\) is a _proper_ normal subgroup of \(G_{k}\). Note
\[1\neq[G_{k}:G_{k}\cap H]\leq[\mathrm{SAut}_{\infty}(\Gamma):H]=r,\]
but the minimal finite index of proper subgroups of \(G_{k}\cong\mathrm{SAut}(F_{n_{k}})\) increases as \(k\) does by Fact 5.3. Therefore, \([G_{k}:G_{k}\cap H]\) cannot be uniformly bounded by \(r\), giving a contradiction. Hence \(\mathrm{SAut}_{\infty}(\Gamma)\) has no nontrivial finite quotient, implying that neither \(\mathrm{PMap}(\Gamma)\) nor \(\mathrm{Map}(\Gamma)\) is residually finite.
## 6. Tits alternative
In a series of three papers [1, 1, 1], Bestvina, Feighn, and Handel prove that \(\mathrm{Out}(F_{n})\) satisfies what we call the **strong Tits alternative**: Every subgroup either contains a nonabelian free group or is virtually abelian. The same was previously known for mapping class groups of compact surfaces by work of Ivanov, McCarthy, and Birman-Lubotzky-McCarthy [14, 15, 16]. However, it was shown by Lanier and Loving [13] that this is not the case for big mapping class groups. They prove that big mapping class groups _never_ satisfy the strong Tits alternative by showing that they always contain a subgroup isomorphic to the wreath product \(\mathbb{Z}\wr\mathbb{Z}\). In [1], the authors extend this idea to find many subgroups isomorphic to wreath products. Allcock [16] further showed that most big mapping class groups fail the (standard) Tits alternative by finding a poison subgroup that surjects onto a Grigorchuck group. A group satisfies the **Tits alternative** if every subgroup either contains a nonabelian free subgroup or is virtually solvable. Note that some references require subgroups to be finitely generated, but we do not need to make that restriction.
### Infinite rank: Fails to satisfy TA
In this section, we find poison subgroups (analogous to the surface case) in \(\mathrm{PMap}(\Gamma)\) for graphs \(\Gamma\) of infinite rank.
**Theorem 6.1**.: _Let \(\Gamma\) be a locally finite graph of infinite rank. Then \(\mathrm{PMap}(\Gamma)\) contains a subgroup isomorphic to \(\mathrm{Aut}(F_{n})\wr\mathbb{Z}\) for all \(n\in\mathbb{N}\)._
Proof.: Recall that to define the wreath product \(G\wr\mathbb{Z}\), we need a \(\mathbb{Z}\)-indexed set of copies of \(G\), denoted by \(\{G_{i}\}_{i\in\mathbb{Z}}\). Then \(\mathbb{Z}\) acts on the index set by translation, so it also acts
on \(\oplus_{i\in\mathbb{Z}}G_{i}\) by translation on the indices. Now set \(G=\operatorname{Aut}(F_{n})\) and denote by \(\phi\) the translation action by \(\mathbb{Z}\) on the direct sum. We then define
\[\operatorname{Aut}(F_{n})\wr\mathbb{Z}:=\left(\bigoplus_{\mathbb{Z}} \operatorname{Aut}(F_{n})\right)\rtimes_{\phi}\mathbb{Z}.\]
To realize this group as a subgroup of \(\operatorname{PMap}(\Gamma)\), we will find \(\mathbb{Z}\) copies of \(\operatorname{Aut}(F_{n})\) together with a translation action.
For each \(n\in\mathbb{N}\), let \(\Delta_{n}\) be the graph obtained from a line identified with \(\mathbb{R}\) with a wedge of \(n\) circles attached by an edge at each integer point; see Figure 8. If \(\Gamma\) has at least two ends accumulated by loops, we can properly homotope \(\Gamma\) to have \(\Delta_{n}\) as a subgraph.
For each \(i\in\mathbb{Z}\), let \(R_{i}\) be the wedge of circles supported above the integer point \(i\) in \(\Delta_{n}\subset\Gamma\). Let \(G_{i}\) be the subgroup of elements of \(\operatorname{PMap}(\Gamma)\) which are totally supported on \(R_{i}\). Each \(G_{i}\) is isomorphic to \(\operatorname{Aut}(F_{n})\) (see Remark 2.4) and the \(G_{i}\)'s have disjoint total support, so \(\langle G_{i}\rangle_{i\in\mathbb{Z}}\cong\oplus_{\mathbb{Z}}\operatorname{ Aut}(F_{n})\). There is a shift map, call it \(h\), that translates along \(\Delta_{n}\) by \(+1\) on the line and maps \(R_{i}\) to \(R_{i+1}\) isometrically. Because \(h^{m}G_{i}=G_{i+m}h^{m}\), the subgroup of \(\operatorname{PMap}(\Gamma)\) generated by \(G_{0}\) and \(h\) is isomorphic to \(\operatorname{Aut}(F_{n})\wr\mathbb{Z}\).
In general, if \(\Gamma\) has at least one end accumulated by loops, we can embed a copy of \(\Delta_{n}\) into \(\Gamma\) where the images of the two ends of \(\Delta_{n}\) are not distinct. The corresponding "shift map" will no longer be shifting between distinct ends, but this does not affect the construction of an \(\operatorname{Aut}(F_{n})\wr\mathbb{Z}\) subgroup.
Theorem 6.1 immediately tells us that \(\operatorname{PMap}(\Gamma)\) fails the strong Tits alternative because \(\mathbb{Z}\wr\mathbb{Z}\) is a subgroup of \(\operatorname{Aut}(F_{n})\wr\mathbb{Z}\). In [11], Allcock shows that big mapping class groups of surfaces with infinite genus fail the (standard) Tits alternative. His idea is to find elements of the mapping class group that "look like" the action of the Grigorchuck group on a rooted binary tree. Because these elements are not of finite order, the resulting subgroup of the mapping class group is an extension of the Grigorchuck group. When this same idea is implemented in the pure mapping class group of a graph, we instead find an exact copy of Grigorchuck's group. Many graphs, such as an infinite binary tree, also contain Grigorchuck's group as a subgroup of their _full_ mapping class group in the obvious way.
**Theorem 6.2**.: _Let \(\Gamma\) be a locally finite graph of infinite rank. Then \(\operatorname{PMap}(\Gamma)\) contains a subgroup isomorphic to the Grigorchuck group._
Proof.: First, we define the proper homotopy equivalences called \(a,b,c,d\) on an infinite binary tree \(T\) as in Figure 9. Note only \(a\) swaps the level \(1\) branches. Each of the other three homotopy equivalences \(b,c,d\) misses \((3k+1),3k,(3k-1)\)-st branch swaps for \(k\geq 1\)
Figure 8. The graph \(\Delta_{4}\), with two ends accumulated by roses with \(4\) petals. It admits a translation of roses, denoted by the green dotted arrow. Up to proper homotopy, such graph arises as a subgraph of any graph with at least two ends accumulated by loops.
respectively, as well as the level-\(1\) swap. These four elements generate the Grigorchuck group \(G\) [10, 11].
Now let \(\Delta\) be the infinite graph with one end accumulated by loops, constructed as in Figure 10. Specifically, we start with a ray and label the countably many vertices by \(v_{1},v_{2},\ldots\) etc. Attach a finite binary tree \(T_{i}\) of level \(i\) to \(v_{i}\) for each \(i\geq 1\). Then attach a single loop at each leaf of the trees. For any graph \(\Gamma\) with infinite rank, we can apply a proper homotopy equivalence so that \(\Delta\) is a subgraph. Hence, \(\operatorname{PMap}(\Delta)\leq\operatorname{PMap}(\Gamma)\), so it suffices to find a Grigorchuck group inside \(\operatorname{PMap}(\Delta)\).
Define a proper homotopy equivalence \(\hat{b}\) as the map on _finite_ binary trees \(T_{1},T_{2},\ldots\) by'mimicking' \(b\) defined on the _infinite_ binary tree \(T\). See Figure 10 for an illustration
Figure 9. Proper homotopy equivalences \(a,b,c\) and \(d\) on infinite binary tree \(T\). Each green arrow denotes the swap of the two subtrees induced by the swap of the two branches.
of \(\hat{b}\), denoted in green arrows. Similarly define \(\hat{a},\hat{c}\) and \(\hat{d}\) from \(a,c\) and \(d\). Denote by \(\widetilde{a},\widetilde{b},\widetilde{c}\) and \(\widetilde{d}\) the proper homotopy classes of \(\hat{a},\hat{b},\hat{c}\) and \(\hat{d}\) respectively. Following the same proof of [10, Lemma 4.1], we see that \(\widetilde{a},\widetilde{b},\widetilde{c},\widetilde{d}\) satisfy exactly the defining relations of \(G\), and \(\widetilde{G}:=\langle\widetilde{a},\widetilde{b},\widetilde{c},\widetilde{d}\rangle\) is isomorphic to the Grigorchuck group.
**Corollary 6.3**.: _Let \(\Gamma\) be a locally finite graph of infinite rank. Then \(\operatorname{PMap}(\Gamma)\) and \(\operatorname{Map}(\Gamma)\) fail the Tits alternative._
### Finite rank: Satisfies TA
On the other hand, when \(\Gamma\) has finite rank, we get the following contrasting result.
**Theorem 6.4**.: _Let \(\Gamma\) be a locally finite graph with finite rank. Then \(\operatorname{PMap}(\Gamma)\) satisfies the Tits alternative. That is, every subgroup is either virtually solvable or contains a nonabelian free group._
We first need the following stability property of the Tits alternative.
**Proposition 6.5**.: _Satisfying the Tits alternative is stable under subgroups, finite-index supergroups, and group extensions. More precisely,_
1. _Let_ \(H\leq G\)_. If_ \(G\) _satisfies the Tits alternative, then so does_ \(H\)_._
2. _Let_ \(H\leq G\) _with_ \([G:H]<\infty\)_. If_ \(H\) _satisfies the Tits alternative, then so does_ \(G\)_._
3. _(cf._ _[_10_, Proposition 6.3]__) Suppose the groups_ \(K,G,H\) _form a short exact sequence as follows:_ \[1\longrightarrow K\longrightarrow G\longrightarrow H\longrightarrow 1.\] _If_ \(K\) _and_ \(H\) _satisfy the Tits alternative, then so does_ \(G\)_._
Proof.: Claim (1) holds because every subgroup of \(H\) is a subgroup of \(G\). Claim (2) will follow from (3): after replacing \(H\) by its normal core, which still has finite index in \(G\) and satisfies the Tits alternative by (1), we obtain a short exact sequence whose quotient \(G/H\) is finite, and finite groups are virtually trivial so satisfy the Tits alternative.
Now we prove (3). Let \(L\leq G\) be a subgroup and let \(q:G\to H\) denote the quotient map. Then \(L\) fits into the short exact sequence

\[1\longrightarrow K\cap L\longrightarrow L\longrightarrow q(L)\longrightarrow 1.\]
Indeed, \(K\cap L\trianglelefteq L\) and \(q(L)\cong L/(K\cap L)\leq H\). By (1), both \(K\cap L\) and \(q(L)\) satisfy the Tits alternative. If \(K\cap L\) has \(F_{2}\) as a subgroup, then \(L\) has \(F_{2}\) as a subgroup. If \(q(L)\) has \(F_{2}\) as a subgroup, then we can find a section of \(q\) to lift \(F_{2}\) inside \(L\). Hence, we may assume both \(K\cap L\) and \(q(L)\) are virtually solvable. In this case, the following fact finishes the proof.
**Fact 6.6** ([12, Lemma 5.5], see also [10, Lemme 6.1]).: _Suppose \(N\) is a normal subgroup of a group \(G\). If both \(N\) and \(G/N\) are virtually solvable, then \(G\) is virtually solvable._
Hence \(L\) is virtually solvable, concluding that \(G\) satisfies the Tits alternative.
Now we are ready to prove Theorem 6.4.
Proof of Theorem 6.4.: Let \(\operatorname{rk}\Gamma=n.\) Then we have the following short exact sequence [1, Theorem 3.5]:
\[1\longrightarrow\mathcal{R}\longrightarrow\operatorname{PMap}(\Gamma) \longrightarrow\operatorname{Aut}(F_{n})\longrightarrow 1,\]
where \(\mathcal{R}\) is the group of locally constant functions from \(E\) to \(F_{n}\) with pointwise multiplication.
The subgroup of \(\operatorname{Out}(F_{n+1})\) fixing one \(\mathbb{Z}\) factor is naturally isomorphic to \(\operatorname{Aut}(F_{n})\). Recall that \(\operatorname{Out}(F_{n+1})\) satisfies the Tits alternative by [1], so \(\operatorname{Aut}(F_{n})\) does too. We will show that \(\mathcal{R}\) satisfies the (strong) Tits alternative; then Proposition 6.5 part (3) guarantees that \(\operatorname{PMap}(\Gamma)\) satisfies the Tits alternative as well.
**Claim**.: \(\mathcal{R}\) _satisfies the strong Tits alternative._
Consider a (not necessarily finitely generated) subgroup \(H\subset\mathcal{R}\). Suppose there exist \(\phi,\psi\in\mathcal{R}\) that do not commute; then there exists an \(e\in E\) such that \(\phi(e)\psi(e)\neq\psi(e)\phi(e)\). Now we use the Ping Pong lemma to prove \(\langle\phi,\psi\rangle\cong F_{2}\), which will conclude that \(\mathcal{R}\) satisfies the strong Tits alternative. Let \(X_{\phi(e)}\) and \(X_{\psi(e)}\) be the sets of words in \(F_{n}\) that start with \(\phi(e)\) and \(\psi(e)\), respectively. We note \(X_{\phi(e)}\) and \(X_{\psi(e)}\) are disjoint as otherwise \(\phi(e)=\psi(e)\), contradicting the assumption \(\phi(e)\psi(e)\neq\psi(e)\phi(e)\). We consider the action of \(H\) on \(F_{n}\) as:
\[\phi\cdot w:=\phi(e)w,\qquad\text{for $\phi\in\mathcal{R}$ and $w\in F_{n}$.}\]
Then the same assumption \(\phi(e)\psi(e)\neq\psi(e)\phi(e)\) implies that \(\phi\cdot X_{\psi(e)}\subset X_{\phi(e)}\) and \(\psi\cdot X_{\phi(e)}\subset X_{\psi(e)}\). Therefore, by the Ping-Pong lemma, we conclude \(\langle\phi,\psi\rangle\cong F_{2}\) so \(\mathcal{R}\) satisfies the (strong) Tits alternative.
We now extend these results to determine which full mapping class groups satisfy the Tits alternative.
**Corollary 6.7**.: _Let \(\Gamma\) be a locally finite, infinite graph. Then \(\operatorname{Map}(\Gamma)\) satisfies the Tits alternative if and only if \(\Gamma\) has finite rank and finite end space._
Proof.: We divide into cases. First, if \(\Gamma\) has at least one end accumulated by loops, then \(\operatorname{Map}(\Gamma)\) fails the Tits alternative by Corollary 6.3. Otherwise, \(\Gamma\) has finite rank, and we continue to divide into cases. If \(\Gamma\) has finite end space, then \(\operatorname{Map}(\Gamma)\) is a finite extension of \(\operatorname{PMap}(\Gamma)\), so by Proposition 6.5 property (2), the full mapping class group \(\operatorname{Map}(\Gamma)\) satisfies the Tits alternative. If \(\Gamma\) has countably infinite end space, then we can modify the proof of Theorem 6.2 by replacing the loops with rays, to realize Grigorchuck's group as a subgroup of \(\operatorname{Map}(\Gamma)\). If the end space of \(\Gamma\) is uncountable, then there is a closed subset of the ends which is homeomorphic to the whole Cantor set, so \(\operatorname{Map}(\Gamma)\) contains Grigorchuck's group in the natural way and again fails the Tits alternative.
The strong Tits alternative is not stable under group extensions (consider \(\mathbb{Z}\wr\mathbb{Z}\)). So, the best we could conclude about \(\operatorname{PMap}(\Gamma)\) from the decomposition as \(\mathcal{R}\rtimes\operatorname{Aut}(F_{n})\) was Theorem 6.4. However, our proof that \(\mathcal{R}\) satisfies the strong Tits alternative actually shows the slightly stronger statement: Any two elements of \(\mathcal{R}\) which do not commute generate \(F_{2}\). This property could be useful in answering the question,
**Question 6.8**.: If \(\Gamma\) is a locally finite graph of finite rank, does \(\operatorname{PMap}(\Gamma)\) satisfy the strong Tits alternative?
|
2309.09670 | DGM-DR: Domain Generalization with Mutual Information Regularized
Diabetic Retinopathy Classification | The domain shift between training and testing data presents a significant
challenge for training generalizable deep learning models. As a consequence,
the performance of models trained with the independent and identically
distributed (i.i.d) assumption deteriorates when deployed in the real world.
This problem is exacerbated in the medical imaging context due to variations in
data acquisition across clinical centers, medical apparatus, and patients.
Domain generalization (DG) aims to address this problem by learning a model
that generalizes well to any unseen target domain. Many domain generalization
techniques were unsuccessful in learning domain-invariant representations due
to the large domain shift. Furthermore, multiple tasks in medical imaging are
not yet extensively studied in existing literature when it comes to DG point of
view. In this paper, we introduce a DG method that re-establishes the model
objective function as a maximization of mutual information with a large
pretrained model to the medical imaging field. We re-visit the problem of DG in
Diabetic Retinopathy (DR) classification to establish a clear benchmark with a
correct model selection strategy and to achieve robust domain-invariant
representation for an improved generalization. Moreover, we conduct extensive
experiments on public datasets to show that our proposed method consistently
outperforms the previous state-of-the-art by a margin of 5.25% in average
accuracy and a lower standard deviation. Source code available at
https://github.com/BioMedIA-MBZUAI/DGM-DR | Aleksandr Matsun, Dana O. Mohamed, Sharon Chokuwa, Muhammad Ridzuan, Mohammad Yaqub | 2023-09-18T11:17:13Z | http://arxiv.org/abs/2309.09670v1 | DGM-DR: Domain Generalization with Mutual Information Regularized Diabetic Retinopathy Classification
###### Abstract
The domain shift between training and testing data presents a significant challenge for training generalizable deep learning models. As a consequence, the performance of models trained with the independent and identically distributed (i.i.d) assumption deteriorates when deployed in the real world. This problem is exacerbated in the medical imaging context due to variations in data acquisition across clinical centers, medical apparatus, and patients. Domain generalization (DG) aims to address this problem by learning a model that generalizes well to any unseen target domain. Many domain generalization techniques were unsuccessful in learning domain-invariant representations due to the large domain shift. Furthermore, multiple tasks in medical imaging are not yet extensively studied in existing literature when it comes to DG point of view. In this paper, we introduce a DG method that re-establishes the model objective function as a maximization of mutual information with a large pretrained model to the medical imaging field. We re-visit the problem of DG in Diabetic Retinopathy (DR) classification to establish a clear benchmark with a correct model selection strategy and to achieve robust domain-invariant representation for an improved generalization. Moreover, we conduct extensive experiments on public datasets to show that our proposed method consistently outperforms the previous state-of-the-art by a margin of 5.25% in average accuracy and a lower standard deviation. Source code available at [https://github.com/BioMedIA-MBZUAI/DGM-DR](https://github.com/BioMedIA-MBZUAI/DGM-DR).
Keywords:Domain Generalization Diabetic Retinopathy Mutual Information Regularization
## 1 Introduction
Medical imaging has become an indispensable tool in diagnosis, treatment planning, and prognosis. Coupled with the introduction of deep learning, medical imaging has witnessed tremendous progress in recent years. Notwithstanding, a
major challenge in the medical imaging field is the domain shift problem, where the performance of a trained model deteriorates when, for instance, it is tested on a dataset acquired from a different device or patient population than the original dataset. This problem is especially prominent in tasks where acquiring large-scale annotated datasets from one center is costly and time-consuming. Domain generalization (DG) [29] aims to alleviate this challenge by training models that can generalize well to new unseen domains, without the need for extensive domain-specific data collection and annotation.
DG in medical image analysis still requires extensive research; however, there already exist a handful of works examining it. One of those works includes utilizing an adversarial domain synthesizer to create artificial domains using only one source domain to improve the generalizability of the model in downstream tasks [26]. Although such a method can synthesize a wide range of possible domains, it usually struggles to mimic realistic domain shifts. Another method is applying test-time augmentations such that the target image resembles the source domain, thus reducing the domain shift and improving generalization [25]. Moreover, DRGen [3] combines Fishr [20] and Stochastic Weight Averaging Densely (SWAD) [6] to achieve domain generalization in Diabetic Retinopathy (DR) classification. In DRGen, Fishr [20] is used to make the model more robust to variations in the data by penalizing large differences in the gradient variances between in-distribution and out-of-distribution data, and SWAD [6] is used to seek flatter minima in the loss landscape of the model. DRGen is currently state-of-the-art in DR classification; however, it has been evaluated using samples from the testing set, which makes it harder to assess its true generalizability.
In natural images, the domain generalization problem has been explored extensively compared to medical imaging analysis. Some of the DG methods proposed over the past ten years include domain alignment [16], meta-learning [10], style transfer [28], and regularization methods [14]. More recently, the authors of [7] utilize a large pretrained model to guide a target model towards generalized feature representation through mutual information regularization. Another DG regularization method that can be applied orthogonally to many DG algorithms is SWAD [6], which improves domain generalizability by seeking flat minima in the loss landscape of the model. The flatter minima indicate that the loss is not changing significantly in any direction, thus reducing the risk of the model overfitting to domain biases [6]. However, when adapting a DG approach that demonstrates a good performance on natural images, there is no guarantee of a similar performance on medical imaging applications due to the typical complex nature of such problems.
DR is a complication of Diabetes Mellitus that affects the eyes and can lead to vision loss or blindness. It is caused by damage to the blood vessels in the retina due to high blood sugar levels, which often leads to blood leakage onto the retina [24]. This can cause swelling and distortion of vision. The prevalence of DR is increasing worldwide due to the growing number of people with diabetes. However, early detection and management of DR is critical to the prevention of vision deterioration or loss. DR can be classified into 4 classes: mild, moderate,
severe, and proliferative. Some of the visible features that are used to classify the first 3 classes include microaneurysms, retinal hemorrhages, intraretinal microvascular abnormalities (IRMA), and venous caliber changes, while pathologic preretinal neovascularization is used to classify proliferative DR [9].
In this paper, we propose DGM-DR, a Domain Generalization with Mutual information regularized Diabetic Retinopathy classifier. Our main contributions are as follows:
* We introduce a DG method that utilizes mutual information regularization with a large pretrained oracle model.
* We show the improvement of our proposed solution on the DR classification task over the previous state-of-the-art in both performance and robustness through rigorous investigations.
* We set a clear benchmark with the correct DG model selection method in line with standard DG protocols for the task of DR classification.
## 2 Methodology
Our work is inspired by [7], which aims to improve model generalizability when classifying natural images. In DGM-DR, we re-establish the domain generalization objective as a maximization of mutual information with a large pretrained model, named the oracle, to address DR classification. We aim to make the distribution of feature representations of the target model close to the generalized
Figure 1: Overview of the proposed method, DGM-DR. It consists of the oracle \(f^{0}\) and target \(f\) feature extractor, where \(f\) is initialized by the weights of \(f^{0}\). For each sampled mini-batch, feature representations are extracted using both feature extractors \(f^{0}\) and \(f\). Features \(Z_{f}\) are then passed to classifier \(g\). Lastly, the loss - a linear combination of cross entropy and mutual information regularization loss - is calculated, and \(f\) and \(g\) are updated.
one of the oracle by maximizing the mutual information between both. The oracle model is trained on a large-scale diverse dataset that contains information on many different domains in order to approximate a true oracle as closely as possible, where a true oracle is a model that can generalize to any domain and is inaccessible in practice. Figure 1 shows an overview of DGM-DR's process. Initially, the oracle's weights are used to initialize the target model's feature extractor. Then, for each mini-batch, the oracle feature extractor \(f^{0}\) and the target feature extractor \(f\) are used to extract feature representations \(Z_{f^{0}}\) and \(Z_{f}\), respectively. The features \(Z_{f}\) are passed to the classifier \(g\) to produce the output. The oracle model is chosen as ImageNet pretrained ResNet-50 [13] for a realistic and fair comparison with other DG algorithms. It is shown in [5] that maximization of the lower bound of the mutual information between \(Z_{f^{0}}\) and \(Z_{f}\) is equivalent to minimization of the term in Eq. (1):
\[\mathbb{E}_{Z_{f^{0}},Z_{f}}\big{[}log|\Sigma(Z_{f})|+\|Z_{f^{0}}-\mu(Z_{f})\|_{\Sigma(Z_{f})^{-1}}^{2}\big{]} \tag{1}\]
The final loss is calculated using Equation 2:
\[\mathcal{L}(h)=\mathcal{E}_{S}(h)+\lambda\mathbb{E}_{Z_{f^{0}},Z_{f}}\big{[} log|\Sigma(Z_{f})|+\|Z_{f^{0}}-\mu(Z_{f})\|_{\Sigma(Z_{f})^{-1}}^{2}\big{]} \tag{2}\]
where \(\mathcal{E}_{S}(.)=\sum_{d=1}^{m}\mathcal{E}_{S_{d}}(.)\) is an empirical loss over \(m\) source domains, which was chosen as cross-entropy loss, \(\lambda\) is the regularization coefficient, and \(\|x\|_{A}=\sqrt{x^{T}Ax}\). The model \(h\) is modeled as a composition of a feature extractor \(f\) and a classifier \(g\), hence \(h=f\circ g\). Finally, the variational distribution that approximates the oracle model is modeled as a Gaussian distribution with mean vector \(\mu(Z_{f})\) and covariance matrix \(\Sigma(Z_{f})\). \(\|Z_{f^{0}}-\mu(Z_{f})\|\) enforces the mean feature representation \(Z_{f}\) to be as close as possible to the oracle feature representation \(Z_{f^{0}}\) when the variance term \(\Sigma(Z_{f})\) is low [7]. We anticipate that this optimization will yield robust representations, despite the substantial distribution shift between the oracle pretrained on natural images and the finetuning task involving retinal images. This is based on our hypothesis regarding the oracle's generalizability to any domain, owing to its extensive, diverse, and semantically rich features that surpass those found in any other medical dataset. The regularization term \(\lambda\) aims to minimize the variance in the target features and encourage similarity between the oracle and target features. This, in turn, facilitates the learning of domain-invariant representations that generalize well across different domains.
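To make Eq. (2) concrete, the following is a minimal PyTorch sketch of the regularization term, assuming a diagonal covariance \(\Sigma(Z_{f})\) parameterized by a learnable log-variance and an identity mean encoder; the class and function names are illustrative and do not reflect the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MutualInfoRegularizer(nn.Module):
    """Penalty of Eq. (1) with a diagonal covariance Sigma(Z_f).

    Simplifying assumptions for this sketch: the mean encoder mu is the
    identity and the (log-)variance is a learnable per-feature parameter.
    """

    def __init__(self, feat_dim: int):
        super().__init__()
        self.log_var = nn.Parameter(torch.zeros(feat_dim))  # log of the diagonal of Sigma

    def forward(self, z_target: torch.Tensor, z_oracle: torch.Tensor) -> torch.Tensor:
        # z_target = Z_f, z_oracle = Z_{f^0}, both of shape (batch, feat_dim)
        var = self.log_var.exp()
        # log|Sigma| + ||Z_{f^0} - mu(Z_f)||^2_{Sigma^{-1}}, averaged over the mini-batch
        penalty = self.log_var.sum() + ((z_oracle - z_target) ** 2 / var).sum(dim=1)
        return penalty.mean()

def dgm_dr_loss(logits, labels, z_target, z_oracle, regularizer, lam=1.0):
    # total objective of Eq. (2): cross entropy plus lambda times the MI penalty
    return F.cross_entropy(logits, labels) + lam * regularizer(z_target, z_oracle)
```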
## 3 Experimental Setup
### Datasets
We utilize the four datasets used by [3], which are EyePACS [2], APTOS [1], Messidor and Messidor-2 [17]. The 4 datasets are composed of 5 classes, graded from 0 to 4: No DR (Grade 0), mild DR (Grade 1), moderate DR (Grade 2), severe DR (Grade 3), and proliferative DR (Grade 4). These datasets were acquired from various geographical regions, encompassing India, America, and France [1, 2, 17]. As a result, domain shift emerges due to variations in the employed cameras [2, 4] and differences in the population groups. Figure 2 shows example images for the 5 DR classes. A breakdown of the distribution of the classes is given in Table 4. In all 4 datasets, there is a high imbalance between the no DR class and the other 4 DR classes.
### Data Augmentations
All fundus images are resized to \(224\times 224\times 3\). We perform histogram equalization with a probability \(p=0.5\), horizontal flip and color jitter by a value of \(0.3\) in brightness, contrast, saturation, and hue with \(p=0.3\).
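A possible torchvision realization of this pipeline is sketched below; the horizontal-flip probability is not specified above, so the torchvision default of 0.5 is assumed, and `RandomEqualize` (available in recent torchvision releases) is placed before `ToTensor` since it expects 8-bit images.

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomEqualize(p=0.5),     # histogram equalization with p = 0.5
    transforms.RandomHorizontalFlip(),    # horizontal flip (default p = 0.5 assumed)
    transforms.RandomApply(
        [transforms.ColorJitter(brightness=0.3, contrast=0.3,
                                saturation=0.3, hue=0.3)],
        p=0.3,                            # color jitter applied with probability 0.3
    ),
    transforms.ToTensor(),
])
```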
### Evaluation Methods
We utilize the DomainBed [11] evaluation protocols for fair comparison with DRGen [3] and other DG algorithms. The appropriate DG model selection method used is the training-domain validation set following DomainBed [11], in which we split each training domain into training and validation subsets, pool the validation subsets together to create an overall validation set, and finally choose the model that maximizes the accuracy on the overall validation set. We use 20% of the source training data for validation. We evaluate the performance scores using leave-one-domain-out cross validation, and average the cases where a specific domain is used as a target domain and the others as source domains.
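A minimal sketch of this split protocol is given below; the per-domain sample lists and the function name are illustrative only.

```python
import random

def leave_one_domain_out(domains, target, val_fraction=0.2, seed=0):
    """Pool 80/20 train/val splits of every source domain, holding out `target` for testing."""
    rng = random.Random(seed)
    train, val = [], []
    for name, samples in domains.items():
        if name == target:
            continue                      # the held-out domain is used only at test time
        samples = list(samples)
        rng.shuffle(samples)
        n_val = int(val_fraction * len(samples))
        val.extend(samples[:n_val])       # pooled training-domain validation set
        train.extend(samples[n_val:])
    return train, val, domains[target]

# one fold per dataset, e.g. train, val, test = leave_one_domain_out(domains, "APTOS")
```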
We also perform comparisons of the proposed and existing DG approaches with the Empirical Risk Minimization (ERM) technique that aims to minimize in-domain errors. Interestingly, [11] argues that carefully training a model using ERM achieves a near state-of-the-art performance. This was tested on a range of baselines and was shown to outperform a few DG models.
### Implementation Details
We implement all our models using the PyTorch v1.7 framework. The experiments were run on a 24GB Quadro RTX 6000 GPU. The backbone used is ResNet-50 pretrained on ImageNet. We use the Adam optimizer [15] with a learning rate
Figure 2: Sample images from different DR classes obtained from APTOS [1].
of \(5e-5\) and no weight decay, chosen experimentally. The model was trained in 5000 steps. The batch size was fixed to 32 images. The \(\lambda\) regularization coefficient was set to 1.0. Different values of lambda were experimented with, and the results are given in 5.
To compare against other DG methods, we reproduce the results of all algorithms using the same implementation details mentioned previously for a fair comparison. For Fishr [20], we set the Fishr lambda (\(\lambda\)) to 1000, penalty anneal iteration (\(\gamma\)) of 1500 and an exponential moving average of 0.95. For DRGen [3], we use SWAD as the model selection method as opposed to the test-domain validation used in the original paper [3], which is not suitable for DG evaluation. Moreover, we use the data augmentations in the official implementations of Fishr [20] and DRGen [3] for the respective algorithms, otherwise we use DGM-DR's augmentations. Finally, we use SWAD as the model selection method when combining DGM-DR with the SWAD [6] algorithm.
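Putting these pieces together, a minimal sketch of the training loop with the above hyperparameters could look as follows; `source_loader` (an iterator over pooled source-domain mini-batches) and `mi_reg` (the regularizer sketched in Section 2) are assumed to be defined, and the model construction is illustrative rather than the released code.

```python
import torch
import torch.nn.functional as F
from torchvision import models

oracle = models.resnet50(pretrained=True)   # frozen oracle f^0
target = models.resnet50(pretrained=True)   # target f, initialized with the oracle weights
oracle.fc = torch.nn.Identity()
target.fc = torch.nn.Identity()
classifier = torch.nn.Linear(2048, 5)       # g, one logit per DR grade
oracle.eval()

lam = 1.0
params = list(target.parameters()) + list(classifier.parameters()) + list(mi_reg.parameters())
optimizer = torch.optim.Adam(params, lr=5e-5, weight_decay=0.0)

for step in range(5000):
    images, labels = next(source_loader)    # mini-batch of 32 fundus images
    with torch.no_grad():
        z_oracle = oracle(images)            # oracle features are never updated
    z_target = target(images)
    loss = F.cross_entropy(classifier(z_target), labels) + lam * mi_reg(z_target, z_oracle)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```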
## 4 Results
Table 1 compares the performance of DGM-DR with three other methods, including the previous state-of-the-art DRGen. The experiments for the main results were repeated three times using three different seeds, and the average accuracy and standard deviation across the runs are reported. DGM-DR achieves a \(>5\%\) increase in average accuracy compared with the other DG methods (Fishr and DRGen) and an increase of about \(1\%\) compared to the ERM-based model [22].
### Ablation Studies
**Changing the oracle pretraining datasets, methods, and backbones.** We investigate the effect of changing the oracle on the DR classification task and report the results in Table 2. We use ImageNet pretrained ResNet-50 using Barlow Twins [27] and MoCo [12], CLIP pretrained ResNet-50, and large-scale pretraining including CLIP pretrained ViT-B [8] and SWAG pretrained RegNetY-16GF [19]. All experiments were performed with the same implementation details mentioned previously, except for RegNetY-16GF, where the batch size was changed from 32 to 16 due to hardware limitations.
\begin{table}
\begin{tabular}{l|c c c c|c} \hline
**Algorithm** & **APTOS** & **EyePACS** & **Messidor** & **Messidor-2** & **Average Accuracy** \\ \hline
**ERM[22]** & 62.83 & 73.01 & **66.88** & 65.26 & 66.99\(\pm\)4.3 \\
**Fishr[20]** & 56.49 & 68.24 & 61.53 & 62.11 & 62.09\(\pm\)4.8 \\
**DRGen[3]** & 54.53 & **73.87** & 52.03 & 69.13 & 62.39\(\pm\)9.3 \\ \hline
**DGM-DR** & **65.39** & 70.12 & 65.63 & **69.41** & 67.64\(\pm\)2.4 \\
**DGM-DR + SWAD[6]** & 65.15 & 71.92 & 65.66 & 68.96 & **67.92\(\pm\)3.2** \\ \hline \end{tabular}
\end{table}
Table 1: Multi-class classification results with ERM and DG methods averaged over three runs. The best accuracy (%) is highlighted in bold.
**Binary classification of DR.** We study the effect of changing the multiclass classification task into a binary classification task, where fundus images are classified as _DR_ or _No DR_. The results of this experiment are reported in Table 3.
## 5 Discussion
In Table 1, we report the results of 4 different algorithms and show that DGM-DR outperforms all of them, including the previous state-of-the-art DRGen [3], by a significant margin of 5.25%. Additionally, DGM-DR demonstrates robustness with a relatively small standard deviation of 2.4 across three different experiments. As was concluded in [11], ERM-based methods can outperform a range of DG methods if carefully trained. We show in Table 1 that the ERM method outperforms the existing DG baselines that we compare with. On the other hand, we show that DGM-DR outperforms ERM for multiclass classification. We believe that even though the task of DR classification is challenging, the fundus images across all domains share common semantic structures, hence ERM is able to learn some domain-invariant features. However, the performance of DGM-DR is more stable, with a standard deviation almost half that of ERM's. This can be attributed to DGM-DR's novel learning technique, which minimizes a combination of cross entropy and mutual information regularization with an oracle, enabling it to learn more robust domain-invariant representations. Lastly, with the addition of SWAD to DGM-DR, performance further increases slightly (by 0.28%), consistent with previous literature (e.g. [21]) where accuracy improves when combined with SWAD.
\begin{table}
\begin{tabular}{l|l|c c c c|c} \hline
**Dataset** & **Pre-training** & **APTOS** & **EyePACS** & **Messidor** & **Messidor 2** & **Average Accuracy** \\ \hline \multirow{3}{*}{**ImageNet**} & ERM & **65.39** & 70.12 & 65.63 & **69.41** & **67.64\(\pm\)2.5** \\ \cline{2-6} & Barlow Twins & 60.66 & 73.45 & 55.57 & 61.18 & 62.71\(\pm\)7.6 \\ \cline{2-6} & MoCo v3 & 56.90 & 72.69 & 65.77 & 68.41 & 65.94\(\pm\)6.7 \\ \hline \multirow{2}{*}{**CLIP**} & CLIP (ResNet) & 61.01 & 73.33 & 62.44 & 58.10 & 63.72\(\pm\)6.7 \\ \cline{2-6} & CLIP (ViT) & 64.25 & 68.54 & **66.29** & 66.05 & 66.28\(\pm\)1.8 \\ \hline
**Instagram** & SWAG* (RegNet) & 63.12 & **75.38** & 62.96 & 64.61 & 66.52\(\pm\)6.0 \\ \hline \end{tabular}
\end{table}
Table 2: Results of changing the oracle pretraining datasets, methods, and backbones. SWAG* is using a batch size of 16, while the rest of the experiments are using a batch size of 32. The average accuracy and the standard deviation across the 4 domains in a single run are given, with the best accuracy (%) highlighted in bold.
\begin{table}
\begin{tabular}{l|c c c c|c} \hline
**Algorithm** & **APTOS** & **EyePACS** & **Messidor** & **Messidor-2** & **Average Accuracy** \\ \hline
**ERM** & **95.42** & 74.70 & **86.98** & 77.47 & **83.63\(\pm\)9.5** \\
**Fishr** & 90.67 & 74.45 & 77.92 & **79.30** & 80.59\(\pm\)7.0 \\
**DRGen** & 82.05 & **75.41** & 81.67 & 72.42 & 77.89\(\pm\)4.7 \\
**DGM-DR** & 83.34 & 71.82 & 86.15 & 78.10 & 80.00\(\pm\)8.2 \\
**DGM-DR + SWAD** & 88.00 & 72.06 & 85.63 & 76.22 & 80.48\(\pm\)7.6 \\ \hline \end{tabular}
\end{table}
Table 3: Results of the binary classification task using different algorithms. The average accuracy and the standard deviation across the 4 domains in a single run are given, with the best accuracy (%) highlighted in bold.
In general, the performance of all algorithms on each of the datasets is consistent. This indicates that any decline or increase in performance on a dataset can be attributed to the distribution of the dataset itself, which is used as the target domain in the evaluation, and to the distribution of the combined source datasets on which the model is trained. For example, EyePACS [2] consistently performs better across all algorithms. A possible yet arguable hypothesis is that it is highly imbalanced, as demonstrated in Table 4, with the majority of images belonging to _No DR_. Since the _No DR_ class is the majority in all datasets, the model is also biased towards it. Hence the model could be correctly classifying the _No DR_ case and randomly guessing in the other four cases.
In Table 2, we study the effect of changing the oracle pretraining datasets, methods, and backbones. The large SWAG* pretrained RegNetY-16GF oracle yields the best accuracy among the alternative oracles in this experiment, second only to our ResNet-50 with ImageNet ERM pretraining; the gap is possibly due to the smaller batch size and the limited number of steps set for a fair comparison. In general, we observe that a larger oracle model trained on a bigger, more diverse dataset is able to guide the target model towards more generalized feature representations. However, it will require a longer training time to converge.
In Table 3, we notice that ERM performs better than DGM-DR at binary classification of DR. Since the binary classification problem is simpler, as is visually evident in Figure 2, DG algorithms tend to negatively impact the results as they are likely to introduce more complexity. Furthermore, the generalization gap [23] is typically smaller in binary classification than in a multiclass setup. Therefore, ERM-based methods are likely to outperform DG-based methods in such scenarios.
The selection of the mutual information regularization coefficient \(\lambda\), which controls the balance between the cross entropy loss and the mutual information regularization loss, is related to how informative the oracle model's knowledge is for the target model's task. A large \(\lambda\) encourages the model to reduce the variance in the target features and enforce similarity between the target and oracle features. Thus, the model will focus on learning domain-invariant patterns originating from the oracle's knowledge, which is ImageNet in our main experiment. On the other hand, a small \(\lambda\) reduces the emphasis on domain-invariance and thus may potentially lead to overfitting.
In our case, as shown in [18], ImageNet initialization of deep learning models is beneficial in the context of medical imaging analysis, including fundus images. Therefore, we conclude that the best \(\lambda\) for the case of DR classification is 1.0 for the ImageNet pretrained ResNet-50, in contrast with that of natural images where \(\lambda\) is typically set to be \(\{0.001,0.01,0.1\}\) as in [7]. We believe that in the DR classification problem, the oracle model has a significant impact on training the target DG model due to its rich low level feature representations which cannot be easily learnt from scratch or from a small size dataset.
As a final note, a very important part of a domain generalization solution is the model selection method, as it simplifies fair assessments by disregarding differences in results due to inconsistent hyperparameter tuning that may be
attributed to the algorithms under study [11]. Furthermore, utilizing the test-domain validation set as a model selection method is inappropriate for a DG algorithm, which was done by DRGen [3] in DR classification. Hence, one goal of this paper is to set a clear benchmark for DR classification using training-domain validation, thus allowing easy comparison with future work.
## 6 Conclusion
In this paper, we introduce DGM-DR to tackle the problem of DR classification with domain generalization. Our use of a large pretrained model to guide the target model towards learning domain-invariant features across different DR datasets through mutual information regularization achieves superior performance over the previous state-of-the-art DG methods. We also establish a clear benchmark for the task using a DG-appropriate model selection algorithm, thus allowing future work to make comparisons with our work. Further investigation is needed to understand when and why DG-based methods could be superior or inferior to ERM-based approaches in medical imaging. Although we believe that our work pushes the horizons of the DG field in medical image analysis, several DG-related research questions are yet to be investigated, e.g., unsupervised DG, interpretable DG, and performance evaluation of DG methods.
|
2309.12536 | Exceptional points in perturbed dielectric spheres: A resonant-state
expansion study | Exceptional points (EPs) in open optical systems are rigorously studied using
the resonant-state expansion (RSE). A spherical resonator, specifically a
homogeneous dielectric sphere in a vacuum, perturbed by two point-like defects
which break the spherical symmetry and bring the optical modes to EPs, is used
as a worked example. The RSE is a non-perturbative approach encoding the
information about an open optical system in matrix form in a rigorous way, and
thus offering a suitable tool for studying its EPs. These are simultaneous
degeneracies of the eigenvalues and corresponding eigenfunctions of the system,
which are rigorously described by the RSE and illustrated for perturbed
whispering-gallery modes (WGMs). An exceptional arc, which is a line of
adjacent EPs, is obtained analytically for perturbed dipolar WGMs. Perturbation
of high-quality WGMs with large angular momentum and their EPs are found by
reducing the RSE equation to a two-state problem by means of an orthogonal
transformation of a large RSE matrix. WGM pairs have opposite chirality in
spherically symmetric systems and equal chirality at EPs. This chirality at EPs
can be observed in circular dichroism measurements, as it manifested itself in
a squared-Lorentzian part of the optical spectra, which we demonstrate here
analytically and numerically in the Purcell enhancement factor for the
perturbed dipolar WGMs. | Kyle S. Netherwood, Hannah K. Riley, Egor A. Muljarov | 2023-09-21T23:23:58Z | http://arxiv.org/abs/2309.12536v3 | # Exceptional points in optical systems: A resonant-state expansion study
###### Abstract
Exceptional points (EPs) in open optical systems are rigorously studied using the resonant-state expansion (RSE). A spherical resonator, specifically a homogeneous dielectric sphere in a vacuum, perturbed by two point-like defects which break the spherical symmetry and bring the optical modes to EPs, is used as a worked example. The RSE is a non-perturbative approach encoding the information about an open optical system in matrix form in a rigorous way, and thus offering a suitable tool for studying its EPs. These are simultaneous degeneracies of the eigenvalues and corresponding eigenfunctions of the system, which are rigorously described by the RSE and illustrated for perturbed whispering-gallery modes (WGMs). An exceptional arc, which is a line of adjacent EPs, is obtained analytically for perturbed dipolar WGMs. Perturbation of high-quality WGMs with large angular momentum and their EPs are found by reducing the RSE equation to a two-state problem by means of an orthogonal transformation of a large RSE matrix. WGM pairs of opposite chirality away from EPs are shown to have the same chirality at EPs. This chirality can be observed in circular dichroism measurements, as it manifested itself in a squared-Lorentzian part of the optical spectra, which we demonstrate here analytically and numerically in the Purcell enhancement factor for the perturbed dipolar WGMs.
## I Introduction
An exceptional point (EP), originally named by Kato (1966) [1], is a simultaneous degeneracy of the eigenvalues and the corresponding eigenfunctions of a system. An EP of \(N\)th-order has \(N\) degenerate eigenvalues and eigenfunctions. EPs are a typical feature of open systems, which are characterized by the presence of gain and/or loss of energy and information, and can be described by non-Hermitian matrices which have generally complex eigenvalues [2].
Matrices allow a mathematically rigorous and at the same time straightforward investigation of EPs as a special case of their eigenvalues and eigenvectors. To give a mathematical example of an EP, we introduce the \(2\times 2\) symmetric matrix
\[M=\begin{pmatrix}a&b\\ b&d\end{pmatrix} \tag{1}\]
where \(a\), \(b\), and \(d\) are complex numbers. The matrix \(M\) has the eigenvalues
\[\lambda=\frac{a+d}{2}\pm\frac{1}{2}\sqrt{(a-d)^{2}+4b^{2}}\,. \tag{2}\]
To find a point where the eigenvalues are degenerate, we let the square-root term in Eq.(2) vanish. This gives the degeneracy condition
\[b=\pm\frac{i(a-d)}{2}\,. \tag{3}\]
If \(b\neq 0\) and Eq.(3) is satisfied, \(a\), \(b\), and \(d\) are the matrix elements of \(M\) at an EP. If Eq.(3) is satisfied but \(b=0\), the degeneracy is called a diabolic point (DP) which is a degeneracy of eigenvalues but not eigenfunctions. DPs are equivalent to any degeneracies in a Hermitian system, but in a non-Hermitian system they are only the degeneracies that arise due to symmetry, and they generally do not have the characteristic shape of an EP. This characteristic shape along with other features of EPs can be demonstrated, for example, by setting the matrix elements of Eq.(1) to \(a=0\), \(b=ic\), and \(d=1\) where \(c\) is a real variable. Using Eq.(2), the eigenvalues of this example matrix around an EP at \(c=1/2\) are plotted in Fig.1.
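As a quick numerical illustration (a sketch of ours, not the code behind Fig.1), substituting \(a=0\), \(b=ic\), and \(d=1\) into Eq.(2) gives \(\lambda=\tfrac{1}{2}\pm\tfrac{1}{2}\sqrt{1-4c^{2}}\), so the two branches coalesce where the square root vanishes:

```python
import numpy as np

c = np.linspace(0.0, 1.0, 201)
root = np.sqrt((0.0 - 1.0) ** 2 + 4 * (1j * c) ** 2)  # sqrt((a-d)^2 + 4b^2) = sqrt(1 - 4c^2)
lam_plus = 0.5 + 0.5 * root
lam_minus = 0.5 - 0.5 * root

# the two eigenvalue branches merge at the EP, where the square-root term vanishes
i_ep = np.argmin(np.abs(lam_plus - lam_minus))
print(f"branches coalesce at c = {c[i_ep]:.2f}")  # 0.50
```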
Fig.1 shows the characteristic shape of the eigenvalues in the proximity of an EP. This shape is due to the fact that the eigenvalues vary non-linearly with the parameter \(c\), following the square-root dependence in Eq.(2) in the vicinity of the EP.
Figure 1: Eigenvalues of Eq.(1), where \(a=0\), \(b=ic\), and \(d=1\), varied against parameter \(c\), taking a value of \(c=1/2\) at an EP. |
2310.00051 | Kinematically constrained vortex dynamics in charge density waves | We build a minimal model of dissipative vortex dynamics in two spatial
dimensions, subject to a kinematic constraint: dipole conservation. The
additional conservation law implies anomalously slow decay rates for vortices.
We argue that this model of vortex dynamics is relevant for a broad range of
time scales during a quench into a uniaxial charge density wave state. Our
predictions are consistent with recent experiments on uniaxial charge density
wave formation in $\mathrm{LaTe}_3$. | Marvin Qi, Andrew Lucas | 2023-09-29T18:00:03Z | http://arxiv.org/abs/2310.00051v1 | # Kinematically constrained vortex dynamics in charge density waves
###### Abstract
We build a minimal model of dissipative vortex dynamics in two spatial dimensions, subject to a kinematic constraint: dipole conservation. The additional conservation law implies anomalously slow decay rates for vortices. We argue that this model of vortex dynamics is relevant for a broad range of time scales during a quench into a uniaxial charge density wave state. Our predictions are consistent with recent experiments on uniaxial charge density wave formation in LaTe\({}_{3}\).
## I Introduction
The dynamics of systems with multipolar symmetries and more general kinematic constraints have been the subject of intense study in recent years. Much of the interest in such systems derives from their natural relation to the study of fractons, which are quasiparticle excitations in many-body systems with restricted mobility [1; 2; 3; 4; 5; 6]. These mobility restrictions often originate from multipolar symmetries or gauged versions thereof [7; 8; 9]. Restricted mobility of microscopic degrees of freedom can lead to many observable consequences in dynamics, including ergodicity breaking [10; 11], Hilbert space fragmentation [12; 13; 14; 15; 16] and subdiffusive hydrodynamics [17; 18; 19; 20; 21; 22; 23; 24; 25; 26].
Given the unusual nature of the symmetries involved in fractonic theories, it is often challenging to realize the dynamical phenomena discovered above directly in experiment. One exception is the emergence of dipole conservation in tilted optical lattices [27; 28; 29]. Spin liquids and frustrated magnetism [30; 31; 32; 33] may also give rise to similar physics, though a conclusive experimental demonstration has not been found yet. The most "conventional" realization of such unusual symmetries in nature is in elastic solids: via fracton-elasticity duality [34; 35; 36; 37; 38], the charges and dipoles of a rank-two tensor gauge theory are mapped to disclination and dislocation defects of a two-dimensional crystal. The disclination is immobile in isolation while the dislocation can only move along its Burgers vector; these mobility constraints are shared respectively by the charge and dipole of the rank-two tensor gauge theory. Similar mobility constraints apply to defects of two-dimensional smectic liquid crystals [39; 40].
The purpose of this work is to show that in a rather related setting - the formation of (uniaxial) charge density waves (CDW) - emergent and approximate mobility constraints can have striking dynamical signatures that are experimentally observable. We will see that topological defects (vortices) have an anomalously long lifetime in uniaxial charge density wave formation. More concretely, we write down a minimal model for dissipative vortex dynamics in this setting, incorporating a dipolar kinematic constraint on vortex motion orthogonal to the density wave direction. Numerical simulations and analytical theory demonstrate that dissipative vortex dynamics of such constrained vortices is qualitatively different from usual dissipative vortex dynamics [41; 42; 43], which has been realized in e.g. thin films of superfluid helium [44].
Our predictions are quantitatively consistent with recent ultrafast experiments on LaTe\({}_{3}\)[45], which revealed anomalously slow subdiffusive vortex decay, incompatible with the existing Ginzburg-Landau model of uniaxial density wave formation. Hence, this work reveals a promising new avenue for studying "fractonic" dynamical universality classes in quantum materials.
## II Vortex dynamics in charge density waves
Topological defects of an order parameter naturally form when a system undergoes a quench from a disordered to an ordered phase [46; 47]. Relaxation toward the equilibrium steady state proceeds via annihilation of the topological defects. In a two dimensional uniaxial charge density wave, the topological defects are vortices, which correspond physically to dislocations of the CDW.
The _equilibrium_ properties of the transition into a uniaxial CDW are commonly described using the same Ginzburg-Landau (GL) theory describing the superfluid-insulator transition. However, we argue following [48] that the standard analysis of a dynamical GL theory will incorrectly describe dynamics of CDW vortices. Unlike superfluid vortices, vortices of the CDW are dislocations, which are subject to approximate kinematic constraints. If the layering occurs perpendicular to the \(\hat{x}\) axis, then local operators can translate vortices along the \(\hat{x}\) direction, as shown in Fig. 1(\(a\)). Motion along \(\hat{y}\), however, requires a non-local operator which translates the CDW layer, because translating the vortex in this direction requires _adding or removing charge_, which violates local charge conservation if the CDW is in its ground state. Hence, the simplest move in the \(\hat{y}\) direction is that of a pair of vortices: see Fig. 1(\(b\)) - (\(c\)). Such processes leave the \(\hat{y}\) dipole moment of the vortices unchanged.
At finite temperature, we expect a very small density of
mobile charged degrees of freedom thermally fluctuating on top of the CDW, which will give a single vortex a small mobility in the \(\hat{y}\) direction. In this Letter, we will focus on dynamics at short time scales, where this process can be neglected.
## III The model
We now develop a minimal model for vortex dynamics subject to the constraint above. The degrees of freedom are the positions \(\mathbf{r}^{\alpha}=(x^{\alpha},y^{\alpha})\) of the \(N\) vortices. Starting with the dissipationless component, we anticipate that this can be described by conventional point-vortex dynamics: after all, such dynamics _already_ conserves dipole [49]. The dissipationless dynamics is moreover Hamiltonian, if we define Poisson brackets
\[\{x^{\alpha},y^{\beta}\}=\frac{1}{\Gamma_{\alpha}}\delta_{\alpha\beta}. \tag{1}\]
Here \(\Gamma_{\alpha}\) is the vorticity of the \(\alpha\)-th vortex. Note that we do not sum over repeated indices. This can equivalently be written as
\[\{r_{i}^{\alpha},r_{j}^{\beta}\}=\frac{1}{\Gamma_{\alpha}}\delta_{\alpha\beta }\epsilon_{ij}. \tag{2}\]
The vortices interact via a logarithmic potential, so the Hamiltonian is (in dimensionless units)
\[\mathcal{H}=-\sum_{\alpha<\beta}\Gamma_{\alpha}\Gamma_{\beta}\,\log(|\mathbf{ r}_{\alpha}-\mathbf{r}_{\beta}|). \tag{3}\]
The corresponding Hamiltonian equations of motion are
\[\begin{split}\dot{x}^{\alpha}=\{x^{\alpha},\mathcal{H}\}&=\frac{1}{\Gamma_{\alpha}}\frac{\partial\mathcal{H}}{\partial y^{\alpha}}\\ \dot{y}^{\alpha}=\{y^{\alpha},\mathcal{H}\}&=-\frac{1}{\Gamma_{\alpha}}\frac{\partial\mathcal{H}}{\partial x^{\alpha}}\end{split} \tag{4}\]
In this setting, dipole conservation is a consequence of translation invariance. Indeed, the Poisson brackets (1) mean that \(\Gamma_{\alpha}y^{\alpha}\) plays the role of "momentum" of the \(\alpha\)-th vortex in the \(\hat{x}\) direction, and similarly for \(-\Gamma_{\alpha}x^{\alpha}\) in the \(\hat{y}\) direction. The total dipole moments are therefore identified with the generators of translation, whose conservation follows from translation invariance of \(\mathcal{H}\).
The dipole conservation can be seen in the exactly solvable two-body dynamics of vortices. Pairs with equal vorticity travel in a circular orbit around their center of mass, while pairs of opposite vorticity move in a straight line perpendicular to their dipole moment; in each case dipole moment is conserved [49].
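As a concrete illustration of this exactly solvable limit, the following minimal sketch (ours, not part of the original work) integrates the dissipationless equations (4) for an opposite-sign pair with a fourth-order Runge-Kutta step and confirms that the dipole moment \(\sum_{\alpha}\Gamma_{\alpha}\mathbf{r}^{\alpha}\) stays fixed; the initial positions and step size are arbitrary choices.

```python
# Sketch: integrate the dissipationless Eq. (4) for two vortices (RK4) and check that
# the dipole moment D = sum_a Gamma_a r_a is a constant of the motion.
import numpy as np

gam = np.array([1.0, -1.0])                       # an opposite-sign pair
r = np.array([[0.0, 0.0], [1.0, 0.0]])

def rhs(r):
    d = r[:, None, :] - r[None, :, :]
    dist2 = (d ** 2).sum(-1) + np.eye(len(gam))   # pad diagonal to avoid 0/0
    # mu_a = dH/dr_a with H = -sum_{a<b} Gamma_a Gamma_b log|r_a - r_b|
    mu = -(np.outer(gam, gam)[..., None] * d / dist2[..., None]).sum(axis=1)
    return np.stack([mu[:, 1], -mu[:, 0]], axis=1) / gam[:, None]   # (dH/dy, -dH/dx)/Gamma

dt = 1e-3
for _ in range(10000):
    k1 = rhs(r); k2 = rhs(r + dt * k1 / 2); k3 = rhs(r + dt * k2 / 2); k4 = rhs(r + dt * k3)
    r = r + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

print("dipole moment:", gam @ r)                  # stays at its initial value (-1, 0)
```

The opposite-sign pair translates uniformly perpendicular to its separation, exactly as described above, while the printed dipole moment does not drift.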
We now turn to the effects of dissipation. The standard model for dissipative dynamics of point vortices is
\[\begin{split}\dot{x}^{\alpha}&=\frac{1}{\Gamma_{\alpha}}\frac{\partial\mathcal{H}}{\partial y^{\alpha}}-\gamma\frac{\partial\mathcal{H}}{\partial x^{\alpha}}\\ \dot{y}^{\alpha}&=-\frac{1}{\Gamma_{\alpha}}\frac{\partial\mathcal{H}}{\partial x^{\alpha}}-\gamma\frac{\partial\mathcal{H}}{\partial y^{\alpha}}\end{split} \tag{5}\]
where the \(\gamma\) term is the mutual friction responsible for dissipation. Note, however, that it breaks the conservation of dipole moment.
Indeed, one can see the effect of \(\gamma\) in the two-body dynamics of vortices. It causes same-sign vortices to drift apart, and opposite-sign vortices to approach each other and collide in finite time; dipole moment conservation is violated in the latter case.
A minimal model for dissipative vortex dynamics which conserve both components of the dipole moment is (see Appendix A for a derivation)
\[\begin{split}\dot{x}^{\alpha}&=\frac{1}{\Gamma_{\alpha}}\frac{\partial\mathcal{H}}{\partial y^{\alpha}}-\gamma^{\prime}\frac{\tilde{f}_{\alpha}}{\Gamma_{\alpha}^{2}}\frac{\partial\mathcal{H}}{\partial x^{\alpha}}+\gamma^{\prime}\sum_{\beta}\frac{f_{\alpha\beta}}{\Gamma_{\alpha}\Gamma_{\beta}}\frac{\partial\mathcal{H}}{\partial x^{\beta}}\\ \dot{y}^{\alpha}&=-\frac{1}{\Gamma_{\alpha}}\frac{\partial\mathcal{H}}{\partial x^{\alpha}}-\gamma^{\prime}\frac{\tilde{f}_{\alpha}}{\Gamma_{\alpha}^{2}}\frac{\partial\mathcal{H}}{\partial y^{\alpha}}+\gamma^{\prime}\sum_{\beta}\frac{f_{\alpha\beta}}{\Gamma_{\alpha}\Gamma_{\beta}}\frac{\partial\mathcal{H}}{\partial y^{\beta}}\end{split} \tag{6}\]
where \(f_{\alpha\beta}\coloneqq f(|\mathbf{r}^{\alpha}-\mathbf{r}^{\beta}|)\) is a function which depends only on the distance between vortices \(\alpha\) and \(\beta\), and \(\tilde{f}_{\alpha}\coloneqq\sum_{\beta\neq\alpha}f_{\alpha\beta}\) is the corresponding diagonal weight, which guarantees that both components of the total dipole moment are conserved. The function \(f_{\alpha\beta}\) is not constrained by the EFT; we choose \(f_{\alpha\beta}=|\mathbf{r}^{\alpha}-\mathbf{r}^{\beta}|^{-1}\). When this dipole-conserving dissipative term is included, two vortices of opposite sign can approach each other and annihilate in the presence of a nearby third vortex. The motion of the third vortex compensates for the change of dipole moment caused by the annihilation of the two vortices, leaving total dipole moment unchanged. This process is depicted in Fig. 2(b). We find, however, that for our choice of \(f_{\alpha\beta}\), if the initial positions are sufficiently far apart then a vortex pair can simply escape off to infinity without annihilating.
We numerically simulate the \(N\)-body dynamics of dipole-conserving vortices given by the equations of motion (6). For the initial conditions we randomly sample \(N\) points uniformly from a box of size \(L\times L\), and randomly assign vorticities \(\pm 1\) to each. Dissipation causes vortices to come together and annihilate in finite time. Vortex annihilation is implemented in the simulation by manually removing pairs of opposite-sign vortices when
Figure 1: Kinematic constraints of dislocations of the charge density wave. Column \((a)\) shows a local process which translates the vortex freely in the \(\hat{x}\) direction. On the other hand, motion of same (resp. opposite) sign vortices along the \(\hat{y}\) direction only occurs in pairs, as depicted in \((b)\) (resp. \((c)\)). The pair process conserves dipole moment along the \(\hat{y}\) direction.
their distance decreases below a cutoff \(\epsilon\). We plot the average surviving fraction of vortices \(\langle n(t)\rangle\) as a function of (rescaled) time in Fig. 3 in blue for \(N=400\) and \(L=80\). This vortex relaxation process is well-described at early to intermediate times by a function of the form \((1+{\cal K}t)^{-\alpha_{\rm dipole}}\) with \(\alpha_{\rm dipole}=0.50\pm 0.01\), shown in orange in Fig. 3. At late times, vortex annihilation occurs much more slowly; this can be attributed to the annihilation process "freezing out" at sufficiently low density as alluded to above. This is qualitatively similar to the breakdown of thermalization in dipole-conserving systems found in [10; 11].
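For concreteness, the sketch below (ours; it is not the authors' code) implements the simulation just described with forward-Euler steps of Eq. (6), \(f_{\alpha\beta}=|\mathbf{r}^{\alpha}-\mathbf{r}^{\beta}|^{-1}\), and removal of opposite-sign pairs closer than the cutoff \(\epsilon\). The values of \(N\), \(L\), \(\gamma^{\prime}\), \(\epsilon\), and the time step are illustrative and smaller than those used for Fig. 3.

```python
# Minimal sketch of the N-body dipole-conserving dynamics, Eq. (6), with f_ab = 1/|r_a - r_b|,
# forward-Euler time stepping, and annihilation of opposite-sign pairs closer than eps.
import numpy as np

rng = np.random.default_rng(0)
N, L, gp, eps, dt, steps = 100, 40.0, 0.1, 0.05, 1e-3, 10000   # illustrative values
r = rng.uniform(0.0, L, size=(N, 2))
gam = rng.choice([-1.0, 1.0], size=N)

def velocity(r, gam):
    d = r[:, None, :] - r[None, :, :]
    dist = np.sqrt((d ** 2).sum(-1)) + np.eye(len(gam))          # pad diagonal, avoids 0/0
    mu = -(np.outer(gam, gam)[..., None] * d / (dist ** 2)[..., None]).sum(axis=1)
    f = 1.0 / dist
    np.fill_diagonal(f, 0.0)
    ftilde = f.sum(axis=1)                                       # \tilde f_a = sum_b f_ab
    v = np.stack([mu[:, 1], -mu[:, 0]], axis=1) / gam[:, None]   # dissipationless drift
    v -= gp * (ftilde / gam ** 2)[:, None] * mu                  # diagonal dissipative term
    v += gp * (f / np.outer(gam, gam)) @ mu                      # off-diagonal dissipative term
    return v

def annihilate(r, gam, eps):
    d = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
    close = (np.outer(gam, gam) < 0) & (d < eps)
    keep = np.ones(len(gam), bool)
    for a, b in zip(*np.where(np.triu(close, k=1))):
        if keep[a] and keep[b]:
            keep[a] = keep[b] = False
    return r[keep], gam[keep]

fraction = []                                                    # surviving fraction <n(t)>/n(0)
for _ in range(steps):
    r = r + dt * velocity(r, gam)
    r, gam = annihilate(r, gam, eps)
    fraction.append(len(gam) / N)
```

The recorded `fraction` plays the role of the \(\langle n(t)\rangle\) curves discussed above.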
Vortices of the CDW order parameter (approximately) conserve dipole moment only along the layering axis, and are unconstrained transversely. Assuming layering along the \(y\) axis, this anisotropic mobility constraint is implemented by including in the EFT Lagrangian the \(i=x\) component of the ordinary mutual-friction dissipative term and the \(i=y\) component of the dipole-conserving dissipative term of Appendix A, so that motion along \(\hat{x}\) is damped as in (5) while motion along \(\hat{y}\) is damped as in (6). The resulting equations of motion are
\[\begin{split}\dot{x}^{\alpha}&=\frac{1}{\Gamma_{\alpha}}\frac{\partial\mathcal{H}}{\partial y^{\alpha}}-\gamma\frac{\partial\mathcal{H}}{\partial x^{\alpha}}\\ \dot{y}^{\alpha}&=-\frac{1}{\Gamma_{\alpha}}\frac{\partial\mathcal{H}}{\partial x^{\alpha}}-\gamma^{\prime}\frac{\tilde{f}_{\alpha}}{\Gamma_{\alpha}^{2}}\frac{\partial\mathcal{H}}{\partial y^{\alpha}}+\gamma^{\prime}\sum_{\beta}\frac{f_{\alpha\beta}}{\Gamma_{\alpha}\Gamma_{\beta}}\frac{\partial\mathcal{H}}{\partial y^{\beta}}\end{split} \tag{7}\]
Since motion along the \(\hat{x}\) axis is unconstrained, the three-body annihilation process of Fig. 2(b) is no longer the only process by which vortices can annihilate. A pair of vortices can annihilate via the process of Fig. 2(c) if the pair has net zero \(y\)-dipole moment. However, such a process is fine-tuned, as it requires the \(y\)-dipole moment to vanish exactly. Nevertheless, we expect that the dynamics should proceed more quickly than in the isotropic dipole-conserving case, as it is less constrained.
We follow the same procedure to simulate the \(N\)-body dynamics of (7) with vortex annihilation. The average surviving fraction of vortices is plotted in green in Fig. 3, for \(N=400\) and \(L=80\). The red dashed line shows the fit at early to intermediate times to \((1+{\cal K}t)^{-\alpha_{y-{\rm dipole}}}\), with \(\alpha_{y\text{-dipole}}=0.65\pm 0.02\). The relation \(\alpha_{y\text{-dipole}}\gtrsim\alpha_{\rm dipole}\) is consistent with faster dynamics as a consequence of fewer kinematic constraints.
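The fitting step can be sketched as follows (synthetic data stands in for the simulated \(\langle n(t)\rangle\); in the actual analysis the input would be the surviving fraction recorded above):

```python
# Sketch: fit <n(t)> to (1 + K t)^(-alpha), as used for the curves in Fig. 3.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, K, alpha):
    return (1.0 + K * t) ** (-alpha)

t = np.linspace(0.0, 50.0, 200)
n_frac = decay(t, 2.0, 0.5) + 0.01 * np.random.default_rng(1).normal(size=t.size)

(K_fit, alpha_fit), cov = curve_fit(decay, t, n_frac, p0=(1.0, 1.0))
print(f"alpha = {alpha_fit:.2f} +/- {np.sqrt(cov[1, 1]):.2f}")
```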
In an isotropic theory, we can successfully estimate \(\alpha\) for both the dipole-conserving and non-conserving vortex dynamics based on evaluating the two-point correlator for a conserved (vortex) density \(n\):
\[\langle n(t)n(0)\rangle\sim\int\mathrm{d}k_{x}\,\mathrm{d}k_{y}\,\mathrm{e}^{-D_{x}k_{x}^{a_{x}}t-D_{y}k_{y}^{a_{y}}t}\sim t^{-\alpha^{*}}, \tag{8}\]
with \(\alpha^{*}=1/a_{x}+1/a_{y}\), using \(a=4\) for dipole-conserving [17; 48] and \(a=2\) for dipole-non-conserving dissipative dynamics. If only \(y\)-dipole is conserved, this argument then suggests that \(\alpha=\frac{1}{2}+\frac{1}{4}=0.75\). The numerically observed \(\alpha_{y\text{-dipole}}=0.65\) is not consistent with this estimate. This suggests that the dynamical universality class observed is not fully captured by "mean field" scaling.
## IV Reaction-diffusion dynamics
To further justify our observation of \(\alpha=0.5\) scaling observed when both components of dipole moment are conserved, we employ the theory of stochastic reaction-diffusion equations [50; 51; 52; 53; 54; 55]. Our setup, however, contains some complicating features: the vortices experience long-range interactions given by the logarithmic potential (3), and their dynamics conserve the total dipole moment (along one or both axes). These features modify the kinetic rate equations and their scaling.
Moreover, in contrast to an ordinary \(A+B\to 0\) reaction-diffusion system, two isolated vortices are unable to annihilate each other, as doing so would change the dipole moment of the system. Rather, two vortices can only annihilate in the presence of another nearby vortex. In this case, the change in dipole moment arising from annihilation of the vortex pair is compensated by
Figure 2: Few body dissipative dynamics of kinematically constrained vortices. (\(a\)) Two same-sign vortices will tend to spiral away from each other preserving dipole moment. (\(b\)) The minimal three-body process by which two vortices can annihilate. The change in dipole moment is offset by motion of the third vortex. (\(c\)) When only the \(y\) dipole moment is conserved, two vortices can annihilate if their \(y\) dipole moment exactly vanishes.
Figure 3: Average number of vortices \(\langle n(t)\rangle\) normalized as a fraction of the initial number of vortices. The blue and red curves show \(\langle n(t)\rangle\) for vortices which conserve dipole moment in both directions and only the \(\hat{y}\) direction, respectively. The dashed lines show their corresponding fits to the function \(1/(1+{\cal K}t)^{\alpha}\), with \(\alpha_{\rm dipole}=0.50\pm 0.01\) and \(\alpha_{y\text{-dipole}}=0.65\pm 0.02\). Note that the \(x\) axis is scaled to \({\cal K}t\); \({\cal K}\) is a fit parameter which is different for the two systems.
motion of the nearby vortex. See Fig. 2(b) for an illustration of the minimal process by which a pair of vortices can annihilate while preserving dipole moment in both directions. Hence, the allowed reactions are of the form \(A+B+x\to x\), for \(x=A,B\).
Letting \(\rho_{A,B}\) denote the densities of the two species (positive/negative vortices), we then postulate a kinetic rate equation
\[\frac{\mathrm{d}\rho_{A}}{\mathrm{d}t}=\frac{\mathrm{d}\rho_{B}}{\mathrm{d}t}=-\mathcal{K}\rho_{A}\rho_{B}(\rho_{A}+\rho_{B}). \tag{9}\]
Defining the total vortex density \(n=\rho_{A}+\rho_{B}\) and the (signed) vorticity density \(\rho=\rho_{A}-\rho_{B}\), we have
\[\begin{split}\frac{\mathrm{d}n}{\mathrm{d}t}&=- \frac{\mathcal{K}}{2}(n^{2}-\rho^{2})n\\ \frac{\mathrm{d}\rho}{\mathrm{d}t}&=0\end{split} \tag{10}\]
The second equation is the statement that the total charge is a conserved quantity since only opposite-sign vortices can annihilate. When the initial charge density vanishes, (10) can be solved to give
\[n(t)=(1/n_{0}^{2}+\mathcal{K}t)^{-1/2} \tag{11}\]
which is in sharp contrast to the case where no dipole symmetry is imposed. The exponent is in excellent agreement with the numerical results of the isotropic dipole-conserving vortex model of the previous section.
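A quick numerical cross-check of this closed form (our own sketch, in arbitrary units):

```python
# Sketch: integrate dn/dt = -(K/2) n^3 and compare with n(t) = (1/n0^2 + K t)^(-1/2), Eq. (11).
import numpy as np
from scipy.integrate import solve_ivp

K, n0 = 1.0, 1.0
sol = solve_ivp(lambda t, n: -0.5 * K * n ** 3, (0.0, 100.0), [n0],
                rtol=1e-8, atol=1e-10, dense_output=True)
t = np.linspace(0.0, 100.0, 51)
exact = (1.0 / n0 ** 2 + K * t) ** -0.5
print(np.max(np.abs(sol.sol(t)[0] - exact)))   # ~1e-8: the numerics matches Eq. (11)
```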
We now include the effects of spatial fluctuations and long-range interactions in the reaction-diffusion dynamics. The equations of motion are modified to be
\[\begin{split}\partial_{t}n-D\nabla^{2}n&=-\frac{\mathcal{K}}{2}[n^{2}-\rho^{2}]n-Q\nabla(\rho\nabla V)\\ \partial_{t}\rho+\tilde{D}\nabla^{4}\rho&=-Q\nabla(n\nabla V)\end{split} \tag{12}\]
where the potential \(V(\mathbf{r},t)\) is given by
\[V(\mathbf{r},t)=-\int\mathrm{d}^{2}\mathbf{r}^{\prime}\rho(\mathbf{r}^{\prime },t)\log|\mathbf{r}^{\prime}-\mathbf{r}| \tag{13}\]
and captures the drift of vortices due to the velocity fields created by the others. We have introduced (sub)diffusion coefficients \(D\) and \(\tilde{D}\) and a coefficient \(Q\sim\Gamma^{2}\) proportional to the square of the vorticity.
Following [56], we analyze these equations using a self-consistent approximation where the fluctuations of the total number density \(n\) are neglected while the fluctuations of the charge density \(\rho\) are kept. In other words, the number density \(n(\mathbf{r},t)=n(t)\) is approximated as a spatially independent function of time. Our goal is to determine the average number density \(n(t)\) averaged over an ensemble of initial conditions for \(\rho(\mathbf{r},0)\) taken to be
\[\begin{split}\langle\rho(\mathbf{r},0)\rangle&=0\\ \langle\rho(\mathbf{r}_{1},0)\rho(\mathbf{r}_{2},0)\rangle& =n_{0}^{2}\delta^{(2)}(\mathbf{r}_{1}-\mathbf{r}_{2})\end{split}. \tag{14}\]
Substituting \(n(\mathbf{r},t)\simeq n(t)\) into the first equation of (12) and averaging over all space and initial conditions, we obtain
\[\frac{\mathrm{d}n(t)}{\mathrm{d}t}+\frac{\mathcal{K}}{2}n(t)^{3}=\frac{ \mathcal{K}}{2}n(t)\int\mathrm{d}^{2}\mathbf{r}\ \langle\rho(\mathbf{r},t)^{2}\rangle. \tag{15}\]
with \(\langle\ldots\rangle\) denoting the average over initial conditions. Fourier transforming \(\rho(\mathbf{r},t)\) in space, the second equation of (12) becomes
\[\partial_{t}\rho(\mathbf{k},t)+\tilde{D}k^{4}\rho(\mathbf{k},t)=-Qn(t)\rho(\mathbf{k},t) \tag{16}\]
away from \(k=0\) and \(\partial_{t}\rho(\mathbf{0},t)=0\) at \(k=0\). This equation can be solved exactly to give
\[\rho(\mathbf{k},t)=\rho(\mathbf{k},0)\exp\left(-\tilde{D}k^{4}t-Q\int_{0}^{t} \mathrm{d}t^{\prime}n(t^{\prime})\right). \tag{17}\]
Substituting the solution \(\rho(\mathbf{k},t)\) into (15) and performing the average over initial conditions (14), the equation of motion for \(n(t)\) becomes
\[\begin{split}\frac{\mathrm{d}n}{\mathrm{d}t}+\frac{\mathcal{K}}{2}n^{3}&=\frac{\mathcal{K}}{2}n\int\mathrm{d}^{2}\mathbf{k}\exp\left(-2\tilde{D}k^{4}t-2Q\int_{0}^{t}\mathrm{d}t^{\prime}n(t^{\prime})\right)\\ &\simeq\mathcal{K}n\exp\left(-2Q\int_{0}^{t}\mathrm{d}t^{\prime}n(t^{\prime})\right)\frac{1}{\sqrt{2\tilde{D}t}}\end{split} \tag{18}\]
This equation reduces to the mean field solution (11) when the right hand side can be ignored. The self-consistency of the mean field approximation can be determined as follows. Substituting the mean field solution (11) into (18), the terms on the left hand side decay as \(t^{-3/2}\), while the right hand side decays superpolynomially as \(t^{-1}\exp(-\sqrt{t})\). We see that the mean field solution is valid asymptotically for any nonzero \(Q\). This justifies the agreement between the numerical simulations of the isotropic dipole-conserving model and the fit obtained from mean-field reaction-diffusion theory.
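To make the self-consistency argument explicit, the sketch below (ours) integrates Eq. (18) with a forward-Euler step, starting from an arbitrary cutoff time \(t_{0}=1\) where the \(1/\sqrt{t}\) factor is regulator-dependent, and reads off the late-time decay exponent of \(n(t)\):

```python
# Sketch: integrate Eq. (18) numerically and read off the late-time exponent of n(t).
import numpy as np

K, Q, Dt = 1.0, 1.0, 1.0           # rate constant, vorticity coupling, subdiffusion constant
dt, t0, T = 1e-3, 1.0, 200.0
t_vals = np.arange(t0, T, dt)
n, integral_n = 1.0, 0.0           # integral_n tracks int_0^t n(t') dt' (started at t0)
ns = []
for t in t_vals:
    rhs = K * n * np.exp(-2.0 * Q * integral_n) / np.sqrt(2.0 * Dt * t)
    n += dt * (-0.5 * K * n ** 3 + rhs)
    integral_n += dt * n
    ns.append(n)
ns = np.asarray(ns)

late = t_vals > 50.0               # fit only the late-time window
slope = np.polyfit(np.log(t_vals[late]), np.log(ns[late]), 1)[0]
print("late-time exponent:", -slope)   # approaches the mean-field value 0.5
```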
## V Comparison to Experiment on LaTe\({}_{3}\)
A recent experiment, which served as motivation for this work, observed anomalously slow dynamics of vortex strings in LaTe\({}_{3}\)[45], whose behavior was not explained by a Ginzburg-Landau type phenomenological theory. The experiment pulses the CDW state of LaTe\({}_{3}\) to photoinduce vortex strings and uses ultrafast x-ray scattering to resolve the time dynamics of the resulting non-equilibrium state. X-ray scattering measures the structure factor \(S(\mathbf{k},t)\), which takes the form
\[S(\mathbf{k},t)=g(t)F[kL(t)] \tag{19}\]
where \(F\) is a universal function and \(L(t)\sim t^{\beta}\) corresponds to the average distance between topological defects. They find a scaling exponent \(\beta=0.29\), which is inconsistent with the standard result \(\beta=0.5\) for diffusive
vortex decay in superfluid thin films [42] or superfluid ultracold atomic gases [57].
Because vortex strings are a codimension-two defect, the density of defects scales as \(n(t)\sim L(t)^{-2}\sim t^{-2\beta}\). While our model of uniaxial CDW vortex relaxation is strictly two-dimensional, it may capture similar phenomena to the experiment, so long as the bending of vortex strings does not play a qualitatively important role in the dynamics (note that 2d vs. 3d does not change \(\beta\) in ordinary superfluids). Above, we computed \(n(t)\) within our model of CDW vortex relaxation and found \(n(t)\sim t^{-\alpha}\) with \(\alpha=0.65\pm 0.02\). This yields \(\beta=0.325\pm 0.01\), which is quite close to the experimentally observed value. Importantly, our computation of \(\beta\) goes beyond the Ginzburg-Landau type phenomenological theory, which produces \(\beta=1/4\) and \(\beta=1/2\) for conserved and non-conserved dipole moments, respectively.
## VI Conclusion
We have constructed a minimal dissipative model of kinematically-constrained vortices, relevant to dynamics over a large range of time scales in uniaxial CDWs. While our isotropic dipole-conserving model agrees well with simple mean-field-theoretic arguments, the experimentally relevant model where only one component of dipole is conserved exhibits anomalous exponents that are close to those observed in a recent experiment on LaTe\({}_{3}\)[45]. Our results therefore provide a quantitative theory for how experimentally-observed subdiffusive dynamics of solid-state defects follows from emergent mobility restrictions, with direct implications for experiment. Generalizing our theory to broader settings, including the constrained dynamics of topological defects in three dimensional charge density waves, remains an interesting open problem that is - at least in principle - straightforwardly tackled using the methods described here.
## Acknowledgements
We thank Leo Radzihovsky, Mariano Trigo and Gal Orenstein for useful discussions. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award number DESC0014415 (MQ), the Alfred P. Sloan Foundation through Grant FG-2020-13795 (AL), and the National Science Foundation through CAREER Grant DMR-2145544 (AL).
## Appendix A Review of effective field theory
### Formalism
We review the general effective field theory (EFT) for stochastic dynamics following [21]. Let \(\mathbf{q}=(q_{1},\ldots,q_{M})\) be the "slow" degrees of freedom that we keep track of in the EFT. To begin, we assume the existence of a stationary distribution
\[P_{eq}\propto e^{-\Phi[\mathbf{q}]}, \tag{10}\]
where \(\Phi[\mathbf{q}]\) is a function(al) of the degrees of freedom \(\mathbf{q}\). The EFT describes the evolution of the probability distribution \(P[\mathbf{q},t]\) as it relaxes toward the stationary distribution (10). The EFT is encoded by an action of the form
\[S=\int\mathrm{d}t\sum_{a}\left[\pi_{a}\partial_{t}q_{a}-H(\pi_{a},q_{a})\right] \tag{11}\]
where \(\pi_{a}\) are "noise" variables conjugate to \(q_{a}\). We will refer to \(H(\pi_{a},q_{a})\) as the EFT Hamiltonian.
There are several general conditions that \(H(\pi_{a},q_{a})\) must obey for the action to represent a sensible EFT. We present the general principles that the dynamics must obey and the corresponding consequences for \(H(\pi_{a},q_{a})\); for a derivation of the latter from the former, see [21]. First, conservation of probability enforces
\[H(\pi_{a}=0,q_{a})=0. \tag{12}\]
This ensures that all terms of \(H\) have at least one factor of \(\pi_{a}\). Stationarity of \(P_{eq}\) places another constraint on the allowed form of \(H(\pi_{a},q_{a})\). Define the generalized chemical potentials
\[\mu_{a}=\frac{\partial\Phi}{\partial q_{a}}. \tag{13}\]
Stationarity of \(P_{eq}\) implies that
\[H(\pi_{a}=i\mu_{a},q_{a})=0. \tag{14}\]
Finally, we assume that the degrees of freedom \(\mathbf{q}\) undergo stochastic dynamics whose fluctuations are bounded. This enforces the condition
\[\mathrm{Im}(H)\leq 0. \tag{15}\]
Finally, we may require that the dynamics respect a version of time-reversal symmetry. Let \(\mathbb{T}\) denote the time-reversal operation, which sends \(t\to-t\) and \(\mathbf{q}\to\mathbb{T}(\mathbf{q})\). On the conjugate noise variables \(\pi_{a}\), time reversal acts as
\[\pi_{a}\to-\mathbb{T}(\pi_{a})+i\mu_{a}. \tag{16}\]
Note that this is a \(\mathbb{Z}_{2}\) transformation.
The conditions on the EFT Hamiltonian \(H(\pi_{a},q_{a})\) significantly constrain the terms which can appear. At the quadratic level, the Hamiltonian must take the form
\[H(\pi_{a},q_{a})=\sum_{ab}i\pi_{a}\mathcal{Q}_{ab}(\pi_{b}-i\mu_{b}) \tag{17}\]
for a matrix \(\mathcal{Q}_{ab}\). Decomposing \(\mathcal{Q}_{ab}=A_{ab}-\frac{1}{2}S_{ab}\) into its symmetric part \(-\frac{1}{2}S\) and antisymmetric part \(A\), we see that \(S\) must be positive definite due to the condition \(\mathrm{Im}(H)\leq 0\). Symmetric contributions to \(\mathcal{Q}_{ab}\) are dissipative, while antisymmetric contributions are non-dissipative.
We conclude the review of the formalism with a brief discussion of how additional conservation laws can be accounted for in the dynamics. Suppose we would like to enforce that a quantity \(F(\mathbf{q})\) is conserved. By Noether's theorem, this is equivalent to enforcing the shift symmetry
\[\pi_{a}\rightarrow\pi_{a}+\frac{\partial F}{\partial q_{a}} \tag{11}\]
on the EFT Hamiltonian \(H(\pi_{a},q_{a})\). Naturally, enforcing this symmetry leads to constraints on the allowed terms which can appear in the EFT Hamiltonian. We will encounter many examples of conservation laws below.
### Example: diffusion
Let us illustrate how diffusion of a single conserved density can be recovered within this framework. The only degree of freedom we keep track of is \(\mathbf{q}=\rho(x)\), which is a conserved density. Its corresponding conjugate field is \(\pi(x)\). We take the stationary distribution to be
\[\Phi[\rho(x)]=\int\mathrm{d}^{d}x\ \frac{1}{2}\chi\rho^{2}+\ldots \tag{12}\]
where the terms in \(\ldots\) are higher order in \(\rho\). The chemical potential to leading order is \(\mu=\chi\rho\). In addition to the aforementioned conditions on \(H\), we also demand that \(Q=\int\rho\) is conserved. Applying the continuum analogue of (11), we require that the EFT Hamiltonian is invariant under
\[\pi\rightarrow\pi+c(t). \tag{13}\]
To quadratic order \(H\) takes the form (17); the above symmetry fixes it to be
\[H(\pi,\rho)=-i\sigma\nabla\pi\nabla(\pi-i\mu)+\cdots. \tag{14}\]
This term is dissipative since it is a symmetric contribution to (17). The equation of motion from varying \(\pi\) (and then setting \(\pi\) to zero) is
\[\partial_{t}\rho-\nabla(D\nabla\rho)=0 \tag{15}\]
which is the diffusion equation with \(D=\chi\sigma\).
### Example: Hamiltonian mechanics
The formalism can also capture Hamiltonian mechanics in the presence of dissipation. We take our degrees of freedom to be \(\mathbf{q}=(x_{1},\ldots,x_{M},p_{1},\ldots,p_{M})=(\mathbf{x},\mathbf{p})\), with the usual Poisson brackets
\[\begin{split}\{x_{i},p_{j}\}&=\delta_{ij}.\\ \{x_{i},x_{j}\}&=\{p_{i},p_{j}\}=0\end{split} \tag{16}\]
We assume that there is a Hamiltonian \(\mathcal{H}(x,p)\) (to be distinguished from the \(H(\pi_{a},q_{a})\) of the effective field theory) which generates time evolution in the absence of noise and defines the equilibrium distribution
\[P_{eq}\propto e^{-\mathcal{H}(x,p)}. \tag{17}\]
Correspondingly the chemical potentials are \(\mathbf{\mu}=(\partial\mathcal{H}/\partial\mathbf{x},\partial\mathcal{H}/\partial\bm {p})\). Hamilton's equations in the absence of dissipation can be reproduced from the EFT action
\[S=\int\mathrm{d}t\sum_{a}\pi_{a}\partial_{t}q_{a}-\pi_{a}\{q_{a},q_{b}\}\mu_{b}. \tag{18}\]
The second term is an antisymmetric contribution to the EFT Hamiltonian (17), so it is dissipationless as expected. We can add dissipation by including in \(H(\pi_{a},q_{a})\) a term
\[-i\pi_{a}S_{ab}(\pi_{b}-i\mu_{b}) \tag{19}\]
where \(S_{ab}\) is a positive definite symmetric matrix.
Additional conservation laws are implemented by enforcing invariance under (11). In the absence of dissipation, this is equivalent to the condition
\[\{F,\mathcal{H}\}=0 \tag{20}\]
for a conserved quantity in Hamiltonian mechanics.
### Dipole-conserving vortices
We now derive the model of dipole-conserving vortices with dissipation presented in the main text. Recall that the dynamics without dissipation are characterized by the Poisson brackets (1), (2) and Hamiltonian (3). Following the previous subsection, we define the generalized chemical potentials \(\mu_{i}^{\alpha}=\partial\mathcal{H}/\partial r_{i}^{\alpha}\) and write the dissipationless contribution to the EFT Lagrangian as
\[\begin{split} L&=\sum_{\alpha,i}\left(\pi_{i}^{ \alpha}\partial_{t}r_{i}^{\alpha}-\pi_{i}^{\alpha}\sum_{\beta,j}\{r_{i}^{ \alpha},r_{j}^{\beta}\}\mu_{j}^{\beta}\right)\\ &=\sum_{\alpha,i}\left(\pi_{i}^{\alpha}\partial_{t}r_{i}^{\alpha }-\pi_{i}^{\alpha}\frac{1}{\Gamma_{\alpha}}\sum_{j}\epsilon_{ij}\mu_{j}^{ \alpha}\right).\end{split} \tag{21}\]
The ordinary mutual friction term which appears in dissipative models of vortices is recovered by including
\[\sum_{i\alpha}-i\frac{\gamma}{2}\pi_{i}^{\alpha}\left(\pi_{i}^{\alpha}-i\mu_{ i}^{\alpha}\right). \tag{22}\]
as a term in \(H(\pi_{a},q_{a})\). This is simply (19) where \(S_{ab}\) is diagonal. The resulting equations of motion are given by (5).
As noted in the main text, under these dynamics dipole moment is not conserved. The total dipole moment is given by \(D_{i}(\mathbf{q})=\sum_{\alpha}\Gamma_{\alpha}r_{i}^{\alpha}\), so conservation of dipole
moment corresponds via (11) to invariance under \(\pi_{i}^{\alpha}\to\pi_{i}^{\alpha}+\Gamma_{\alpha}\delta_{ik}\). It is straightforward to see that the mutual friction term (22) is not invariant under this transformation.
To get a dissipative term which respects dipole conservation, the simplest term at quadratic order is given by
\[\sum_{i\alpha\beta}-i\frac{\gamma^{\prime}}{2}f_{\alpha\beta}\left(\frac{\pi_{ i}^{\alpha}}{\Gamma_{\alpha}}-\frac{\pi_{i}^{\beta}}{\Gamma_{\beta}}\right) \left(\left(\frac{\pi_{i}^{\alpha}-i\mu_{i}^{\alpha}}{\Gamma_{\alpha}}\right) -\left(\frac{\pi_{i}^{\beta}-i\mu_{i}^{\beta}}{\Gamma_{\beta}}\right)\right) \tag{12}\]
where \(f_{\alpha\beta}\coloneqq f(|\mathbf{r}^{\alpha}-\mathbf{r}^{\beta}|)\) is a function which depends only on the distance between vortices \(\alpha\) and \(\beta\). This term is clearly invariant under the transformation \(\pi_{i}^{\alpha}\to\pi^{\alpha}+\Gamma_{\alpha}\delta_{ik}\). The function \(f_{\alpha\beta}\) is not constrained by the EFT; we choose \(f_{\alpha\beta}=|\mathbf{r}^{\alpha}-\mathbf{r}^{\beta}|^{-1}\). While the nonlocality of \(f_{\alpha\beta}\) may seem unnatural, we emphasize that any microscopic dynamics preserving dipole moment leads to an effective dissipative term of this form, up to a choice of \(f_{\alpha\beta}\). The resulting equations of motion are given in (6).
### On the choice of \(f_{\alpha\beta}\)
Given the freedom within the EFT to make different choices of \(f_{\alpha\beta}\) it is natural to ask what, if anything, singles out the choice \(f_{\alpha\beta}=|\mathbf{r}^{\alpha}-\mathbf{r}^{\beta}|^{-1}\). We argue via a simple scaling argument that this is the least nonlocal \(f_{\alpha\beta}\) which does not cause generic vortex dipoles to escape to infinity without annihilating.
Consider a vortex dipole together with a third vortex as in Fig. 2(b). Let us call the distance between the two vortices comprising the dipole \(d\) and the distance between the dipole and the third vortex \(R\). When dissipation is absent, an isolated dipole will travel at constant speed perpendicular to its dipole moment, so \(R\sim t\). In the presence of dissipation, these vortices will approach each other with speed \(\dot{d}\sim-\gamma\) where \(\gamma\) is the effective dissipation strength. Choosing \(f_{\alpha\beta}=|\mathbf{r}^{\alpha}-\mathbf{r}^{\beta}|^{-\eta}\), the effective dissipation scales as \(\gamma\sim 1/R^{\eta}\). Altogether, we obtain
\[\frac{\mathrm{d}}{\mathrm{d}t}d\sim-\frac{1}{t^{\eta}}. \tag{13}\]
When \(\eta>1\), \(d\) asymptotes to a constant as \(t\to\infty\), and the vortex dipole escapes to infinity without annihilating; when \(\eta<1\), the dipole annihilates in finite time. When \(\eta=1\), the dipole always annihilates eventually, but the time to annihilation is exponential in the initial separation \(d_{0}\). For \(\eta=1\) we therefore expect to see the dynamics slow down dramatically when the density drops to a point where inter-vortex separation is \(O(1)\) in our dimensionless distance units, which is indeed seen in our numerics.
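The scaling argument is easy to check explicitly; below is a small sketch (ours) integrating \(\dot{d}=-t^{-\eta}\) from \(t_{0}=1\) for three values of \(\eta\):

```python
# Sketch: d'(t) = -t^(-eta), d(1) = d0. Does the pair separation reach zero?
import numpy as np

def separation(t, d0, eta):
    if eta == 1.0:
        return d0 - np.log(t)
    return d0 - (t ** (1.0 - eta) - 1.0) / (1.0 - eta)

d0 = 5.0
for eta in (0.5, 1.0, 1.5):
    print(f"eta = {eta}: d(t = 1e9) = {separation(1e9, d0, eta):.2f}")
# eta < 1: d crosses zero quickly  -> annihilation in finite time
# eta = 1: d crosses zero, but only after t ~ exp(d0)
# eta > 1: d saturates at d0 - 1/(eta - 1) > 0 for large d0 -> the pair never annihilates
```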
That the vortex dipole escapes to infinity is not an issue if we perform our calculations using periodic boundary conditions; however, this complicates the problem significantly and requires substantially more computational resources. To avoid this issue we simply choose \(f_{\alpha\beta}\) so that dipoles don't escape to infinity, which results in the choice in the main text.
## Appendix B Review of two-species annihilation
In this appendix we review the reaction-diffusion model governing two-species annihilation, considering the cases with and without long-range interactions. We closely follow the discussion in [56].
Ordinary two-species annihilation processes are governed at the mean-field level by a kinetic rate equation
\[\frac{\mathrm{d}\rho_{A}}{\mathrm{d}t}=\frac{\mathrm{d}\rho_{B}}{\mathrm{d}t}= -\mathcal{K}\rho_{A}\rho_{B}\;, \tag{14}\]
which captures the fact that an annihilation process requires two species to be present at the same location. Let us introduce the number density \(n\) and the charge density \(\rho\) as
\[n =\rho_{A}+\rho_{B} \tag{15}\] \[\rho =\rho_{A}-\rho_{B}\;\;.\]
In terms of \(n\) and \(\rho\), the rate equation becomes
\[\frac{\mathrm{d}n}{\mathrm{d}t} =-\frac{\mathcal{K}}{2}(n^{2}-\rho^{2}) \tag{16}\] \[\frac{\mathrm{d}\rho}{\mathrm{d}t} =0\]
where the latter equation shows that charge is conserved. When there is an equal initial density of \(\rho_{A}\) and \(\rho_{B}\), _i.e._ at charge neutrality, the asymptotic behavior of this equation is given by
\[n(t)\sim(\mathcal{K}t)^{-1}. \tag{17}\]
The mean-field description is valid in dimensions above the critical dimension \(d_{c}=4\). Below the critical dimension, diffusive and stochastic effects become important, modifying the long-time behavior to
\[n(t)\sim(Dt)^{-d/4}. \tag{18}\]
Here \(D\) is the diffusion constant. This behavior and the critical dimension can be derived by considering the long-range interacting case discussed below and taking the limit where the interaction strength vanishes, as was shown in [56].
We now treat the case of an ordinary \(A+B\to 0\) reaction-diffusion system with long-range interactions, reviewing the calculation of [56]. In terms of the number density \(n\) and charge density \(\rho\), the equations of motion are
\[\partial_{t}n-D\nabla^{2}n =-\mathcal{K}\left[n^{2}-\rho^{2}\right]-Q\nabla\left(\rho\nabla V\right) \tag{19}\] \[\partial_{t}\rho-D\nabla^{2}\rho =-Q\nabla\left(n\nabla V\right)\]
where the potential \(V(\mathbf{r},t)\) is given by
\[V(\mathbf{r},t)=-\int\mathrm{d}^{2}\mathbf{r}^{\prime}\rho(\mathbf{r}^{\prime}, t)\log|\mathbf{r}^{\prime}-\mathbf{r}|. \tag{20}\]
We have introduced a diffusion coefficient \(D\) and a coefficient \(Q\sim\varGamma^{2}\) proportional to the square of the vorticity. The authors in [56] analyzed these equations using a self-consistent approximation where fluctuations of the total number density \(n\) are neglected while fluctuations of the charge density \(\rho\) are kept. In other words, the number density \(n(\mathbf{r},t)=n(t)\) is taken to be a spatially independent function of time. Our goal will be to determine the number density \(n(t)\) averaged over an ensemble of initial conditions for \(\rho(\mathbf{r},0)\), which we take to be
\[\begin{split}\langle\rho(\mathbf{r},0)\rangle&=0\\ \langle\rho(\mathbf{r}_{1},0)\rho(\mathbf{r}_{2},0)\rangle& =n_{0}^{2}\delta^{(2)}(\mathbf{r}_{1}-\mathbf{r}_{2})\end{split}. \tag{10}\]
We will normalize \(n_{0}=1\). Approximating \(n(\mathbf{r},t)\simeq n(t)\), we average the first equation of (19) over all space and over initial conditions to obtain
\[\frac{\mathrm{d}n(t)}{\mathrm{d}t}+\mathcal{K}n(t)^{2}=\frac{\mathcal{K}}{V} \int\mathrm{d}^{2}\mathbf{r}\;\langle\rho(\mathbf{r},t)^{2}\rangle\;. \tag{11}\]
with \(\langle\ldots\rangle\) denoting the average over initial conditions. Fourier transforming \(\rho(\mathbf{r},t)\) in space, the second equation of (19) gives
\[\partial_{t}\rho(\mathbf{k},t)+Dk^{2}\rho(\mathbf{k},t)=-Qn(t)\rho(\mathbf{k},t) \tag{12}\]
away from \(k=0\) and \(\partial_{t}\rho(\mathbf{0},t)=0\) at \(k=0\). The equation of motion for \(\rho(\mathbf{k},t)\) can be solved exactly to yield
\[\rho(\mathbf{k},t)=\rho(\mathbf{k},0)\exp\left(-Dk^{2}t-Q\int_{0}^{t}\mathrm{ d}t^{\prime}n(t^{\prime})\right). \tag{13}\]
Substituting the solution \(\rho(\mathbf{k},t)\) into (11) and performing the average (14) gives
\[\begin{split}\frac{\mathrm{d}n}{\mathrm{d}t}+\mathcal{K}n^{2}& =\mathcal{K}\int\mathrm{d}^{2}\mathbf{k}\exp\left(-2Dk^{2}t-2Q \int_{0}^{t}\mathrm{d}t^{\prime}n(t^{\prime})\right)\\ &=\mathcal{K}\exp\left(-2Q\int_{0}^{t}\mathrm{d}t^{\prime}n(t^{ \prime})\right)\frac{\pi}{2Dt}\end{split} \tag{14}\]
where in the second line we performed the integral over \(\mathbf{k}\). The mean field solution (17) was obtained from ignoring the RHS of the above equation. This is valid if, at long times, terms on the RHS decay more quickly than terms on the LHS. Substituting the mean field solution, we see that the terms on the LHS decay as \(t^{-2}\), while the RHS decays as \(t^{-\alpha}\) with \(\alpha=2Q/\mathcal{K}+1\). In other words, the mean field solution is valid for \(2Q/\mathcal{K}>1\).
Repeating the calculation for general spatial dimension \(d\) (while modifying \(V\) to be the Coulomb potential in \(d\) dimensions), one finds that the RHS decays as \(t^{-\alpha}\) with \(\alpha=d/2+2Q/\mathcal{K}\). This gives the critical dimension above which mean field theory is valid as \(d_{c}=4\left(1-Q/\mathcal{K}\right)\). For a system without long-range interaction (\(Q=0\)) as in the previous subsection, this reproduces the result for the upper critical dimension \(d_{c}=4\).
|
2309.15259 | SLIQ: Quantum Image Similarity Networks on Noisy Quantum Computers | Exploration into quantum machine learning has grown tremendously in recent
years due to the ability of quantum computers to speed up classical programs.
However, these efforts have yet to solve unsupervised similarity detection
tasks due to the challenge of porting them to run on quantum computers. To
overcome this challenge, we propose SLIQ, the first open-sourced work for
resource-efficient quantum similarity detection networks, built with practical
and effective quantum learning and variance-reducing algorithms. | Daniel Silver, Tirthak Patel, Aditya Ranjan, Harshitta Gandhi, William Cutler, Devesh Tiwari | 2023-09-26T20:33:26Z | http://arxiv.org/abs/2309.15259v1 | # Sliq: Quantum Image Similarity Networks on Noisy Quantum Computers
###### Abstract
Exploration into quantum machine learning has grown tremendously in recent years due to the ability of quantum computers to speed up classical programs. However, these efforts have yet to solve unsupervised similarity detection tasks due to the challenge of porting them to run on quantum computers. To overcome this challenge, we propose Sliq, the first open-sourced work for resource-efficient quantum similarity detection networks, built with practical and effective quantum learning and variance-reducing algorithms.
## Introduction
**Brief Overview and Motivation.** Rapid advancements in quantum machine learning (QML) have increasingly allowed researchers to leverage the benefits of quantum computing in solving ML problems. Although the degree of advantage for different ML tasks is being explored, the recent advances suggest that classification and solving high energy physics problems are among the most promising candidates [14, 17]. In particular, recent efforts have focused on developing high-quality classification circuits for quantum computers. While the resulting classifiers have been highly effective, they are restricted because classification, like all other supervised learning methods, requires labeled data. In many real-world scenarios, labeled data is either not readily available (e.g., diagnosing complex diseases based on medical images without quantifiable ground truth [16]) or not feasible (e.g., a visual sketch of a suspect) [20]. In such scenarios, comparison across unlabeled inputs is critical for learning, prediction, and ground truth generation - hence the popularity of similarity detection for various tasks including recommendation systems [21]. However, there is no QML circuit designed to predict similarity on unlabeled data currently available.
**Sliq: Solution and Approach.** To bridge this gap, a naive design for similarity detection might create pairs of similar and dissimilar inputs from the training dataset, much like classical Siamese and Triplet networks [15, 16] (e.g., Anchor-Positive and Anchor-Negative pairs), and pass them sequentially over a variational quantum circuit (VQC)[1] to minimize the loss function. While straightforward, this design is resource-inefficient and does not fully leverage the unique properties of quantum computing. To address this gap in unsupervised QML, we propose Sliq, which mitigates these challenges with multiple novel key design elements.
First, Sliq addresses resource inefficiency by training both images in the pair at once via a superimposed state. This provides multiple advantages: (1) it reduces the overall quantum resource requirements by reducing the number of runs, and (2) it allows the VQC to learn more effectively since the superposition provides explicit hints about the similarity between the data. Next, to take advantage of entanglement in quantum computing systems, Sliq interweaves the features of both inputs and explicitly entangles them at each layer in the VQC, decreasing the distance of corresponding features in Hilbert space. Sliq's design ensures that intertwoven features from different inputs are embedded into all physical qubits of the learning circuit so that the entanglement effects are captured in the measurement qubits. To ensure that Sliq is practical and effective on current error-prone quantum computers, Sliq keeps the number of parameters in the learning circuit minimal to mitigate the compounding noise effects on real quantum computers.
Unfortunately, incorporating superposition and entanglement properties creates new challenges. The identities of individual inputs in a pair (e.g., Anchor input in the Anchor-Positive pair) are indistinguishable due to entanglement, and the projection of the same input on the classical space is inconsistent across different runs. Sliq introduces a new training "loss" estimation and improves quantum embedding methods to reduce projection variance, resulting in more robust training and a network that is resilient to hardware errors on real quantum systems. Overall, Sliq demonstrates that the combination of training on entangled pairs and utilizing a projection variance-aware loss estimation yields effective similarity detection, even on current noisy quantum computers.
**Contributions of Sliq.**
**I.** To the best of our knowledge, Sliq is the first method to build a practical and effective _quantum learning circuit for similarity detection on NISQ-era quantum computers_. Sliq is available as open-source framework at [https://github.com/SilverEngineered/Sliq](https://github.com/SilverEngineered/Sliq).
**II.** SlIQ's design demonstrates how to exploit the superposition and entanglement properties of quantum computing systems for similarity detection. It builds a _resource-efficient training pipeline_ by creating interwoven, entangled input pairs on a VQC, and it applies new robust methods for the quantum embedding of classical inputs.
**III.** Our simulations and real-computer evaluations demonstrate that SlIQ achieves a 31 percentage point improvement in similarity detection over a baseline quantum triplet network on a real-world, unlabeled dataset [2], while prior state-of-the-art works in QML only perform classification and require labeled input data [14, 15]. We also show that SlIQ performs competitively for classification tasks on labeled data, despite classification not being a primary objective of a similarity network.
## Background
**Qubits, Quantum Gates, and Quantum Circuits.** A quantum bit (qubit) has the ability to attain a _superposition_ of its two basis states: \(\ket{\Psi}=\alpha_{0}\ket{0}+\alpha_{1}\ket{1}\). Here, \(\ket{0}\) and \(\ket{1}\) are the two basis states, \(\alpha_{0}\), and \(\alpha_{1}\) are normalized, complex-valued coefficients and \(\Psi\) is the overall qubit state in superposition. For an \(n\)-qubit computing system, the overall system state is represented as: \(\ket{\Psi}=\sum_{k=0}^{k=2^{n}-1}\alpha_{k}\ket{k}\). When this state is _measured_, the _superposition collapses_, and the system is observed in state \(\ket{k}\) with probability \(\norm{\alpha_{k}}^{2}\).
A qubit can be put in arbitrary superpositions using the \(R3(p_{1},p_{2},p_{3})\)_quantum gate_. The single-qubit \(R3\) gate has three parameters \((p_{1},p_{2},p_{3})\) that can be adjusted to achieve the desired state [1]. Multiple qubits can be _entangled_ together to form an \(n\)-qubit system using two-qubit gates (e.g., the \(CX\) gate). These non-parameterized two-qubit gates and tunable \(R3\) gates may be combined to achieve any \(n\)-qubit computation.
A sequence of quantum gates applied to a system of qubits forms a _quantum circuit_, at the end of which the qubits are measured to obtain the circuit's output. Fig. 1 shows an example of a quantum circuit in the "Variational Quantum Circuit (VQC)" box. The horizontal lines represent five qubit states, to which the \(R3\) and two-qubit gates are applied over time, and the measurement gates are applied at the end.
**Variational Quantum Circuits and Quantum Machine Learning.** Whereas the gates in a normal quantum circuit are deterministic and predefined, a _Variational Quantum Circuit (VQC)_ is a quantum circuit that utilizes parameterized gates that are tuned to optimize a certain objective [11]. This objective can take many forms, from finding the minimal energy of a molecule's Hamiltonian to maximizing the rate of return of a financial portfolio or, in SlIQ's case, optimizing the loss function of a _quantum machine learning (QML)_ task [12, 13]. These circuits are "variational" because the gates' parameter values vary during the optimization process. These gates are adjusted according to a classical optimizer running on a classical machine, while the variational circuit itself is executed on a quantum computer. Fig. 1 demonstrates this hybrid feedback approach between quantum execution and classical optimization.
Although the optimization of VQC parameters is performed classically, a quantum advantage can be obtained from the circuit's execution on quantum hardware, which means the circuit has far fewer parameters to optimize compared to the classical version of the same algorithm [2]. This advantage is gained by utilizing the superposition and entanglement properties on quantum computers that are not available on classical computers. For example, a classical image classification neural network typically consists of millions of parameters while a well-designed quantum network only requires hundreds [11]. However, the accuracy of such quantum networks has been limited due to prevalent noise on current quantum computers [12], especially for unsupervised learning [11] - this paper aims to overcome this barrier.
**Noise on NISQ Computers.** Contemporary quantum computers suffer from noise during program execution due to various imperfections in the hardware of physical qubits, causing errors in the program output. SlIQ aims to achieve effective results in the face of these challenges.
## SlIQ: Design and Implementation
In this section, we discuss the design of SlIQ and its key design elements. Before we discuss the design details of SlIQ, we first describe a base design of a quantum machine learning circuit for similarity detection. We refer to this design as _"Baseline"_. First, we discuss how the baseline design leverages widely-used variational quantum circuits to build a quantum learning network to perform the similarity detection task for labeled/unlabeled data. Next, we discuss the baseline design's resource inefficiency and its inability to exploit the power of superposition and entanglement. SlIQ's design addresses those limitations and provides superior performance, as discussed in our evaluation.
**Baseline Design.** The baseline design has three major components. The first component is generating the input which consists of triplets of Anchor, Positive, and Negative inputs - similar to classical Siamese-based and Triplet models which are widely used in the classical similarity networks [23, 14, 15]. Then the encoding of the input features to the physical qubits is performed. To achieve this, we perform amplitude embedding [16] on the inputs one by one
Figure 1: Hybrid quantum-classical procedure of executing and optimizing a variational quantum circuit (VQC).
for all three inputs (Fig. 2). The amplitude embedding procedure embeds classical data as quantum data in a Hilbert Space. Although recent prior works have utilized principal component analysis (PCA) prior to amplitude embedding for feature encoding [2], the baseline does not employ PCA because higher performance is observed when keeping features intact and padding 0's as necessary to maintain the full variance of features. The second component is to feed these encoded features to a VQC for training and optimizing the VQC parameters to minimize the training loss (Fig. 2). The training loss is estimated by calculating the distance between the projection of inputs on a 2-D space - the projection is obtained via measuring two qubits at the end of the VQC circuit (this is the third and final component). The loss is calculated as the squared distance between the anchor projection, \(A\), and the positive projection, \(P\), minus the squared distance between the anchor projection and the negative projection, \(N\). This is an \(L2\) variant of Triplet Embedding Loss and is formally defined as
\[L_{2}=\Big{\{}(A_{x}-P_{x})^{2}+(A_{y}-P_{y})^{2}\Big{\}}-\Big{\{}(A_{x}-N_{x} )^{2}+(A_{y}-N_{y})^{2}\Big{\}} \tag{1}\]
We note that the effectiveness of training can be adapted to any choice of dimension and shape of the projection space (2-D square box bounded between -1 and 1, in our case) as long as the choice is consistent among all input types (Anchor, Positive, and Negative inputs). A more critical feature is the repeating layers of the VQC which the baseline design chooses to be the same as other widely-used VQCs to make it competitive [11, 13].
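Below is a small sketch (ours) of Eq. (1) acting on the three projected 2-D points; in the baseline pipeline these coordinates would come from measuring two qubits at the end of the A, P, and N runs.

```python
# Sketch of the baseline L2 triplet embedding loss of Eq. (1).
def triplet_l2_loss(A, P, N):
    """A, P, N: (x, y) projections of the anchor, positive, and negative runs."""
    d_ap = (A[0] - P[0]) ** 2 + (A[1] - P[1]) ** 2
    d_an = (A[0] - N[0]) ** 2 + (A[1] - N[1]) ** 2
    return d_ap - d_an        # minimized when A is close to P and far from N

print(triplet_l2_loss((0.1, 0.2), (0.15, 0.25), (-0.8, 0.9)))   # negative = good separation
```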
### SliQ: Key Design Elements
SliQ builds off the baseline design and introduces multiple novel design aspects to mitigate the limitations of the baseline design. First, SliQ introduces the concept of training an input pair in the same run to leverage the superposition and entanglement properties of quantum computing systems.
**Input Feature Entanglement and Interweaving.** Recall that in the baseline design, each image type (Anchor, Positive, Negative) traverses through the variational quantum circuit one-by-one. The corresponding measurements at the end of each image-run produce coordinates on a 2-D plane that allows us to calculate similarity distance between A-P and A-N inputs. This allows us to calculate the loss that is targeted to be minimized over multiple runs during training. Unfortunately, this procedure requires performing three runs before the loss for a single (A, P, N) triplet input can be estimated, which is resource inefficient.
SliQ's design introduces multiple new elements to address this resource inefficiency. The first idea is to create two training pairs (Anchor-Positive and Anchor-Negative), and each pair is trained in a single run (demonstrated visually in Fig. 3). This design element improves the resource-efficiency of the training process - instead of three runs (one for each image type), SliQ requires only two runs. Note that, theoretically, it is possible to combine all three input types and perform combined amplitude embedding. However, in practice, this process is not effective because it
Figure 3: SliQ’s procedure of combining A-P and A-N inputs into two runs, interweaving their feature space, and updating the variational structure to leverage the properties of quantum superposition and entanglement to reduce the number of runs and generate better results.
Figure 2: The baseline design inputs one image at a time, requiring three separate runs for A, P, and N.
becomes challenging for the quantum network to learn the distinction between the positive and negative input relative to the anchor input. Creating two pairs provides an opportunity for the quantum circuit to learn the similarity and dissimilarity in different pairs without dilution.
The second idea is to interweave the two input types in a pair before performing the amplitude embedding, and then feeding the output of the amplitude embedding circuit to the quantum circuit (Fig. 3). Interweaving provides the advantage of mapping features from different inputs to different physical qubits. This is particularly significant to mitigate the side-effects of noise on the current NISQ-era quantum machines where different physical qubits suffer from different noise levels [16]. If interweaving of images is not performed, we risk the network not learning direct comparison between positionally equivalent features. SlIQ's interweaving mitigates this risk, and we found it to be more effective on NISQ-era quantum computers than simply layering the images.
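A sketch (ours) of the interweaving step on classical data: the two flattened inputs of a pair are interleaved element-wise and zero-padded to the nearest power of two, which is the vector length amplitude embedding expects.

```python
# Sketch: interleave two flattened inputs and zero-pad to a power of two
# so the result can be amplitude-embedded into ceil(log2(len)) qubits.
import numpy as np

def interweave_and_pad(a, b):
    a, b = np.ravel(a), np.ravel(b)
    woven = np.empty(a.size + b.size)
    woven[0::2], woven[1::2] = a, b            # alternate features of the two inputs
    length = 1 << int(np.ceil(np.log2(woven.size)))
    return np.pad(woven, (0, length - woven.size))

pair = interweave_and_pad(np.arange(3), 10 + np.arange(3))
print(pair)        # [ 0. 10.  1. 11.  2. 12.  0.  0.]
```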
As a final remark, we note that all these ideas are combined to leverage the power of entanglement and superposition of quantum systems - by training multiple inputs together, interweaving them, and creating superposition, and entangling them. While SlIQ's design to exploit superposition and entanglement is useful, it creates new challenges too. Next, we discuss the challenges of attaining projection invariance and novel solutions to mitigate the challenge.
**Projection Variance Mitigation (PVM).** Recall that in the baseline design, we measure two qubits and project the input's position in a 2-D space. Over three runs, we receive three separate coordinates in 2-D space, which we can use to calculate the loss - as shown in Fig. 4 (left). Our objective is to minimize the overall loss, defined as below:
\[L_{obj}=\left(\left|A_{x}-P_{x}\right|+\left|A_{y}-P_{y}\right|\right)-\left( \left|A_{x}-N_{x}\right|+\left|A_{y}-N_{y}\right|\right) \tag{2}\]
Optimizing for the above objective function is relatively straightforward. However, this loss function becomes non-trivial when SlIQ introduces the idea of training input pairs. Recall that the inputs are interwoven (Fig. 3), and hence, our measurements need to capture the outputs of anchor and positive/negative features separately. SlIQ resolves these issues by increasing the number of qubits we measure. Instead of two qubits per run, SlIQ measures four qubits. In Fig. 3, these qubits are denoted at the end of both runs. To distinguish the anchor input, SlIQ enforces the measurements of two a priori designated qubits to correspond to the anchor in both runs. We note it is not critical which qubits are chosen to "represent" the anchor input as long as the choice is kept consistent. For example, qubits 1 and 2 could be tied to the anchor image, or qubits 1 and 3. So long as the choice does not change through training and evaluation, these options are computationally identical. However, this idea creates a major challenge - the coordinates corresponding to the anchor input may not project to the same point in our 2-D space. This is visually represented by two points \((A_{NX},A_{NY})\) and \((A_{PX},A_{PY})\) in Fig. 4. Ideally, these two points should project on the same coordinates.
The baseline design inherently has zero projection variance because it only has one measurement corresponding to the anchor input, and the loss for the positive and negative input is calculated from this absolute pivot. To mitigate this challenge, SlIQ designs a new loss function that accounts for minimizing this projection variance over training. As shown below, SlIQ's novel loss function has two components: (1) the traditional loss between the positive/negative input and the anchor input, and (2) a new consistency loss. The consistency loss forces the positional embeddings to separate the entangled images at the time of measurement.
\[L_{pvm}=\left|A_{px}-A_{nx}\right|+\left|A_{py}-A_{ny}\right| \tag{3}\]
\[L_{total}=\alpha*L_{obj}+\beta*L_{pvm} \tag{4}\]
In Eq. 4, the parameters \(\alpha\) and \(\beta\) are hyperparameters that denote weights for the objective function to balance the objective of accurate embeddings and precise embeddings. For \(L_{obj}\), we use \((A_{px},A_{py})\) and (\(A_{nx},A_{ny})\) for the positive and negative anchor values respectively. Additionally, to ensure robustness across the circuit, the samples are reversed for the negative case. The pair (A,P) is run along with the pair (N,A). The consistency is then applied between the mappings of the anchor image which now lie on different parts of the circuit. This additional measure ensures robustness by making the entire output of the circuit comply with the decided-upon separability as opposed to just a few qubits. This technique also enables scaling to entanglement of more than 2 images on a single machine.
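Putting Eqs. (2)-(4) together, a sketch (ours) of the combined loss; \(A_{p}\) and \(A_{n}\) denote the anchor projections read out from the A-P and A-N runs, and \(\alpha\), \(\beta\) are the weighting hyperparameters.

```python
# Sketch of SlIQ's loss: objective term (Eq. 2) plus projection-variance term (Eq. 3),
# combined with weights alpha and beta (Eq. 4).
def sliq_loss(A_p, A_n, P, N, alpha=1.0, beta=1.0):
    """A_p / A_n: anchor projections from the A-P and A-N runs; P, N: positive/negative."""
    l_obj = (abs(A_p[0] - P[0]) + abs(A_p[1] - P[1])) \
          - (abs(A_n[0] - N[0]) + abs(A_n[1] - N[1]))
    l_pvm = abs(A_p[0] - A_n[0]) + abs(A_p[1] - A_n[1])   # keep the two anchor projections consistent
    return alpha * l_obj + beta * l_pvm

print(sliq_loss(A_p=(0.1, 0.2), A_n=(0.12, 0.18), P=(0.2, 0.3), N=(-0.7, 0.8)))
```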
In Fig. 5, we show an overview of the design of SlIQ. The first step is to take the input dataset and create pairs of the form (Anchor, Positive) and (Anchor, Negative) for training and testing. Once these are formed, the network trains on the dataset classically as part of the hybrid quantum-classical model used in VQCs. Once the data is
Figure 4: While the baseline design has zero projection variance, SlIQ has to take additional steps to mitigate it.
Figure 5: Overview of the design of SlIQ including the steps of preparing the data for training, training the quantum network, and using the network for inference post-training.
trained, SlIQ performs similarity detection by performing inference on new pairs of data. This inference can be used to identify the most similar samples to one another, the most distant samples, and even serve as a classifier if an unsupervised clustering algorithm is used to generate clusters.
## Experimental Methodology
**Training and Testing Datasets.** SlIQ is evaluated on NIH AIDS Antiviral Screen Data Kramer, De Raedt, and Helma (2001), MNIST Deng (2012), Fashion-MNIST Xiao, Rasul, and Vollgraf (2017), and Flickr Landscape Chen, Lai, and Liu (2018). These datasets are chosen because (1) they cover both unlabeled (e.g., Flickr Landscape) and labeled datasets (e.g., AIDS, MNIST, Fashion-MNIST), and (2) they represent different characteristics (e.g., medical dataset, images, handwritten characters) and have real-world use cases (e.g., AIDS detection). We note that the size of the Flickr Landscape dataset is \(\approx\)25\(\times\) larger than the commonly used quantum machine learning image datasets of MNIST and Fashion-MNIST Xiao, Rasul, and Vollgraf (2017); Silver, Patel, and Tiwari (2022). This presents additional scaling challenges that we mitigate with the scalable design of SlIQ.
Flickr Landscape is an unlabeled dataset consisting of 4300 images of different landscapes spanning general landscapes, mountains, deserts, seas, beaches, islands, and Japan. The images are of different sizes but are cropped to 80\(\times\)80\(\times\)3 for consistency; the center is kept intact with all color channels. This dataset is unlabeled and serves to show how SlIQ performs on unlabeled data. NIH AIDS Antiviral Screen Data contains 50,000 samples of features alongside a label indicating the status of the AIDS virus ("CA" for confirmed active, "CM" for confirmed moderately active, and "CI" for confirmed inactive). The MNIST dataset is a grayscale handwritten digit dataset Deng (2012) where we perform binary classification on '1's and '0's. The Fashion-MNIST dataset contains the same number of pixels and classes as MNIST, but the images are more complex in nature. In all datasets, 80% of the data is reserved for training and 20% for testing.
**Experimental Framework.** The environment for SlIQ is Python3 with the Pennylane Bergholm et al. (2018) and Qiskit Aleksandrowicz et al. (2019) frameworks. Our quantum experiments are simulated classically; inference results collected on real IBM quantum machines are indicated explicitly in the text. To set up the datasets for training and testing, triplets are created in the form of an anchor image, a positive image, and a negative image.
For the unlabeled dataset, three images are selected at random. The color frequency is evaluated for each image on the R, G, and B channels individually, then placed into eight bins for each color. The 24 total bins for each image are used to establish a ground truth similarity between the images: the first image is the anchor, the image closest to the anchor in L1 norm is set as the positive image, and the image further away is the negative image. Once the triplets are created, the pixels within the images are interwoven with one another. The result is then padded with 0s to the nearest power of 2 for the amplitude embedding process.
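A rough sketch of this triplet-construction step (24-bin RGB histograms, L1 distance, pixel interweaving, zero-padding) is shown below; the helper names are ours and the code is illustrative rather than the SliQ codebase.

```python
import numpy as np

def color_histogram(img):
    """24-bin color signature: 8 bins per R, G, B channel (img is HxWx3, uint8)."""
    return np.concatenate([np.histogram(img[..., c], bins=8, range=(0, 256))[0]
                           for c in range(3)])

def make_triplet(anchor, candidate_a, candidate_b):
    """Ground truth via L1 distance between color histograms: the closer image is positive."""
    d_a = np.abs(color_histogram(anchor) - color_histogram(candidate_a)).sum()
    d_b = np.abs(color_histogram(anchor) - color_histogram(candidate_b)).sum()
    positive, negative = (candidate_a, candidate_b) if d_a <= d_b else (candidate_b, candidate_a)
    return anchor, positive, negative

def interweave(img_a, img_b):
    """Alternate the flattened pixels of two equally sized images, then zero-pad to a power of 2."""
    pair = np.empty(img_a.size + img_b.size)
    pair[0::2], pair[1::2] = img_a.ravel(), img_b.ravel()
    padded_len = 1 << int(np.ceil(np.log2(pair.size)))
    return np.pad(pair, (0, padded_len - pair.size))
```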
In the case of the labeled datasets, the anchor and positive are chosen to be from the same class, while the negative image is chosen at random from another class. For evaluation, an anchor and positive are passed into the embedding and the four dimensions are used in a Gaussian Mixture Model to form clusters, which are then evaluated for accuracy. We use a batch size of 30 for all experiments with a GradientDescentOptimizer and a learning rate of \(0.01\). We train for 500 epochs on a four-layer network. The size of the network changes based on the dataset: a four-qubit circuit is used for the AIDS dataset, 14 qubits for Flickr Landscape, and 11 for MNIST and Fashion-MNIST. For the baseline, one fewer qubit is used for all circuits, as it does not need the additional qubit to store a second sample in each run.
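Given the stated settings (amplitude embedding, a four-layer ansatz, GradientDescentOptimizer with a 0.01 learning rate), a training step in PennyLane might look roughly like the sketch below; the ansatz choice, the mapping of the four measured qubits to 2-D coordinates, and the variable names are our assumptions rather than SliQ's exact architecture, and the sketch reuses the sliq_loss function above.

```python
import pennylane as qml
from pennylane import numpy as pnp

N_QUBITS, N_LAYERS = 11, 4                      # e.g., the MNIST / Fashion-MNIST setting
dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev)
def embed_pair(interwoven_pixels, weights):
    # Amplitude-embed the interwoven (zero-padded) pixel vector of one image pair
    qml.AmplitudeEmbedding(interwoven_pixels, wires=range(N_QUBITS),
                           pad_with=0.0, normalize=True)
    qml.StronglyEntanglingLayers(weights, wires=range(N_QUBITS))
    # Four measured qubits: (0, 1) -> anchor (x, y); (2, 3) -> paired input (x, y)
    return [qml.expval(qml.PauliZ(w)) for w in range(4)]

weights = pnp.random.uniform(0, pnp.pi, (N_LAYERS, N_QUBITS, 3), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.01)

def cost(weights, batch):
    # batch: list of (anchor+positive vector, anchor+negative vector) pairs
    total = 0.0
    for vec_ap, vec_an in batch:
        a_px, a_py, p_x, p_y = embed_pair(vec_ap, weights)
        a_nx, a_ny, n_x, n_y = embed_pair(vec_an, weights)
        total = total + sliq_loss((a_px, a_py), (a_nx, a_ny), (p_x, p_y), (n_x, n_y))
    return total / len(batch)

# One optimization step on a batch of 30 triplets (repeated over 500 epochs):
# weights = opt.step(lambda w: cost(w, batch), weights)
```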
**Competing Techniques.** Our baseline scheme is the same as described earlier: it is a quantum analogue of a triplet network, where weights are shared and a single network is used for training and inference. Although SlIQ is not designed for classification tasks on labeled data, we also provide a quantitative comparison with state-of-the-art quantum machine learning classifiers: Projected Quantum Kernel (PQK) (published in Nature'2021) Huang et al. (2021) and Quilt (published in AAAI'2022) Silver, Patel, and Tiwari (2022). The former, PQK, trains a classical network on quantum features generated by a specially designed quantum circuit. Datasets are modified to have quantum features that show the quantum advantage in training. Because the original design is not feasible to implement on current quantum computers, we modify PQK's architecture to use fewer intermediate layers to reduce the number of gates and training time. The other comparison used, Quilt, is
Figure 6: SlIQ’s loss normalized against ground truth distance between color frequencies. The anchor image was compared to 100 images. _Only the top 3 correlations are shown._ The steep drop in correlation metric shows that the baseline is not effective.
an image classifier built on an ensemble of variational quantum circuits which uses the ensemble to mitigate noise in the NISQ-era of quantum computing.
**Figures of Merit.** We categorize the figures of merit by dataset type: unlabeled or labeled. As image similarity is inherently visual, in addition to quantitative success, we demonstrate qualitative success by including a few relevant snapshots. For our unlabeled results, we also examine quantitatively how well the overall predictions match with the ground truth similarities, showing how well SliQ learns over a distribution. The specific details of the metrics are described near the description of the results. For qualitative success in the unlabeled dataset, we show the model's predicted images for most similar to a given anchor image. For our labeled datasets, we report the accuracy of SliQ. Because SliQ performs an embedding and does not classify, accuracy can not be obtained directly from the output. For accuracy metrics, Gaussian Mixture Models are used to assign clusters for classification.
## 5 Evaluation and Analysis
**SliQ effectively detects the similarity of samples in unlabeled datasets using quantum networks; SliQ is the first work to target this area.** As SliQ is the first work to target quantum similarity detection for unlabeled data, we compare against the results for the baseline quantum triplet model. Using the Flickr Landscape dataset, we rank the image similarity based on triplets formed from color frequency histograms. For each image in a set of 100 images, we treat the image as an anchor and compute the ground truth distance in color frequency between the anchor and every other image. We compare this ground truth distance to the distance identified by the model and correlate the rankings.
We use Spearman correlation [14], which computes the rank correlation of two random variables, to interpret the relationship between the ground truth and the model's estimations. Spearman correlation is commonly used for this type of analysis; for example, [16] uses Spearman correlation to rank sentence embeddings in similarity networks. SliQ _achieves much better correlations than the baseline triplet model, with a median Spearman correlation of 0.36 compared to 0.05, i.e., SliQ is 0.31 (31 percentage points) more accurately correlated than the baseline._ In Table 1, we show the distribution of Spearman correlations for SliQ compared to the baselines. At every percentile measured, SliQ has notable improvements in similarity detection, which demonstrates SliQ's overall improvement over an entire distribution.
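This ranking evaluation can be sketched with SciPy as follows (variable names are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

def anchor_correlation(ground_truth_dists, model_losses):
    """Rank-correlate histogram-based distances with the model's losses for one anchor.

    ground_truth_dists[i]: L1 color-histogram distance between the anchor and image i
    model_losses[i]:       the model's predicted distance (loss) for the same pair
    """
    rho, _pvalue = spearmanr(ground_truth_dists, model_losses)
    return rho

# The median over the 100 anchors gives the reported summary statistic (0.36 vs. 0.05):
# median_rho = np.median([anchor_correlation(g, m) for g, m in per_anchor_pairs])
```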
This trend is also visually observed in Fig. 6(a) for the Baseline and Fig. 6(b) for SliQ. The x-axis denotes the ground truth distance, where points further to the left indicate true similarity. The y-axis denotes the calculated loss of the model, i.e., SliQ's estimate of the ground truth distance. Points closer to the diagonal line denote accurate estimations. Fig. 6 shows only the top-3 correlation examples for easier visual interpretation. We note that SliQ tends to cluster more around the ground truth correlation line, and its top-3 correlations drop from 0.68 to 0.64; in contrast, the baseline's drop from 0.63 to 0.46.
Additionally, we show some of the closest predictions identified in Fig. 7. These demonstrate the different scenes of the Landscape dataset. For example, the demonstrated landscapes include mountains, aerial perspectives, forests, and oceans. SliQ is able to match similar landscapes efficiently, demonstrating its effectiveness at similarity detection - better than the baseline's relative picks.
**Although SliQ was not designed to act as a classifier, it is effective at detecting the similarity of samples in labeled datasets and is competitive with prior state-of-the-art quantum classifiers.** On its own, SliQ performs embeddings, not classification, but we can use SliQ as a classifier by performing reasonable clustering on its output. To demonstrate its classification ability, we compare against state-of-the-art quantum classification methods [10, 17]. Our results (Fig. 8) indicate that SliQ yields competitive accuracy compared to the advanced quantum classifiers on a task that SliQ was
\begin{table}
\begin{tabular}{|c|c c||c|c c|} \hline Percentile & Baseline & SliQ & Percentile & Baseline & SliQ \\ \hline \hline \(25^{th}\) & \(-0.06\) & \(0.23\) & \(75^{th}\) & \(0.19\) & \(0.48\) \\ \hline \(50^{th}\) & \(0.05\) & \(0.36\) & \(100^{th}\) & \(0.63\) & \(0.68\) \\ \hline \end{tabular}
\end{table}
Table 1: Spearman correlation results show that SliQ outperforms the baseline in similarity detection. SliQ’s median Spearman correlation is 0.36 vs. the Baseline’s 0.05.
Figure 8: SliQ performs competitively against the comparative classification techniques for all datasets, despite classification not being a learned objective of SliQ.
Figure 7: Anchor image, Baseline-identified similar image, and SliQ-identified similar image for the same anchor image, using the unlabeled Flickr Dataset. SliQ is effective in identifying images with similar color frequencies.
never trained on (classification) and demonstrates the broad applicability of embeddings. To perform this clustering, we employ a Gaussian Mixture Model on our embeddings. The model is initialized with the true number of classes in the dataset and is fit to 1000 samples to form clusters. For classification, each permutation of clusters to labels is considered, as these models do not provide labels; the permutation with the highest accuracy is taken as the correct labeling. With these techniques, our results show that SliQ performs well on a variety of datasets, reaching up to 96.44% accuracy on binary MNIST classification. We show the full classification accuracy results in Fig. 8 for different techniques.
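The clustering-based accuracy described above might be computed roughly as follows; this is a scikit-learn sketch under our own naming, not the authors' code.

```python
from itertools import permutations
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_accuracy(embeddings, labels, n_classes):
    """Fit a GMM on the 4-D embeddings and score the best cluster-to-label permutation."""
    gmm = GaussianMixture(n_components=n_classes).fit(embeddings)
    clusters = gmm.predict(embeddings)
    labels = np.asarray(labels)
    best = 0.0
    for perm in permutations(range(n_classes)):
        mapped = np.array([perm[c] for c in clusters])   # relabel clusters under this permutation
        best = max(best, float(np.mean(mapped == labels)))
    return best
```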
In Table 2, we show how SliQ performs on real quantum computers today. SliQ achieves a 68.8% accuracy on the AIDS dataset, running on IBM Oslo. SliQ _significantly outperforms the state-of-the-art quantum classifier (Quilt), even though SliQ was not originally designed for classification._ The reason is that SliQ's design is noise-aware, making it effective on error-prone NISQ computers. In particular, SliQ is designed with few parameters for current NISQ computers, where error compounds at each step and quickly explodes. Other quantum machine learning architectures tend to have higher degradation in accuracy on real computers as they require larger architectures with ample opportunity for compounding error. For the AIDS dataset, SliQ has 48 tunable parameters, while Quilt requires 375 and PQK requires 1,633 parameters. With more parameters, the hardware error compounds, explaining the observed degradation.
**Why does SliQ perform effectively?** By mitigating projection variance, SliQ is able to map the anchor input to nearly the same location regardless of the second input in the pair. This is necessary because the images are entangled together throughout the entire circuit and will not be separated in the output unless a constraint is put in place to enforce this. This separability is demonstrated in Fig. 9, where SliQ is compared to a trained version without the consistency loss. SliQ has more precise outputs throughout the entire CDF, evaluated over 1000 MNIST test images. Enforcing consistency amounts to an average decrease of 80% in projection variance when changing the order of the inputs - demonstrating the effectiveness of SliQ's projection invariance method. As shown in Fig. 9, interweaving also increases robustness, leading to a decrease in projection variance.
## Related Work
**Classical Similarity Networks.** Siamese networks and triplet networks are commonly-used classical similarity networks Johnson et al. (2021); Koch et al. (2015); Schroff et al. (2015); Roy et al. (2019); Li et al. (2020); Gichane et al. (2020); Li et al. (2020); Hoffer and Ailon (2015); Patel et al. (2022), as they are known to be the best choice for complex datasets Chicco (2021). For instance, using Riemannian geometry to train a Siamese network, Roy et al. (2019) obtain effective results for image classification, while FaceNet Schroff et al. (2015) achieves representational efficiency using an embedding for face recognition and clustering. On the other hand, TrimNet Li et al. (2020) uses a graph-based approach toward enabling a triplet network to learn molecular representations. However, while these works are effective classically, quantum theory enables the enhancement of machine learning workloads by reducing their size and speeding them up Aaronson (2015); Daley et al. (2022).
**Quantum Machine Learning.** Extensive work has been performed toward porting a wide variety of machine learning tasks to quantum computing Huang et al. (2021); Li and Kais (2021); Tiwari and Melucci (2019); Li et al. (2021); Khairy et al. (2020); Lockwood and Si (2020); Heidari et al. (2022); Lloyd et al. (2020); Radha and Jao (2022); Nandy Pal et al. (2022). This includes porting workloads such as generalized neural networks Beer et al. (2020), convolutional neural networks Hur et al. (2022), and even application-specific networks such as models used to learn the metal-insulator transition of VO\({}_{2}\)Li and Kais (2021).
**Quantum Image Similarity Detection.** Liu et al. (2022) and Liu et al. (2019) have also worked on quantum image similarity detection; notably, these works did not take a machine-learning approach to identifying similarity.
## Conclusion
In this work, we present SliQ, a resource-efficient quantum similarity network, which is the first method to build a practical and effective quantum learning circuit for similarity detection on NISQ-era computers. We show that SliQ improves similarity detection over a baseline quantum triplet network by 31% points for Spearman correlation. SliQ is available at: [https://github.com/SilverEngineered/SliQ](https://github.com/SilverEngineered/SliQ).
\begin{table}
\begin{tabular}{|c|c c c c|} \hline Environment & Baseline & PQK & Quilt & SliQ \\ \hline \hline Simulation & 60.8\% & 81.6\% & 51\% & 71.54\% \\ \hline Real Computer & 64.4\% & N/A & 21.4\% & 68.8\% \\ \hline \end{tabular}
\end{table}
Table 2: SliQ's real quantum computer results for the AIDS dataset are consistent with the simulation results, showing its resilience to hardware noise due to its low number of parameters. The PQK column is denoted as N/A because its circuit is prohibitively deep to run feasibly on error-prone NISQ computers (30\(\times\) more parameters than SliQ).
Figure 9: SliQ achieves a lower projection variance compared to when it is run without projection variance mitigation or without interweaving of images.
## Acknowledgements
We thank the anonymous reviewers for their constructive feedback. This work was supported in part by Northeastern University and NSF Award 2144540.
|
2310.20381 | A Systematic Evaluation of GPT-4V's Multimodal Capability for Medical
Image Analysis | This work conducts an evaluation of GPT-4V's multimodal capability for
medical image analysis, with a focus on three representative tasks of radiology
report generation, medical visual question answering, and medical visual
grounding. For the evaluation, a set of prompts is designed for each task to
induce the corresponding capability of GPT-4V to produce sufficiently good
outputs. Three evaluation ways including quantitative analysis, human
evaluation, and case study are employed to achieve an in-depth and extensive
evaluation. Our evaluation shows that GPT-4V excels in understanding medical
images and is able to generate high-quality radiology reports and effectively
answer questions about medical images. Meanwhile, it is found that its
performance for medical visual grounding needs to be substantially improved. In
addition, we observe the discrepancy between the evaluation outcome from
quantitative analysis and that from human evaluation. This discrepancy suggests
the limitations of conventional metrics in assessing the performance of large
language models like GPT-4V and the necessity of developing new metrics for
automatic quantitative analysis. | Yingshu Li, Yunyi Liu, Zhanyu Wang, Xinyu Liang, Lei Wang, Lingqiao Liu, Leyang Cui, Zhaopeng Tu, Longyue Wang, Luping Zhou | 2023-10-31T11:39:09Z | http://arxiv.org/abs/2310.20381v5 | # A Comprehensive Study of GPT-4V's Multimodal Capabilities in Medical Imaging
###### Abstract
This paper presents a comprehensive evaluation of GPT-4V's capabilities across diverse medical imaging tasks, including Radiology Report Generation, Medical Visual Question Answering (VQA), and Visual Grounding. While prior efforts have explored GPT-4V's performance in medical image analysis, to the best of our knowledge, our study represents the first quantitative evaluation on publicly available benchmarks. Our findings highlight GPT-4V's potential in generating descriptive reports for chest X-ray images, particularly when guided by well-structured prompts. Meanwhile, its performance on the MIMIC-CXR dataset benchmark reveals areas for improvement in certain evaluation metrics, such as CIDEr. In the domain of Medical VQA, GPT-4V demonstrates proficiency in distinguishing between question types but falls short of the VQA-RAD benchmark in terms of accuracy. Furthermore, our analysis finds the limitations of conventional evaluation metrics like the BLEU scores, advocating for the development of more semantically robust assessment methods. In the field of Visual Grounding, GPT-4V exhibits preliminary promise in recognizing bounding boxes, but its precision is lacking, especially in identifying specific medical organs and signs. Our evaluation underscores the significant potential of GPT-4V in the medical imaging domain, while also emphasizing the need for targeted refinements to fully unlock its capabilities.
## 1 Introduction
Large Language Models (LLMs) have consistently demonstrated remarkable ability across various domains and tasks (Touvron et al., 2023; OpenAI, 2023; Anil et al., 2023). The ongoing pursuit of enhancing LLMs' capacity for visual comprehension has spurred the emergence of a new research area: Large Multimodal Models (LMMs) (Ye et al., 2023; Li et al., 2023; Awadalla et al., 2023). The basic approach has been to either fine-tune the visual encoder to align with a fixed pre-trained LLM or to use a vision-language model to convert visual input into textual descriptions that can be understood by the LLM. These applications are all based solely on the use of the LLM and do not really explore the visual capabilities of the LLM. GPT-4V, a cutting-edge Large Multimodal Model (LMM) incorporating visual understanding capabilities, is constructed as an evolution of state-of-the-art Large Language Models (LLMs). This model is trained on an extensive corpus of multimodal data. Yang et al. conducted a comprehensive case study to assess GPT-4V's performance in general-purpose scenarios, revealing its robust visual comprehension ability (Yang
et al., 2023). Meanwhile, LMMs have been widely used in the medical field (Wang et al., 2023; Singhal et al., 2023). The introduction of visual capabilities into GPT-4V opens up opportunities for an in-depth examination of its potential in the domain of medical multimodality. In light of this, this paper evaluates GPT-4V on multimodal tasks in the field of medical image analysis.
The main contribution of this paper is to explore the capabilities of GPT-4V on medical image analysis. We selected the 3 main medical multimodal tasks, **Radiology Report Generation**, **Medical Visual Question Answering**, and **Medical Visual Grounding**, to assess GPT-4V's performance in the context of medical images. Our evaluation encompassed _standard benchmarks_ and comparative analysis against current state-of-the-art models. Furthermore, we conducted in-depth case studies using representative examples for each task, enhancing our comprehension of GPT-4V's capabilities in medical image understanding.
## 2 Related Work
### Radiology Report Generation
Radiology report generation has emerged as a prominent research area within the domain of medical image analysis in recent years. While similar to image captioning (Vinyals et al., 2015; Xu et al., 2015; Pan et al., 2020), this task presents heightened complexity owing to the extended length of medical reports and the increased difficulty, caused by data imbalance, of identifying medical anomalies within images. Much research has relied on encoder-decoder architectures to address this task, and it can be grouped into two primary directions. The first direction concentrates on enhancing the model's architecture to facilitate improved extraction of visual features and the generation of high-quality medical reports. For example, Li et al. used a hierarchical architecture to generate reports with normality and abnormality respectively (Li et al., 2018). Similarly, Liu et al. employed a hierarchical structure to initially generate topics and subsequently produce related sentences (Liu et al., 2019). With the prevalence of the Transformer (Vaswani et al., 2017), Chen et al. introduced a transformer-based model, enhancing it with relational memory and memory-driven conditional layer normalization to improve image feature recognition and capture crucial report patterns (Chen et al., 2020). Another research direction is to address the data bias problem by incorporating external knowledge. Zhang et al. constructed a predefined medical knowledge graph to augment the model's ability to capture valuable medical information (Zhang et al., 2020). To further enrich this supplementary knowledge, Li et al. developed a dynamic approach that enables real-time updates to the knowledge graph (Li et al., 2023).
Furthermore, in recent times, there has been a surge in radiology report generation methods leveraging Large Language Models (LLMs). These approaches leverage the capabilities of large language models to generate long-text content and utilize abundant knowledge sources to enhance the quality of radiology reports. Wang et al. employs Llama2 (Touvron et al., 2023) to elevate the quality of the generated reports. To achieve effective image-text alignment, the image embeddings are mapped to the feature space of the Llama2 (Touvron et al., 2023) via a visual mapper to ensure uniform dimensionality (Wang et al., 2023).
### Visual Question Answering
Visual Question Answering (VQA) is a crucial field that has gained significant importance, as demonstrated by various studies, including those by (Jiang et al., 2020; Wu et al., 2019) et al. The goal of VQA is to teach machines to understand images and answer questions about them using natural language. Given a pair comprising an input image and a correlated question, the VQA model is engineered to generate the corresponding answer. A plethora of previous scholarly works have delved into VQA, revealing four critical components within these models: the image encoder, the text encoder, a fusion method, and either a generator or a classifier, contingent upon the model's architectural design. The nature of the posed questions bifurcates into two categories based on the answer type: the close-end type (Nguyen et al., 2019; Finn et al., 2017; Eslami et al.,
2021) and the open-end (Ambati and Dudyala, 2018; Khare et al., 2021) type. Predominantly, models address these two categories distinctly; they typically employ a classification-based approach for close-end types, whereas for open-end types, a generation-based method is utilized. Nevertheless, a select number of studies have attempted to integrate both question types within a singular model (Ren and Zhou, 2020). A notable example is the Q2ATransformer (Liu et al., 2023), which simultaneously tackles both varieties of questions, amalgamating the strengths of classification-based and generation-based methodologies, and subsequently achieving exemplary accuracy across both question types.
With the emergence of Large Language Models (LLMs), there has been a substantial influx of research leveraging LLMs to augment the linguistic inferencing capabilities of VQA (Li et al., 2023). Moreover, certain studies have pioneered the use of LLMs for facilitating continuous questioning in VQA. The introduction of models such as GPT-3.5 has led to the generation of more LLM-based datasets, mitigating the issue of data scarcity (Pellegrini et al., 2023). The advent of GPT-4V marks a significant milestone, as it incorporates image comprehension capabilities directly into the LLM framework. This eliminates the need for VQA systems to translate all tasks into a language understandable by traditional LLMs. With the ability to process multimodal inputs seamlessly, the evolution of LLMs has opened new horizons for research and development in VQA. This paper endeavors to elucidate the application of GPT-4V in the realm of medical image-based VQA, exploring its potential and implications in this specialized field.
### Visual Grounding
Visual grounding (Kamath et al., 2021) stands as a pivotal field at the crossroads of computer vision and natural language processing. Essentially, this task requires interpreting an image while taking into account a relevant textual description of an object, which could range from a single sentence or caption to a more elaborate description. The end goal is to produce a bounding box that accurately outlines the designated object. Given its critical role in integrating visual and textual information, visual grounding has established itself as a crucial application in the realm of multimodal interactions.
With the emergence of extensive language modeling, there has been a noteworthy blend of visual grounding techniques with Large Language Models (LLMs) (Peng et al., 2023; Zhao et al., 2023). In a conventional setup, data from bounding boxes, obtained through visual grounding, is fed into the LLM as a segment of the prompt. This approach steers the LLM towards making the right assessments. However, the debut of GPT-4V marks a significant transformation in this workflow. It eliminates the requirement for crafting prompts manually, allowing users to directly input images and text, and in turn, directly obtain the related bounding box outputs. This advancement simplifies the process, removing the need for extra steps and intermediaries.
Most research on visual grounding deals with regular, everyday images, and only a small number of studies focus on images from the medical field. This might be because few datasets are available for medical images. A recently published visual grounding dataset, MS-CXR, has improved this situation for medical image visual grounding, and several publications (Huang et al., 2023; Sun et al., 2023) have built on it. Nevertheless, even as this dataset becomes more widely recognized, there remains a limited body of academic work exploring its potential and applications, highlighting a crucial area for future research and development.
In this paper, we will embark on a comprehensive review of GPT-4V's applications within the domain of medical visual grounding, exploring its capabilities, impact, and potential avenues for future research and development.
## 3 Evaluation of Radiology Report Generation
The exponential growth of radiological imaging data has imposed an escalating burden on radiologists, leading to a heightened risk of diagnostic errors with potentially severe consequences. Consequently, there is a growing demand for automated radiology report generation, which is anticipated to alleviate the workload of radiologists and mitigate diagnostic inaccuracies. The rapid advancements in artificial intelligence, particularly in the domains of computer vision and natural language processing, have made automated medical report generation a feasible reality (e.g., Chen et al., 2020, 2021; Liu et al., 2021; Wang et al., 2023a). A prominent challenge in automated medical report generation is long text generation. Presently, large language models (LLMs) (e.g., Touvron et al., 2023; Chowdhery et al., 2022) have gained widespread prominence and demonstrate a strong proficiency in generating long text. Furthermore, LLM-based large multi-modal models (LMMs) (e.g., Zhu et al., 2023; Wu et al., 2023) possess a notable capability for multi-modal content generation. While LMMs show potential in multi-modal content generation, their efficacy in specialized tasks like radiology report generation is yet to be fully explored. The accuracy and reliability of such reports are paramount, making it crucial to evaluate LMMs in this domain rigorously. In the following sections, we examined the GPT-4V's capability for generating radiology reports using distinct prompt strategies and the dataset.
### Evaluation
This section presents an evaluation of the GPT-4V model's capacity for medical report generation. We employ the MIMIC-CXR dataset (Johnson et al., 2019) for assessment. The model is tasked with generating diagnostic reports for given medical images. To facilitate comparison with established methodologies (e.g., Chen et al., 2020; Yang et al., 2021; Liu et al., 2021; Wang et al., 2022b; 2023a), we employ widely recognized metrics, specifically BLEU scores (Papineni et al., 2002), ROUGE-L (Lin, 2004), METEOR (Banerjee and Lavie, 2005), and CIDEr (Vedantam et al., 2015), to gauge the quality of the generated reports.
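As an illustration of how the n-gram scores are computed, the sketch below evaluates BLEU-1 through BLEU-4 with NLTK; ROUGE-L, METEOR, and CIDEr come from their respective toolkits and are not reproduced here, and the simple whitespace tokenization is our assumption.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_1_to_4(references, hypotheses):
    """references: ground-truth reports, hypotheses: generated reports (parallel lists)."""
    refs = [[r.lower().split()] for r in references]   # one reference report per sample
    hyps = [h.lower().split() for h in hypotheses]
    smooth = SmoothingFunction().method1
    return [corpus_bleu(refs, hyps,
                        weights=tuple(1.0 / n for _ in range(n)),
                        smoothing_function=smooth)
            for n in range(1, 5)]
```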
Our evaluation focuses on the model's performance with the MIMIC-CXR test set. Each evaluation instance comprises a single medical image coupled with a carefully crafted text prompt as the input.
#### 3.1.1 Dataset
MIMIC-CXR, the largest publicly available dataset in this domain, includes both chest radiographs and unstructured textual reports. This dataset comprises a total of 377,110 chest X-ray images and 227,835 corresponding reports, obtained from 64,588 patients who underwent examinations at the Beth Israel Deaconess Medical Center between 2011 and 2016. To facilitate fair and consistent comparisons, we followed the official partitioning provided by MIMIC-CXR, resulting in a test set containing 3,858 samples.
#### 3.1.2 Prompt Design Strategies
Our primary objective is to evaluate the baseline performance of GPT-4V in medical report generation. To better activate the capabilities of GPT-4V, we explored various prompt design strategies, including the zero-shot and few-shot approaches. In the **zero-shot** scenario, we provided a prompt without reference reports. For the few-shot approach, we investigated three settings: (1) two normal reports **(Few-shot normal examples prompt)**, (2) two abnormal reports **(Few-shot abnormal examples prompt)**, and (3) one normal report paired with one abnormal report **(Few-shot mixed examples prompt)**.
Our comprehensive evaluation unveiled that the inclusion of both a normal and an abnormal example consistently resulted in higher-quality report generation. Thus, we primarily employed the **Few-shot mixed examples prompt** to evaluate GPT-4V on the MIMIC-CXR benchmark. While our focus was on the **zero-shot prompt** and the **few-shot prompt**, we avoided complex techniques such as chain-of-thought (Wei et al., 2022b) or ensembling strategies (Wang et al., 2022a).
Illustrative examples of our prompt design strategies can be found in Appendix A.1. A detailed analysis of the reports generated by GPT-4V under various prompts will be presented in Section 3.2.
**Zero-shot Prompt Scenario** The zero-shot prompt is employed to assess GPT-4V's capacity to autonomously generate reports without external guidance. To facilitate a comprehensive comparison with the Ground Truth report, we tasked GPT-4V with generating both the impression and findings sections.
**Few-Shot Prompts Scenario** In-context few-shot learning represents a crucial methodology for enhancing the capabilities of large language models (Tsimpoukelli et al., 2021; Wei et al., 2022a; Dai et al., 2022). It enables the model to acquire the necessary output format by providing a set of examples. In contrast to fine-tuning, this method empowers the model to generate desired results without any parameter updating at inference time. We evaluated the in-context few-shot learning capability of GPT-4V using diverse prompt examples. Within the scope of our evaluation, we employ contextual learning to facilitate GPT-4V in generating responses that closely align with the form of ground truth, facilitating meaningful comparisons.
In our investigation of few-shot prompts, we conducted experiments with a range of prompt strategies designed for GPT-4V. Specifically, we explored diverse compositions:
* Exclusively using normal examples (**Few-shot normal examples prompt**);
* Exclusively using abnormal examples (**Few-shot abnormal examples prompt**);
* Combining one normal and one abnormal example (**Few-shot mixed examples prompt**).
The details of example reports in prompts are shown in Appendix A.1. Our observations highlighted the substantial impact of prompt type on the model's output. Depending on the chosen prompt, the model displayed a clear preference either for generating normal reports or abnormal reports. Details will be discussed in Section 3.2.
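As a rough illustration of how the few-shot mixed examples prompt can be assembled, consider the placeholder construction below; the actual example reports are given in Appendix A.1, and the wording here is not the prompt used in our experiments.

```python
def build_mixed_examples_prompt(normal_report: str, abnormal_report: str) -> str:
    """Placeholder few-shot mixed examples prompt for radiology report generation."""
    return (
        "You are a radiologist. Write the findings and impression for the given "
        "chest X-ray, following the style of the example reports below.\n\n"
        f"Example report (normal study):\n{normal_report}\n\n"
        f"Example report (abnormal study):\n{abnormal_report}\n\n"
        "Now write the report for the attached image."
    )

# The resulting text is sent to GPT-4V together with the chest X-ray image; swapping the
# two examples for two normal (or two abnormal) reports yields the other few-shot settings.
```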
#### 3.1.3 Comparison with SOTA
Table 1 presents a performance comparison between the GPT-4V model and state-of-the-art methods using the MIMIC-CXR dataset (Johnson et al., 2019). The compared methods encompass standard image captioning techniques, including Show-Tell (Vinyals et al., 2015), Att2in (Xu et al., 2015), AdaAtt (Lu et al., 2017), Transformer (Vaswani et al., 2017), and M2Transformer (Cornia et al., 2020), as well as dedicated medical report generation methods, specifically R2Gen (Chen et al., 2020), R2GenCMN Chen et al. (2021), MSAT (Wang et al., 2022b), and METransformer (Wang et al., 2023a). To provide fair comparisons, we employ the exact same prompting structure (**Few-shot mixed examples prompt**) to help GPT-4V generate the medical report.
From Table 1, it's clear that medical report generation models such as METransformer, MSAT, and R2Gen showcase top-tier performance. Nevertheless, GPT-4V's capability to generate medical reports is impressive, even if it is designed as a general-purpose model. Leveraging the advantages of an extensive pre-training dataset, GPT-4V excels in several metrics, including BLEU, ROUGE, and METEOR. Meanwhile, when compared to models specifically trained on MIMIC-CXR, GPT-4V exhibits a gap, particularly evident in the CIDEr metric. This discrepancy arises because the CIDEr metric assigns varying score weights to words based on their occurrence frequencies, potentially limiting GPT-4V's performance in generating certain MIMIC-CXR-specific words, consequently yielding relatively lower scores.
Furthermore, our testing has revealed that GPT-4V possesses the capacity to generate information that is absent in the ground truth but is visually evident in the image. This phenomenon contributes to GPT-4V's
relatively lower performance on metrics such as BLEU, which primarily assesses word-match rates. One example is shown in Figure 1.
### Case Study
#### 3.2.1 Zero-shot Behavior
In the zero-shot scenario, through a series of tests on multiple chest X-ray images, we observed that GPT-4V consistently generates reports with a focus on various anatomical organs. This phenomenon is illustrated in
\begin{table}
\begin{tabular}{l l|c c c c c c c} \hline \hline Dataset & Methods & BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 & ROUGE & METEOR & CIDEr \\ \hline \hline \multirow{8}{*}{MIMIC-CXR} & Show-Tell (Vinyals et al., 2015) & 0.308 & 0.190 & 0.125 & 0.088 & 0.256 & 0.122 & 0.096 \\ & Att2in (Xu et al., 2015) & 0.314 & 0.198 & 0.133 & 0.095 & 0.264 & 0.122 & 0.106 \\ & AdaAtt (Lu et al., 2017) & 0.314 & 0.198 & 0.132 & 0.094 & 0.267 & 0.128 & 0.131 \\ & Transformer (Vaswani et al., 2017) & 0.316 & 0.199 & 0.140 & 0.092 & 0.267 & 0.129 & 0.134 \\ & M2Transformer (Cornia et al., 2020) & 0.332 & 0.210 & 0.142 & 0.101 & 0.264 & 0.134 & 0.142 \\ & R2Gen (Chen et al., 2020) & 0.353 & 0.218 & 0.145 & 0.103 & 0.277 & 0.142 & - \\ & R2GenCMN (Chen et al., 2021) & 0.353 & 0.218 & 0.148 & 0.106 & 0.278 & 0.142 & - \\ & PPKED (Liu et al., 2021) & 0.360 & 0.224 & 0.149 & 0.106 & 0.284 & 0.149 & 0.237 \\ & GSK (Yang et al., 2021b) & 0.363 & 0.228 & 0.156 & 0.115 & 0.284 & - & 0.203 \\ & MSAT (Wang et al., 2022b) & 0.373 & 0.235 & 0.162 & 0.120 & 0.282 & 0.143 & 0.299 \\ & METransformer (Wang et al., 2023a) & **0.386** & **0.250** & **0.169** & **0.124** & **0.291** & **0.152** & **0.362** \\ \hline GPT-4V (OpenAI, 2023) & 0.338 & 0.190 & 0.109 & 0.061 & 0.240 & 0.125 & 0.033 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison on the MIMIC-CXR dataset.
Figure 1: One case with few-shot mixed examples prompt. The ground truth does not reference a medical device; however, one is visibly present in the image indicated by a red box. GPT-4V demonstrates the ability to recognize this medical device, and describes it in the generated report.
Figure 11. Notably, for the majority of the generated reports, GPT-4V tends to follow a specific order, covering the lungs, cardiomediastinal silhouette, bones, diaphragm, and soft tissues.
While the format of the generated reports may vary from MIMIC-CXR, the content within these reports does convey both normal and abnormal aspects of the radiographic images. Figure 2 shows a selection of examples. The observations reveal that GPT-4V can describe the normal aspects in the images. Furthermore, as demonstrated in Example 3, GPT-4V exhibits the capacity to recognize abnormalities, '_suggestive of a possible infectious or inflammatory process'_. These instances collectively underscore that, even in the context of zero-shot prompts, GPT-4V may not replicate the exact report format found in MIMIC-CXR, yet it demonstrates a noteworthy ability to generate relevant reports and identify anomalies.
#### 3.2.2 Few-shot Behavior
In this prompt scenario, we explored 3 kinds of prompt settings:
Figure 2: Zero-shot prompt example. GPT-4V can generate radiology reports without example reports and can convey both normal and abnormal aspects. For better illustration, the key medical information in the reports is highlighted using different colors.
* **Few-shot normal examples prompt**
* **Few-shot abnormal examples prompt**
* **Few-shot mixed examples prompt**
In this section, we present a comprehensive analysis of reports generated by GPT-4V under three distinct few-shot prompts. We observe that different prompts significantly influence the generated reports. Specifically, Figure 3 illustrates the response to a normal chest X-ray image, where we employ three distinct prompt settings to guide GPT-4V in generating corresponding reports. Interestingly, the reports generated from the normal examples prompt and the mixed examples prompt both describe the image as normal. In contrast, the report from the abnormal examples prompt highlights anomalies in the image. This indicates that GPT-4V's inclination to generate normal or abnormal reports varies based on the provided example reports.
Figure 3: Few-shot normal case (The key medical information in the reports is highlighted using different colors). GPT-4V is more likely to generate abnormal reports when the prompt includes two abnormal examples. The words in red correspond to descriptions of abnormal conditions.
The analysis of reports generated for an abnormal chest X-ray image can be found in Appendix A.2 with a more detailed explanation. However, it is worth noting here that our subsequent tests have shown that the mixed examples prompt (illustrated in Figures 3 and 14) has a significant influence on GPT-4V's capacity to accurately determine the normalcy or abnormality of an image. Due to this observed consistency and reliability, we opted for the mixed examples prompt when testing the entire MIMIC-CXR test set and in the computation of related evaluation metrics.
For these examples, we can summarize the impact of different prompts on the generated reports as follows:
**Normal Examples Prompt** The generated report focuses on the normal aspects of the image, seemingly overlooking or not emphasizing the presence of abnormalities. This could be attributed to the inherent bias introduced by the normal examples in the prompt, steering the GPT-4V's focus towards more routine or standard interpretations.
**Abnormal Examples Prompt** As expected, the report provides a clear and distinct description of the abnormalities evident in the X-ray. However, for normal chest X-ray radiographs, the GPT-4V may also exhibit a heightened probability of generating certain erroneous indications of abnormality.
**Mixed Examples Prompt** The mixed examples prompt leads the GPT-4V to accurately describe the abnormal and normal conditions of the image. This suggests a balanced effect, where the GPT-4V does not get overly biased by either the normal or abnormal examples but leverages both to arrive at an accurate interpretation.
From this in-depth examination, it becomes evident that the choice of prompt plays a pivotal role in guiding GPT-4V's performance, especially when anomalies are present in medical images. The mixed examples prompt, in particular, shows promise in achieving a balanced and accurate report, making it a potential choice for diverse medical scenarios.
#### 3.2.3 Prompt Augmentation for Output view information
Additionally, our investigations revealed that augmenting the information content of a given prompt, such as adding view information, enables GPT-4V to produce more pertinent information in its generated reports. As an illustrative example, we incorporated instances with view information for chest X-ray images within both the few-shot mixed examples prompt and the few-shot abnormal examples prompt (the example reports are displayed in Figure 13). Conversely, view information was omitted in the few-shot normal examples prompt. This deliberate contrast in prompt content demonstrated that prompts containing view information effectively instructed GPT-4V to incorporate considerations of image viewpoint into the report generation process.
More specifically, we supplemented the few-shot mixed examples prompt with _'Frontal and lateral views of the chest'_ and the few-shot abnormal examples prompt with _'PA and lateral views of the chest provided'_.
As illustrated in Figures 4 and 15, the inclusion of view information prompts GPT-4V to incorporate corresponding viewpoint details into the generated report. For instance, it generates phrases like _'PA view of the chest provided'_ and _'Frontal view of the chest demonstrates...'_. However, it is essential to acknowledge that while enhancing the prompt with view information empowers GPT-4V to produce reports enriched with these details, there are instances where GPT-4V inaccurately identifies viewpoint information. An incorrect case is shown in Appendix A.3.
This phenomenon can be attributed to two primary factors: firstly, potential constraints in GPT-4V's inherent recognition capabilities, and secondly, the potential inadequacy of prompt design in fully activating GPT-4V's ability to discern viewpoint information.
It is imperative to emphasize that, even with the incorporation of view information, the core content within the generated reports exhibits a high degree of consistency (crucial medical information in the reports is distinguished using diverse colours in Figure 4). This observation leads to a significant conclusion: the inclusion of supplementary information within the prompt broadens the spectrum of content integrated into the generated report, all while preserving GPT-4V's capability to fulfill common tasks.
These examples vividly illustrate the critical role of prompt design within the domain of in-context few-shot learning. In contrast to the fine-tuning approach, few-shot learning empowers GPT-4V to gain essential knowledge from the prompt and subsequently apply this knowledge in generative tasks. Consequently, the meticulous design of a logical and effective prompt emerges as a pivotal factor when leveraging GPT-4V for medical report generation tasks. This aspect of prompt design deserves future studies.
Figure 4: Viewpoint information Case 1 (The key medical information in the reports is highlighted using different colors). The inclusion of view information in the prompt results in a higher probability of GPT-4V generating view information, indicated in red text in the figure. Notably, GPT-4V does not generate view information when the prompt lacks such information, as seen in the normal examples prompt (in Figure 13).
### Discussion
Our extensive evaluation and case study of GPT-4V's capabilities in Radiology Report Generation reveal its potential as well as its current limitations. By employing various prompts, GPT-4V demonstrates the capacity to generate descriptive reports for chest X-ray images, covering both normal and abnormal aspects. Remarkably, the design of the prompt significantly influences GPT-4V's performance; prompts with more information lead to greater attention to the image and the generation of more detailed descriptions.
It is essential to highlight that GPT-4V was not trained specifically on MIMIC-CXR, which impacts its capacity to generate specific rare words, leading to relatively lower scores on commonly used evaluation metrics. Nevertheless, GPT-4V demonstrates the ability to generate content related to images that is not explicitly mentioned in the Ground Truth but is visually apparent. As a result, further research aimed at improving GPT-4V's report accuracy remains a valuable pursuit.
## 4 Evaluation of Medical Visual Question Answering
Visual Question Answering (VQA) has become a critical research area. The goal of VQA systems is to enable computers to understand natural language questions and provide accurate answers about images. In the following, we explore the medical image VQA performance of GPT-4V on the VQA-RAD dataset and compare it with current SOTA methods.
### Evaluation
In order to assess GPT-4V's effectiveness on the Medical VQA dataset, we conducted a comprehensive series of experiments. We used GPT-4V to generate predicted answers from each input medical image and its associated question, and then calculated the accuracy of the results. Subsequently, we conducted a comparative analysis with current state-of-the-art (SOTA) methods. Herein, we present our main observations and conclusions.
#### 4.1.1 Dataset
VQA-RAD (Lau et al., 2018) is one of the most widely utilized radiology datasets. It comprises 315 images along with 3515 question-answer pairs, ensuring that each image corresponds to at least one question-answer pair. The questions encompass 11 distinct categories, including "anomalies," "properties," "color," "number," "morphology," "organ type," "other," and "section." A noteworthy 58% of these questions are designed as closed-ended queries, while the remainder take the form of open-ended inquiries. These images predominantly feature the head, chest, and abdomen regions of the human body. It is essential to manually partition the dataset into training and test sets for accurate evaluation.
#### 4.1.2 Overview of Prompt Methods
GPT-4V not only possesses powerful natural language processing capabilities, but also incorporates advanced computer vision techniques, which makes it excel in handling fusion tasks of images and text. It is trained to understand questions and extract information from images to generate accurate answers. However, the performance of GPT also depends on how the prompt is designed.
To ensure that GPT-4V accurately grasps the answer style of the VQA-RAD dataset, we provided seven examples to guide the model in generating responses consistent with the dataset's format. Without these examples, GPT-4V tends to produce more unconstrained answer text, complicating the task of comparing predicted answers with the ground truth.
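Schematically, such a prompt can be assembled as below; the exact template and the seven in-context examples are those of Figure 5, so the wording here is only a placeholder.

```python
def build_vqa_prompt(examples, question: str) -> str:
    """examples: list of (question, answer) pairs used as in-context demonstrations."""
    shots = "\n".join(f"Question: {q}\nAnswer: {a}" for q, a in examples)
    return (
        "Answer the question about the attached radiology image. "
        "Keep the answer short and in the same style as the examples.\n\n"
        f"{shots}\n\nQuestion: {question}\nAnswer:"
    )
```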
We designed the prompt by following the template in Figure 5:
#### 4.1.3 Comparison with SOTA
On the VQA-RAD test set, GPT-4V achieves an accuracy of 61.4% for closed-end questions (Table 2), which is significantly lower than other published results. For open-end questions, the calculated BLEU score is 0.1155, which also does not reach a high standard. The majority of currently available research employs classification-based models to tackle Visual Question Answering (VQA) problems; this results in a lack of BLEU-based evaluations, making it challenging to draw comparisons between different methods. However, upon analyzing the cases produced by GPT-4V, we postulate that the low BLEU score can be attributed partly to the flexibility of GPT-4V's free-form answers, which deviate substantially in wording from the reference answers, and partly to clear limitations of BLEU itself. BLEU lacks semantic understanding, as it relies mainly on the literal matching of n-grams and does not deeply understand context and semantics. It is insensitive to synonyms and diverse ways of expression: even if two sentences mean the same thing but use different words, the BLEU score may end up quite low. In simpler terms, BLEU struggles to recognize when different words mean the same thing, and this can lead to unfairly low scores
Figure 5: VQA Prompt Method. Elements in double braces are replaced with specific questions
even when the answers are correct. We hope that in the future, more advanced criteria capable of deeply understanding the semantics of text will be developed, providing more accurate and reliable assessments.
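A small concrete example of this failure mode, sketched with NLTK (the two answers are invented for illustration):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference  = "the lesion is located in the right lower lobe".split()
paraphrase = "it sits in the lower part of the right lung".split()

score = sentence_bleu([reference], paraphrase,
                      smoothing_function=SmoothingFunction().method1)
# The paraphrase conveys the same finding, yet its BLEU score is close to zero
# because few n-grams literally match the reference.
```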
### Case Study
We present case studies of VQA in Figures 6 and 7. From these case studies, we can tell that GPT-4V shows some limitations in the Medical VQA domain. It shows a strong ability to determine whether a question is close-end or open-end, and is almost always able to make a correct judgment. However, in answering some open-end questions, it does not make full use of the image information, relying instead on the medical terms mentioned in the question itself and failing to make effective reference to the information in the medical images. For example, in the last case, GPT-4V only expanded on the nouns that appeared in the question without taking the medical image into account, resulting in an incorrect answer. There were also some instances of incorrect responses to close-end questions. Overall, the results fall short of expectations, and further improvements and optimizations are needed for Medical VQA tasks.
### Discussion
Our extensive evaluation and in-depth case studies of GPT-4V's performance on the VQA-RAD dataset have highlighted its potential capabilities as well as the areas that necessitate substantial improvement within the Medical Visual Question Answering (VQA) field.
While GPT-4V demonstrates proficiency in distinguishing between closed-end and open-end questions, its accuracy rate of 61.4% for closed-end questions and low BLEU score of 0.1155 for open-end questions signify a performance level that is considerably below the published benchmarks in this domain. This discrepancy underscores the need for more refined and optimized models that can more accurately interpret and respond to medical imagery. The capability to accurately identify whether a question is open-ended or closed-ended demonstrates GPT-4V's substantial reasoning skills. However, its comparatively low accuracy could be attributed to an insufficient amount of training data. Acquiring general Visual Question Answering
\begin{table}
\begin{tabular}{c|c|c|c|} \hline \hline Dataset & Reference Methods & Fusion Method & Close-end \\ \hline \multirow{10}{*}{VQA-RAD} & StAn (He et al., 2020) & SAN & 57.2 \\ \cline{2-4} & BiAn (He et al., 2020) & BAN & 67.9 \\ \cline{2-4} & MAML (Finn et al., 2017) & SAN & 69.7 \\ \cline{2-4} & BAN & 72.4 \\ \cline{2-4} & MEVF (Nguyen et al., 2019) & SAN & 74.1 \\ \cline{2-4} & BAN & 75.1 \\ \cline{2-4} & MMQ (Do et al., 2021) & SAN & 75.7 \\ \cline{2-4} & BAN & 75.8 \\ \cline{2-4} & PubMedCLIP (Eslami et al., 2021) & - & 80 \\ \cline{2-4} & MMBERT (Khare et al., 2021) & - & 77.9 \\ \cline{2-4} & Q2ATransformer (Liu et al., 2023) & - & 81.2 \\ \cline{2-4} & GPT-4V (OpenAI, 2023) & - & 61.40 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on VQA-RAD benchmark
(VQA) data is relatively easier compared to procuring medical VQA data. This discrepancy is due to the labor-intensive and expensive nature of labeling medical data. Consequently, as the volume of training data in the medical domain increases, we can anticipate an enhancement in the performance of VQA applications.
Furthermore, the limitations of BLEU as an evaluation metric, particularly its lack of semantic understanding and its insensitivity to diverse expressions and synonyms, have been highlighted. This brings to light the urgent need for the development of more advanced and semantically aware evaluation methods to provide accurate and reliable assessments of model performance in this field.
Figure 6: VQA Prompt examples. Given few-shot prompts, GPT-4V can generate answers for the given image-question pairs; the results for close-end questions are better than for open-end questions.
## 5 Evaluation of Medical Visual Grounding
Visual Grounding is one of the important tasks in the field of computer vision, aimed at enabling computers to understand natural language descriptions and associate them with specific regions in an image. This technique has great potential in areas such as medical image analysis. In this paper, we present the performance of GPT-4V on the MS-CXR dataset for visual grounding applications and compare it with current SOTA methods.
Figure 7: VQA Prompt examples. With the assistance of a few-shot prompt, GPT-4V has the capability to generate responses for open-ended questions, though there is room for refinement to enhance its performance.
### Evaluation
#### 5.1.1 Dataset
The MS-CXR (Boecking et al., 2022) dataset is a valuable resource for biomedical vision-language processing, featuring 1162 image-sentence pairs with bounding boxes and corresponding phrases. It was meticulously annotated by board-certified radiologists, covering eight cardiopulmonary radiological findings, each having an approximately equal number of pairs. This dataset offers both reviewed and edited bounding boxes/phrases and manually created bounding box labels from scratch. What sets MS-CXR apart is its focus on complex semantic modeling and real-world language understanding, challenging models with joint image-text reasoning and tasks like parsing domain-specific location references, complex negations, and variations in reporting style. It serves as a benchmark for phrase grounding and has been instrumental in demonstrating the effectiveness of principled textual semantic modeling for enhancing self-supervised vision-language processing.
#### 5.1.2 Overview of Prompt Methods
We experimented with various ways of instructing GPT-4V and found a specific style that helps the model understand the task and makes it easier to produce bounding boxes. We chose this prompt after carefully checking which one worked best. We designed the prompt by following the template in Figure 8:
#### 5.1.3 Comparison with SOTA
To compare with existing models, we use mean Intersection-over-Union (mIoU) as our evaluation metric. Evaluating GPT-4V on the MS-CXR dataset yields an mIoU of 0.0833, which is markedly lower than all published benchmarks. Empirical evidence demonstrates that while GPT-4V understands the Visual Grounding task, it struggles to accurately identify medical organs and pathological signs, which results in imprecise bounding box predictions.
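Since mIoU is the headline number in this comparison, a minimal sketch of the computation is given below. The box coordinates and the pairing of one predicted box per ground-truth phrase are illustrative assumptions rather than the exact evaluation script used here.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def mean_iou(predictions, ground_truths):
    """mIoU over paired predicted / ground-truth boxes."""
    scores = [iou(p, g) for p, g in zip(predictions, ground_truths)]
    return sum(scores) / len(scores)

# Example: a prediction that misses most of the target region scores poorly.
print(mean_iou([(100, 100, 300, 300)], [(250, 250, 500, 500)]))  # 0.025
```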
Figure 8: VG prompt method. Elements in double braces are replaced with the specific image width, height, and description text related to the image.
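As a purely hypothetical illustration of the double-brace convention described in the caption, the snippet below fills such placeholders with image-specific values. The template wording and the helper `fill_template` are our own assumptions and do not reproduce the exact prompt of Figure 8.

```python
# Hypothetical reconstruction of a double-brace prompt template in the spirit of Figure 8.
TEMPLATE = (
    "You are given a chest X-ray of size {{width}}x{{height}} pixels. "
    "Report the bounding box [x1, y1, x2, y2] in pixel coordinates for the finding: "
    "{{description}}."
)

def fill_template(template: str, width: int, height: int, description: str) -> str:
    """Replace the double-brace placeholders with image-specific values."""
    return (template
            .replace("{{width}}", str(width))
            .replace("{{height}}", str(height))
            .replace("{{description}}", description))

print(fill_template(TEMPLATE, 1024, 1024, "left basilar consolidation"))
```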
Recently, SoM (Yang et al., 2023a) addressed this issue and made significant improvements. The approach in that paper involved first segmenting and labeling the image, and then proceeding with grounding, which led to a substantial improvement in performance. However, that method was applied to generic images, and it is not certain whether it would yield equally impressive results for medical images, which require much finer-grained features. Further experiments will be necessary to validate its effectiveness in such contexts.
### Case study
From the case study, it appears that GPT-4V has the potential to generate bounding boxes, but its performance is rather poor. Although it attempted to calibrate the position of the object, there were serious errors and uncertainties in this task. This may be because GPT-4V has limitations in processing the image information and cannot fully interpret the exact position of the object in the image, especially for medical images, which require attention to fine-grained features. Another possible reason for GPT-4V's poor performance is that it was mainly trained on common, everyday images; such a model needs far more data to work well and become reliable, and we hypothesize that it underperforms because it did not have enough diverse, labeled medical data to learn from.
### Discussion
Our comprehensive evaluation and case study of GPT-4V's capabilities in Visual Grounding highlight both its potential and its current limitations. While the model shows promise in producing bounding boxes, it falls significantly short in terms of accuracy, as evidenced by its low mIoU score on the MS-CXR dataset compared with existing benchmarks. Its inability to precisely identify medical organs and pathological signs leads to imprecise bounding box predictions, and this may be caused by a lack of training data. Given the inherent difficulty of acquiring sufficient labeled medical data in real-world scenarios, we hypothesize that the sub-optimal performance of GPT-4V is attributable to the limited availability of such data.
In light of these findings, it is evident that GPT-4V requires further refinement and training to overcome its current limitations and to enhance its bounding box localization accuracy. Achieving better results in this area will require both model improvements and more data, making the model more useful and reliable in various applications.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline Dataset & Methods & mIoU \\ \hline & BioViL (Boecking et al., 2022) & 22.9 \\ & BioViL-T (Bannur et al., 2023) & 24.3 \\ & RefTR (Li \& Sigal, 2021) & 50.11 \\ MS-CXR & VGTR (Du et al., 2022) & 53.58 \\ & SeqTR (Zhu et al., 2022) & 56.63 \\ & TransVG (Deng et al., 2021) & 58.91 \\ & MedRPG (Chen et al., 2023) & 59.37 \\ \cline{2-3} & GPT-4V (OpenAI, 2023) & 8.33 \\ \hline \hline \end{tabular}
\end{table}
Table 3: mIoU(%) results on MS-CXR benchmark.
Doing so will undoubtedly make GPT-4V a more reliable and valuable tool in various applications, fostering its integration and utility in practical, real-world scenarios, especially within the medical field. This journey towards improvement is not only necessary but also a crucial step in advancing the field of Visual Grounding and in unlocking the full potential of models like GPT-4V.
Figure 9: Visual Grounding prompt examples. The red bounding boxes are predicted by GPT-4V, and the green bounding boxes are ground truth. GPT-4V is capable of estimating bounding box coordinates for the reference text within an image; however, the results show that GPT-4V cannot interpret medical images properly.
## 6 Conclusion
The comprehensive assessment of GPT-4V's capabilities in Radiology Report Generation, Medical Visual Question Answering (VQA), and Visual Grounding offers a perspective on the model's potential and areas for improvement within the medical domain. GPT-4V's ability to generate radiology reports based on chest X-ray images is commendable, particularly when furnished with detailed prompts. This underscores the capacity of language models to aid in radiology diagnosis. Meanwhile, its challenges in recognizing uncommon terms and subtle differences specific to the MIMIC-CXR dataset underscore the necessity for domain-specific training and fine-tuning to elevate its proficiency in medical reporting.
Furthermore, although GPT-4V displays proficiency in distinguishing among various question types within the VQA-RAD dataset, its performance, especially for open-ended questions, falls short of public benchmarks. This sub-optimal performance reveals a gap in its comprehension and response capabilities related to medical imaging. Moreover, the limitations of current evaluation metrics such as BLEU underscore
Figure 10: Additional Visual Grounding prompt examples. The red bounding boxes are predicted by GPT-4V, and the green bounding boxes are ground truth.
the importance of constructing semantically aware evaluation methods to gain a holistic understanding of the model's capabilities.
The Visual Grounding evaluation further showed the difficulties of GPT-4V in achieving high precision in bounding box localization within medical images. These limitations, particularly its struggles in identifying medical organs and pathological indicators, underscore the urgent requirement for specialized training and model improvements to enhance its grounding capabilities.
In summary, GPT-4V demonstrates remarkable potential across various medical image analysis domains, while its current limitations underscore the necessity for domain-specific enhancements. Dedicated training on medical datasets, comprehensive prompt methodologies, and advanced evaluation techniques remain important directions for future research. |
2309.04354 | Mobile V-MoEs: Scaling Down Vision Transformers via Sparse
Mixture-of-Experts | Sparse Mixture-of-Experts models (MoEs) have recently gained popularity due
to their ability to decouple model size from inference efficiency by only
activating a small subset of the model parameters for any given input token. As
such, sparse MoEs have enabled unprecedented scalability, resulting in
tremendous successes across domains such as natural language processing and
computer vision. In this work, we instead explore the use of sparse MoEs to
scale-down Vision Transformers (ViTs) to make them more attractive for
resource-constrained vision applications. To this end, we propose a simplified
and mobile-friendly MoE design where entire images rather than individual
patches are routed to the experts. We also propose a stable MoE training
procedure that uses super-class information to guide the router. We empirically
show that our sparse Mobile Vision MoEs (V-MoEs) can achieve a better trade-off
between performance and efficiency than the corresponding dense ViTs. For
example, for the ViT-Tiny model, our Mobile V-MoE outperforms its dense
counterpart by 3.39% on ImageNet-1k. For an even smaller ViT variant with only
54M FLOPs inference cost, our MoE achieves an improvement of 4.66%. | Erik Daxberger, Floris Weers, Bowen Zhang, Tom Gunter, Ruoming Pang, Marcin Eichner, Michael Emmersberger, Yinfei Yang, Alexander Toshev, Xianzhi Du | 2023-09-08T14:24:10Z | http://arxiv.org/abs/2309.04354v1 | # Mobile V-MoEs: Scaling Down Vision Transformers via Sparse Mixture-of-Experts
###### Abstract
Sparse Mixture-of-Experts models (MoEs) have recently gained popularity due to their ability to decouple model size from inference efficiency by only activating a small subset of the model parameters for any given input token. As such, sparse MoEs have enabled unprecedented scalability, resulting in tremendous successes across domains such as natural language processing and computer vision. In this work, we instead explore the use of sparse MoEs to scale-down Vision Transformers (ViTs) to make them more attractive for resource-constrained vision applications. To this end, we propose a simplified and mobile-friendly MoE design where entire images rather than individual patches are routed to the experts. We also propose a stable MoE training procedure that uses super-class information to guide the router. We empirically show that our sparse Mobile Vision MoEs (V-MoEs) can achieve a better trade-off between performance and efficiency than the corresponding dense ViTs. For example, for the ViT-Tiny model, our Mobile V-MoE outperforms its dense counterpart by 3.39% on ImageNet-1k. For an even smaller ViT variant with only 54M FLOPs inference cost, our MoE achieves an improvement of 4.66%.
## 1 Introduction
The trade-off between performance and efficiency of neural networks (NNs) remains a challenge, especially in settings where computational resources are limited. Recently, sparsely-gated Mixture-of-Experts models (sparse MoEs) have gained popularity as they provide a promising solution to this problem by enabling the decoupling of model size from inference efficiency [3]. MoEs are NNs that are partitioned into "experts", which are trained jointly with a router to specialize on subsets of the data. In MoEs, each input is processed by only a small subset of model parameters (aka _conditional computation_). In contrast, traditional dense models activate all parameters for each input.
Sparse MoEs were popularized in deep learning by [16], which introduced sparse MoE-layers as drop-in replacements for standard NN layers. Most recent MoEs are based on the Transformer [19], which processes individual input tokens; in accordance, recent MoEs also route individual input tokens to experts, i.e., image patches in the case of Vision Transformers (ViTs) [2, 13] (see Fig. 2b). Conditional computation as implemented by sparse MoEs has enabled the training of Transformers of unprecedented size [4]. As a result, MoEs have achieved impressive successes across various domains including language [4, 10], vision [13], speech [20] and multi-modal learning [12], and currently hold state-of-the-art results on many benchmarks [21].
The ability to increase model capacity while keeping inference cost low is also appealing for resource-constrained vision problems. While Transformers are getting increasingly established as the de-facto standard architecture for large-scale visual modeling [2, 13], virtually all mobile
Figure 1: **Accuracy vs. FLOPs** for ViTs of different sizes. Labels (e.g. 12\(\times\)192, which is ViT-Tiny) refer to the number of ViT layers (e.g. 12) and the hidden embedding dimension (e.g. 192). The sparse MoEs outperform their corresponding dense baselines across different model scales. Fig. 3a lists all numerical results.
friendly models still leverage convolutions due to their efficiency [1, 5, 6, 11, 15, 18]. As such, conditional computation could potentially enable attention-based models to reduce the gap to convolutional models in the small-scale regime. However, Transformer-based MoEs have not yet been explored for resource-constrained settings; this might be due to two main weaknesses of recently-popularized MoEs [16].
Firstly, while per-token routing increases the flexibility to learn an optimal computation path through the model, it makes inference inefficient, as many (or even all) experts need to be loaded for a single input image. Secondly, recent MoEs train the routers jointly with the rest of the model in an end-to-end fashion. To avoid collapse to just a few experts while ignoring all others, one needs to use load balancing mechanisms [3] such as dedicated auxiliary losses [16]. However, the resulting complex optimization objectives often lead to training instabilities / divergence [4, 10, 21, 12].
In this work, we investigate the potential of sparse MoEs to scale-down ViTs for resource-constrained vision applications via an MoE design and training procedure that addresses the aforementioned issues. Our contributions are:
1. We propose a simplified, mobile-friendly sparse MoE design in which a single router assigns entire images (rather than image patches) to the experts (see Fig. 2c).
2. We develop a simple yet robust training procedure in which expert imbalance is avoided by leveraging semantic super-classes to guide the router training.
3. We empirically show that our proposed sparse MoE approach allows us to scale-down ViT models by improving their performance vs. efficiency trade-off.
## 2 Scaling down ViTs via sparse MoEs
### Conditional computation with sparse MoEs
An MoE implements conditional computation by activating different subsets of a NN (so-called experts) for different inputs. We consider an MoE layer with \(E\) experts as
\[\text{MoE}(\mathbf{x})=\sum_{i=1}^{E}g(\mathbf{x})_{i}e_{i}(\mathbf{x}), \tag{1}\]
where \(\mathbf{x}\in\mathbb{R}^{D}\) is the input to the layer, \(e_{i}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) is the function computed by expert \(i\), and \(g:\mathbb{R}^{D}\rightarrow\mathbb{R}^{E}\) is the routing function which computes an input-dependent weight for each expert [16]. In a ViT-based MoE, each expert \(e_{i}\) is parameterized by a separate multi-layer perceptron
Figure 2: **Model architectures.** (a) The dense ViT baseline model uses dense ViT layers throughout. (b) Regular sparse V-MoE with layer-wise per-patch routers. (c) Our proposed sparse Mobile V-MoE design with a single per-image router. In both (b) and (c), dense ViT layers are followed by MoE-ViT layers (here, \(k=1\) out of \(E=3\) experts are activated per input). (d) In contrast to dense ViT layers [19], MoE-ViT layers have a separate MLP per expert (preceded by a router) while all other parts of the layer are shared across all experts [13].
(MLP) within the ViT layer, while the other parts are shared across experts (see Fig. 2(d)). We use the routing function
\[g(\mathbf{x})=\text{TOP}_{k}(\text{softmax}(\mathbf{W}\mathbf{x})), \tag{2}\]
where the operation \(\text{TOP}_{k}(\mathbf{x})\) sets all elements of \(\mathbf{x}\) to zero except those with the \(k\) largest values [13]. In a sparse MoE, we have \(k\ll E\), s.t. we only need to load and compute the \(k\) experts with the largest routing weights. This allows us to scale-up the overall model capacity (determined by \(E\)) without increasing the inference cost (determined by \(k\)).
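A minimal NumPy sketch of Eqs. (1)-(2) with per-image routing is given below; the layer sizes, the random weights, and the use of ReLU (standing in for the GELU of ViT MLPs) are illustrative assumptions, not the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D, E, k = 64, 10, 1          # embedding size, number of experts, experts activated per image

# One two-layer MLP per expert (only the MLP is expert-specific in an MoE-ViT layer).
experts = [(rng.standard_normal((D, 4 * D)) * 0.02,
            rng.standard_normal((4 * D, D)) * 0.02) for _ in range(E)]
W_router = rng.standard_normal((E, D)) * 0.02

def moe_layer(x):
    """Per-image sparse MoE layer: softmax router followed by TOP_k expert selection."""
    logits = W_router @ x
    g = np.exp(logits - logits.max())
    g /= g.sum()                       # softmax over experts, Eq. (2)
    top = np.argsort(g)[-k:]           # TOP_k: keep only the k largest routing weights
    out = np.zeros_like(x)
    for i in top:
        W1, W2 = experts[i]
        out += g[i] * (np.maximum(x @ W1, 0.0) @ W2)   # weighted expert MLP, Eq. (1)
    return out

x = rng.standard_normal(D)             # a pooled per-image representation
print(moe_layer(x).shape)              # (64,); only k of the E expert MLPs were evaluated
```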
### Efficient and robust MoEs for small-scale ViTs
**Per-image routing.** Recent large-scale sparse MoEs use per-patch routing (i.e. the inputs \(\mathbf{x}\) are individual image patches). This generally requires a larger number of experts to be activated for each image. For example, [13] show that in their MoE with per-patch routing, "most images use -on aggregate by pooling over all their patches- most of the experts" [13, Appendix E.3]. Thus, per-patch routing can increase the computational and memory overhead of the routing mechanism and reduce the overall model efficiency. We instead propose to use per-image routing (i.e., the inputs \(\mathbf{x}\) are entire images) to reduce the number of activated experts per image, as also done in early works on MoEs [7, 9].
**Super-class-based routing.** Previous works on sparse MoEs jointly train the router end-to-end together with the experts and the dense ViT backbone, to allow the model to learn the optimal assignment from inputs to experts based on the data [13]. While learning the optimal routing mechanism from scratch can result in improved performance, it often leads to training instabilities and expert collapse, where most inputs are routed to only a small subset of the experts, while all other experts get neglected during training [3]. Thus, an additional auxiliary loss is typically required to ensure load-balancing between the different experts, which can increase the complexity of the training process [3].
In contrast, we propose to group the classes of the dataset into super-classes and explicitly train the router to make each expert specialize on one super-class. To this end, we add an additional cross-entropy loss \(\mathcal{L}_{g}\) between the router output \(g(\mathbf{x})\) in Eq. (2) and the ground truth super-class labels to the regular classification loss \(\mathcal{L}_{C}\) to obtain the overall weighted loss \(\mathcal{L}=\mathcal{L}_{C}+\lambda\mathcal{L}_{g}\) (we use \(\lambda=0.3\) in our experiments, which we found to work well). Such a super-class division is often readily provided with the dataset (e.g. for CIFAR-10/100 or MS-COCO). If a dataset does not come with a super-class division, we can easily obtain one as follows: 1) we first train a dense baseline model on the dataset; 2) we then compute the model's confusion matrix over a held-out validation set; 3) we finally construct a confusion graph from the confusion matrix and apply a graph clustering algorithm to obtain the super-class division [8]. This approach encourages the super-classes to contain semantically similar images that the model often confuses. Intuitively, by allowing the different MoE experts to specialize on the different semantic data clusters, performance on the highly-confused classes should be improved. We use this approach in our experiments on ImageNet-1k, computing the confusion matrix via a dense ViT-S/16 model. The resulting super-class division for \(E=10\) experts is shown in Tab. 1; the super-classes contain semantically related classes.
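The confusion-graph step 3) can be sketched as follows; spectral clustering on the symmetrized confusion matrix is used here as an assumed stand-in for the graph clustering algorithm of [8], and the toy counts are invented for illustration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def superclasses_from_confusion(confusion, n_superclasses):
    """Group classes that the dense baseline frequently confuses.

    `confusion` is a (C, C) count matrix from a held-out validation set. We build an
    undirected confusion graph by symmetrizing it (diagonal removed) and cluster it;
    the resulting labels serve as super-class targets for the router loss L_g.
    """
    affinity = confusion + confusion.T
    np.fill_diagonal(affinity, 0)
    labels = SpectralClustering(
        n_clusters=n_superclasses, affinity="precomputed", random_state=0
    ).fit_predict(affinity)
    return labels  # labels[c] = super-class id of class c

# Toy example with 6 classes and 2 super-classes.
toy = np.array([[50, 9, 8, 0, 1, 0],
                [9, 50, 7, 1, 0, 0],
                [8, 7, 50, 0, 0, 1],
                [0, 1, 0, 50, 9, 8],
                [1, 0, 0, 9, 50, 7],
                [0, 0, 1, 8, 7, 50]])
print(superclasses_from_confusion(toy, 2))   # e.g. [0 0 0 1 1 1]
```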
## 3 Experiments
We now present empirical results on the standard ImageNet-1k classification benchmark [14]. We train all models from scratch on the ImageNet-1k training set of 1.28M images, and then evaluate their top-1 accuracy on the held-out validation set of 50K images. In Sec. 3.1, we first evaluate our proposed sparse Mobile V-MoE across a range of model scales and show that they achieve better performance vs. efficiency trade-offs than the respective dense ViT baselines. In Sec. 3.2, we then conduct several ablation studies to get a better understanding of the properties of our proposed sparse MoE model design and training procedure.
### Accuracy vs. efficiency across ViT scales
We consider ViT models (both MoEs and corresponding dense baselines) of different sizes by scaling the total number of layers (we use 12, 9 or 6) and the hidden embedding size (we use 384, 192, 96 or 64). The number of multi-head self-attention heads is (6, 3, 3, 2) for the different hidden embedding sizes. The embedding size of the MLP is \(4\times\) the hidden embedding size, as is common practice. We use \(E=10\) experts in total for the MoE, out of which \(k=1\) is activated per input image. Our MoEs consist of \(L=2\) MoE-ViT layers preceded by (10, 7 or 4) dense ViT layers (see Fig. 2(c)). We use a patch size of \(32\times 32\) for all models. This is because the patch size effectively controls
\begin{table}
\begin{tabular}{l c c} \hline \hline
**ID** & **Classes** & **Super-class** \\ \hline
0 & boxer, pug, Rottweiler & dogs \\
1 & orangutan, weasel, panda & other mammals \\
2 & toucan, flamingo, ostrich & birds \\
3 & eel, scorpion, hammerhead & other animals \\
4 & minivan, ambulance, taxi & land vehicles \\
5 & submarine, canoe, pirate & sea vehicles \\
6 & guacamole, hotdog, banana & food \\
7 & backpack, pyjama, kimono & clothes \\
8 & monitor, iPod, photocopier & tech devices \\
9 & xylophone, harp, trumpet & instruments \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Super-class division for \(E=10\). For each super-class, we list three randomly chosen class names (which turn out to be semantically related) together with a possible super-class name.**
the trade-off between FLOPs and number of model parameters: as we aim to optimize for FLOPs, a larger patch size (resulting in fewer patches) is beneficial. We also tried using a smaller patch size of \(16\times 16\), where the result trends were basically the same (but where the number of FLOPs was higher relative to the model capacity and thus accuracy). For the ViTs with hidden sizes 384 and 192, we use the DeiT training recipe [17], while for hidden sizes 96 and 64, we use the standard ViT training recipe [2] to avoid underfitting. Figs. 1 and 3a compare top-1 validation accuracy vs. FLOPs. Our Mobile V-MoEs outperform the corresponding dense ViT baselines across all model sizes.
### Ablation studies
We train DeiT-Tiny [17] (12 layers total, 192 embedding size, \(16\times 16\) patch size) with \(k=1\) out of \(E=10\) experts per input, and with \(L=2\) MoE layers (unless noted otherwise); the dense ViT baseline achieves 70.79% accuracy.
**Total number of experts \(E\).** We consider different widths of the MoE, i.e., different numbers of experts \(E\) (and thus super-classes), ranging between \(E=5\) and \(E=20\). We report both the accuracy of the entire MoE model (i.e., on the 1,000-way classification task), as well as the accuracy of the router (i.e., on the \(E\)-way super-classification task). Fig. 3(b) shows that overall performance improves until \(E=10\), from which point onwards it stagnates. The router accuracy also drops beyond \(E=10\) due to the increased difficulty of the \(E\)-way super-classification problem.
**Number of MoE layers \(L\).** We consider different depths of the MoE, i.e., different numbers of MoE layers \(L\), ranging between \(L=1\) and \(L=8\) (out of 12 ViT layers in total). We again report both the full MoE and router accuracies. Fig. 3(c) shows that overall performance peaks at \(L=2\), and rapidly decreases for larger \(L\). This is due to the router accuracy, which declines with increasing \(L\) as the router gets less information (from the \(12-L\) ViT layers).
**Number of experts \(k\) per image.** We vary the number of experts \(k\) activated per image. We compare against dense baselines that use an MLP with hidden dimension scaled by \(k\) to match the MoE's inference FLOPs. Fig. 3(d) shows that \(k=1\) and \(k=2\) perform best (relative to the dense baseline), with decreasing performance delta for larger \(k\).
**Routing strategies.** We compare our proposed semantic super-class per-image routing vs. end-to-end-learned routing (both per-image and per-token) and a baseline with random super-classes (for \(k\)=2). Fig. 3(e) shows that our method (Fig. 2(c)) is better, except for learned per-token routing (as in the regular V-MoE [13], Fig. 2(b)), which however needs to activate many more experts and thus model parameters for each input image (up to 11.05M, vs. 6.31M for ours).
## 4 Conclusions and future work
We showed that sparse MoEs can improve the performance vs. efficiency trade-off compared to dense ViTs, in an attempt to make ViTs more amenable to resource-constrained applications. In the future, we aim to apply our MoE design to models that are more mobile-friendly than ViTs, e.g., light-weight CNNs such as MobileNets [5, 6, 15] or ViT-CNN hybrids [1, 18, 11]. We also aim to consider other vision tasks, e.g., object detection. Finally, we aim to get actual on-device latency measurements for all models.
Figure 3: **Empirical results. (a) Our Mobile V-MoEs outperform the respective dense ViTs across model scales. Model names (e.g. 12\(\times\)192) refer to the number of layers (12) and the embedding size (192). (b-e) Ablation studies using DeiT-Ti/16 [17], with \(k=1\), \(E=10\), \(L=2\) by default. Best performance vs. efficiency trade-off is achieved with (b) \(E=10\) experts total, (c) \(L=2\) MoE layers (out of 12 layers total), (d) \(k=1\) or \(k=2\) experts activated per image, (e) our semantic super-class routing; the settings used in (a) are bolded.** |
2309.04340 | Identifying Single-Input Linear System Dynamics from Reachable Sets | This paper is concerned with identifying linear system dynamics without the
knowledge of individual system trajectories, but from the knowledge of the
system's reachable sets observed at different times. Motivated by a scenario
where the reachable sets are known from partially transparent manufacturer
specifications or observations of the collective behavior of adversarial
agents, we aim to utilize such sets to determine the unknown system's dynamics.
This paper has two contributions. Firstly, we show that the sequence of the
system's reachable sets can be used to uniquely determine the system's dynamics
for asymmetric input sets under some generic assumptions, regardless of the
system's dimensions. We also prove the same property holds up to a sign change
for two-dimensional systems where the input set is symmetric around zero.
Secondly, we present an algorithm to determine these dynamics. We apply and
verify the developed theory and algorithms on an unknown band-pass filter
circuit solely provided the unknown system's reachable sets over a finite
observation period. | Taha Shafa, Roy Dong, Melkior Ornik | 2023-09-08T14:11:46Z | http://arxiv.org/abs/2309.04340v1 | # Identifying Single-Input Linear System Dynamics from Reachable Sets
###### Abstract
This paper is concerned with identifying linear system dynamics without the knowledge of individual system trajectories, but from the knowledge of the system's reachable sets observed at different times. Motivated by a scenario where the reachable sets are known from partially transparent manufacturer specifications or observations of the collective behavior of adversarial agents, we aim to utilize such sets to determine the unknown system's dynamics. This paper has two contributions. Firstly, we show that the sequence of the system's reachable sets can be used to uniquely determine the system's dynamics for asymmetric input sets under some generic assumptions, regardless of the system's dimensions. We also prove the same property holds up to a sign change for two-dimensional systems where the input set is symmetric around zero. Secondly, we present an algorithm to determine these dynamics. We apply and verify the developed theory and algorithms on an unknown band-pass filter circuit solely provided the unknown system's reachable sets over a finite observation period.
## I Introduction
This paper aims to determine whether it is possible to use a control system's reachable sets obtained at different time instances to calculate the system's dynamics. In certain instances, we may be able to determine an approximation of a system's reachable sets over a finite observation period. The purpose of this paper is to show that such information can be utilized to arrive at a dynamic model for an unknown system. Practical applications may include system identification of high-density drone and missile swarms [1, 2] where the reachable set can be found by observing multiple agents collectively, but without the capability of distinguishing them. Other applications include predicting macro-level population behaviors, e.g., determining how crowd behavior changes under social or economic events like the introduction of a new population or changes in the stock market [3]. We may also be able to model internal body functions on the cellular level [4, 5], namely understanding how cells change their identity and behavior in living systems.
We must first show that model identification using reachable sets will uniquely determine an unknown system's true dynamics. After uniqueness is proven, we develop a method to identify a linear model of an unknown system's behavior using its reachable sets. Previous research in system identification presents the most closely related contributions to the method presented in this paper. However, previous work on system identification classically relies on frequency response techniques induced by randomized actuator inputs [6, 7]. More sophisticated system identification techniques involve neural networks [8]. Single-layer and multi-layer neural networks have also been applied with the use of parameter estimation algorithms using a single hidden layer [9] and \(H_{\infty}\) control-induced excitations for robust identification of system nonlinearities [10]. More recent work involves using recurrent neural networks [11, 12] with Long Short-Term Memory Units (LSTM) and fractional order neural networks (FONN) [13, 14] to identify and control dynamic systems. These methods, however, cannot be used unless one has access to a system's actuators or individual trajectories. The significant difference of our novel method is that it does not require control of any actuators to model an unknown system nor observations of individual trajectories.
On a high level, the problem in this paper involves identifying the behaviors or capabilities of an observed system under limited information. While there exist other methods for adversarial behavior recognition, those works are focused on determining adversarial agent goals by matching actions of an agent against a plan library [15, 16, 17]. More recent work [18, 19] proposes using evolving fuzzy systems and artificial intelligence to adaptively predict agent behavior. In contrast, our method is starkly different since it is not primarily concerned with predicting adversarial behavior, but determining all possible actions of an adversary within a time horizon. Thus, instead of using a library of finite predetermined adversarial actions, our method uses reachable sets to produce a dynamic model of an unknown system.
The outline of this paper is as follows: in Section II, we discuss the problem statement, namely posing the question of whether linear dynamics can be uniquely recovered given an unknown system's sequence of reachable sets and how to recover said dynamics. In Section III, we address the question of whether the system dynamics are uniquely determined by the system's reachable sets. We show that under generic assumptions, the system dynamics are indeed unique under asymmetric input sets. For unknown systems with input sets symmetric around zero, uniqueness modulo a sign has been proved in the two-dimensional case; we conjecture the same holds for higher dimensions. In Section IV, we propose a procedure using knowledge of the reachable sets to calculate the system dynamics. In Section V, we illustrate by example how to implement this procedure to identify the models of an unknown band-pass filter circuit and an additional dynamical system with a symmetric input set.
### _Notation_
We denote the set of all \(n\times m\) real and complex matrices by \(\mathbb{R}^{n\times m}\) and \(\mathbb{C}^{n\times m}\) respectively; for \(M\in\mathbb{R}^{n\times m}\), we let \(M^{T}\in\mathbb{R}^{m\times n}\) denote its transpose. Vectors \(e_{1},\ldots,e_{n}\) will denote the canonical basis vectors in \(\mathbb{R}^{n}\). We let \(\mathbb{N}\) denote the set of all natural numbers, \(\mathbb{Z}_{\geq 0}\) denote the set of non-negative integers, and \(GL(n)\) denote the set of invertible square matrices of dimension \(n\in\mathbb{N}\). Let \(\mathcal{S}\) be a set of points in \(\mathbb{R}^{n}\). Then \(\mathrm{Conv}(\mathcal{S})\) denotes the convex hull of \(\mathcal{S}\). Notation \(B\mathcal{X}\) where \(B\in\mathbb{R}^{n\times m}\) and \(\mathcal{X}\subset\mathbb{R}^{m}\) denotes the set \(B\mathcal{X}=\{Bx\ |\ x\in\mathcal{X}\}\). Given two sets \(\mathcal{A}\), \(\mathcal{B}\in\mathbb{R}^{n}\), we denote \(\mathcal{A}\oplus\mathcal{B}=\{a+b\ |\ a\in\mathcal{A}\), \(b\in\mathcal{B}\}\) as their Minkowski sum. Similarly, \(\mathcal{A}\ominus\mathcal{B}=\{c\in\mathbb{R}^{n}\ |\ c\oplus B\subseteq \mathcal{A}\}\) denotes the Minkowski difference. We also define \(\mathcal{A}+b=\{a+b\ |\ a\in\mathbb{R}^{n}\}\) as the translation of \(\mathcal{A}\) by \(b\in\mathbb{R}^{n}\).
## II Problem Statement
We consider the discrete-time, single-input linear system
\[x[i+1]=Ax[i]+bu[i],\quad x[0]=0, \tag{1}\]
where, for all \(i\in\mathbb{Z}_{\geq 0}\), \(x[i]\in\mathbb{R}^{n}\), \(A\in\mathbb{R}^{n\times n}\), \(b\in\mathbb{R}^{n}\) and \(u[i]\in\mathcal{U}\subset\mathbb{R}\), with \(\mathcal{U}=[\underline{u},\overline{u}]\) such that \(\underline{u}\neq\overline{u}\). We assume \(b\neq 0\) since the system's reachable sets are trivial otherwise. We also assume \(x[0]=0\); by a shift in coordinates, the case of \(x[0]\neq 0\) is equivalent to that of an _affine_ system \(x[i+1]=Ax[i]+bu[i]+c\) with initial state at the origin. Solving the problem in this setting can likely be approached by reproducing similar calculations in subsequent sections, but we leave such an effort for future work.
Our goal is to establish whether the dynamics of (1), i.e., matrices \(A\) and \(b\), can be determined using the system's reachable sets. We now formally define said reachable sets.
**Definition 1**: _For \(i\in\mathbb{Z}_{\geq 0}\), the (forward) reachable set of system (1) at time \(i\) is_
\[\mathcal{R}(i,x[0])=\{\phi_{u}(i;x[0])\ |\ u:\mathbb{Z}_{\geq 0}\to\mathcal{U}\},\]
_where \(\phi_{u}(\cdot;x[0])\) denotes the controlled trajectory of system (1) with control signal \(u\)._
We present the problem of whether the system dynamics are uniquely determined by the system's reachable sets.
**Problem 1**: _Given a sequence of sets \(\{\mathcal{R}(i,0)\}_{i\in\mathbb{N}}\) which is generated by (1) for some \((A,b)\), determine whether \((A,b)\) can be uniquely recovered from \(\{\mathcal{R}(i,0)\}_{i\in\mathbb{N}}\)._
Notice that we explicitly assume the knowledge of all reachable sets at all times. Such an assumption might not always be realistic. We will show that we often need only the first \(n+1\) reachable sets to uniquely recover the dynamics. We leave the more general case - where only reachable sets at different time steps are available - for future work.
The first step to solving Problem 1 is to derive a simple relationship between the system matrices and \(\mathcal{R}(i,0)\). Given system (1), we naturally utilize Minkowski sums and the Minkowski difference [20] to produce such a relationship for all \(i\in\mathbb{N}\).
**Theorem 1**: _Let \(\mathcal{R}(i,0)\) be the reachable set at time \(i\) of (1). Then_
\[A^{i-1}b\mathcal{U}=\mathcal{R}(i,0)\ominus\mathcal{R}(i-1,0). \tag{2}\]
By (1) it is clear that \(\mathcal{R}(1,0)=b\mathcal{U}\). Since
\[x[i]=A^{i}x[0]+A^{i-1}bu[0]+\ldots+bu[i-1],\]
_clearly_
\[\mathcal{R}(i,0)=A^{i-1}b\mathcal{U}\oplus\ldots\oplus b\mathcal{U}\]
_and hence_
\[\mathcal{R}(i,0)=A^{i-1}b\mathcal{U}\oplus\mathcal{R}(i-1,0).\]
We recall that the Minkowski sum of two convex sets is also convex [21]. Since all sets \(A^{i-1}b\mathcal{U}\) are convex by the definition of \(\mathcal{U}\), all sets \(\mathcal{R}(i,0)\) are convex by induction. Hence, the appropriate Minkowski difference [22] can be calculated to arrive at (2).
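As a quick illustration of this construction (and not of the polygon algorithms of [20, 22, 30]), the sketch below builds the vertices of \(\mathcal{R}(i,0)\) iteratively from the Minkowski-sum recursion of Theorem 1. The example matrices, the use of SciPy's convex hull to prune non-extreme points, and the handling of the degenerate first step (a segment, kept without a hull call) are our own illustrative choices.

```python
import numpy as np
from scipy.spatial import ConvexHull

def reachable_vertices(A, b, u_min, u_max, steps):
    """Vertices of R(i,0) for i = 1..steps, from R(i,0) = A^{i-1} b U (+) R(i-1,0)."""
    verts = [np.zeros((1, len(b)))]                      # R(0,0) = {0}
    for i in range(1, steps + 1):
        segment = [np.linalg.matrix_power(A, i - 1) @ b * u for u in (u_min, u_max)]
        candidates = np.array([v + s for v in verts[-1] for s in segment])
        if len(candidates) > 3:                          # prune to extreme points (2-D example)
            candidates = candidates[ConvexHull(candidates).vertices]
        verts.append(candidates)
    return verts[1:]

A = np.array([[0.9, 0.2], [-0.3, 0.8]])
b = np.array([1.0, 0.5])
R = reachable_vertices(A, b, 0.0, 1.0, 4)
print([len(v) for v in R])        # number of extreme points of R(1,0), ..., R(4,0)
```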
Theorem 1 implies that we can obtain \(\{A^{i-1}b\mathcal{U}\}_{i\in\mathbb{N}}\) using the reachable sets \(\mathcal{R}(i,0)\). We will prove that when \(\mathcal{U}\neq[-c,c]\) with \(c\in\mathbb{R}\), matrices \(A\) and \(b\) are indeed _generically_ uniquely defined from \(\{A^{i-1}b\mathcal{U}\}_{i\in\mathbb{N}}\), that is, uniquely defined under the assumptions formally written in Theorem 2 shown to be generic in a topological sense in Lemma 1. When \(\mathcal{U}=[-c,c]\) for some \(c\in\mathbb{R}\), we can show that \((A,b)\) are not uniquely defined, but conjecture that they are unique up to a change in sign. We prove that this property holds for \(n=2\). We shall refer to solutions for cases with such a set \(\mathcal{U}\) as \(\pm\)_-unique_, which is explicitly defined in the next section.
Following Problem 1, which seeks to determine whether system dynamics are uniquely defined from reachable sets, we present the second problem, which aims to explicitly determine such dynamics.
**Problem 2**: _Develop a method to recover at least one pair \((A,b)\) which generates \(\{\mathcal{R}(i,0)\}_{i\in\mathbb{N}}\)._
Based on methods in [20] for calculating Minkowski differences, we can calculate \(\{A^{i-1}b\mathcal{U}\}_{i\in\mathbb{N}}\). We show in Section IV that the results of these Minkowski differences and knowledge of \(\mathcal{U}\) are sufficient for calculating \(\{A^{i-1}b\}_{i\in\mathbb{N}}\), which in turn can be utilized to calculate the matrix pair \((A,b)\) for controllable systems. We first tackle Problem 1.
## III Uniqueness of the Derived System Model
We wish to determine when any pair \((A,b)\) uniquely defines the dynamics of (1). It can be easily shown that the answer is generally negative. Consider an unknown system (1) where
\[A=\begin{bmatrix}0&0\\ 0&1\end{bmatrix},\quad b=\begin{bmatrix}0\\ 1\end{bmatrix} \tag{3}\]
and \(\mathcal{U}=[0,1]\). By equation (2) of Theorem 1, we see that if \(A^{\prime}=I\), then the reachable sets of (1) with matrix pairs \((A,b)\) and \((A^{\prime},b)\) are equivalent. Thus, we begin by determining sufficient conditions which guarantee whether \((A,b)\) can be uniquely recovered as stated in Problem 1. We will show uniqueness under several technical assumptions; Lemma 1 shows said assumptions are generic in a topological sense.
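A quick numerical check of this example can be sketched as follows; by Theorem 1, with \(\mathcal{U}=[0,1]\) each \(\mathcal{R}(i,0)\) is determined by the vectors \(A^{i-1}b\), and those vectors coincide for the two pairs. Note that the matrix \(A\) in (3) is singular, so it falls outside the generic class identified below.

```python
import numpy as np

A_sing = np.array([[0.0, 0.0], [0.0, 1.0]])   # matrix A from (3), singular
A_alt  = np.eye(2)                            # A' = I
b      = np.array([0.0, 1.0])

# With U = [0, 1], R(i,0) is the Minkowski sum of the segments A^{j} b U,
# so it is fully determined by the vectors A^{j} b; here those vectors coincide.
for i in range(1, 5):
    print(i, np.linalg.matrix_power(A_sing, i - 1) @ b,
             np.linalg.matrix_power(A_alt, i - 1) @ b)
```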
**Lemma 1**: _Let \(\mathcal{N}\subset\mathbb{R}^{n\times n}\) be the set of all matrices such that if \(A\in\mathcal{N}\), then \(A^{2}\) has distinct eigenvalues. Let \(b\in\mathbb{R}^{n}\backslash\{0\}\) and let \(\mathcal{O}\subset\mathbb{R}^{n\times n}\) be the set of all matrices such that, if \(A\in\mathcal{O}\) and \(\eta\in\mathbb{C}^{n}\) is any left eigenvector of \(A\), then \(b^{T}\eta\neq 0\). Then, \(GL(n)\cap\mathcal{N}\cap\mathcal{O}\) is an open and dense set._
It is a well known result that the set of all matrices with distinct eigenvalues and the set \(GL(n)\) are both open and dense [23]. Clearly, openness of the former set implies \(\mathcal{N}\) is open. To show \(\mathcal{N}\) is also dense, we would follow similar steps as part of the proof to show \(GL(n)\cap\mathcal{N}\cap\mathcal{O}\) is dense. For succinctness, we prove \(GL(n)\cap\mathcal{N}\cap\mathcal{O}\) is open and dense and leave the proof that \(\mathcal{N}\) is dense to the reader.
Openness of \(GL(n)\cap\mathcal{N}\cap\mathcal{O}\) can be trivially concluded by the continuity of eigenvectors [24], meaning that if we consider a matrix \(A(t)\) whose elements are a continuous function of \(t\), any eigenvectors \(v_{i}(t)\) and left eigenvectors \(\eta_{i}(t)\) of norm \(1\) of \(A(t)\) are continuous functions of \(t\).
We now prove denseness. In other words, we will show that for any arbitrary matrix \(A\) and any \(\epsilon>0\), there exists a matrix \(A^{\prime\prime}\in GL(n)\cap\mathcal{N}\cap\mathcal{O}\) such that \(\|A-A^{\prime\prime}\|<\epsilon\). Let \(\eta_{i}\) be the left eigenvectors of \(A\) so that \(\eta_{i}^{T}A=\lambda_{i}\eta_{i}^{T}\). By the denseness of \(GL(n)\), for any \(\delta>0\) we can find vectors \(\eta_{i}^{\prime T}\) such that \(\|\eta_{i}^{T}-\eta_{i}^{\prime T}\|<\delta\) for all \(i\) and \(\det([\eta_{1}^{\prime}\ \eta_{2}^{\prime}\ \cdots\ \eta_{n}^{\prime}])\neq 0\). By the continuity of determinants and because \(b\neq 0\), we can slightly perturb one element of \(\eta_{i}^{\prime T}\) to obtain \(\eta_{i}^{\prime\prime T}\) such that \(\det([\eta_{1}^{\prime\prime}\ \eta_{2}^{\prime\prime}\ \cdots\ \eta_{n}^{\prime\prime}])\neq 0\), \(\|\eta_{i}^{T}-\eta_{i}^{\prime\prime T}\|<\delta\) for all \(i\), and \(b^{T}\eta_{i}^{\prime\prime}\neq 0\). We now let \(\eta_{i}^{\prime\prime}\) form a basis in \(\mathbb{C}^{n}\), and define a matrix \(A^{\prime}\) such that \(\eta_{i}^{\prime\prime T}A^{\prime}=\lambda_{i}\eta_{i}^{\prime\prime T}\) and \(A^{\prime}\in\mathcal{O}\). If the perturbations above are performed in a way that ensures that perturbations of real eigenvectors remain real, and perturbations of complex conjugate vectors remain complex conjugates, matrix \(A^{\prime}\) is real [25].
Since \(\eta_{i}^{\prime\prime}\) form a basis in \(\mathbb{C}^{n}\), we can represent any vector \(x\in\mathbb{R}^{n}\) as \(x=\sum_{i=1}^{n}\beta_{i}(x)\eta_{i}^{\prime\prime T}\) where \(\beta_{i}(x)\in\mathbb{R}\). We can compute \(\beta_{i}(x)\) as a continuous function of \(x\). Recall that \(\|A\|=\max_{\|x\|=1}\|Ax\|\). We consider \(x\) such that \(\|x\|=1\). Then \(\beta_{i}(x)\) is a continuous function on a compact space and thus has a maximum. Let \(\alpha_{i}=\max\{|\beta_{i}(x)|\ |\ \|x\|=1\}\). Note that \(x^{T}A^{\prime}=\sum_{i=1}^{n}\lambda_{i}\beta_{i}(x)\eta_{i}^{\prime\prime T}\). It follows that
\[\|x^{T}A-x^{T}A^{\prime}\|\leq\sum_{i=1}^{n}\|(\beta_{i}(x)\eta_{i}^{\prime \prime T})A-\beta_{i}(x)\lambda_{i}\eta_{i}^{\prime\prime T}\|\]
\[=\sum_{i=1}^{n}\|\beta_{i}(x)(\eta_{i}^{T}A-(\eta_{i}^{T}-\eta_{i}^{\prime \prime T})A)-\beta_{i}(x)\lambda_{i}\eta_{i}^{\prime\prime T}\|\]
\[=\sum_{i=1}^{n}\|\beta_{i}(x)((\eta_{i}^{\prime\prime T}-\eta_{i}^{T})A+ \lambda_{i}(\eta_{i}^{T}-\eta_{i}^{\prime\prime T}))\|\]
\[<\sum_{i=1}^{n}(\|\alpha_{i}A\|+\|\alpha_{i}\lambda_{i}\|)\delta\]
and so if we set \(\delta=\epsilon/\left(2\sum_{i=1}^{n}(\|\alpha_{i}A\|+\|\alpha_{i}\lambda_{i}\|)\right)\), then \(\|x^{T}A-x^{T}A^{\prime}\|<\epsilon/2\).
Given \(\lambda_{1},\ldots,\lambda_{n}\), for any \(\rho>0\) we can obviously find a set \(\{\lambda_{1}^{\prime},\ldots,\lambda_{n}^{\prime}\}\) such that \(|\lambda_{i}-\lambda_{i}^{\prime}|<\rho\) for all \(i\), \(\lambda_{i}^{\prime}\neq 0\) for all \(i\), and \(\lambda_{i}=\overline{\lambda_{j}}\) implies \(\lambda_{i}^{\prime}=\overline{\lambda_{j}^{\prime}}\). Now, define \(A^{\prime\prime}\) by \(\eta_{i}^{\prime\prime T}A^{\prime\prime}=\lambda_{i}^{\prime}\eta_{i}^{\prime\prime T}\) and \(A^{\prime\prime}\in GL(n)\cap\mathcal{N}\cap\mathcal{O}\). As before, if the perturbation of eigenvalues is performed in such a way that real eigenvalues remain real and complex conjugates remain conjugate, \(A^{\prime\prime}\) is real. It follows that
\[\|x^{T}A^{\prime}-x^{T}A^{\prime\prime}\|\leq\sum_{i=1}^{n}\|\beta_{i}(x)\lambda _{i}\eta_{i}^{\prime\prime T}-\beta_{i}(x)\lambda_{i}^{\prime}\eta_{i}^{\prime \prime T}\|\]
\[=\sum_{i=1}^{n}\|\beta_{i}(x)\eta_{i}^{\prime\prime T}(\lambda_{i}-\lambda_{i}^ {\prime})\|<\sum_{i=1}^{n}\|\alpha_{i}\eta_{i}^{\prime\prime T}\|\rho.\]
If we set \(\rho=\epsilon/(2\sum_{i=1}^{n}\|\alpha_{i}\eta_{i}^{\prime\prime T}\|)\), then \(\|x^{T}A^{\prime}-x^{T}A^{\prime\prime}\|<\epsilon/2\). Finally we have \(\|x^{T}Ax-x^{T}A^{\prime\prime}\|=\|x^{T}A-x^{T}A^{\prime\prime}+x^{T}A^{\prime}-x ^{T}A^{\prime}\|\leq\|x^{T}A-x^{T}A^{\prime}\|+\|x^{T}A^{\prime}-x^{T}A^{\prime \prime}\|<\epsilon/2+\epsilon/2=\epsilon\). Since this inequality holds for all \(x\) such that \(\|x\|=1\), indeed \(\|A-A^{\prime\prime}\|<\epsilon\), and the claim is proven.
We emphasize that many well-known linear controllable systems, such as the discrete double integrator, RLC circuit, and linearized pendulum [26], contain \(A\) matrices which satisfy the conditions of Lemma 1. Also, these generic assumptions are not necessary, but sufficient to guarantee uniqueness. For example, a row perturbation of \(A\) in (3) clearly does not satisfy the generic assumptions in Lemma 1, but the reachable sets of (1) with this new matrix can be used to uniquely generate the dynamics, which implies this method can be applied to a larger set of systems. Finding such non-generic assumptions which guarantee uniqueness is a highly involved problem and remains for future work. In the proof below, we will use the assumptions in Lemma 1 to prove that the dynamics derived from reachable sets are generically unique, at least for an asymmetric input set.
**Theorem 2**: _Let \(\mathcal{U}=[c,d]\), where \(c\neq\pm d\). Let \(\eta_{i}\ \in\ \mathbb{C}^{n}\) for \(i\in\{1,\ldots,n\}\) be the left eigenvectors of \(A\). Let the sequence \(\{\mathcal{R}(j,0)\}_{j\in\mathbb{N}}\) be generated by system (1) for system matrices \((A,b)\) and \((A^{\prime},b^{\prime})\), where \(A\), \(A^{\prime}\in GL(n)\), \(A\) and \(A^{\prime}\) have \(n\) distinct eigenvalues, and \(b^{T}\eta_{i}\neq 0\) for all \(i\). Then, \((A,b)=(A^{\prime},b^{\prime})\)._
If \((A,b)\) and \((A^{\prime},b^{\prime})\) for system (1) produce an identical sequence \(\{\mathcal{R}(j,0)\}_{j\in\mathbb{N}}\), then \(\mathcal{R}(1,0)=b\mathcal{U}=b^{\prime}\mathcal{U}\), i.e., there are two options: (i) \(bc=b^{\prime}c\) and \(bd=b^{\prime}d\) or (ii) \(bc=b^{\prime}d\) and \(bd=b^{\prime}c\). If the latter option were true, then \(bcd=b^{\prime}d^{2}\) and \(bcd=b^{\prime}c^{2}\), which, since \(b^{\prime}\neq 0\), would force \(c^{2}=d^{2}\), i.e., \(c=\pm d\), contradicting the assumption on \(\mathcal{U}\). Hence option (i) holds and \(b=b^{\prime}\). Since \(b\neq 0\), there exists an invertible change of coordinates \(T\) with \(Tb=e_{1}\); writing \(\hat{A}=TAT^{-1}\) and \(\hat{A}^{\prime}=TA^{\prime}T^{-1}\), both transformed systems still generate the same reachable sets, invertibility and distinctness of eigenvalues are preserved under similarity, and the assumption \(b^{T}\eta_{i}\neq 0\) translates into
the first element of the left eigenvectors of \(\hat{A}\) being non-zero. To simplify the notation, by a standard abuse we now let \((A,e_{1})\), \((A^{\prime},e_{1})\) represent the system matrices after performing the above transformation. By the above discussion, we are then assuming that \(A\) and \(A^{\prime}\) are invertible, have distinct eigenvalues, and that \(\eta_{i1}\neq 0\) for all \(i\).
Noting that the two systems produce the same reachable sets, by (2) it follows that \(A^{k}e_{1}=A^{\prime k}e_{1}\) for all \(k\in\mathbb{N}\). By the same logic as in the first paragraph of the proof, we see that since \(c\neq-d\), then \(A^{k}ce_{1}=A^{\prime k}ce_{1}\) and \(A^{k}de_{1}=A^{\prime k}de_{1}\) is satisfied for all \(k\in\mathbb{N}\), giving us the relation
\[A^{k}e_{1}=A^{\prime k}e_{1}\quad\forall\,k\in\mathbb{Z}_{\geq 0}. \tag{4}\]
Equation (4) implies \(A^{k-1}A^{\prime}e_{1}=A^{\prime k-1}Ae_{1}\) and \(A^{\prime k-2}Ae_{1}=A^{k-1}e_{1}\) for all \(k\geq 2\). We have
\[A^{k-1}A^{\prime}e_{1}=A^{\prime k-1}Ae_{1}=A^{\prime}A^{\prime k-2}Ae_{1}=A^{ \prime}A^{k-1}e_{1}.\]
Hence, \(A^{k-1}A^{\prime}e_{1}=A^{\prime}A^{k-1}e_{1}\); since \(A^{\prime}\) is invertible,
\[A^{k}e_{1}=A^{\prime(-1)}A^{k}A^{\prime}e_{1}\quad\forall\,k\in\mathbb{Z}_{ \geq 0}. \tag{5}\]
Let \(v_{i}\) denote the right eigenvectors of \(A\) and \(v^{\prime}_{i},\,\eta^{\prime}_{i}\) denote the right and left eigenvectors of \(A^{\prime(-1)}AA^{\prime}\) respectively. Since \(A\) and \(A^{\prime(-1)}AA^{\prime}\) are similar matrices, their eigenvalues are equal [25]. Let \(A=VDV^{-1}\) and \(A^{\prime(-1)}AA^{\prime}=V^{\prime}DV^{\prime(-1)}\) where the rows of \(V^{-1}\) and \(V^{\prime-1}\) are \(\eta_{i}^{T}\) and \(\eta^{\prime T}_{i}\) respectively and the columns of \(V\) and \(V^{\prime}\) are \(v_{i}\) and \(v^{\prime}_{i}\) respectively. By our assumptions, \(\eta_{i1}\neq 0\), so we can now scale the \(\eta\)'s so that \(\eta_{i1}=1\). We then redefine \(v_{i}\) to be the newly scaled right eigenvectors such that \(\eta_{i1}=1\). Next, we write (5) in tensor notation [25] and get
\[\sum_{i}\lambda_{i}^{k}v_{i}=\sum_{i}\lambda_{i}^{k}v^{\prime}_{i}\eta^{ \prime T}_{i1}\quad\forall\,k\in\mathbb{Z}_{\geq 0}\]
which implies
\[\sum_{i}\lambda_{i}^{k}(v_{i}-v^{\prime}_{i}\eta^{\prime T}_{i1})=0\quad \forall\,k\in\mathbb{Z}_{\geq 0}. \tag{6}\]
Taking \(k\in\{0,\ldots,n-1\}\) we have a series of \(n\) equations. For the \(j\)-th element of any \(v_{i}\) and \(v^{\prime}_{i}\), we have
\[\Lambda S_{j}=\begin{bmatrix}1&\ldots&1\\ \lambda_{1}&\ldots&\lambda_{n}\\ \vdots&\vdots&\vdots\\ \lambda_{1}^{n-1}&\ldots&\lambda_{n}^{n-1}\end{bmatrix}\begin{bmatrix}v_{1j}- v^{\prime}_{1j}\eta^{\prime T}_{11}\\ v_{2j}-v^{\prime}_{2j}\eta^{\prime T}_{21}\\ \vdots\\ v_{nj}-v^{\prime}_{nj}\eta^{\prime T}_{n1}\end{bmatrix}=\begin{bmatrix}0\\ 0\\ \vdots\\ 0\end{bmatrix}\]
for any \(j\in\{1,\ldots,n\}\). Notice that \(\Lambda\in\mathbb{C}^{n\times n}\) is the square Vandermonde matrix [27]. Recall that the Vandermonde matrix is invertible if elements \(\lambda_{i}\) are distinct for all \(i\), which holds by assumption. If \(\eta^{\prime}_{i1}=0\) for any \(i\), then \(v_{i}=0\), which contradicts the assumption that \(A\) is diagonalizable. Consequently, \(\eta^{\prime}_{i1}\neq 0\) for all \(i\), so similar to the previous step, we can scale \(v^{\prime}_{i}\) and \(\eta^{\prime}_{i1}\) such that \(\eta^{\prime}_{i1}=1\) for all \(i\). It follows that \(v_{ij}=v^{\prime}_{ij}\) for all \(i,j\) since \(\Lambda\) is invertible. Therefore, \(A=A^{\prime(-1)}AA^{\prime}\).
Recall that we assumed that all eigenvalues of \(A\) are distinct. Thus, since \(A\) and \(A^{\prime}\) commute, we can conclude that \(A\) and \(A^{\prime}\) have the same eigenvectors [28]. Recall that \(A\) and \(A^{\prime}\) are both diagonalizable. If we take the eigenvalue expansion of \(A\) and \(A^{\prime}\) and multiply both on the left by \(V^{-1}\), then equation (4) implies
\[D^{k}\begin{bmatrix}\eta_{11}\\ \eta_{21}\\ \vdots\\ \eta_{n1}\end{bmatrix}=D^{\prime k}\begin{bmatrix}\eta_{11}\\ \eta_{21}\\ \vdots\\ \eta_{n1}\end{bmatrix}\quad\forall\,\,k\in\mathbb{N},\]
where \(D^{\prime}\) is the diagonal matrix with eigenvalues of \(A^{\prime}\) on the diagonal. Subtracting the right hand side from both sides reveals that (4) implies
\[(\lambda_{i}^{k}-\lambda_{i}^{\prime k})\eta_{i1}=0\quad\forall\,\,k\in \mathbb{N}.\]
By assumption, \(\eta_{i1}\neq 0\) for all \(i\), so
\[\lambda_{i}^{k}=\lambda_{i}^{\prime k}\quad\forall\,\,k\in\mathbb{N}.\]
Therefore, both \(A\) and \(A^{\prime}\) have the same eigenvectors and eigenvalues, hence \(A=A^{\prime}\).
Theorem 2 proves that given reachable sets of generic system (1), the pair \((A,b)\), i.e., the system dynamics, are uniquely defined when the set of control inputs is not symmetric around \(0\). We now want to address the degenerate case where \(\mathcal{U}=[-c,c]\). It can be easily seen that in such a case, system (1) with \((A,b)\) and \((-A,-b)\) will produce the same reachable sets. To discuss a relaxed notion of system uniqueness, we provide a formal definition of \(\pm\)-uniqueness.
**Definition 2**: _The system dynamics \((A,b)\) of (1) are \(\pm\)-unique if \((A,b)\) and \(-(A,b)\) generate the same reachable sets, but there do not exist other pairs \((A^{\prime},b^{\prime})\) which generate the same reachable sets._
We conjecture that in the case when \(\mathcal{U}\) is symmetric around \(0\) - a scenario common in many controls applications [29] - the dynamics are \(\pm\)-unique.
**Conjecture 1**: _Let \(\mathcal{U}=[-c,c]\). Let the sequence \(\{\mathcal{R}(i,0)\}_{i\in\mathbb{N}}\) be generated by \((A,b)\), where \(A^{2}\) has distinct eigenvalues and \((A,b)\) are known to satisfy the assumptions of Theorem 2. Then, \((A,b)\) is \(\pm\)-unique._
Proving the conjecture above requires extensive theoretical developments and remains for future work. As an illustration, we formally prove the conjecture to be true in the two-dimensional case.
**Theorem 3**: _Let \(n=2\). Then, Conjecture 1 is correct._
Similarly to the proof of Theorem 2, we have two options: \(bc=b^{\prime}c\) or \(bc=-b^{\prime}c\). In the former case, we reach the same result as before, namely \(b=b^{\prime}\). In the latter case, we obtain \(b=-b^{\prime}\). Altogether, we get \(b=(-1)^{p(0)}b^{\prime}\) where \(p(0)\in\{0,1\}\).
As in Theorem 2, through a coordinate transformation, we assume without loss of generality that \(b^{\prime}=e_{1}\). Then \(b\mathcal{U}=(-1)^{p(0)}b^{\prime}\mathcal{U}=(-1)^{p(0)}[-c,c]e_{1}\). Following the same steps as in the beginning of the proof in Theorem 2, with a standard abuse of notation, we let \(A\), \(A^{\prime}\) represent the system dynamics in this new basis where \(A\) and \(A^{\prime}\) satisfy our assumptions. Also, we find that if \(\mathcal{U}=[-c,c]\), then we arrive at the relation
\[A^{k}e_{1}=(-1)^{p(k)}A^{\prime k}e_{1}\quad\forall\,k\in\mathbb{Z}_{\geq 0}. \tag{7}\]
When \(k=2\), we see that regardless of \(p(1)\), \(AA^{\prime}e_{1}=(-1)^{p(2)}A^{\prime}Ae_{1}\). Using this fact along with equation (7) implies \(A^{k-1}A^{\prime}e_{1}=(-1)^{p(k)}A^{\prime k-1}Ae_{1}\) for all \(k\geq 1\) and \(A^{k-2}A^{\prime}e_{1}=(-1)^{p(k-1)}A^{\prime k-2}Ae_{1}\) for all \(k\geq 2\). We have
\[(-1)^{p(k)}A^{\prime k-1}Ae_{1}=(-1)^{p(k)}A^{\prime}A^{\prime k-2}Ae_{1}\]
\[=(-1)^{p(k)}(-1)^{p(k-1)}(-1)^{p(1)}A^{\prime}A^{k-1}e_{1}.\]
Hence, \(A^{k-1}A^{\prime}e_{1}=(-1)^{p(k)}(-1)^{p(k-1)}(-1)^{p(1)}A^{\prime}A^{k-1}e_{1}\); since \(A^{\prime}\) is invertible,
\[A^{k-1}e_{1}=\frac{A^{\prime(-1)}A^{k-1}A^{\prime}e_{1}}{(-1)^{p(k)}(-1)^{p(k- 1)}(-1)^{p(1)}}.\]
We define \(q(k)\in\{0,1\}\) by \((-1)^{q(k)}=((-1)^{p(k)}(-1)^{p(k-1)}(-1)^{p(1)})^{-1}=(-1)^{p(k)}(-1)^{p(k-1) }(-1)^{p(1)}\). We then have
\[A^{k-1}e_{1}=(-1)^{q(k)}A^{\prime(-1)}A^{k-1}A^{\prime}e_{1}\quad\forall\,k \in\mathbb{N}.\]
It holds that \(A^{k-1}\) and \(A^{\prime(-1)}A^{k-1}A^{\prime}\) have the same eigenvalues, so \(A^{k-1}\) and \(-A^{\prime(-1)}A^{k-1}A^{\prime}\) must have eigenvalues of opposite sign. That is, if \(\lambda_{i}\) and \(\lambda_{i}^{\prime}\) are the eigenvalues of \(A\) and \(\pm A^{\prime(-1)}AA^{\prime}\) respectively, then \(\lambda_{i}=\pm\lambda_{i}^{\prime}\). Following the same steps as in the proof of Theorem 2 we get
\[\sum_{i}\lambda_{i}^{k-1}v_{i}=(-1)^{q(k)}\sum_{i}\lambda_{i}^{k-1}v_{i}^{ \prime}\eta_{i1}^{\prime T}\quad\forall\,k\in\mathbb{N}.\]
Subtracting the right hand side from both sides gives us
\[\sum_{i}\lambda_{i}^{k-1}(v_{i}-(-1)^{q(k)}v_{i}^{\prime}\eta_{i1}^{\prime T} )=0\quad\forall\,k\in\mathbb{N}. \tag{8}\]
We now show that if \((A,b)\in\mathbb{R}^{2\times 2}\times\mathbb{R}^{2}\), then equation (8) implies \(A=\pm A^{\prime(-1)}AA^{\prime}\). Recall that \(q(k)\in\{0,1\}\) and so \((q(1),q(2))\in\{(0,0),\,(0,1),\,(1,0),\,(1,1)\}\). When \((q(1),q(2))=(0,0)\), equation (8) is the same as equation (6) for \(k=1\) and \(k=2\). If we write these equations in matrix form as in Theorem 2 we again have the Vandermonde matrix on the left-hand side. Following the same steps as Theorem 2, we see that \(v_{i}=v_{i}^{\prime}\) for all \(i\). Since \(\lambda_{i}=\pm\lambda_{i}^{\prime}\), then \(A=\pm A^{\prime(-1)}AA^{\prime}\). Similarly, if \((q(1),q(2))=(1,1)\), we follow the same procedure to find \(A=\pm A^{\prime(-1)}AA^{\prime}\).
The most interesting cases are when \((q(1),q(2))\in\{(0,1),(1,0)\}\). Let us first consider \((q(1),q(2))=(1,0)\). Recall \(q(k)\in\{0,1\}\), so if \(q(3)=0\), then \((q(2),q(3))=(0,0)\). If \((q(k),q(k+1))=(0,0)\) for some \(k\), we then have
\[\Lambda S_{j}=\begin{bmatrix}\lambda_{1}^{k}&\lambda_{2}^{k}\\ \lambda_{1}^{k+1}&\lambda_{2}^{k+1}\end{bmatrix}\begin{bmatrix}v_{1j}-v_{1j}^ {\prime}\eta_{11}^{\prime T}\\ v_{2j}-v_{2j}^{\prime}\eta_{21}^{\prime T}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}. \tag{9}\]
We note
\[\det(\Lambda)=\lambda_{1}^{k}\lambda_{2}^{k}\begin{vmatrix}1&1\\ \lambda_{1}&\lambda_{2}\end{vmatrix}\neq 0\]
since we have two non-zero scalars multiplied by the non-zero Vandermonde determinant in the case of distinct eigenvalues. Hence, \(\Lambda\) as defined in (9) is invertible and we again conclude that \(A=\pm A^{\prime(-1)}AA^{\prime}\).
We lastly consider cases where \(q(k)\) is alternating, namely \(\{q(k)\}_{k=1}^{3}=(0,1,0)\) and \(\{q(k)\}_{k=1}^{3}=(1,0,1)\). In the former case, we have
\[\Lambda S_{j}=\begin{bmatrix}1&1\\ \lambda_{1}^{2}&\lambda_{2}^{2}\end{bmatrix}\begin{bmatrix}v_{1j}-v_{1j}^{ \prime}\eta_{11}^{\prime T}\\ v_{2j}-v_{2j}^{\prime}\eta_{21}^{\prime T}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}.\]
The generic assumption that all eigenvalues are distinct modulo a sign implies \(\Lambda\) is invertible, thus we again find \(v_{i}=v_{i}^{\prime}\) and thus \(A=\pm A^{\prime(-1)}AA^{\prime}\). By following the same steps, we arrive at the same conclusion when \(\{q(k)\}_{k=1}^{3}=(1,0,1)\).
We now have that \(A=(-1)^{q(2)}A^{\prime(-1)}AA^{\prime}\). If \(q(2)=0\), then \(A\) and \(A^{\prime}\) commute. Using assumptions of the theorem statement, we can conclude that \(A\) and \(A^{\prime}\) have the same eigenvectors [28]. If \(q(2)=1\), then \(A=-A^{\prime(-1)}AA^{\prime}\) and so \(A^{2}=A^{\prime(-1)}A^{2}A^{\prime}\). Clearly, \(A^{2}\) and \(A^{\prime}\) commute, and by the theorem statement, \(A^{2}\) has distinct eigenvalues, which again implies that \(A\) and \(A^{\prime}\) share the same eigenvectors.
We now follow the same steps as in the latter part of the proof of Theorem 2. Namely, we can diagonalize \(A\) and \(A^{\prime}\); taking the eigenvalue expansion of equation (7) and multiplying both sides on the left by the matrix of left eigenvectors gives us the series of equations
\[\lambda_{i}^{k-1}=(-1)^{q(k)}\lambda_{i}^{\prime k-1}\quad\forall\,k\in\mathbb{ N}.\]
Since \(q(2)=0\) or \(q(2)=1\), either \(\lambda_{i}=\lambda_{i}^{\prime}\) for all \(i\) or \(\lambda_{i}=-\lambda_{i}^{\prime}\) for all \(i\). Since \(A\) and \(A^{\prime}\) have the same eigenvectors and their eigenvalues agree up to a sign, we conclude that \(A=\pm A^{\prime}\).
Theorem 2 solves Problem 1 in the generic case where \(\mathcal{U}\neq[-c,c]\) while Theorem 3 proves there exists a \(\pm\)-unique solution to Problem 1 in the two-dimensional case where \(\mathcal{U}=[-c,c]\). The proof of Theorem 3 drives our intuition for Conjecture 1 in general: intuitively, adding dimensions to the system should not make it more likely that multiple generic systems can produce the same reachable sets for all time, especially considering no two such systems exist when the input set is asymmetric. Formalizing this statement is left for future work.
We remark that if the system dynamics do not satisfy the assumptions of Theorem 2 or Theorem 3, they might not be (uniquely or \(\pm\)-uniquely) recoverable. However, using a slight perturbation of the reachable sets might recover a generic approximation of the true dynamics. Doing so, however, introduces challenges on the method of perturbing these sets. We leave such a discussion for future work.
## IV Solving for the System Dynamics
We ultimately want to use reachable sets to solve for the system dynamics. Equation (2) of Theorem 1 already gives us a formula for calculating \(A^{i-1}b\mathcal{U}\) for all \(i\in\mathbb{N}\), namely
\[A^{i-1}b\mathcal{U}=\mathcal{R}(i,0)\ominus\mathcal{R}(i-1,0).\]
In Theorem 2, we proved that the answer to Problem 1 is affirmative for generic, single-input linear systems, meaning that for cases where the linear system dynamics satisfy
the generic assumptions of Lemma 1, we can uniquely determine the true dynamics from the system's reachable sets. This motivates us to devise a procedure to calculate \((A,b)\).
We will determine \((A,b)\) from reachable sets through a two step procedure. First, we calculate \(A^{i-1}b\mathcal{U}\) for \(i=\{1,\ldots,n+1\}\). In the case where \(\mathcal{U}\neq[-c,c]\), the sequence of sets \(A^{i-1}b\mathcal{U}\) can be used to calculate \((A,b)\) directly. If \(\mathcal{U}=[-c,c]\), these same sets can be utilized to compute a number of candidate dynamics \((A,b)\) which satisfy \(\mathcal{R}(i,0)\) for all \(i\). To determine which candidate solutions are correct, we compute the forward reachable sets of (1) using all candidate \((A,b)\). By Theorem 3, in the two-dimensional case, only two solutions \((A,b)\) and \((A^{\prime},b^{\prime})\) such that \((A,b)=-(A^{\prime},b^{\prime})\) will satisfy \(\mathcal{R}(i,0)\) for all \(i\).
We begin our method by first using an algorithm that takes reachable sets of (1) and solves for \(A^{i-1}b\mathcal{U}\). By equation (2), we can utilize existing methods [20, 22, 30] to compute the Minkowski difference between two polygons to calculate \(A^{i-1}b\mathcal{U}\) given \(\mathcal{R}(i,0)\) for all \(i\in\mathbb{N}\). In what follows, we adopt the method in [20]. By Lemma 1 of [20], if we let \(v^{(i)}\in\mathcal{V}\) be the vertices of \(\mathcal{R}(i-1,0)\), then the Minkowski difference \(\mathcal{R}(i,0)\ominus\mathcal{R}(i-1,0)\) may be computed by taking the intersection of the translates of the set \(\mathcal{R}(i,0)\) by the vertices \(v^{(i)}\in\mathcal{V}\) of \(\mathcal{R}(i-1,0)\):
\[\mathcal{R}(i,0)\ominus\mathcal{R}(i-1,0)=\bigcap_{v^{(i)}\in\mathcal{V}}( \mathcal{R}(i,0)-v^{(i)}). \tag{10}\]
While computing the intersection in (10) is generally computationally difficult, calculations are made significantly easier as \(A^{i-1}b\mathcal{U}\) is a line segment; see [20] for details.
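For planar reachable sets, equation (10) can be prototyped directly with off-the-shelf polygon operations. The sketch below is ours; it is not part of the CORA-based implementation of [31] or the routines of [20], it assumes the third-party `shapely` package, and in floating point the degenerate (segment-shaped) result may require small tolerances or snapping.

```python
# Minimal sketch of equation (10): R(i,0) ⊖ R(i-1,0) computed as the
# intersection of translates of R(i,0) by the (negated) vertices of R(i-1,0).
from shapely.geometry import Polygon
from shapely.affinity import translate

def minkowski_difference(R_i, R_im1):
    verts = list(R_im1.exterior.coords)[:-1]   # drop the repeated closing vertex
    result = translate(R_i, xoff=-verts[0][0], yoff=-verts[0][1])
    for vx, vy in verts[1:]:
        result = result.intersection(translate(R_i, xoff=-vx, yoff=-vy))
    return result   # a line segment when the difference equals A^{i-1} b U

# R_i and R_im1 are built as shapely Polygons from their (ordered) vertex lists.
```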
We now move to recover \(A^{i-1}b\) from \(A^{i-1}b\mathcal{U}\). We consider two cases: \(\mathcal{U}\neq[-c,c]\) and \(\mathcal{U}=[-c,c]\) for some \(c\in\mathbb{R}\). In the former case, taking the mean of the vertices of \(A^{i-1}b\mathcal{U}\) will provide \(A^{i-1}b\frac{c+d}{2}\). Multiplying this vector by \(\frac{2}{c+d}\) recovers \(A^{i-1}b\).
**Theorem 4**: _Let us assume the \(n\)-dimensional system (1) is controllable. Let \(C_{A,b}=\begin{bmatrix}b&Ab&\ldots&A^{n-1}b\end{bmatrix}\). For the single-input case, \(A=AC_{A,b}C_{A,b}^{-1}\)._
The proof of Theorem 4 is trivial, noting that \(C_{A,b}\) is full rank for controllable systems. We note that the assumption of controllability is generic [29].
In the case where \(\mathcal{U}=[-c,c]\), by multiplying the vertices of \(A^{i-1}b\mathcal{U}\) by \(c\), we can only recover \(A^{i-1}b\) up to a sign, generating two candidates for each \(i\). Substituting all possible candidates for \(A^{i-1}b\) into the columns of \(C_{A,b}\) and \(AC_{A,b}\) generates \(2^{n+1}\) candidate matrices \(A\).
To determine which candidate solutions yield the correct \(\pm\)-unique matrix pair \((A,b)\), we can plot the reachable sets of all \(2^{n+1}\) candidate solutions to solve for the desired unknown \(\pm\)-unique system dynamics. In the next section, we use the CORA toolkit [31] and adopt methods of computing the Minkowski difference detailed in [20] to numerically calculate the dynamics \((A,b)\) for an unknown band-pass filter circuit system and a two-dimensional unknown system with \(\mathcal{U}=[-1,1]\), validating the developed theory.
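Before turning to the examples, we note that the second step is a one-line computation once the vectors \(A^{i-1}b\) have been extracted. The following numpy sketch is ours (it is not the CORA-based code used in the next section) and simply packages Theorem 4; the helper name `recover_dynamics` is an assumption of this illustration.

```python
# Sketch of Theorem 4: recover (A, b) from the extracted Krylov vectors
# [b, Ab, ..., A^n b]; assumes the underlying system is controllable.
import numpy as np

def recover_dynamics(krylov):
    n = len(krylov) - 1
    C  = np.column_stack(krylov[:n])    # controllability matrix [b, ..., A^{n-1} b]
    AC = np.column_stack(krylov[1:])    # [Ab, ..., A^n b] = A C
    return AC @ np.linalg.inv(C), krylov[0]
```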
## V Numerical Examples
To validate the developed theory and demonstrate how to apply the proposed method, we first consider a scenario of reverse engineering an electric circuit from manufacturer specifications. At times, manufacturers will only release partial information about a system. For example, instead of providing a dynamic model of a manufactured part, manufacturers might convey the set of all voltages a circuit may output within a set amount of time given the set of all viable input frequencies. Such information can be interpreted as the minimum time in which a state can be reached, providing a picture of the system's reachable sets. Motivated by such an example, in this section, we provide an example of identifying the matrices \((A,b)\) of a band-pass filter circuit from its reachable sets. In a subsequent example, we identify the \(\pm\)-unique dynamics of an unknown two-dimensional system with an input set symmetric around zero. Both examples utilize the CORA toolkit [31] for set computations, namely to calculate convex hulls and Minkowski differences.
### _Band-Pass Filter Circuit_
We present the linear dynamic model of a band-pass filter circuit [32]. Let us assume \(x[0]=0\). The state-space controllable canonical representation [29] of this circuit is
\[x[j+1]=Ax[j]+bv_{c}[j] \tag{11}\] \[=\begin{bmatrix}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ -a_{0}&-a_{1}&-a_{2}&-a_{3}\end{bmatrix}x[j]+\begin{bmatrix}0\\ 0\\ 0\\ 1\end{bmatrix}v_{c}[j]\]
such that \(v_{c}[j]\in[0,1]\) for all \(j\in\mathbb{Z}_{\geq 0}\).
Assume the reachable sets \(\{\mathcal{R}(j+1,0)\}_{j=0}^{\infty}\) of the controllable, dynamical system (11) are known. From controllability, we know the form of system (11), but not the parameters \(a_{0},\,a_{1},\,a_{2},\,a_{3}\). From this information, we want to recover the true parameters: \(a_{0}=3\), \(a_{1}=2\), \(a_{2}=3\), and \(a_{3}=6\). It can be easily shown that if \(a_{0}\neq 0\), the assumptions of Theorem 2 are satisfied, so \(A\) from (11) satisfies said assumptions. Moreover, the matrix \(M\) for which \(Mb=e_{1}\) is a simple row permutation; since the assumptions of Theorem 2 are invariant under permutations, all assumptions are satisfied and the results of Theorem 2 apply when solving for the matrix pair \((A,b)\). That is, there exists a unique matrix pair which satisfies \(\{\mathcal{R}(j+1,0)\}_{j=0}^{\infty}\). Since the system is four-dimensional, Theorem 4 shows we need only consider the sets \(\{\mathcal{R}(j+1,0)\}_{j=0}^{4}\) to calculate \((A,b)\).
Assume that \(\{\mathcal{R}(j+1,0)\}_{j=0}^{4}\) are known to equal
\[\mathcal{R}(1,0)=\mathrm{conv}\left(\begin{bmatrix}0\\ 0\\ 0\\ 0\end{bmatrix},\begin{bmatrix}0\\ 0\\ 0\\ 1\end{bmatrix}\right),\]
\[\mathcal{R}(2,0)=\mathrm{conv}\left(\begin{bmatrix}0\\ 0\\ 1\\ -5\end{bmatrix},\begin{bmatrix}0\\ 0\\ 1\\ -6\end{bmatrix},\begin{bmatrix}0\\ 0\\ 0\\ 0\end{bmatrix},\begin{bmatrix}0\\ 0\\ 0\\ 1\end{bmatrix}\right),\]
\[\mathcal{R}(3,0)=\mathrm{conv}\left(\begin{bmatrix}0\\ 0.86\\ -6.02\\ 33.00\end{bmatrix},\begin{bmatrix}0\\ -0.14\\ 0.98\\ -6.00\end{bmatrix},\begin{bmatrix}0\\ -0.14\\ -0.98\\ -5.00\end{bmatrix},\begin{bmatrix}0\\ 0.86\\ -6.02\\ 34.00\end{bmatrix}\right),\]
\[\mathcal{R}(4,0)=\mathrm{conv}\left(\begin{bmatrix}-0.15\\ -5.99\\ -5.99\\ 33.00\end{bmatrix},\begin{bmatrix}0.85\\ -5.95\\ -34.01\\ -188\end{bmatrix},\begin{bmatrix}0.85\\ -5.95\\ 34.01\\ -187\end{bmatrix},\begin{bmatrix}-0.15\\ 1.05\\ -5.99\\ 34.00\end{bmatrix}\right),\]
\[\mathcal{R}(5,0)=\mathrm{conv}\left(\begin{bmatrix}-5.93\\ 33.88\\ -188.02\\ 1035.00\end{bmatrix},\begin{bmatrix}1.07\\ -6.12\\ 33.98\\ -188.00\end{bmatrix},\begin{bmatrix}1.07\\ -6.12\\ 33.98\\ -187.00\end{bmatrix},\begin{bmatrix}-5.93\\ 33.88\\ -188.02\\ 1036.00\end{bmatrix}\right).\]
Based on Theorem 2, the knowledge of only these five sets is sufficient to reconstruct the true values of parameters \(a_{0}\), \(a_{1}\), \(a_{2}\), and \(a_{3}\).
Since \(\mathcal{R}(1,0)=b\,\mathcal{U}\) and \(\mathcal{U}=[0,1]\), \(b\) can be trivially computed to equal \(b=\begin{bmatrix}0&0&0&1\end{bmatrix}^{T}\). Next, by equation (2) of Theorem 1, \(Ab\mathcal{U}=\mathcal{R}(2,0)\ominus\mathcal{R}(1,0)\). Given \(\mathcal{U}=[0,1]\), by taking the Minkowski difference we get \(Ab=\begin{bmatrix}0&0&1&-6\end{bmatrix}^{T}\). Repeating the procedure, we have \(A^{2}b\mathcal{U}=\mathcal{R}(3,0)\ominus\mathcal{R}(2,0)\), \(A^{3}b\mathcal{U}=\mathcal{R}(4,0)\ominus\mathcal{R}(3,0)\), \(A^{4}b\mathcal{U}=\mathcal{R}(5,0)\ominus\mathcal{R}(4,0)\), and \(\mathcal{U}=[0,1]\). It follows that
\[A^{2}b=\begin{bmatrix}0\\ 1\\ -6\\ 33\end{bmatrix},\,A^{3}b=\begin{bmatrix}1\\ -6\\ 33\\ -182\end{bmatrix},\,A^{4}b=\begin{bmatrix}-6\\ 33\\ -182\\ 1002\end{bmatrix}.\]
Recall we assume the system is controllable, and thus the controllability matrix \(C_{A,b}\) is invertible. Finally, by Theorem 4,
\[A=AC_{A,b}C_{A,b}^{-1}\]
\[=\begin{bmatrix}A^{4}b&A^{3}b&A^{2}b&Ab\end{bmatrix}\begin{bmatrix}A^{3}b&A^{2 }b&Ab&b\end{bmatrix}^{-1}\]
which produces the correct matrix \(A\) accurately reconstructing the parameters \(a_{0}=3\), \(a_{1}=2\), \(a_{2}=3\), and \(a_{3}=6\).
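As a sanity check of the computation above, the following short script (ours, independent of the CORA toolkit) feeds the extracted vectors \(b,Ab,\ldots,A^{4}b\) into Theorem 4 and reproduces the parameters \(a_{0}=3\), \(a_{1}=2\), \(a_{2}=3\), \(a_{3}=6\).

```python
# Numerical check of the band-pass filter example: the extracted Krylov
# vectors reproduce the companion-form matrix with a0=3, a1=2, a2=3, a3=6.
import numpy as np

b   = np.array([0., 0., 0., 1.])
Ab  = np.array([0., 0., 1., -6.])
A2b = np.array([0., 1., -6., 33.])
A3b = np.array([1., -6., 33., -182.])
A4b = np.array([-6., 33., -182., 1002.])

C  = np.column_stack([b, Ab, A2b, A3b])
AC = np.column_stack([Ab, A2b, A3b, A4b])
A  = AC @ np.linalg.inv(C)
print(np.round(A, 6))   # last row reads [-3, -2, -3, -6], i.e. -a0, -a1, -a2, -a3
```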
### _Numerical Example with a Symmetric Input Set_
To validate Theorem 3, we present an example of a linear two-dimensional dynamical system
\[\begin{bmatrix}x_{1}[i+1]\\ x_{2}[i+1]\end{bmatrix}=\begin{bmatrix}2&1\\ 2&3\end{bmatrix}\begin{bmatrix}x_{1}[i]\\ x_{2}[i]\end{bmatrix}+\begin{bmatrix}0\\ 1\end{bmatrix}u[i] \tag{12}\]
with \(\mathcal{U}=[-1,1]\). Such a system satisfies the assumptions of Lemma 1. As in the previous example, we will show that we can reconstruct the values of system matrices in (12) from reachable sets, albeit up to a sign. Assume, thus, that we are given a sequence of reachable sets \(\{\mathcal{R}(i,0)\}_{i=1}^{4}\) as convex hulls of vertices:
\[\mathcal{R}(1,0)=\mathrm{conv}\left(\pm\begin{bmatrix}0\\ 1\end{bmatrix}\right),\]
\[\mathcal{R}(2,0)=\mathrm{conv}\left(\pm\begin{bmatrix}-1\\ -4\end{bmatrix},\pm\begin{bmatrix}1\\ 2\end{bmatrix}\right),\]
\[\mathcal{R}(3,0)=\mathrm{conv}\left(\pm\begin{bmatrix}6\\ 15\end{bmatrix},\pm\begin{bmatrix}4\\ 7\end{bmatrix},\pm\begin{bmatrix}6\\ 13\end{bmatrix}\right),\]
\[\mathcal{R}(4,0)=\mathrm{conv}\left(\pm\begin{bmatrix}27\\ 58\end{bmatrix},\pm\begin{bmatrix}15\\ 28\end{bmatrix},\pm\begin{bmatrix}25\\ 50\end{bmatrix},\pm\begin{bmatrix}27\\ 56\end{bmatrix}\right).\]
Clearly, \(b\mathcal{U}=\mathcal{R}(1,0)\). Equation (2) of Theorem 1 also shows that \(Ab\mathcal{U}=\mathcal{R}(2,0)\ominus\mathcal{R}(1,0)\), \(A^{2}b\mathcal{U}=\mathcal{R}(3,0)\ominus\mathcal{R}(2,0)\), and \(A^{3}b\mathcal{U}=\mathcal{R}(4,0)\ominus\mathcal{R}(3,0)\). Since \(\mathcal{U}=[-1,1]\), through the same calculations as in the previous example we thus obtain
\[b=\pm\begin{bmatrix}0\\ 1\end{bmatrix},\,Ab=\pm\begin{bmatrix}1\\ 3\end{bmatrix},\]
\[A^{2}b=\pm\begin{bmatrix}5\\ 11\end{bmatrix},\,A^{3}b=\pm\begin{bmatrix}21\\ 43\end{bmatrix}.\]
Let us denote \(b^{-}=\begin{bmatrix}0\\ -1\end{bmatrix}\), \(b^{+}=\begin{bmatrix}0\\ 1\end{bmatrix}\), \(Ab^{-}=\begin{bmatrix}-1\\ -3\end{bmatrix}\), etc. We now consider a set of \(2^{3}\) possible candidate pairs of \((C_{A,b},AC_{A,b})\) matrices:
\[(C_{A,b},AC_{A,b})\in\Big{\{}\Big{(}\begin{bmatrix}b^{s_{0}}&Ab^{s_{1}}\end{bmatrix},\begin{bmatrix}Ab^{s_{1}}&A^{2}b^{s_{2}}\end{bmatrix}\Big{)}:s_{0},s_{1},s_{2}\in\{+,-\}\Big{\}}. \tag{13}\]
By Theorem 4, determining all candidate matrix pairs \((A,b)\) becomes a trivial calculation using all possible pairs from (13). Doing so provides two \(\pm\)-unique candidate pairs:
\[(A,b)=\pm\left(\begin{bmatrix}2&1\\ 2&3\end{bmatrix},\begin{bmatrix}0\\ 1\end{bmatrix}\right),\,(A^{\prime},b^{\prime})=\pm\left(\begin{bmatrix}8&-1\\ 20&-3\end{bmatrix},\begin{bmatrix}0\\ -1\end{bmatrix}\right).\]
While the calculations above used only \(\mathcal{R}(1,0)\), \(\mathcal{R}(2,0)\), and \(\mathcal{R}(3,0)\), to distinguish between these two final candidates we need to employ \(\mathcal{R}(4,0)\). Fig. 1 shows the plots of the forward reachable sets for system (12) at time \(i=4\) with matrix pairs \((A,b)\) and \((A^{\prime},b^{\prime})\) on the left and right respectively.
Fig. 1(a) shows a reachable set that is identical to \(\mathcal{R}(4,0)\), while Fig. 1(b) illustrates the reachable set
\[\mathcal{R}^{\prime}(4,0)=\mathrm{conv}\left(\pm\begin{bmatrix}35\\ 82\end{bmatrix},\pm\begin{bmatrix}25\\ 60\end{bmatrix},\pm\begin{bmatrix}33\\ 74\end{bmatrix},\pm\begin{bmatrix}35\\ 80\end{bmatrix}\right),\]
which is not the same as \(\mathcal{R}(4,0)\). Therefore, we can identify the matrix pair \((A,b)\), up to a sign, as the true dynamics of the unknown linear system (12). As mentioned before, reachable sets in this case do not allow us to distinguish any further: dynamics that differ only in sign generate the same reachable sets.

Fig. 1: Reachable sets for candidate dynamics at \(i=4\).
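The disambiguation step of this example can also be checked numerically. The sketch below is ours and sidesteps the CORA set computations: since \(x[0]=0\) and \(\mathcal{U}=[-1,1]\), \(\mathcal{R}(4,0)\) is the zonotope generated by \(b,Ab,A^{2}b,A^{3}b\), so its vertices must occur among the signed sums of these generators; the candidate \((A^{\prime},b^{\prime})\) fails this test.

```python
# Discriminate the two candidate pairs by testing whether the listed vertices
# of R(4,0) occur among the signed generator sums s0*b + s1*Ab + s2*A^2b + s3*A^3b.
import numpy as np
from itertools import product

def sign_sum_points(A, b, steps=4):
    gens = [np.linalg.matrix_power(A, k) @ b for k in range(steps)]
    return {tuple(float(x) for x in np.round(sum(s * g for s, g in zip(sgn, gens)), 6))
            for sgn in product([-1.0, 1.0], repeat=steps)}

A_true = np.array([[2., 1.], [2., 3.]]);   b_true = np.array([0., 1.])
A_alt  = np.array([[8., -1.], [20., -3.]]); b_alt  = np.array([0., -1.])

R4_vertices = [(27., 58.), (15., 28.), (25., 50.), (27., 56.),
               (-27., -58.), (-15., -28.), (-25., -50.), (-27., -56.)]

def consistent(A, b):
    pts = sign_sum_points(A, b)
    return all(v in pts for v in R4_vertices)

print(consistent(A_true, b_true), consistent(A_alt, b_alt))   # True False
```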
## VI Conclusion
This paper considers the problem of determining the dynamics of an unknown discrete-time linear system using its reachable sets. The theory developed in this paper proves that for input sets that are asymmetric around the origin, the derived system dynamics are, given some technical assumptions, unique. Thus, in such cases, we can determine the true dynamics of an unknown system using the sequence of the system's reachable sets. For the case where the input set is symmetric, we prove that the derived dynamics are unique up to a factor of \(\pm 1\) for two-dimensional systems and provide a conjecture that asserts the same result holds for the \(n\)-dimensional case. We then develop a method for deriving the dynamics of a system given the sequence of the system's reachable sets using Minkowski differences and proceed to illustrate by example how the method can be applied to identify the unknown linear model of a band-pass filter. The identification method is also applied to an academic system with an input set symmetric around zero to detail how it can be adapted to identify, up to a sign, the model of a linear system.
A natural next step is to prove the stated conjecture to show \(\pm\)-uniqueness for \(n\)-dimensional systems. Also, our current technical assumptions are consistent with generic properties of matrices, but ideally we want to relax these assumptions to identify necessary conditions for uniqueness. We also want to consider cases when the state's initial conditions are non-zero, when knowledge of the system's reachable sets is available only at non-consecutive time steps, and when working with the more general framework of multi-input systems.
## Acknowledgements
We thank Jeffrey Stuart from Pacific Lutheran University for providing insights in combinatorial matrix theory that helped develop the scope of this project, namely addressing the question of uniqueness outlined in Problem 1.
|
2309.07649 | Decay estimates for one Aharonov-Bohm solenoid in a uniform magnetic
field II: wave equation | This is the second of a series of papers in which we investigate the decay
estimates for dispersive equations with Aharonov-Bohm solenoids in a uniform
magnetic field. In our first starting paper \cite{WZZ}, we have studied the
Strichartz estimates for Schr\"odinger equation with one Aharonov-Bohm solenoid
in a uniform magnetic field. The wave equation in this setting becomes more
delicate since a difficulty is raised from the square root of the eigenvalue of
the Schr\"odinger operator $H_{\alpha, B_0}$ so that we cannot directly
construct the half-wave propagator. An independent interesting result
concerning the Gaussian upper bounds of the heat kernel is proved by using two
different methods. The first one is based on establishing Davies-Gaffney
inequality in this setting and the second one is straightforward to construct
the heat kernel (which efficiently captures the magnetic effects) based on the
Schulman-Sunada formula. As byproducts, we prove optimal bounds for the heat
kernel and show the Bernstein inequality and the square function inequality for
Schr\"odinger operator with one Aharonov-Bohm solenoid in a uniform magnetic
field. | Haoran Wang, Fang Zhang, Junyong Zhang | 2023-09-14T12:15:42Z | http://arxiv.org/abs/2309.07649v1 | # Decay estimates for one Aharonov-Bohm solenoid in a uniform magnetic field II: wave equation
###### Abstract.
This is the second in a series of papers in which we investigate decay estimates for dispersive equations with Aharonov-Bohm solenoids in a uniform magnetic field. In our first paper [36], we studied the Strichartz estimates for the Schrodinger equation with one Aharonov-Bohm solenoid in a uniform magnetic field. The wave equation in this setting is more delicate, since a difficulty arises from the square root of the eigenvalues of the Schrodinger operator \(H_{\alpha,B_{0}}\), so that we cannot directly construct the half-wave propagator. An interesting independent result concerning Gaussian upper bounds for the heat kernel is proved by two different methods. The first is based on establishing a Davies-Gaffney inequality in this setting, and the second constructs the heat kernel directly (efficiently capturing the magnetic effects) via the Schulman-Sunada formula. As byproducts, we prove optimal bounds for the heat kernel and establish the Bernstein inequality and the square function inequality for the Schrodinger operator with one Aharonov-Bohm solenoid in a uniform magnetic field.
**Key Words:** Strichartz estimates, Davies-Gaffney inequality, wave equation, Aharonov-Bohm solenoids, uniform magnetic field. **AMS Classification:** 42B37, 35Q40.
## 1. Introduction
In this paper, as a sequence of recent papers [19, 21, 36], we study the decay and Strichartz estimates for the wave equation on the plane pierced by one infinitesimally thin Aharonov-Bohm solenoid and subjected to a perpendicular uniform magnetic field of constant magnitude \(B_{0}\). More precisely, we study the wave equation
\[\begin{cases}\partial_{tt}u(t,x)+H_{\alpha,B_{0}}u(t,x)=0,\\ u(0,x)=u_{0}(x),\quad\partial_{t}u(0,x)=u_{1}(x),\end{cases} \tag{1.1}\]
where the magnetic Schrodinger operator
\[H_{\alpha,B_{0}}=-(\nabla+i(A_{B}(x)+A_{\rm hmf}(x)))^{2}, \tag{1.2}\]
is the same as the one considered in [36]. Here, \(A_{B}(x)\) is the Aharonov-Bohm potential (initially introduced in [3])
\[A_{B}(x)=\alpha\Big{(}-\frac{x_{2}}{|x|^{2}},\frac{x_{1}}{|x|^{2}}\Big{)}, \quad x=(x_{1},x_{2})\in\mathbb{R}^{2}\setminus\{0\}, \tag{1.3}\]
where \(\alpha\in\mathbb{R}\) represents the circulation of \(A_{B}\) around the solenoid; \(A_{\rm hmf}(x)\) is given by
\[A_{\rm hmf}(x)=\frac{B_{0}}{2}(-x_{2},x_{1}),\quad B_{0}>0, \tag{1.4}\]
which generates the background uniform magnetic field.
We stress that the model is on \(\mathbb{R}^{2}\) and the magnetic field \(B\) is given by
\[B(x):=DA-DA^{t},\quad B_{ij}=\frac{\partial A^{i}}{\partial x_{j}}-\frac{ \partial A^{j}}{\partial x_{i}},\quad i,j=1,2. \tag{1.5}\]
Hence, the generated magnetic field \(B(x)=B_{0}+\alpha\delta(x)\) is actually a superposition of the uniform field and the Aharonov-Bohm field, where \(\delta\) is the usual Dirac delta. As mentioned in [36], the Aharonov-Bohm potential that produces the singular magnetic field has the same homogeneity as \(\nabla\) (homogeneous of degree \(-1\)), so that the perturbation from the Aharonov-Bohm potential (1.3) is critical; the potential \(A_{\rm hmf}(x)\) is unbounded at infinity and the uniform magnetic field \(B(x)=B_{0}\) from (1.5) generates a trapped well. Moreover, due to the presence of the potential (1.4), the spectrum of the operator \(H_{\alpha,B_{0}}\) is pure point, and thus the dispersive behavior of the wave equation associated with \(H_{\alpha,B_{0}}\) will be distinguished from the models in [19, 21].
The Hamiltonian \(H_{\alpha,B_{0}}\) can be defined as a self-adjoint operator on \(L^{2}\), via Friedrichs' Extension Theorem (see e.g. [22, Thm. VI.2.1] and [28, X.3]), with a natural form domain, which in 2D turns out to be equivalent to
\[\mathcal{D}(H_{\alpha,B_{0}})\simeq\mathcal{H}^{1}_{\alpha,B_{0}}:=\left\{f \in L^{2}(\mathbb{R}^{2};\mathbb{C}):\int_{\mathbb{R}^{2}}\big{|}\big{(} \nabla+i(A_{B}+A_{\rm hmf})f\big{|}^{2}\ dx<+\infty\right\}.\]
We refer to [36, Section 2] for the Friedrichs' extension via quadratic forms and to [15] for more about the self-adjoint extension theory. In what follows and throughout, the operator \(H_{\alpha,B_{0}}\) should be regarded as a self-adjoint operator generated by the procedure of the Friedrichs' extension. Therefore, the half-wave propagator \(e^{it\sqrt{H_{\alpha,B_{0}}}}\) can be treated as a one-parameter group of operators on \(L^{2}(\mathbb{R}^{2})\). This allows us to study a large class of dispersive estimates, such as time-decay (perhaps local in time), Strichartz and local smoothing estimates for dispersive evolutions such as (1.1). The validity of such properties has been a central object of investigation in dispersive equations over the last decades, due to their relevance in the description of linear and nonlinear dynamics. To better frame our results, let us briefly sketch the state of the art on these problems.
Due to the significance of dispersive and Strichartz estimates in harmonic analysis and partial differential equations, there is too much literature to cite it all here; we refer to [6, 7, 8, 10, 11, 13, 14, 31] and the references therein for various dispersive equations with electromagnetic potentials in mathematics and physics. Dispersive equations with the Aharonov-Bohm potential, a physical model of diffraction, have attracted increasing attention from the mathematical community. In [17, 18], the authors studied the validity of the time decay estimates for the Schrodinger equation with the Aharonov-Bohm potential. However, due to the lack of pseudo-conformal invariance (which plays a critical role in the Schrodinger case), the arguments of [17, 18] break down for the wave equation. Very recently, Fanelli, Zheng and the last author [19] established Strichartz estimates for the wave equation by constructing the odd sine propagator. To solve open problems raised in the survey [16] on dispersive estimates for other equations (e.g. Klein-Gordon, Dirac, etc.), Gao, Yin, Zheng and the last author [21] constructed the spectral measure and then applied it to prove the time decay and Strichartz estimates for the Klein-Gordon equation. The potential models in
[17, 18, 19, 21] are all scaling-invariant and contain no perturbations that are unbounded at infinity; they correspond to the special case \(B_{0}\equiv 0\) of our model (1.2). In this paper, as in [36], we proceed to consider the wave equation in the magnetic field obtained by mixing the Aharonov-Bohm field with the uniform one.
Before stating our main results, let us introduce some preliminary notations. We define the magnetic Besov spaces as follows. Let \(\varphi\in C_{c}^{\infty}(\mathbb{R}\setminus\{0\})\) satisfy \(0\leq\varphi\leq 1,\mathrm{supp}\,\varphi\subset[1/2,1]\), and
\[\sum_{j\in\mathbb{Z}}\varphi(2^{-j}\lambda)=1,\quad\varphi_{j}( \lambda):=\varphi(2^{-j}\lambda),\,j\in\mathbb{Z},\quad\phi_{0}(\lambda):=\sum _{j\leq 0}\varphi(2^{-j}\lambda). \tag{1.6}\]
**Definition 1.1** (Magnetic Besov spaces associated with \(H_{\alpha,B_{0}}\)).: For \(s\in\mathbb{R}\) and \(1\leq p,r<\infty\), the homogeneous Besov norm \(\|\cdot\|_{\dot{\mathcal{B}}^{s}_{p,r}(\mathbb{R}^{2})}\) is defined by
\[\|f\|_{\dot{\mathcal{B}}^{s}_{p,r}(\mathbb{R}^{2})}=\Big{(}\sum _{j\in\mathbb{Z}}2^{jsr}\|\varphi_{j}(\sqrt{H_{\alpha,B_{0}}})f\|_{L^{p}( \mathbb{R}^{2})}^{r}\Big{)}^{1/r}. \tag{1.7}\]
In particular, for \(p=r=2\), we have the Sobolev norm
\[\|f\|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})}:=\|f \|_{\dot{\mathcal{B}}^{s}_{2,2}(\mathbb{R}^{2})}. \tag{1.8}\]
**Remark 1.2**.: Alternatively, the Sobolev space can be defined by
\[\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2}):=H^{-\frac{ \pi}{2}}_{\alpha,B_{0}}L^{2}(\mathbb{R}^{2}),\]
with the norm
\[\|f\|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})}:=\big{\|}H^{\frac {\pi}{2}}_{\alpha,B_{0}}f\big{\|}_{L^{2}(\mathbb{R}^{2})}. \tag{1.9}\]
By the spectral theory of operators on \(L^{2}\), the norms in (1.8) and (1.9) are equivalent; see Proposition 2.5 below.
**Definition 1.3**.: A pair \((q,p)\in[2,\infty]\times[2,\infty)\) is said to be admissible, if \((q,p)\) satisfies
\[\frac{2}{q}\leq\frac{1}{2}-\frac{1}{p}. \tag{1.10}\]
For \(s\in\mathbb{R}\), we denote \((q,p)\in\Lambda_{s}^{W}\) if \((q,p)\) is admissible and satisfies
\[\frac{1}{q}+\frac{2}{p}=1-s. \tag{1.11}\]
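For concreteness, the following small helper (ours, restricted to finite exponents; the names `admissible` and `in_wave_class` are assumptions of this illustration) tests whether a pair \((q,p)\) satisfies the admissibility condition (1.10) and lies on the line (1.11) for a given \(s\).

```python
# Test membership of (q, p) in Lambda_s^W for finite exponents.
def admissible(q, p):
    return q >= 2 and p >= 2 and 2.0 / q <= 0.5 - 1.0 / p

def in_wave_class(q, p, s, tol=1e-12):
    return admissible(q, p) and abs(1.0 / q + 2.0 / p - (1.0 - s)) < tol

# e.g. for s = 3/4 the pair (q, p) = (8, 16): 1/8 + 2/16 = 1/4 = 1 - s,
# and 2/8 = 0.25 <= 0.5 - 1/16 = 0.4375, so (8, 16) belongs to Lambda_s^W.
print(in_wave_class(8, 16, 0.75))   # True
```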
Now we state our main theorem.
**Theorem 1.4**.: _Let \(H_{\alpha,B_{0}}\) be as in (1.2) and \(t\in I:=[0,T]\) with any finite \(T\). Then there exists a constant \(C_{T}\) depending on \(T\) such that_
\[\|e^{it\sqrt{H_{\alpha,B_{0}}}}f\|_{L^{\infty}(\mathbb{R}^{2})} \leq C_{T}|t|^{-1/2}\|f\|_{\dot{\mathcal{B}}^{3/2}_{1,1}(\mathbb{R}^{2})}, \quad t\in I,\quad t\neq 0. \tag{1.12}\]
_Let \(u(t,x)\) be the solution of (1.1) with initial data \((u_{0},u_{1})\in\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})\times \dot{\mathcal{H}}^{s-1}_{\alpha,B_{0}}(\mathbb{R}^{2})\), then the Strichartz estimates_
\[\|u(t,x)\|_{L^{q}(I;L^{p}(\mathbb{R}^{2}))}\leq C_{T}\left(\|u_{0} \|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})}+\|u_{1}\|_{\dot{ \mathcal{H}}^{s-1}_{\alpha,B_{0}}(\mathbb{R}^{2})}\right) \tag{1.13}\]
_hold for \((q,p)\in\Lambda_{s}^{W}\) and \(0\leq s<1\)._
**Remark 1.5**.: The local-in-time decay estimate (1.12) is quite different from the Schrodinger counterpart (see [36, Theorem 1.1])
\[\big{\|}e^{itH_{\alpha,B_{0}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\leq C|\sin(tB_{0})|^{-1}\big{\|}f\big{\|}_{L^{1}(\mathbb{R}^{2})},\quad t\neq\frac{k\pi}{B_{0}},\,k\in\mathbb{Z},\]
which is similar to the case of the harmonic oscillator (see Koch and Tataru [24]). The period \(\pi/B_{0}\) is essentially the Larmor period. However, for the wave equation, provided that the data \(f=\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\) is localized at frequency scale \(2^{j}\) with \(j\in\mathbb{Z}\), we can prove (see (5.11) below)
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}}) e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\] \[\lesssim 2^{2j}\big{(}1+2^{j}t\big{)}^{-N}\|\varphi(2^{-j}\sqrt{H_ {\alpha,B_{0}}})f\|_{L^{1}(\mathbb{R}^{2})},\quad\text{for}\quad 2^{j}t\lesssim 1\]
and (see (5.9) below)
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\] \[\lesssim 2^{2j}\big{(}1+2^{j}t\big{)}^{-\frac{1}{2}}\|\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\|_{L^{1}(\mathbb{R}^{2})},\quad\text{for}\quad 2^{j}t\gtrsim 1,\quad 2^{-j}|t|\leq\frac{\pi}{8B_{0}}.\]
The decay estimates for waves thus depend on the frequency. The Strichartz estimates (1.13) are still local in time, but the endpoint \(T\) of the time interval can go beyond \(\frac{\pi}{2B_{0}}\), which is the upper bound on \(T\) for the Schrodinger Strichartz estimates. Since the unbounded potential creates a trapped well, the Strichartz estimates cannot be global in time (compare with the Strichartz estimates for dispersive equations on the sphere or the torus), but they still capture the regularity gain in space-time integrability near \(t=0\).
Now let us figure out some points in our proof.
* As mentioned above, for the Schrodinger equation considered in [36], the explicit eigenvalues and eigenfunctions of the operator \(H_{\alpha,B_{0}}\) are the key ingredients. In particular, the eigenvalues are given by \[\lambda_{k,m}=(2m+1+|k+\alpha|)B_{0}+(k+\alpha)B_{0},\quad m,\,k\in\mathbb{Z},\,m\geq 0,\] see (2.1) below. One feature of \(\lambda_{k,m}\) is that \(k\) and \(m\) can be separated in the series convergence argument. However, for the half-wave propagator, this feature breaks down for the square root of \(\lambda_{k,m}\). Therefore, we cannot directly construct the wave propagator by following the argument of [19, 36].
* Because the uniform magnetic field creates a trapped well, the spectral measure involves a factor \(\sin(tB_{0})\) (which provides short-time but not long-time decay). This leads to the failure of the spectral measure argument in [21].
* To avoid constructing the spectral measure, we turn to the Bernstein inequality to deal with the low frequencies. For the high frequencies, we use the classical subordination formula \[e^{-y\sqrt{H_{\alpha,B_{0}}}}=\frac{y}{2\sqrt{\pi}}\int_{0}^{\infty}e^{-sH_{\alpha,B_{0}}}e^{-\frac{y^{2}}{4s}}s^{-\frac{3}{2}}ds,\quad y>0,\] which provides a bridge between the Schrodinger propagator and the half-wave propagator. This idea originates from [27] and [12]; a scalar numerical check of this identity is sketched after this list.
The dispersive estimates proved in [36] are then used to address the high frequency of the waves.
* The Littlewood-Paley theory (including Bernstein inequality and the square function inequality) associated with the Schrodinger operator \(H_{\alpha,B_{0}}\) are proved by establishing the Gaussian upper bounds for the heat kernel.
* The heat kernel estimates for magnetic Schrodinger operators have their own interest; we provide two methods to study the heat kernel. Unfortunately, since \(A(x)=A_{\mathrm{hmf}}(x)+A_{B}(x)\notin L^{2}_{\mathrm{loc}}(\mathbb{R}^{2})\), Simon's diamagnetic pointwise inequality (see e.g. [29, Theorem B.13.2], [4]) cannot be directly used. Even though we cannot recover all the magnetic effects to prove the optimal heat kernel estimates, we can prove \[\left|e^{-tH_{\alpha,B_{0}}}(x,y)\right|\lesssim\frac{1}{t}e^{-\frac{|x-y|^{2}}{Ct}},\] which is enough for proving the Bernstein inequality and the square function inequality. We first prove the on-diagonal estimates and then extend them to off-diagonal estimates by establishing the Davies-Gaffney inequality. The key point is to apply the arguments of [9] and [20] to the magnetic operator \(H_{\alpha,B_{0}}\). To recover more magnetic effects, we use the Schulman-Sunada formula from [34, 35] to construct the heat kernel and prove \[\left|e^{-tH_{\alpha,B_{0}}}(x,y)\right|\leq C\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}|x-y|^{2}}{4\sinh(tB_{0})}},\] which is better than the previous one. For more discussion on the heat kernel estimates, we refer to the remarks in Section 3.
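As referenced in the bullet list above, the subordination formula can be checked on scalars: for a single eigenvalue \(\lambda>0\) it reduces to \(e^{-y\sqrt{\lambda}}=\frac{y}{2\sqrt{\pi}}\int_{0}^{\infty}e^{-s\lambda}e^{-y^{2}/(4s)}s^{-3/2}\,ds\). The following quadrature check is ours, with sample values of \(\lambda\) and \(y\).

```python
# Scalar check of the subordination formula via numerical quadrature.
import numpy as np
from scipy.integrate import quad

lam, y = 3.0, 1.7
integrand = lambda s: np.exp(-s * lam - y**2 / (4.0 * s)) * s**(-1.5)
val, _ = quad(integrand, 0.0, np.inf)
print(np.isclose(y / (2.0 * np.sqrt(np.pi)) * val, np.exp(-y * np.sqrt(lam))))
```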
The paper is organized as follows. In Section 2, as a preliminary step, we briefly recall the self-adjoint extension and the spectrum of the operator \(H_{\alpha,B_{0}}\), and prove the equivalence between Sobolev norm and a special Besov norm. In Section 3, we construct the heat kernel and prove the Gaussian upper bounds. In Section 4, we prove the Bernstein inequalities and the square function inequality by using the heat kernel estimates. Finally, in Section 5 and Section 6, we prove the dispersive estimate (1.12) and the Strichartz estimate (1.13) in Theorem 1.4 respectively.
**Acknowledgments:** The authors thank L. Fanelli, P. Šťovíček and P. D'Ancona for helpful discussions. This work is supported by National Natural Science Foundation of China (12171031, 11901041, 11831004).
## 2. preliminaries
In this section, following the preliminary section of [36], we first recall two known results about the Friedrichs self-adjoint extension of the operator \(H_{\alpha,B_{0}}\) and its spectrum. Next, we use a spectral argument to prove the equivalence between the Sobolev norm and a special Besov norm.
### Quadratic form and self-adjoint extension
Define the space \(\mathcal{H}^{1}_{\alpha,B_{0}}(\mathbb{R}^{2})\) as the completion of \(\mathcal{C}^{\infty}_{c}(\mathbb{R}^{2}\setminus\{0\};\mathbb{C})\) with respect to the norm
\[\|f\|_{\mathcal{H}^{1}_{\alpha,B_{0}}(\mathbb{R}^{2})}=\Big{(}\int_{\mathbb{R }^{2}}|\nabla_{\alpha,B_{0}}f(x)|^{2}dx\Big{)}^{\frac{1}{2}}\]
where
\[\nabla_{\alpha,B_{0}}f(x)=\nabla f+i(A_{B}+A_{\rm hmf})f.\]
The quadratic form \(Q_{\alpha,B_{0}}\) associated with \(H_{\alpha,B_{0}}\) is defined by
\[Q_{\alpha,B_{0}}:\qquad\mathcal{H}^{1}_{\alpha,B_{0}}\to\mathbb{R}\] \[Q_{\alpha,B_{0}}(f)=\int_{\mathbb{R}^{2}}|\nabla_{\alpha,B_{0}}f (x)|^{2}dx.\]
Then the quadratic form \(Q_{\alpha,B_{0}}\) is positive definite, which implies that the operator \(H_{\alpha,B_{0}}\) is symmetric and semi-bounded from below, and thus admits a self-adjoint extension (the Friedrichs extension) \(H^{F}_{\alpha,B_{0}}\) with the natural form domain
\[\mathcal{D}=\Big{\{}f\in\mathcal{H}^{1}_{\alpha,B_{0}}(\mathbb{R}^{2}):H^{F}_ {\alpha,B_{0}}f\in L^{2}(\mathbb{R}^{2})\Big{\}}\]
Even though the operator \(H_{\alpha,B_{0}}\) has many other self-adjoint extensions by the von Neumann extension theory (see [15]), throughout this paper we use the Friedrichs extension and simply write \(H_{\alpha,B_{0}}\) for \(H^{F}_{\alpha,B_{0}}\).
### The spectrum of the operator \(H_{\alpha,B_{0}}\)
In this subsection, we exhibit the eigenvalues and eigenfunctions of the Schrodinger operator
\[H_{\alpha,B_{0}}=-(\nabla+i(A_{B}(x)+A_{\rm hmf}(x)))^{2},\]
where the magnetic vector potentials are in (1.3) and (1.4).
**Proposition 2.1** (The spectrum for \(H_{\alpha,B_{0}}\)).: _Let \(H_{\alpha,B_{0}}\) be the self-adjoint Schrodinger operator in (1.2). Then the eigenvalues of \(H_{\alpha,B_{0}}\) are given by_
\[\lambda_{k,m}=(2m+1+|k+\alpha|)B_{0}+(k+\alpha)B_{0},\quad m,\,k\in\mathbb{Z}, \,m\geq 0, \tag{2.1}\]
_with (finite) multiplicity_
\[\#\Bigg{\{}j\in\mathbb{Z}:\frac{\lambda_{k,m}-(j+\alpha)B_{0}}{2B_{0}}-\frac {|j+\alpha|+1}{2}\in\mathbb{N}\Bigg{\}}.\]
_Furthermore, let \(\theta=\frac{x}{|x|}\), the corresponding eigenfunction is given by_
\[V_{k,m}(x)=|x|^{|k+\alpha|}e^{-\frac{B_{0}|x|^{2}}{4}}\,P_{k,m}\Bigg{(}\frac{ B_{0}|x|^{2}}{2}\Bigg{)}e^{ik\theta} \tag{2.2}\]
_where \(P_{k,m}\) is the polynomial of degree \(m\) given by_
\[P_{k,m}(r)=\sum_{n=0}^{m}\frac{(-m)_{n}}{(1+|k+\alpha|)_{n}}\frac{r^{n}}{n!}.\]
_with \((a)_{n}\) (\(a\in\mathbb{R}\)) the Pochhammer's symbol_
\[(a)_{n}=\begin{cases}1,&n=0;\\ a(a+1)\cdots(a+n-1),&n=1,2,\cdots\end{cases}\]
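As a quick illustration of Proposition 2.1, the following script (ours, with sample values of \(B_{0}\) and \(\alpha\) not taken from the paper) tabulates the eigenvalues \(\lambda_{k,m}\) over a finite window of indices and counts coincidences; the reported counts are of course only relative to the chosen truncation.

```python
# Tabulate lambda_{k,m} over a finite index window and count coincidences.
from fractions import Fraction

B0, alpha = 1, Fraction(1, 3)
K, M = 30, 30                      # truncation of the index ranges

def lam(k, m):
    return (2*m + 1 + abs(k + alpha)) * B0 + (k + alpha) * B0

counts = {}
for k in range(-K, K + 1):
    for m in range(M + 1):
        counts[lam(k, m)] = counts.get(lam(k, m), 0) + 1

smallest = sorted(counts)[:3]
print([(str(ev), counts[ev]) for ev in smallest])   # window-relative multiplicities
```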
**Remark 2.2**.: One can verify that the orthogonality holds
\[\int_{\mathbb{R}^{2}}V_{k_{1},m_{1}}(x)V_{k_{2},m_{2}}(x)\,dx=0,\quad\text{ if}\quad(k_{1},m_{1})\neq(k_{2},m_{2}).\]
**Remark 2.3**.: Let \(L^{\alpha}_{m}(t)\) be the generalized Laguerre polynomials
\[L^{\alpha}_{m}(t)=\sum_{n=0}^{m}(-1)^{n}\Bigg{(}\begin{array}{c}m+\alpha\\ m-n\end{array}\Bigg{)}\frac{t^{n}}{n!},\]
then one has the well known orthogonality relation
\[\int_{0}^{\infty}x^{\alpha}e^{-x}L^{\alpha}_{m}(x)L^{\alpha}_{n}(x)\,dx=\frac{ \Gamma(n+\alpha+1)}{n!}\delta_{n,m},\]
where \(\delta_{n,m}\) is the Kronecker delta. Let \(\tilde{r}=\frac{B_{0}|x|^{2}}{2}\) and \(\alpha_{k}=|k+\alpha|\), then
\[P_{k,m}(\tilde{r})=\sum_{n=0}^{m}\frac{(-1)^{n}m(m-1)\cdots(m-(n-1))}{(\alpha_ {k}+1)(\alpha_{k}+2)\cdots(\alpha_{k}+n)}\frac{\tilde{r}^{n}}{n!}=\left( \begin{array}{c}m+\alpha_{k}\\ m\end{array}\right)^{-1}L^{\alpha_{k}}_{m}(\tilde{r}). \tag{2.3}\]
Therefore,
\[\|V_{k,m}(x)\|^{2}_{L^{2}(\mathbb{R}^{2})}=\pi\Big{(}\frac{2}{B_{0}}\Big{)}^{ \alpha_{k}+1}\Gamma(1+\alpha_{k})\Bigg{(}\begin{array}{c}m+\alpha_{k}\\ m\end{array}\Bigg{)}^{-1}. \tag{2.4}\]
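The identity (2.3) can be verified numerically. The following sketch is ours; it compares the Pochhammer-series definition of \(P_{k,m}\) with scipy's generalized Laguerre polynomials at sample values of \(\alpha_{k}\), \(m\), and \(r\).

```python
# Check P_{k,m}(r) = binom(m+alpha_k, m)^{-1} * L_m^{alpha_k}(r) numerically.
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre, gamma

def pochhammer(a, n):
    return 1.0 if n == 0 else np.prod([a + i for i in range(n)])

def P(alpha_k, m, r):
    return sum(pochhammer(-m, n) / pochhammer(1 + alpha_k, n) * r**n / factorial(n)
               for n in range(m + 1))

alpha_k, m = 0.3, 5
binom = gamma(m + alpha_k + 1) / (gamma(alpha_k + 1) * factorial(m))
for r in (0.5, 1.0, 3.7):
    print(np.isclose(P(alpha_k, m, r), eval_genlaguerre(m, alpha_k, r) / binom))
```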
**Remark 2.4**.: Recall the Poisson kernel formula for Laguerre polynomials [2, (6.2.25)]: for \(a,b,c,\alpha>0\)
\[\sum_{m=0}^{\infty}e^{-cm}\frac{m!}{\Gamma(m+\alpha+1)}L^{\alpha}_{m}(a)L^{\alpha}_{m}(b)\] \[=\frac{e^{\frac{\alpha c}{2}}}{(ab)^{\frac{\alpha}{2}}(1-e^{-c})}\exp\left(-\frac{(a+b)e^{-c}}{1-e^{-c}}\right)I_{\alpha}\left(\frac{2\sqrt{ab}e^{-\frac{c}{2}}}{1-e^{-c}}\right)\]
then this together with (2.3) gives
\[\sum_{m=0}^{\infty}e^{-cm}\frac{m!}{\Gamma(m+\alpha_{k}+1)}\Bigg{(}\begin{array}{c}m+\alpha_{k}\\ m\end{array}\Bigg{)}^{2}P_{k,m}(a)P_{k,m}(b) \tag{2.5}\] \[=\frac{e^{\frac{\alpha_{k}c}{2}}}{(ab)^{\frac{\alpha_{k}}{2}}(1-e^{-c})}\exp\left(-\frac{(a+b)e^{-c}}{1-e^{-c}}\right)I_{\alpha_{k}}\left(\frac{2\sqrt{ab}e^{-\frac{c}{2}}}{1-e^{-c}}\right).\]
We refer to [36] for the proof.
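The Poisson kernel identity of Remark 2.4 can likewise be checked by truncating the series; the following script is ours, with sample values of \(a,b,c,\alpha\) and a truncation at \(m\leq 200\).

```python
# Truncated numerical check of the Poisson kernel (Hille-Hardy type) identity.
import numpy as np
from scipy.special import eval_genlaguerre, gammaln, iv

alpha, a, b, c = 0.3, 1.2, 0.7, 1.0
lhs = sum(np.exp(-c*m + gammaln(m + 1) - gammaln(m + alpha + 1))
          * eval_genlaguerre(m, alpha, a) * eval_genlaguerre(m, alpha, b)
          for m in range(200))
r = np.exp(-c)
rhs = (np.exp(alpha*c/2) / ((a*b)**(alpha/2) * (1 - r))
       * np.exp(-(a + b)*r/(1 - r))
       * iv(alpha, 2*np.sqrt(a*b)*np.exp(-c/2)/(1 - r)))
print(np.isclose(lhs, rhs))
```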
### The Sobolev spaces
In this subsection, we will prove the equivalence of two norms.
**Proposition 2.5** (Equivalent norms).: _Let the Sobolev norm and Besov norm be defined in (1.9) and (1.7) respectively. For \(s\in\mathbb{R}\), then there exist positive constants \(c,C\) such that_
\[c\|f\|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})}\leq\|f\|_{\dot{ \mathcal{B}}^{s}_{2,2}(\mathbb{R}^{2})}\leq C\|f\|_{\dot{\mathcal{H}}^{s}_{ \alpha,B_{0}}(\mathbb{R}^{2})}, \tag{2.6}\]
_and_
\[c\|f\|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})}\leq\|f\|_{\dot{ \mathcal{B}}^{s}_{2,2}(\mathbb{R}^{2})}\leq C\|f\|_{\dot{\mathcal{H}}^{s}_{ \alpha,B_{0}}(\mathbb{R}^{2})}. \tag{2.7}\]
Proof.: Let \(\tilde{V}_{k,m}\) be the \(L^{2}\)-normalization of \(V_{k,m}\) in (2.2); then the family \(\left\{\tilde{V}_{k,m}\right\}_{k\in\mathbb{Z},m\in\mathbb{N}}\) forms an orthonormal basis of \(L^{2}(\mathbb{R}^{2})\) consisting of eigenfunctions of \(H_{\alpha,B_{0}}\).
By the functional calculus, for any well-behaved functions \(F\) (e.g. bounded Borel measurable function) and \(f\in L^{2}\), we can write
\[F(H_{\alpha,B_{0}})f=\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}F(\lambda_{k,m})c _{k,m}\tilde{V}_{k,m}(x).\]
where
\[c_{k,m}=\int_{\mathbb{R}^{2}}f(y)\overline{\tilde{V}_{k,m}(y)}dy.\]
Then
\[\|F(H_{\alpha,B_{0}})f\|_{L^{2}(\mathbb{R}^{2})}=\Big{(}\sum_{k\in\mathbb{Z}, \atop m\in\mathbb{N}}\big{|}F(\lambda_{k,m})c_{k,m}\big{|}^{2}\Big{)}^{1/2}. \tag{2.8}\]
In particular, we have
\[\|f\|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})}=\|H^{\frac{s}{2}}_ {\alpha,B_{0}}f\|_{L^{2}(\mathbb{R}^{2})}=\Big{(}\sum_{k\in\mathbb{Z},\atop m \in\mathbb{N}}\big{|}\lambda^{\frac{s}{2}}_{k,m}c_{k,m}\big{|}^{2}\Big{)}^{1/2}.\]
Let \(\varphi\in C^{\infty}_{c}(\mathbb{R}\setminus\{0\})\) in (1.6). On the one hand, by the definition and (2.8), we have
\[\|f\|_{\dot{\mathcal{B}}^{s}_{2,2}(\mathbb{R}^{2})} =\Big{(}\sum_{j\in\mathbb{Z}}2^{2js}\|\varphi_{j}(\sqrt{H_{\alpha,B_{0}}})f\|^{2}_{L^{2}(\mathbb{R}^{2})}\Big{)}^{1/2}\] \[=\Big{(}\sum_{j\in\mathbb{Z}}\sum_{k\in\mathbb{Z},\atop m\in \mathbb{N}}2^{2js}\big{|}\varphi\Big{(}\frac{\sqrt{\lambda_{k,m}}}{2^{j}}\Big{)} c_{k,m}\big{|}^{2}\Big{)}^{1/2}\] \[\leq\Big{(}\sum_{j\in\mathbb{Z}}\sum_{k\in\mathbb{Z},\atop m\in \mathbb{N}}\lambda^{s}_{k,m}\big{|}\varphi\Big{(}\frac{\sqrt{\lambda_{k,m}}}{2 ^{j}}\Big{)}c_{k,m}\big{|}^{2}\Big{)}^{1/2}\] \[\lesssim\Big{(}\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}| \lambda^{\frac{s}{2}}_{k,m}c_{k,m}|^{2}\sum_{j\in\mathbb{Z}}\big{|}\varphi \Big{(}\frac{\sqrt{\lambda_{k,m}}}{2^{j}}\Big{)}\big{|}^{2}\Big{)}^{1/2}\] \[\lesssim\Big{(}\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}| \lambda^{\frac{s}{2}}_{k,m}c_{k,m}|^{2}\Big{)}^{1/2}=\|f\|_{\dot{\mathcal{H}}^ {s}_{\alpha,B_{0}}(\mathbb{R}^{2})}.\]
On the other hand, we have
\[\|f\|_{\dot{\mathcal{H}}^{s}_{\alpha,B_{0}}(\mathbb{R}^{2})} =\Big{(}\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}|\lambda^{\frac {s}{2}}_{k,m}c_{k,m}|^{2}\Big{)}^{1/2}\] \[=\Big{(}\sum_{k\in\mathbb{Z},\atop m\in\mathbb{N}}\Big{|}\sum_{j \in\mathbb{Z}}\varphi\Big{(}\frac{\sqrt{\lambda_{k,m}}}{2^{j}}\Big{)}\lambda ^{\frac{s}{2}}_{k,m}c_{k,m}\Big{|}^{2}\Big{)}^{1/2}\] \[\leq C\Big{(}\sum_{j\in\mathbb{Z}}\sum_{k\in\mathbb{Z},\atop m\in \mathbb{N}}2^{2js}\big{|}\varphi\Big{(}\frac{\sqrt{\lambda_{k,m}}}{2^{j}} \Big{)}c_{k,m}\big{|}^{2}\Big{)}^{1/2}\] \[\lesssim\Big{(}\sum_{j\in\mathbb{Z}}2^{2js}\|\varphi_{j}(\sqrt{H_ {\alpha,B_{0}}})f\|^{2}_{L^{2}(\mathbb{R}^{2})}\Big{)}^{1/2}=\|f\|_{\dot{ \mathcal{B}}^{s}_{2,2}(\mathbb{R}^{2})}.\]
In the above inequality, we have used the fact that, for a fixed \(\lambda\), only finitely many terms are nonzero in the summation
\[1=\sum_{j\in\mathbb{Z}}\varphi\big{(}\frac{\lambda}{2^{j}}\big{)}.\]
Altogether, we have proved (2.6). One can prove (2.7) similarly.
## 3. Heat kernel estimates
In this section, with a view towards the Littlewood-Paley theory associated with \(H_{\alpha,B_{0}}\), we study heat kernel estimates for the magnetic Schrodinger operator \(H_{\alpha,B_{0}}\). We provide two methods to study the heat kernel. In the first method, we combine the strategies of [18, 21, 19] to construct the heat kernel by using the spectral properties in Proposition 2.1. We then use the representation of the heat kernel to obtain the on-diagonal estimates. Finally, we extend the on-diagonal bounds by adding the Gaussian factor \(\exp(-d^{2}(x,y)/Ct)\) to obtain the off-diagonal Gaussian bounds. In the second method, we directly construct the heat kernel by using the Schulman-Sunada formula in [34, 35] and then optimize the established bounds.
### Method I:
More precisely, we will first prove the following result.
**Proposition 3.1**.: _Let \(H_{\alpha,B_{0}}\) be the operator in (1.2) and suppose \(x=r_{1}(\cos\theta_{1},\sin\theta_{1})\) and \(y=r_{2}(\cos\theta_{2},\sin\theta_{2})\). Let \(u(t,x)\) be the solution of the heat equation_
\[\begin{cases}\big{(}\partial_{t}+H_{\alpha,B_{0}}\big{)}u(t,x)=0,\\ u(0,x)=f(x).\end{cases} \tag{3.1}\]
_Then_
\[u(t,x)=e^{-tH_{\alpha,B_{0}}}f=\int_{\mathbb{R}^{2}}K_{H}(t;x,y)f(y)\,dy,\quad t >0,\]
_where the kernel of the heat propagator \(e^{-tH_{\alpha,B_{0}}}\) is given by_
\[K_{H}(t;x,y)=\Big{(}\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi^{2}\sinh tB_{0}}\Big{)} e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})\cosh tB_{0}}{4\sinh tB_{0}}}\sum_{k\in \mathbb{Z}}e^{ik(\theta_{1}-\theta_{2}+itB_{0})}I_{\alpha_{k}}\left(\frac{B_ {0}r_{1}r_{2}}{2\sinh tB_{0}}\right). \tag{3.2}\]
_Furthermore, there exists a constant \(C\) such that_
\[|K_{H}(t;x,y)|\leq C\frac{B_{0}e^{(1-\alpha)B_{0}t}}{\sinh tB_{0}}e^{-\frac{B _{0}(r_{1}-r_{2})^{2}}{4\tanh tB_{0}}}. \tag{3.3}\]
**Remark 3.2**.: The argument is a bit different from the proof for the Schrodinger propagator. In particular, at first glance, the factor \(e^{-tB_{0}k}\) looks problematic in the summation of the formula (3.2) when \(k\in-\mathbb{N}\), but the series converges due to the factor \(\sinh(tB_{0})\) inside the modified Bessel function.
Proof.: We construct the representation formula (3.2) of the heat flow \(e^{-tH_{\alpha,B_{0}}}\) by combining the arguments of [18] and [19, 21]. This is close to the construction of the Schrodinger flow in our previous paper [36]; however, we provide the details again to keep the paper self-contained.
Our starting point is Proposition 2.1. Let \(\tilde{V}_{k,m}\) be the \(L^{2}\)-normalization of \(V_{k,m}\) in (2.2); then the family \(\left\{\tilde{V}_{k,m}\right\}_{k\in\mathbb{Z},m\in\mathbb{N}}\) forms an orthonormal basis of \(L^{2}(\mathbb{R}^{2})\) consisting of eigenfunctions of \(H_{\alpha,B_{0}}\).
We expand the initial data \(f(x)\in L^{2}\) as
\[f(x)=\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}c_{k,m}\tilde{V}_{k,m}(x)\]
where
\[c_{k,m}=\int_{\mathbb{R}^{2}}f(x)\overline{\tilde{V}_{k,m}(x)}\,dx. \tag{3.4}\]
The solution \(u(t,x)\) of (3.1) can be written as
\[u(t,x)=\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}u_{k,m}(t)\tilde{V}_{k,m}(x), \tag{3.5}\]
where \(u_{k,m}(t)\) satisfies the ODE
\[\begin{cases}&u_{k,m}^{\prime}(t)=-\lambda_{k,m}u_{k,m}(t),\\ &u_{k,m}(0)=c_{k,m},\quad k\in\mathbb{Z},\,m\in\mathbb{N}.\end{cases}\]
Thus we obtain \(u_{k,m}(t)=c_{k,m}e^{-t\lambda_{k,m}}\). Therefore the solution (3.5) becomes
\[u(t,x)=\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}c_{k,m}e^{-t\lambda_{k,m}}\tilde{V}_{k,m}(x).\]
Plugging (3.4) into the above expression yields
\[u(t,x)=\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}e^{-t\lambda_{k,m}}\left(\int_{\mathbb{R}^{2}}f(y )\overline{\tilde{V}_{k,m}(y)}dy\right)\tilde{V}_{k,m}(x).\]
We write \(f\) in a harmonic spherical expansion
\[f(y)=\sum_{k\in\mathbb{Z}}f_{k}(r_{2})e^{ik\theta_{2}},\]
where
\[f_{k}(r_{2})=\frac{1}{2\pi}\int_{0}^{2\pi}f(r_{2},\theta_{2})e^{-ik\theta_{2} }\,d\theta_{2},\quad r_{2}=|y|, \tag{3.6}\]
we thus have
\[u(t,x) =\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}e^{-t\lambda_{k,m}}\frac{V_{k,m}(x)}{\|V_{k,m}\| _{L^{2}}^{2}}\Bigg{(}\int_{0}^{\infty}f_{k}(r_{2})e^{-\frac{B_{0}r_{2}^{2}}{ 4}}\,P_{k,m}\Big{(}\frac{B_{0}r_{2}^{2}}{2}\Big{)}r_{2}^{1+\alpha_{k}}\mathrm{ d}r_{2}\Bigg{)}\] \[=\Big{(}\frac{B_{0}}{2\pi}\Big{)}\sum_{k\in\mathbb{Z}}e^{ik\theta _{1}}\frac{B_{0}^{\alpha_{k}}e^{-t\beta_{k}}}{2^{\alpha_{k}}\Gamma(1+\alpha_{k })}\Bigg{[}\sum_{m=0}^{\infty}\left(\begin{array}{c}m+\alpha_{k}\\ m\end{array}\right)e^{-2tmB_{0}}\] \[\times\Bigg{(}\int_{0}^{\infty}f_{k}(r_{2})(r_{1}r_{2})^{\alpha_ {k}}e^{-\frac{B_{0}(r_{2}^{2}+r_{2}^{2})}{4}}P_{k,m}\left(\frac{B_{0}r_{2}^{2 }}{2}\right)P_{k,m}\left(\frac{B_{0}r_{1}^{2}}{2}\right)r_{2}\mathrm{d}r_{2} \Bigg{)}\Bigg{]},\]
where \(\alpha_{k}=|k+\alpha|\) and we use (2.1),(2.2),(2.4) and
\[\lambda_{k,m} =(2m+1+|k+\alpha|)B_{0}+(k+\alpha)B_{0}\] \[:=2mB_{0}+\beta_{k}\]
with \(\beta_{k}=(1+|k+\alpha|)B_{0}+(k+\alpha)B_{0}\geq B_{0}>0\).
Using the formula (2.5) and (3.6), we obtain
\[u(t,x) =\Big{(}\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi^{2}\sinh tB_{0}}\Big{)} \int_{0}^{\infty}\int_{0}^{2\pi}\sum_{k\in\mathbb{Z}}e^{ik(\theta_{1}-\theta_{2 })}\frac{B_{0}^{\alpha_{k}}e^{-t\beta_{k}}}{2^{\alpha_{k}}\Gamma(1+\alpha_{k}) }(r_{1}r_{2})^{\alpha_{k}}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{4}}f(r_{2}, \theta_{2})\] \[\times\frac{2^{\alpha_{k}}e^{t\alpha_{k}B_{0}}}{(B_{0}r_{1}r_{2} )^{\alpha_{k}}}\exp\left(-\frac{\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{2}e^{-2tB_{ 0}}}{1-e^{-2tB_{0}}}\right)I_{\alpha_{k}}\left(\frac{B_{0}r_{1}r_{2}e^{-tB_{0 }}}{1-e^{-2tB_{0}}}\right)r_{2}\mathrm{d}r_{2}\mathrm{d}\theta_{2}\] \[=\Big{(}\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi^{2}\sinh tB_{0}} \Big{)}\int_{0}^{\infty}\int_{0}^{2\pi}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2}) \cosh tB_{0}}{4\sinh tB_{0}}}f(r_{2},\theta_{2})\] \[\quad\times\sum_{k\in\mathbb{Z}}e^{ik(\theta_{1}-\theta_{2}+itB_{ 0})}I_{\alpha_{k}}\left(\frac{B_{0}r_{1}r_{2}}{2\sinh tB_{0}}\right)r_{2} \mathrm{d}r_{2}\mathrm{d}\theta_{2}.\]
Therefore, we obtain the heat kernel
\[K_{H}(t;x,y)=\Big{(}\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi^{2}\sinh tB_{0}}\Big{)} e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})\cosh tB_{0}}{4\sinh tB_{0}}}\sum_{k\in \mathbb{Z}}e^{ik(\theta_{1}-\theta_{2}+itB_{0})}I_{\alpha_{k}}\left(\frac{B_ {0}r_{1}r_{2}}{2\sinh tB_{0}}\right),\]
which gives (3.2).
Now we need to verify the inequality (3.3). To this end, it suffices to show
\[|K_{H}(t;x,y)|\leq\frac{B_{0}e^{-\alpha B_{0}t}}{4\pi^{2}\sinh tB_{0}}e^{- \frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{4\tanh tB_{0}}}\sum_{k\in\mathbb{Z}}e^{-kB_{ 0}t}I_{\alpha_{k}}\left(\frac{B_{0}r_{1}r_{2}}{2\sinh tB_{0}}\right). \tag{3.7}\]
Let \(z=\frac{B_{0}r_{1}r_{2}}{2\sinh tB_{0}}>0\) and notice the monotonicity of the modified Bessel function \(I_{\mu}(z)\) with respect to the order, in other words, for fixed \(z>0\),
\[I_{\mu}(z)\leq I_{\nu}(z),\quad\mu\geq\nu.\]
Recall \(\alpha_{k}=|k+\alpha|\) and \(\alpha\in(0,1)\), thus we show
\[\sum_{k\in\mathbb{Z}}e^{-kB_{0}t}I_{\alpha_{k}}\left(z\right)= \sum_{k\geq 0}e^{-kB_{0}t}I_{k+\alpha}(z)+\sum_{k\geq 1}e^{kB_{0}t}I_{k-\alpha}(z)\] \[\leq \sum_{k\geq 0}e^{-kB_{0}t}I_{k}(z)+e^{B_{0}t}\sum_{k\geq 0}e^{kB_{0}t}I_{k}(z)\] \[\leq e^{B_{0}t}\Big{(}\sum_{k\geq 0}e^{-kB_{0}t}I_{k}(z)+\sum_{k\geq 0}e^{kB_{0}t}I_{k}(z)\Big{)}\] \[\leq e^{B_{0}t}\Big{(}\sum_{k\in\mathbb{Z}}e^{kB_{0}t}I_{|k|}(z)+I_{0}(z)\Big{)}\] \[\leq Ce^{B_{0}t}(e^{z}+e^{z\cosh(B_{0}t)})\] \[\leq Ce^{B_{0}t}e^{z\cosh(B_{0}t)}\] \[\leq Ce^{B_{0}t}e^{\frac{B_{0}r_{1}r_{2}}{2\tanh tB_{0}}},\]
where we use the formula [5, Eq. (9.6.19)]
\[\sum_{k\in\mathbb{Z}}e^{kt}I_{|k|}(z)=e^{z\cosh(t)},\quad I_{0}(z)\leq Ce^{z}.\]
Combining with (3.7), we have verified (3.3).
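Both Bessel facts used in the chain above, the monotonicity in the order and the generating-function identity [5, Eq. (9.6.19)], are easy to confirm numerically; the following check is ours, with sample values of \(z\) and \(t\).

```python
# Checks: I_mu(z) <= I_nu(z) for mu >= nu, and sum_k e^{kt} I_{|k|}(z) = e^{z cosh t}.
import numpy as np
from scipy.special import iv

z, t = 2.3, 0.7
print(all(iv(mu, z) <= iv(nu, z) for mu, nu in [(1.5, 0.5), (3.0, 1.0), (2.2, 2.1)]))
series = sum(np.exp(k * t) * iv(abs(k), z) for k in range(-60, 61))
print(np.isclose(series, np.exp(z * np.cosh(t))))
```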
We next extend our result of the "on-diagonal" kernel estimate
\[|K_{H}(t;x,x)|\leq C\frac{B_{0}e^{(1-\alpha)B_{0}t}}{|\sinh tB_{0}|}\]
to the "off-diagonal". Let \(p_{t}(x,y)\) denote the heat kernel corresponding to a second-order differential elliptic or sub-elliptic operator, then the usual theory says that one can automatically improve on-diagonal bounds
\[p_{t}(x,x)\leq\frac{C}{V(x,\sqrt{t})}\]
to the typical Gaussian heat kernel upper bound
\[p_{t}(x,y)\leq\frac{C}{V(x,\sqrt{t})}\exp\Big{(}-\frac{d^{2}(x,y)}{Ct}\Big{)}\]
for all \(t>0\) and \(x,y\) ranging in the space where the operator acts, for an appropriate function \(V\).
For our specific operator \(H_{\alpha,B_{0}}\), we prove that
**Proposition 3.3**.: _Let \(K_{H}(t;x,y)\) be in Proposition 3.1, then there exists a constant \(C\) such that_
\[|K_{H}(z;x,y)|\leq C(\operatorname{Re}z)^{-1}\exp\Big{(}-\operatorname{Re} \frac{d^{2}(x,y)}{Cz}\Big{)},\]
_for all \(z\in\mathbb{C}_{+}\) and \(x,y\in\mathbb{R}^{2}\). In particular, \(z=t>0\), then_
\[|K_{H}(t;x,y)|\leq Ct^{-1}\exp\Big{(}-\frac{d^{2}(x,y)}{Ct}\Big{)},\quad t>0. \tag{3.8}\]
**Remark 3.4**.: One usual way to prove the Gaussian bounds for the magnetic Schrodinger operator is to apply the important diamagnetic inequality
\[\Big{|}\Big{(}e^{t(\nabla+iA(x))^{2}}f\Big{)}(x)\Big{|}\leq\Big{(}e^{t\Delta} |f|\Big{)}(x), \tag{3.9}\]
which relates estimates on the magnetic Schrodinger operator semigroup to estimates on the free heat semigroup. The obvious disadvantage of using (3.9) is that all the effects of the magnetic field are completely eliminated. To the best of our knowledge, (3.9) is available for \(A(x)\in L^{2}_{\operatorname{loc}}\), see [29]. Unfortunately, our magnetic potential \(A(x)=A_{B}(x)+A_{\operatorname{hmf}}(x)\notin L^{2}_{\operatorname{loc}}(\mathbb{R}^{2})\).
**Remark 3.5**.: To recover some magnetic effects, it would be tempting to prove
\[\Big{|}\Big{(}e^{-tH_{\alpha,B_{0}}}f\Big{)}(x)\Big{|}\leq\Big{(}e^{-tH_{0,B_{ 0}}}|f|\Big{)}(x), \tag{3.10}\]
or
\[\Big{|}\Big{(}e^{-tH_{\alpha,B_{0}}}f\Big{)}(x)\Big{|}\leq\Big{(}e^{-tH_{ \alpha,0}}|f|\Big{)}(x). \tag{3.11}\]
If (3.10) was available,
\[\big{|}e^{-tH_{\alpha,B_{0}}}(x,y)\big{|}\lesssim\big{|}e^{-tH_{0,B_{0}}}(x,y )\big{|}\lesssim\frac{1}{\sinh(B_{0}t)}e^{-\frac{B_{0}|x-y|^{2}}{4\tanh(B_{0}t )}}, \tag{3.12}\]
where we use the Mehler heat kernel of \(e^{-tH_{0,B_{0}}}\) (e.g. [30, P168])
\[e^{-tH_{0,B_{0}}}(x,y)=\frac{B_{0}}{4\pi\sinh(B_{0}t)}e^{-\frac{B_{0}|x-y|^{2} }{4\tanh(B_{0}t)}-\frac{iB_{0}}{2}(x_{1}y_{2}-x_{2}y_{1})}. \tag{3.13}\]
If (3.11) was available, we obtain
\[\left|e^{-tH_{\alpha,B_{0}}}(x,y)\right|\lesssim\left|e^{-tH_{\alpha,0}}(x,y) \right|\lesssim\frac{1}{t}e^{-\frac{|x-y|^{2}}{4t}}. \tag{3.14}\]
We refer to [19, Proposition 3.2] for the last Gaussian upper bound for \(e^{-tH_{\alpha,0}}(x,y)\). Note that, for \(t\geq 0\), one has
\[\sinh(t)\geq t,\quad t/\tanh(t)\geq 1,\]
hence (3.14) is weaker than (3.12). Unfortunately, as pointed out in [26], the semigroup generated by the magnetic Schrodinger operator is not Markovian, in fact not even positivity preserving, which is important in the theory of comparison of heat semigroups. So the truth of (3.10) and (3.11) is not known; we refer to [26].
**Remark 3.6**.: The Gaussian decay on the right side of (3.8) is that of the free heat kernel, which is considerably weaker than the decay of the Mehler kernel (3.12). Similarly to [26], we ask how to prove
\[|K_{H}(t;x,y)|\lesssim\frac{B_{0}e^{-\alpha B_{0}t}}{|\sinh tB_{0}|}e^{-\frac{ |x-y|^{2}}{C\tanh(B_{0}t)}},\quad t>0.\]
The truth of this estimate would reveal a robust dependence of the magnetic heat kernel on the magnetic field. In our case, we give a positive answer to this problem by proving (3.30) in the subsequent subsection.
Proof.: We prove (3.8) by using [9, Theorem 4.2]. The Theorem claims that if \((M,d,\mu,L)\) satisfies the Davies-Gaffney estimates, that is,
\[|\langle e^{-tL}f,g\rangle|\leq\|f\|_{L^{2}(U_{1})}\|g\|_{L^{2}(U_{2})}e^{- \frac{d^{2}(U_{1},U_{2})}{4t}}\]
for all \(t>0\), \(U_{i}\subset M\) with \(i=1,2\) and \(f\in L^{2}(U_{1},d\mu)\), \(g\in L^{2}(U_{2},d\mu)\) and \(d(U_{1},U_{2})=\inf\{\rho=|x-y|:x\in U_{1},y\in U_{2}\}\). If, for some \(K\) and \(D>0\),
\[e^{-tL}(x,x)\leq Kt^{-\frac{D}{2}},\qquad\forall t>0,\quad x\in M,\]
then
\[|e^{-zL}(x,y)|\leq K(\operatorname{Re}z)^{-\frac{D}{2}}\Big{(}1+\operatorname {Re}\frac{d^{2}(x,y)}{4z}\Big{)}^{\frac{D}{2}}\exp\Big{(}-\operatorname{Re} \frac{d^{2}(x,y)}{4z}\Big{)}\]
for all \(z\in\mathbb{C}_{+}\) and \(x,y\in M\).
For our model \(M=\mathbb{R}^{2}\) and \(L=H_{\alpha,B_{0}}\), we need to verify the on-diagonal estimates
\[e^{-tH_{\alpha,B_{0}}}(x,x)\leq Kt^{-1},\qquad\forall t>0,\quad x\in M, \tag{3.15}\]
and the Davies-Gaffney estimates
\[|\langle e^{-tH_{\alpha,B_{0}}}f,g\rangle|\leq\|f\|_{L^{2}(U_{1})}\|g\|_{L^{2} (U_{2})}e^{-\frac{d^{2}(U_{1},U_{2})}{4t}}. \tag{3.16}\]
If this has been done, for \(z=t>0\) and \(D=2\) and \(\epsilon>0\), then
\[|e^{-tH_{\alpha,B_{0}}}(x,y)| \leq Ct^{-1}\Big{(}1+\frac{|x-y|^{2}}{4t}\Big{)}\exp\Big{(}-\frac {|x-y|^{2}}{4t}\Big{)}\] \[\leq Ct^{-1}\exp\Big{(}-\frac{|x-y|^{2}}{(4+\epsilon)t}\Big{)}.\]
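The last step uses the elementary fact that \((1+u)e^{-u}\leq C_{\epsilon}e^{-4u/(4+\epsilon)}\) for all \(u\geq 0\), i.e. that \((1+u)e^{-\epsilon u/(4+\epsilon)}\) is bounded; a quick numerical check (ours, with a sample \(\epsilon\)) is sketched below.

```python
# Check that (1 + u) * exp(-u * eps/(4 + eps)) stays bounded for u >= 0.
import numpy as np

eps = 0.5
u = np.linspace(0.0, 200.0, 200001)
g = (1.0 + u) * np.exp(-u * eps / (4.0 + eps))
print(g.max())   # a valid choice of the constant C_eps
```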
Therefore, it suffices to verify (3.15) and (3.16). Taking \(x=y\) in (3.3) and using \(\frac{B_{0}e^{-\alpha B_{0}t}}{4\pi^{2}|\sinh tB_{0}|}\leq Ct^{-1}\), the estimate (3.15) follows. The inequality (3.16) is more delicate; it is a consequence of (3.17) below.
**Proposition 3.7** (Davies-Gaffney inequality).: _Let \(A\) and \(B\) be two disjoint measurable sets in \(\mathbb{R}^{2}\) and suppose that \(f\in L^{2}(A)\) and \(g\in L^{2}(B)\) such that \(\operatorname{supp}(f)\subset A\) and \(\operatorname{supp}(g)\subset B\). Then_
\[|\langle e^{-tH_{\alpha,B_{0}}}f,g\rangle|\leq\|f\|_{L^{2}(A)}\|g\|_{L^{2}(B)} e^{-\frac{d^{2}(A,B)}{4t}} \tag{3.17}\]
_where \(d(A,B)=\inf\{\rho=|x-y|:x\in A,y\in B\}\)._
Proof.: Let \(\rho=d(A,B)\) and define
\[A_{\rho}=\{x\in\mathbb{R}^{2}:d(x,A)<\rho\},\quad A_{\rho}^{c}=\mathbb{R}^{2} \setminus A_{\rho}\]
where \(d(x,A)=\inf\{|x-y|:y\in A\}\). Then \(B\subset A_{\rho}^{c}\), furthermore, by Cauchy-Schwartz inequality, we have
\[|\langle e^{-tH_{\alpha,B_{0}}}f,g\rangle| \leq\Big{(}\int_{B}|e^{-tH_{\alpha,B_{0}}}f|^{2}dx\Big{)}^{1/2}\| g\|_{L^{2}(B)}\] \[\leq\Big{(}\int_{A_{\rho}^{c}}|e^{-tH_{\alpha,B_{0}}}f|^{2}dx \Big{)}^{1/2}\|g\|_{L^{2}(B)}.\]
Therefore, (3.17) follows if we could prove
\[\int_{A_{\rho}^{c}}|e^{-tH_{\alpha,B_{0}}}f|^{2}dx\leq\|f\|_{L^{2}(A)}^{2}e^{- \frac{d^{2}(A,B)}{2t}}. \tag{3.18}\]
To this end, for any fixed \(s>t\) and \(x\in\mathbb{R}^{2}\) and \(\tau\in[0,s)\), we define the function
\[\xi(\tau,x):=\frac{d^{2}(x,A_{\rho}^{c})}{2(\tau-s)},\]
and set
\[J(\tau):=\int_{\mathbb{R}^{2}}\big{|}e^{-\tau H_{\alpha,B_{0}}}f\big{|}^{2}e^{ \xi(\tau,x)}\,dx. \tag{3.19}\]
**Lemma 3.8**.: _For the function defined in (3.19), we have that_
\[J(t)\leq J(0). \tag{3.20}\]
We assume (3.20) to prove (3.18) by postponing the proof for a moment. Since \(x\in A_{\rho}^{c}\), one has \(\xi(\tau,x)=0\), thus
\[\int_{A_{\rho}^{c}}|e^{-tH_{\alpha,B_{0}}}f|^{2}dx\leq\int_{\mathbb{R}^{2}} \big{|}e^{-tH_{\alpha,B_{0}}}f\big{|}^{2}e^{\xi(t,x)}\,dx=J(t). \tag{3.21}\]
For \(t=0\), since
\[e^{\xi(0,x)}\leq\begin{cases}1,&x\in A_{\rho}^{c};\\ e^{-\frac{d^{2}(x,A_{\rho}^{c})}{2s}},&x\in A,\end{cases}\]
and therefore
\[J(0)=\int_{\mathbb{R}^{2}}f(x)^{2}e^{\xi(0,x)}\,dx\leq\int_{A_{\rho}^{c}}|f(x )|^{2}\,dx+\exp\Big{(}-\frac{\rho^{2}}{2s}\Big{)}\int_{A}|f(x)|^{2}\,dx. \tag{3.22}\]
By using (3.21), (3.20) and (3.22) and taking \(s\to t+\), we obtain
\[\int_{A_{\rho}^{c}}|e^{-tH_{\alpha,B_{0}}}f|^{2}dx\leq J(t)\leq J(0)\] \[\leq\int_{A_{\rho}^{c}}|f(x)|^{2}\,dx+\exp\Big{(}-\frac{\rho^{2}}{2s}\Big{)}\int_{A}|f(x)|^{2}\,dx\] \[\leq\int_{A_{\rho}^{c}}|f(x)|^{2}\,dx+\exp\Big{(}-\frac{\rho^{2}}{2t}\Big{)}\|f\|_{L^{2}(A)}^{2}.\]
Since \(f\in L^{2}(A)\) and \(\operatorname{supp}(f)\subset A\), we have \(\int_{A_{\rho}^{c}}|f(x)|^{2}\,dx=0\), which implies (3.18).
Now it remains to prove (3.20) in Lemma 3.8.
The proof of Lemma 3.8.: We need to prove that the function \(J(\tau)\) defined in (3.19) is non-increasing in \(\tau\in[0,s)\); we closely follow the argument of the integrated maximum principle [20, Theorem 12.1]. More precisely, we show that for all \(\tau,\tau_{0}\in[0,s)\) with \(\tau>\tau_{0}\),
\[J(\tau)\leq J(\tau_{0}). \tag{3.23}\]
which shows (3.20) by taking \(\tau=t\) and \(\tau_{0}=0\). Without loss of generality, we may assume \(f\geq 0\) in (3.19). Indeed, if \(f\) changes sign, we set \(g=|e^{-\tau_{0}H_{\alpha,B_{0}}}f|\geq 0\); then
\[|e^{-\tau H_{\alpha,B_{0}}}f|=|e^{-(\tau-\tau_{0})H_{\alpha,B_{0}}}e^{-\tau_{0 }H_{\alpha,B_{0}}}f|\leq e^{-(\tau-\tau_{0})H_{\alpha,B_{0}}}g.\]
Assume that (3.23) holds for \(g\geq 0\), then
\[J(\tau) =\int_{\mathbb{R}^{2}}\big{(}e^{-\tau H_{\alpha,B_{0}}}f\big{)}^{ 2}e^{\xi(\tau,x)}\,dx\] \[\leq\int_{\mathbb{R}^{2}}\big{(}e^{-(\tau-\tau_{0})H_{\alpha,B_{0 }}}g\big{)}^{2}e^{\xi(\tau,x)}\,dx\] \[\leq\int_{\mathbb{R}^{2}}g^{2}(x)e^{\xi(\tau_{0},x)}dx\] \[=\int_{\mathbb{R}^{2}}\big{(}e^{-\tau_{0}H_{\alpha,B_{0}}}f\big{)} ^{2}e^{\xi(\tau_{0},x)}dx\] \[=J(\tau_{0}).\]
From now on, we assume \(f\geq 0\). By using [20, Theorem 5.23] (which states that \(e^{-\tau H_{\alpha,B_{0}}^{\Omega_{i}}}f\to e^{-\tau H_{\alpha,B_{0}}}f\) in \(L^{2}\) as \(i\to+\infty\) for any exhaustion \(\Omega_{0}\subset\Omega_{1}\subset\cdots\subset\Omega_{i}\subset\cdots\) of \(\mathbb{R}^{2}\)), it suffices to show that, for any relatively compact open set \(\Omega\subset\mathbb{R}^{2}\), the function
\[J_{\Omega}(\tau):=\int_{\Omega}\big{|}e^{-\tau H_{\alpha,B_{0}}^{\Omega}}f \big{|}^{2}e^{\xi(\tau,x)}\,dx\]
is non-increasing in \(\tau\in[0,s)\), where \(H_{\alpha,B_{0}}^{\Omega}\) is the Dirichlet realization of \(H_{\alpha,B_{0}}\) in \(\Omega\)
\[H_{\alpha,B_{0}}^{\Omega}=H_{\alpha,B_{0}}\big{|}_{W_{0}^{2}(\Omega)\cap D(H_{ \alpha,B_{0}})}.\]
To this end, we need to prove that the derivative of \(J_{\Omega}(\tau)\) with respect to \(\tau\) is non-positive.
By using [20, Theorem 4.9], the function \(u(t,\cdot)=e^{-tH^{\Omega}_{\alpha,B_{0}}}f\) is strongly differentiable in \(L^{2}(\Omega)\) and its strong derivative \(\frac{du}{dt}\) in \(L^{2}(\Omega)\) is given by
\[\frac{du}{dt}=-H^{\Omega}_{\alpha,B_{0}}u.\]
Then we have
\[\begin{split}\frac{dJ_{\Omega}(\tau)}{d\tau}&=\operatorname {Re}\frac{d}{d\tau}\langle u,ue^{\xi(\tau,x)}\rangle\\ &=\operatorname{Re}\langle\frac{du}{d\tau},ue^{\xi(\tau,x)} \rangle+\operatorname{Re}\langle u,\frac{d(ue^{\xi(\tau,x)})}{d\tau}\rangle \\ &=2\operatorname{Re}\langle\frac{du}{d\tau},ue^{\xi(\tau,x)} \rangle+\langle|u|^{2},\frac{d(e^{\xi(\tau,x)})}{d\tau}\rangle\\ &=2\operatorname{Re}\langle-H^{\Omega}_{\alpha,B_{0}}u,ue^{\xi( \tau,x)}\rangle+\langle|u|^{2},\frac{d(e^{\xi(\tau,x)})}{d\tau}\rangle.\end{split} \tag{3.24}\]
Since \(e^{\xi(\tau,\cdot)}\in\operatorname{Lip}_{\operatorname{loc}}(\mathbb{R}^{2})\), one has \(e^{\xi(\tau,\cdot)}\in\operatorname{Lip}(\Omega)\). The solution \(u(t,\cdot)\in W^{1}_{0}(\Omega)\), hence \(e^{\xi(\tau,\cdot)}u(t,\cdot)\in W^{1}_{0}(\Omega)\). On the one hand, recall the operator
\[H^{\Omega}_{\alpha,B_{0}}=-(\nabla+i(A_{B}(x)+A_{\operatorname{hmf}}(x)))^{2},\]
by using Green's formula, we obtain
\[2\operatorname{Re}\langle-H^{\Omega}_{\alpha,B_{0}}u,ue^{\xi( \tau,x)}\rangle=2\langle(\nabla+i(A_{B}(x)+A_{\operatorname{hmf}}(x)))^{2}u,ue ^{\xi(\tau,x)}\rangle \tag{3.25}\] \[=-2\int_{\Omega}|(\nabla+i(A_{B}(x)+A_{\operatorname{hmf}}(x))) u|^{2}e^{\xi(\tau,x)}\,dx-2\operatorname{Re}\int_{\Omega}\nabla u\cdot\nabla \xi e^{\xi(\tau,x)}\bar{u}\,dx.\]
On the other hand, we observe that
\[\frac{d(e^{\xi(\tau,x)})}{d\tau}=e^{\xi(\tau,x)}\frac{\partial\xi}{\partial\tau},\]
then
\[\begin{split}\langle|u|^{2},\frac{d(e^{\xi(\tau,x)})}{d\tau}\rangle&=\int_{\Omega}|u|^{2}e^{\xi(\tau,x)}\frac{\partial\xi(\tau,x)}{\partial\tau}\,dx\\ &\leq-\frac{1}{2}\int_{\Omega}|u|^{2}e^{\xi(\tau,x)}|\nabla\xi|^{2}\,dx.\end{split} \tag{3.26}\]
This is because the function \(\xi(\tau,x)\) satisfies
\[\frac{\partial\xi}{\partial\tau}+\frac{1}{2}|\nabla\xi|^{2}=-\frac{d^{2}(x,A^{c}_{\rho})}{2(\tau-s)^{2}}+\frac{1}{2}\Bigg{(}\frac{d(x,A^{c}_{\rho})|\nabla d(x,A^{c}_{\rho})|}{\tau-s}\Bigg{)}^{2}\leq 0,\]
since \(\|\nabla f\|_{L^{\infty}}\leq\|f\|_{\operatorname{Lip}}\) and the function \(x\mapsto d(x,E)\) is a Lipschitz function with Lipschitz constant \(1\), see [20, Lemma 11.2 and Theorem 11.3]. Therefore, by collecting (3.24), (3.25) and (3.26), we finally show
\[\begin{split}\frac{dJ_{\Omega}(\tau)}{d\tau}&\leq-2 \int_{\Omega}|(\nabla+i(A_{B}(x)+A_{\operatorname{hmf}}(x)))u|^{2}e^{\xi( \tau,x)}\,dx\\ &\quad-2\int_{\Omega}\Big{(}\operatorname{Re}\big{(}\nabla u \cdot\nabla\xi\bar{u}\big{)}+\frac{1}{4}|u|^{2}|\nabla\xi|^{2}\Big{)}e^{\xi( \tau,x)}\,dx.\end{split}\]
On the one hand, we notice that
\[\operatorname{Re}(\nabla u\cdot\nabla\xi\bar{u})=\frac{1}{2}\big{(}(\nabla u \cdot\nabla\xi\bar{u})+(\nabla\bar{u}\cdot\nabla\xi u)\big{)}=\frac{1}{2} \big{(}\nabla|u|^{2}\cdot\nabla\xi\big{)}=(\nabla|u|)\cdot\nabla\xi|u|.\]
Therefore, we have
\[\frac{dJ_{\Omega}(\tau)}{d\tau} \leq-2\int_{\Omega}\big{(}|(\nabla+i(A_{B}(x)+A_{\rm hmf}(x)))u|^{2}- |\nabla|u||^{2}\big{)}e^{\xi(\tau,x)}\,dx\] \[\quad-2\int_{\Omega}\Big{(}|\nabla|u||^{2}+(\nabla|u|)\cdot\nabla \xi|u|+\frac{1}{4}|u|^{2}|\nabla\xi|^{2}\Big{)}e^{\xi(\tau,x)}\,dx\] \[=-2\int_{\Omega}\big{(}|(\nabla+i(A_{B}(x)+A_{\rm hmf}(x)))u|^{2} -|\nabla|u||^{2}\big{)}e^{\xi(\tau,x)}\,dx\] \[\quad-2\int_{\Omega}\big{|}\nabla|u|+\frac{1}{2}|u|\nabla\xi\big{|} ^{2}e^{\xi(\tau,x)}\,dx.\]
On the other hand, the diamagnetic inequality shows that
\[|\nabla|u|(x)|=\Big{|}\operatorname{Re}\big{(}\frac{\bar{u}(x)}{|u (x)|}\nabla u(x)\big{)}\Big{|} =\Big{|}\operatorname{Re}\Big{(}\big{(}\nabla u(x)+i(A_{B}+A_{\rm hmf })u(x)\big{)}\frac{\bar{u}(x)}{|u(x)|}\Big{)}\Big{|}\] \[\leq\Big{|}\big{(}\nabla+i(A_{B}+A_{\rm hmf})\big{)}u(x)\Big{|}.\]
Therefore, we finally prove that
\[\frac{dJ_{\Omega}(\tau)}{d\tau}\leq 0\]
Consequently, (3.23) follows, which implies Lemma 3.8.
### Method II
In this subsection, we will use the Schulman-Sunada formula (which is the second method we used in [36] to construct the Schrodinger propagator) to reconstruct the heat propagator. For more about the Schulman-Sunada formula, we refer the reader to [34, 35]. The resulting representation and estimate of the heat kernel capture more of the magnetic effects.
Let \(M=\mathbb{R}^{2}\setminus\{\vec{0}\}=(0,+\infty)\times\mathbb{S}^{1}\) where \(\mathbb{S}^{1}\) is the unit circle. The universal covering space of \(M\) is \(\tilde{M}=(0,+\infty)\times\mathbb{R}\), then \(M=\tilde{M}/\Gamma\) where the structure group \(\Gamma=2\pi\mathbb{Z}\) acts in the second factor of the Cartesian product. Then Schulman's ansatz (see [34, 35]) enables us to compute the heat propagator \(e^{-tH_{\alpha,B_{0}}}\) on \(M\) by using the heat propagator \(e^{-t\tilde{H}_{\alpha,B_{0}}}\) (see the operator \(\tilde{H}_{\alpha,B_{0}}\) in (3.31) below) on \(\tilde{M}\). More precisely, see [35, (1)], we have
\[e^{-tH_{\alpha,B_{0}}}(r_{1},\theta_{1};r_{2},\theta_{2})=\sum_{j\in\mathbb{Z }}e^{-t\tilde{H}_{\alpha,B_{0}}}(r_{1},\theta_{1}+2j\pi;r_{2},\theta_{2}), \tag{3.27}\]
which is similar to the construction of wave propagator on \(\mathbb{T}^{n}\), see [32, (3.5.12)]. In the following subsections, we will use it to construct the heat kernel.
**Proposition 3.9**.: _Let \(K_{H}(t;x,y)\) be the heat kernel in (3.2) of Proposition 3.1. Then_
\[e^{-tH_{\alpha,B_{0}}}(r_{1},\theta_{1};r_{2},\theta_{2}) \tag{3.28}\] \[=\frac{B_{0}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{ 2})}{4\tanh(tB_{0})}}\] \[\quad\times\bigg{(}e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh (tB_{0}+i(\theta_{1}-\theta_{2}))}e^{-\alpha tB_{0}}e^{-i\alpha(\theta_{1}- \theta_{2})}\varphi(\theta_{1},\theta_{2})\] \[\quad-\frac{\sin(\alpha\pi)}{\pi}\int_{-\infty}^{\infty}e^{-\frac {B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(s)}\frac{e^{s\alpha}}{e^{i(\theta_{1}- \theta_{2})}e^{s+tB_{0}}+1}\,ds\bigg{)},\]
_where_
\[\varphi(\theta_{1},\theta_{2})=\begin{cases}1,\qquad|\theta_{1}-\theta_{2}|\leq\pi \\ e^{-2\pi\alpha i},\quad-2\pi<\theta_{1}-\theta_{2}\leq-\pi\\ e^{2\pi\alpha i},\quad\pi<\theta_{1}-\theta_{2}\leq 2\pi.\end{cases} \tag{3.29}\]
_Furthermore, we obtain the estimate_
\[\Big{|}e^{-tH_{\alpha,B_{0}}}(x,y)\Big{|}\leq C\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}|x-y|^{2}}{4\sinh(tB_{0})}}. \tag{3.30}\]
Proof.: Recall (3.2) and \(\alpha_{k}=|k+\alpha|\), we have
\[K_{H}(t;x,y)=\frac{B_{0}}{4\pi^{2}\sinh tB_{0}}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^ {2})\cosh tB_{0}}{4\sinh tB_{0}}}\sum_{k\in\mathbb{Z}}e^{ik(\theta_{1}-\theta _{2})}e^{-(k+\alpha)tB_{0}}I_{\alpha_{k}}\left(\frac{B_{0}r_{1}r_{2}}{2\sinh tB _{0}}\right).\]
The main obstacle is to carry out the summation in \(k\). If \(\alpha=0\), by using the formula [5, Eq. 9.6.19]
\[\sum_{k\in\mathbb{Z}}e^{kz}I_{|k|}(x)=e^{x\cosh(z)},\]
we will see
\[K_{H}(t;x,y)=\frac{B_{0}}{4\pi\sinh(B_{0}t)}\exp\left(-\frac{B_{0}(r_{1}^{2}+r_ {2}^{2})}{4\tanh(B_{0}t)}+\frac{B_{0}r_{1}r_{2}}{2\sinh(B_{0}t)}\cosh(B_{0}t+i (\theta_{1}-\theta_{2}))\right),\quad\text{if}\quad\alpha=0,\]
which is exactly the same as the result (3.13) obtained from the Mehler formula. Heuristically, if we can replace the above summation in \(k\) by an integration in \(k\), then one can use the translation invariance of the integration to obtain a further result. To this end, as was done in [36, Section 4], we consider the operator
\[\tilde{H}_{\alpha,B_{0}}=-\partial_{r}^{2}-\frac{1}{r}\partial_{r}+\frac{1}{ r^{2}}\Big{(}-i\partial_{\theta}+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}, \tag{3.31}\]
which acts on \(L^{2}(\tilde{M},rdr\,d\theta)\). We emphasize that the variable \(\theta\) now ranges over \(\mathbb{R}\) rather than the compact manifold \(\mathbb{S}^{1}\). Then we choose \(\varphi(\theta)=e^{i(\tilde{k}-\alpha)\theta}\) as an eigenfunction of the operator \(\Big{(}-i\partial_{\theta}+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}\) on \(L^{2}_{\theta}(\mathbb{R})\), which satisfies
\[\Big{(}-i\partial_{\theta}+\alpha+\frac{B_{0}r^{2}}{2}\Big{)}^{2}\varphi( \theta)=\Big{(}\tilde{k}+\frac{B_{0}r^{2}}{2}\Big{)}^{2}\varphi(\theta). \tag{3.32}\]
It is worth pointing out that \(\tilde{k}\in\mathbb{R}\) is a real number while \(k\in\mathbb{Z}\). More importantly, we informally move the \(\alpha\) on the right-hand side of (3.32) into the factor \(e^{i(\tilde{k}-\alpha)\theta}\), which simplifies the eigenfunctions. Hence, arguing as in [36] for the Schrodinger kernel, we obtain the heat kernel of \(e^{-t\tilde{H}_{\alpha,B_{0}}}\)
\[\tilde{K}_{H}(t;x,y)= \frac{B_{0}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2 })}{4\tanh(tB_{0})}}\] \[\times\int_{\mathbb{R}}e^{i(\tilde{k}-\alpha)(\theta_{1}-\theta_ {2}-itB_{0})}I_{|\tilde{k}|}\bigg{(}\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})} \bigg{)}\,d\tilde{k},\]
where \(x=(r_{1},\theta_{1})\in\tilde{M}\) and \(y=(r_{2},\theta_{2})\in\tilde{M}\). A key identity is from [34, (2.11)], we state it in the following lemma.
**Lemma 3.10**.: _For any \(z\in\mathbb{C}\), one has_
\[\begin{split}\int_{\mathbb{R}}e^{z\vec{k}}I_{|\vec{k}|}(x)\,d\vec{k}& =e^{x\cosh(z)}H(\pi-|\operatorname{Im}z|)\\ &+\frac{1}{2\pi i}\int_{-\infty}^{\infty}e^{-x\cosh(s)}\left( \frac{1}{z+s+i\pi}-\frac{1}{z+s-i\pi}\right)\,ds.\end{split} \tag{3.33}\]
_where_
\[H(x)=\begin{cases}1,&x>0,\\ 0,&x\leq 0.\end{cases}\]
_is the Heaviside step function._
Proof.: 1 To prove (3.33), we recall the representation of the modified Bessel function of order \(\nu\)
Footnote 1: We would like to thank Prof. P. Šfovícek for his helpful discussion.
\[I_{\nu}(x)=\frac{1}{\pi}\int_{0}^{\pi}e^{x\cos s}\cos(s\nu)\,ds-\frac{\sin(\nu \pi)}{\pi}\int_{0}^{\infty}e^{-x\cosh s-\nu s}\,ds,\quad x>0. \tag{3.34}\]
For fixed \(x>0\), one has that
\[I_{\nu}(x)\sim\frac{1}{\sqrt{2\pi\nu}}\Big{(}\frac{ex}{2\nu}\Big{)}^{\nu}, \quad\nu\to+\infty\]
so that \(I_{\nu}(x)\) decays very rapidly as \(\nu\) grows. Due to this fact, the LHS of (3.33) is absolutely convergent, hence the dominated convergence theorem implies that the LHS of (3.33) represents an entire function in \(z\) (holomorphic everywhere on \(\mathbb{C}\)). The RHS of (3.33) is an entire function in \(z\) as well, but this is less obvious. The RHS of (3.33) is clearly holomorphic in \(z\) everywhere on \(\mathbb{C}\) except on the lines \(\operatorname{Im}z=\pm\pi\), and on these lines the RHS of (3.33) has no discontinuity. For example, we consider \(\operatorname{Im}z=\pi\). In fact, if we set
\[F(z)= e^{x\cosh(z)}H(\pi-|\operatorname{Im}z|)\] \[+\frac{1}{2\pi i}\int_{-\infty}^{\infty}e^{-x\cosh(s)}\left(\frac {1}{z+s+i\pi}-\frac{1}{z+s-i\pi}\right)\,ds,\]
then one can prove
\[\lim_{\operatorname{Im}z\to\pi-}F(z)=\lim_{\operatorname{Im}z\to\pi+}F(z).\]
Indeed, for \(a\in\mathbb{R}\) and \(\epsilon>0\), we need to prove
\[\lim_{\epsilon\to 0}F(a+i\pi-i\epsilon)=\lim_{\epsilon\to 0}F(a+i\pi+i \epsilon),\]
that is,
\[\lim_{\epsilon\to 0}\left(e^{x\cosh(a+i\pi-i\epsilon)}+\frac{1}{2\pi i} \int_{-\infty}^{\infty}e^{-x\cosh(s)}\left(\frac{1}{a+i\pi-i\epsilon+s+i\pi} -\frac{1}{a+i\pi-i\epsilon+s-i\pi}\right)\,ds\right)\] \[=\lim_{\epsilon\to 0}\frac{1}{2\pi i}\int_{-\infty}^{\infty}e^{-x \cosh(s)}\left(\frac{1}{a+i\pi+i\epsilon+s+i\pi}-\frac{1}{a+i\pi+i\epsilon+s- i\pi}\right)\,ds.\]
By direct computation, we obtain
\[\lim_{\epsilon\to 0}\frac{1}{2\pi i}\Big{[}\int_{-\infty}^{ \infty}e^{-x\cosh(s)}\left(\frac{1}{a+i\epsilon+s+2i\pi}-\frac{1}{a-i\epsilon+s +2i\pi}\right)\,ds\] \[\qquad+\int_{-\infty}^{\infty}e^{-x\cosh(s)}\left(\frac{1}{a+s-i \epsilon}-\frac{1}{a+s+i\epsilon}\right)\,ds\Big{]}\] \[=\lim_{\epsilon\to 0}\frac{1}{2\pi i}\Big{[}\int_{-\infty}^{ \infty}e^{-x\cosh(s)}\frac{-2i\epsilon}{(a+s+2i\pi)^{2}+\epsilon^{2}}\,ds+ \int_{-\infty}^{\infty}e^{-x\cosh(s)}\frac{2i\epsilon}{(a+s)^{2}+\epsilon^{2}} \,ds\Big{]}\] \[=\lim_{\epsilon\to 0}\frac{1}{\pi}\Big{[}-\int_{-\infty}^{ \infty}e^{-x\cosh(s)}\frac{\epsilon}{(a+s+2i\pi)^{2}+\epsilon^{2}}\,ds+\int_ {-\infty}^{\infty}e^{-x\cosh(s)}\frac{\epsilon}{(a+s)^{2}+\epsilon^{2}}\,ds \Big{]}\] \[=e^{-x\cosh(a)},\]
where we use the fact that the Poisson kernel is an approximation to the identity, implying that, for any reasonable function \(m(x)\)
\[m(x)=\lim_{\epsilon\to 0}\frac{1}{\pi}\int_{-\infty}^{\infty}m(y)\frac{\epsilon}{(x-y)^{2}+\epsilon^{2}}\,dy.\]
Obviously, since \(\cosh(a+i\pi)=-\cosh a\), we have
\[e^{x\cosh(a+i\pi)}=e^{-x\cosh(a)}\] \[= \lim_{\epsilon\to 0}\frac{1}{2\pi i}\Big{[}\int_{-\infty}^{ \infty}e^{-x\cosh(s)}\left(\frac{1}{a+i\epsilon+s+2i\pi}-\frac{1}{a-i\epsilon+ s+2i\pi}\right)\,ds\] \[\qquad+\int_{-\infty}^{\infty}e^{-x\cosh(s)}\left(\frac{1}{a+s-i \epsilon}-\frac{1}{a+s+i\epsilon}\right)\,ds\Big{]}.\]
Therefore the RHS of (3.33), \(F(z)\), is an entire function in \(z\) as well. As a consequence, it suffices to verify the formula (3.33) for purely imaginary values of \(z\) only. Let \(z=ib\) and recall (3.34); then
\[\int_{\mathbb{R}}e^{ib\tilde{k}}I_{|\tilde{k}|}(x)\,d\tilde{k} =\frac{1}{\pi}\int_{\mathbb{R}}e^{ib\tilde{k}}\int_{0}^{\pi}e^{x \cos s}\cos(s|\tilde{k}|)\,ds\,d\tilde{k}\] \[\qquad-\int_{\mathbb{R}}e^{ib\tilde{k}}\frac{\sin(|\tilde{k}|\pi) }{\pi}\int_{0}^{\infty}e^{-x\cosh s-|\tilde{k}|s}\,ds\,d\tilde{k}.\]
The first term becomes that
\[\frac{1}{\pi}\int_{\mathbb{R}}e^{ib\tilde{k}}\int_{0}^{\pi}e^{x \cos s}\cos(s\tilde{k})\,ds\,d\tilde{k} =\frac{1}{2\pi}\int_{\mathbb{R}}e^{ib\tilde{k}}\int_{\mathbb{R}}H (\pi-|s|)e^{x\cos s}e^{-is\tilde{k}}\,ds\,d\tilde{k}\] \[=e^{x\cos b}H(\pi-|b|)=e^{x\cosh(z)}H(\pi-|\operatorname{Im}z|).\]
The second term gives
\[-\int_{\mathbb{R}}e^{ib\tilde{k}}\frac{\sin(|\tilde{k}|\pi)}{\pi} \int_{0}^{\infty}e^{-x\cosh s-|\tilde{k}|s}\,ds\,d\tilde{k}\] \[=-\frac{1}{2\pi i}\int_{0}^{\infty}e^{-x\cosh s}\Big{(}\int_{0}^{ \infty}\left(e^{[i(b+\pi)-s]\tilde{k}}-e^{[i(b-\pi)-s]\tilde{k}}\right)d \tilde{k}\] \[\qquad+\int_{-\infty}^{0}\left(e^{[i(b-\pi)+s]\tilde{k}}-e^{[i(b +\pi)+s]\tilde{k}}\right)d\tilde{k}\Big{)}\,ds\] \[=\frac{1}{2\pi i}\int_{-\infty}^{\infty}e^{-x\cosh(s)}\left(\frac {1}{ib+s+i\pi}-\frac{1}{ib+s-i\pi}\right)\,ds.\]
Therefore, we have proved (3.33).
Let \(z=tB_{0}+i(\theta_{1}-\theta_{2})\), by using Lemma 3.10, we obtain
\[\tilde{K}_{H}(t;x,y)=\frac{B_{0}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}( r_{1}^{2}+r_{2}^{2})}{4\tanh(tB_{0})}}e^{-\alpha tB_{0}}e^{-i\alpha(\theta_{1}- \theta_{2})}\] \[\qquad\qquad\times\bigg{(}e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0 })}\cosh(z)}H\big{(}\pi-|\theta_{1}-\theta_{2}|\big{)}\] \[\qquad+\,\frac{1}{2\pi i}\int_{-\infty}^{\infty}e^{-\frac{B_{0}r _{1}r_{2}}{2\sinh(tB_{0})}\cosh(s)}\left(\frac{1}{z+s+i\pi}-\frac{1}{z+s-i\pi} \right)\,ds\bigg{)}.\]
By using (3.27) and letting \(z_{j}=tB_{0}+i(\theta_{1}+2\pi j-\theta_{2})\), we further show
\[\begin{split}& e^{-tH_{\alpha,B_{0}}}(r_{1},\theta_{1};r_{2}, \theta_{2})=\sum_{j\in\mathbb{Z}}e^{-t\tilde{H}_{\alpha,B_{0}}}(r_{1},\theta_ {1}+2j\pi;r_{2},\theta_{2}),\\ &=\frac{B_{0}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^ {2})}{4\tanh(tB_{0})}}\sum_{j\in\mathbb{Z}}e^{-\alpha z_{j}}\\ &\times\bigg{(}e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(z_{ j})}H\big{(}\pi-|\theta_{1}+2\pi j-\theta_{2}|\big{)}\\ &\quad+\frac{1}{2\pi i}\int_{-\infty}^{\infty}e^{-\frac{B_{0}r_{ 1}r_{2}}{2\sinh(tB_{0})}\cosh(s)}\left(\frac{1}{z_{j}+s+i\pi}-\frac{1}{z_{j}+s- i\pi}\right)\,ds\bigg{)}.\end{split} \tag{3.35}\]
In the following, we consider the two summations. Since \(\theta_{1},\theta_{2}\in[0,2\pi)\), we have \(\theta_{1}-\theta_{2}\in(-2\pi,2\pi)\); recalling (3.29), we obtain
\[\begin{split}&\sum_{j\in\mathbb{Z}}e^{-\alpha z_{j}}e^{\frac{B_{0}r _{1}r_{2}}{2\sinh(tB_{0})}\cosh(z_{j})}H\big{(}\pi-|\theta_{1}+2\pi j-\theta_{ 2}|\big{)}\\ &=e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(z)}e^{-\alpha tB _{0}}\sum_{j\in\mathbb{Z}}e^{-i\alpha(\theta_{1}-\theta_{2}+2\pi j)}H\big{(} \pi-|\theta_{1}+2\pi j-\theta_{2}|\big{)}\\ &=e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(z)}e^{-\alpha tB _{0}}e^{-i\alpha(\theta_{1}-\theta_{2})}\varphi(\theta_{1},\theta_{2}),\end{split}\]
For the second summation term, we use the formula
\[\sum_{j\in\mathbb{Z}}\frac{e^{-2\pi i\alpha j}}{\sigma+2\pi j}=\frac{ie^{i \alpha\sigma}}{e^{i\sigma}-1},\quad\alpha\in(0,1),\quad\sigma\in\mathbb{C} \setminus 2\pi\mathbb{Z},\]
to obtain
\[\begin{split}&\sum_{j\in\mathbb{Z}}e^{-\alpha z_{j}}\left(\frac{1} {z_{j}+s+i\pi}-\frac{1}{z_{j}+s-i\pi}\right)\\ &=e^{-\alpha(tB_{0}+i(\theta_{1}-\theta_{2}))}\sum_{j\in\mathbb{Z} }\big{(}\frac{e^{-2\pi i\alpha j}}{i(\theta_{1}-\theta_{2}+2j\pi+\pi)+s+tB_{0 }}-\frac{e^{-2\pi i\alpha j}}{i(\theta_{1}-\theta_{2}+2j\pi-\pi)+s+tB_{0}} \big{)}\\ &=-ie^{-\alpha(tB_{0}+i(\theta_{1}-\theta_{2}))}\sum_{j\in\mathbb{ Z}}\big{(}\frac{e^{-2\pi i\alpha j}}{\sigma_{1}+2j\pi}-\frac{e^{-2\pi i \alpha j}}{\sigma_{2}+2j\pi}\big{)}\\ &=e^{-\alpha(tB_{0}+i(\theta_{1}-\theta_{2}))}\big{(}\frac{e^{i \alpha\sigma_{1}}}{e^{i\sigma_{1}}-1}-\frac{e^{i\alpha\sigma_{2}}}{e^{i\sigma_ {2}}-1}\big{)}=\frac{e^{s\alpha}}{e^{i(\theta_{1}-\theta_{2}+\pi)}e^{s+tB_{0 }}-1}\big{(}e^{i\alpha\pi}-e^{-i\alpha\pi}\big{)}\\ &=-2i\sin(\alpha\pi)\frac{e^{s\alpha}}{e^{i(\theta_{1}-\theta_{2}) }e^{s+tB_{0}}+1},\end{split}\]
where \(\sigma_{1}=(\theta_{1}-\theta_{2}+\pi)-i(s+tB_{0})\) and \(\sigma_{2}=(\theta_{1}-\theta_{2}-\pi)-i(s+tB_{0})\). Therefore, by using (3.35), we show (3.28)
\[\begin{split}& e^{-tH_{\alpha,B_{0}}}(r_{1},\theta_{1};r_{2},\theta_{2})\\ &=\frac{B_{0}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{4\tanh(tB_{0})}}\\ &\qquad\times\bigg{(}e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(tB_{0}+i(\theta_{1}-\theta_{2}))}e^{-\alpha tB_{0}}e^{-i\alpha(\theta_{1}-\theta_{2})}\varphi(\theta_{1},\theta_{2})\\ &\quad-\frac{\sin(\alpha\pi)}{\pi}\int_{-\infty}^{\infty}e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(s)}\frac{e^{s\alpha}}{e^{i(\theta_{1}-\theta_{2})}e^{s+tB_{0}}+1}\,ds\bigg{)}.\end{split} \tag{3.36}\]
To prove (3.30), we first note that
\[\begin{split}\cosh(tB_{0}+i\theta)&=\cos\theta \cosh(tB_{0})+i\sin\theta\sinh(tB_{0}),\\ |x-y|^{2}&=r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}\cos( \theta_{1}-\theta_{2}),\end{split}\]
so that the first term is controlled by
\[\begin{split}&\frac{B_{0}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{4\tanh(tB_{0})}}e^{\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(tB_{0})\cos(\theta_{1}-\theta_{2})}e^{-\alpha tB_{0}}\\ &\leq C\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi\sinh(tB_{0})}e^{-\frac{B_{0}|x-y|^{2}}{4\tanh(tB_{0})}}.\end{split} \tag{3.37}\]
For the second term, we aim to prove
\[\Big{|}\int_{\mathbb{R}}e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh s} \frac{e^{\alpha s}}{1+e^{s+i(\theta_{1}-\theta_{2})+tB_{0}}}\,ds\Big{|}\leq C, \tag{3.38}\]
where \(C\) is a constant independent of \(t\), \(r_{1},r_{2}\) and \(\theta_{1},\theta_{2}\). To this end, letting \(\theta=\theta_{1}-\theta_{2}\), we write
\[\begin{split}&\int_{\mathbb{R}}e^{-\frac{B_{0}r_{1}r_{2}}{2 \sinh(tB_{0})}\cosh s}\frac{e^{\alpha s}}{1+e^{s+i(\theta_{1}-\theta_{2})+tB_{0 }}}\,ds\\ &=e^{-\alpha tB_{0}}\int_{0}^{\infty}\Big{(}e^{-\frac{B_{0}r_{1}r _{2}}{2\sinh(tB_{0})}\cosh(-s-tB_{0})}\frac{e^{-\alpha s}}{1+e^{-s+i\theta}}+ e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(s-tB_{0})}\frac{e^{\alpha s}}{1+e^{s+i \theta}}\Big{)}\,ds\\ &=e^{-\alpha tB_{0}}\Big{(}\int_{0}^{\infty}e^{-\frac{B_{0}r_{1}r _{2}}{2\sinh(tB_{0})}\cosh(-s-tB_{0})}\big{(}\frac{e^{-\alpha s}}{1+e^{-s+i \theta}}+\frac{e^{\alpha s}}{1+e^{s+i\theta}}\big{)}\,ds\\ &\quad+\int_{0}^{\infty}\Big{(}e^{-\frac{B_{0}r_{1}r_{2}}{2 \sinh(tB_{0})}\cosh(s-tB_{0})}-e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})} \cosh(-s-tB_{0})}\Big{)}\frac{e^{\alpha s}}{1+e^{s+i\theta}}\,ds\Big{)},\end{split}\]
then we just need to verify that
\[\int_{0}^{\infty}\Big{|}\frac{e^{-\alpha s}}{1+e^{-s+i\theta}}+\frac{e^{\alpha s }}{1+e^{s+i\theta}}\Big{|}\,ds\lesssim 1, \tag{3.39}\]
and
\[\Big{|}\int_{0}^{\infty}\Big{(}e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})} \cosh(s-tB_{0})}-e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh(-s-tB_{0})} \Big{)}\frac{e^{\alpha s}}{1+e^{s+i\theta}}\,ds\Big{|}\lesssim 1, \tag{3.40}\]
where the implicit constant is independent of \(\theta\). We first prove (3.39). In fact,
\[\frac{e^{-\alpha s}}{1+e^{-s+i\theta}}+\frac{e^{\alpha s}}{1+e^{s+i \theta}}\] \[=\frac{\cosh(\alpha s)e^{-i\theta}+\cosh((1-\alpha)s)}{\cos\theta+ \cosh s}\] \[=\frac{\cosh(\alpha s)\cos\theta+\cosh((1-\alpha)s)-i\sin\theta \cosh(\alpha s)}{2(\cos^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2}))}\] \[=\frac{2\cos^{2}(\frac{\theta}{2})\cosh(\alpha s)+(\cosh((1- \alpha)s)-\cosh(\alpha s))-2i\sin(\frac{\theta}{2})\cos(\frac{\theta}{2}) \cosh(\alpha s)}{2(\cos^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2}))}.\]
Since \(\cosh x-1\sim\frac{x^{2}}{2},\sinh x\sim x\), as \(x\to 0\); \(\cosh x\sim e^{x},\sinh x\sim e^{x}\), as \(x\to\infty\), we have
\[\int_{0}^{\infty}\Big{|}\frac{\cos^{2}(\frac{\theta}{2})\cosh(\alpha s)}{\cos ^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2})}\Big{|}\,ds\lesssim\int_{0}^{1} \frac{2|\cos(\frac{\theta}{2})|}{s^{2}+(2|\cos(\frac{\theta}{2})|)^{2}}ds+\int _{1}^{\infty}\ e^{(\alpha-1)s}ds\lesssim 1.\]
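The first integral on the right-hand side is bounded uniformly in \(\theta\): writing \(b=2|\cos(\frac{\theta}{2})|\), for every \(b>0\) one has
\[\int_{0}^{\infty}\frac{b}{s^{2}+b^{2}}\,ds=\arctan\frac{s}{b}\Big{|}_{0}^{\infty}=\frac{\pi}{2},\]
while the integrand vanishes identically when \(b=0\).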
Similarly, we obtain
\[\int_{0}^{\infty}\Big{|}\frac{\sin(\frac{\theta}{2})\cos(\frac{\theta}{2}) \cosh(\alpha s)}{\cos^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2})}\Big{|}\,ds \lesssim 1.\]
Finally, we verify that
\[\int_{0}^{\infty}\Big{|}\frac{\cosh((1-\alpha)s)-\cosh(\alpha s)} {\cos^{2}(\frac{\theta}{2})+\sinh^{2}(\frac{s}{2})}\Big{|}\,ds\] \[\lesssim\int_{0}^{1}\frac{|\frac{(1-\alpha)^{2}}{2}-\frac{\alpha ^{2}}{2}|s^{2}}{s^{2}}ds+\int_{1}^{\infty}\big{(}e^{-\alpha s}+e^{(\alpha-1)s }\big{)}ds\lesssim 1.\]
We next prove (3.40). For convenience, we denote
\[f(s)=e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cdot\cosh(s-tB_{0})}\]
Then, by noting that \(|f(\pm s)|\leq 1\), we obtain
\[\Big{|}\int_{1}^{\infty}\big{(}f(s)-f(-s)\big{)}\frac{e^{\alpha s }}{1+e^{s+i\theta}}\,ds\Big{|} \leq\int_{1}^{\infty}\Big{|}\frac{e^{\alpha s}}{1+e^{s+i\theta}} \Big{|}ds\] \[=\int_{1}^{\infty}\Big{|}\frac{e^{(\alpha-1)s}}{e^{-s}+e^{i \theta}}\Big{|}ds\] \[\leq\frac{1}{1-e^{-1}}\int_{1}^{\infty}e^{(\alpha-1)s}ds\] \[\lesssim 1,\]
hence, for (3.40), it suffices to prove
\[\int_{0}^{1}|f(s)-f(-s)|\Big{|}\frac{e^{\alpha s}}{1+e^{s+i\theta}}\Big{|}\, ds\lesssim 1.\]
Since
\[f^{\prime}(\pm s) =\mp\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\sinh(\pm s-tB_{0})f( \pm s)\] \[=\mp\frac{B_{0}r_{1}r_{2}}{2}\Big{(}\frac{\pm\sinh s\cosh(tB_{0}) }{\sinh(tB_{0})}-\cosh s\Big{)}e^{-\frac{B_{0}r_{1}r_{2}}{2\sinh(tB_{0})}\cosh (\pm s-tB_{0})},\]
then for \(0<s<1\) there exists a constant \(C\) such that \(|f^{\prime}(\pm s)|\leq C\). Hence, by the mean value theorem, \(|f(s)-f(-s)|\leq Cs\) for \(0<s<1\), and thus we have
\[\int_{0}^{1}|f(s)-f(-s)|\Big{|}\frac{e^{\alpha s}}{1+e^{s+i\theta}}\Big{|}\,ds \leq C\int_{0}^{1}\frac{se^{\alpha s}}{e^{s}-1}ds\lesssim 1.\]
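The last integral is finite because \(e^{s}-1\geq s\) for \(s\geq 0\), so that
\[\frac{se^{\alpha s}}{e^{s}-1}\leq e^{\alpha s}\leq e^{\alpha},\qquad 0<s\leq 1.\]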
Therefore, we prove (3.38). By collecting (3.36), (3.37) and (3.38), we finally obtain
\[\Big{|}e^{-tH_{\alpha,B_{0}}}(r_{1},\theta_{1};r_{2},\theta_{2}) \Big{|}\] \[\leq C\frac{B_{0}e^{-\alpha tB_{0}}}{4\pi\sinh(tB_{0})}\left(e^{- \frac{B_{0}|x-y|^{2}}{4\tanh(tB_{0})}}+e^{-\frac{B_{0}(r_{1}^{2}+r_{2}^{2})}{ 4\tanh(tB_{0})}}\right).\]
which implies (3.30).
## 4. Bernstein inequalities and square function inequalities
In this section, we prove the Bernstein inequalities and the square function inequality associated with the Schrodinger operator \(H_{\alpha,B_{0}}\) by using the heat kernel estimates established in the previous section.
**Proposition 4.1** (Bernstein inequalities).: _Let \(\varphi(\lambda)\) be a \(C_{c}^{\infty}\) bump function on \(\mathbb{R}\) with support in \([\frac{1}{2},2]\), then it holds for any \(f\in L^{q}(\mathbb{R}^{2})\) and \(j\in\mathbb{Z}\)_
\[\|\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\|_{L^{p}(\mathbb{R}^{2})}\lesssim 2 ^{2j\left(\frac{1}{q}-\frac{1}{p}\right)}\|f\|_{L^{q}(\mathbb{R}^{2})},\quad 1 \leq q\leq p\leq\infty. \tag{4.1}\]
Proof.: Let \(\psi(x)=\varphi(\sqrt{x})\) and \(\psi_{e}(x):=\psi(x)e^{2x}\). Then \(\psi_{e}\) is a \(C_{c}^{\infty}\)-function on \(\mathbb{R}\) with support in \([\frac{1}{4},4]\), so its Fourier transform \(\hat{\psi}_{e}\) belongs to the Schwartz class. We write
\[\varphi(\sqrt{x}) =\psi(x)=e^{-2x}\psi_{e}(x)\] \[=e^{-2x}\int_{\mathbb{R}}e^{ix\cdot\xi}\hat{\psi}_{e}(\xi)\,d\xi\] \[=e^{-x}\int_{\mathbb{R}}e^{-x(1-i\xi)}\hat{\psi}_{e}(\xi)\,d\xi.\]
Therefore, by the functional calculus, we obtain
\[\varphi(\sqrt{H_{\alpha,B_{0}}})=\psi(H_{\alpha,B_{0}})=e^{-H_{\alpha,B_{0}}} \int_{\mathbb{R}}e^{-(1-i\xi)H_{\alpha,B_{0}}}\hat{\psi}_{e}(\xi)\,d\xi,\]
furthermore,
\[\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})=\psi(2^{-2j}H_{\alpha,B_{0}})=e^{-2^{-2 j}H_{\alpha,B_{0}}}\int_{\mathbb{R}}e^{-(1-i\xi)2^{-2j}H_{\alpha,B_{0}}}\hat{ \psi}_{e}(\xi)\,d\xi.\]
By using (3.8) in Proposition 3.3 with \(t=2^{-2j}\), we have
\[\Big{|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})(x,y)\Big{|} \lesssim 2^{4j}\int_{\mathbb{R}^{2}}e^{-\frac{2^{2j}|x-z|^{2}}{C}} e^{-\frac{2^{2j}|y-z|^{2}}{C}}\,dz\int_{\mathbb{R}}\hat{\psi}_{e}(\xi)\,d\xi\] \[\lesssim 2^{2j}\int_{\mathbb{R}^{2}}e^{-\frac{|2^{j}x-z|^{2}}{C}} e^{-\frac{|2^{j}y-z|^{2}}{C}}\,dz\] \[\lesssim 2^{2j}(1+2^{j}|x-y|)^{-N},\quad\forall N\geq 1.\]
where we use the fact that \(|\alpha-z|^{2}+|\beta-z|^{2}\geq\frac{1}{2}|\alpha-\beta|^{2}\) with \(\alpha,\beta\in\mathbb{R}^{2}\) and
\[\int_{\mathbb{R}^{2}}e^{-\frac{|\alpha-z|^{2}}{C}}e^{-\frac{|\beta-z|^{2}}{C}}\,dz \lesssim e^{-\frac{|\alpha-\beta|^{2}}{4C}}\int_{\mathbb{R}^{2}}e^{-\frac{|\alpha-z|^{2}}{2C}}e^{-\frac{|\beta-z|^{2}}{2C}}\,dz\] \[\lesssim e^{-\frac{|\alpha-\beta|^{2}}{4C}}\lesssim(1+|\alpha-\beta|)^{-N},\quad\forall N\geq 1.\]
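To make the last step explicit, note that the kernel bound above shows that \(|\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})(x,y)|\) is dominated by \(G(x-y)\) with \(G(w)=2^{2j}(1+2^{j}|w|)^{-N}\), and, with \(\frac{1}{r}=1+\frac{1}{p}-\frac{1}{q}\) and \(Nr>2\),
\[\|G\|_{L^{r}(\mathbb{R}^{2})}=2^{2j}\Big(\int_{\mathbb{R}^{2}}(1+2^{j}|w|)^{-Nr}\,dw\Big)^{\frac{1}{r}}\lesssim 2^{2j}\cdot 2^{-\frac{2j}{r}}=2^{2j(\frac{1}{q}-\frac{1}{p})}.\]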
By Young's inequality for integral operators with kernels dominated by \(G(x-y)\), (4.1) follows.
**Proposition 4.2** (The square function inequality).: _Let \(\{\varphi_{j}\}_{j\in\mathbb{Z}}\) be a Littlewood-Paley sequence given by (1.6). Then for \(1<p<\infty\), there exist constants \(c_{p}\) and \(C_{p}\) depending on \(p\) such that_
\[c_{p}\|f\|_{L^{p}(\mathbb{R}^{2})}\leq\Big{\|}\Big{(}\sum_{j\in \mathbb{Z}}|\varphi_{j}(\sqrt{H_{\alpha,B_{0}}})f|^{2}\Big{)}^{\frac{1}{2}} \Big{\|}_{L^{p}(\mathbb{R}^{2})}\leq C_{p}\|f\|_{L^{p}(\mathbb{R}^{2})}. \tag{4.2}\]
Proof.: By using (3.8) in Proposition 3.3, Proposition 4.2 follows from the Rademacher functions argument in [33]. We also refer the reader to [1] for the result that the square function inequality (4.2) can be derived from heat kernel estimates with Gaussian upper bounds.
## 5. The decay estimates
In this section, we mainly prove the decay estimate (1.12). The first key ingredient is the following Proposition about the subordination formula from [27, 12].
**Proposition 5.1**.: _If \(\varphi(\lambda)\in C_{c}^{\infty}(\mathbb{R})\) is supported in \([\frac{1}{2},2]\), then, for all \(j\in\mathbb{Z},t,x>0\) with \(2^{j}t\geq 1\), we can write_
\[\begin{split}&\varphi(2^{-j}\sqrt{x})e^{it\sqrt{x}}\\ &=\rho\big{(}\frac{tx}{2^{j}},2^{j}t\big{)}+\varphi(2^{-j}\sqrt{x })\big{(}2^{j}t\big{)}^{\frac{1}{2}}\int_{0}^{\infty}\chi(s,2^{j}t)e^{\frac{ i2^{j}t}{4s}}e^{i2^{-j}tsx}\,ds,\end{split} \tag{5.1}\]
_where \(\rho(s,\tau)\in\mathcal{S}(\mathbb{R}\times\mathbb{R})\) is a Schwartz function and \(\chi\in C^{\infty}(\mathbb{R}\times\mathbb{R})\) with \(\text{supp}\,\chi(\cdot,\tau)\subseteq[\frac{1}{16},4]\) such that_
\[\sup_{\tau\in\mathbb{R}}\big{|}\partial_{s}^{\alpha}\partial_{\tau}^{\beta} \chi(s,\tau)\big{|}\lesssim_{\alpha,\beta}(1+|s|)^{-\alpha},\quad\forall\alpha,\beta\geq 0. \tag{5.2}\]
Once this is established, then by the spectral theory for the non-negative self-adjoint operator \(H_{\alpha,B_{0}}\), we obtain the representation of the microlocalized half-wave propagator
\[\begin{split}&\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_ {\alpha,B_{0}}}}\\ &=\rho\big{(}\frac{tH_{\alpha,B_{0}}}{2^{j}},2^{j}t\big{)}+\varphi (2^{-j}\sqrt{H_{\alpha,B_{0}}})\big{(}2^{j}t\big{)}^{\frac{1}{2}}\int_{0}^{ \infty}\chi(s,2^{j}t)e^{\frac{i2^{j}t}{4s}}e^{i2^{-j}tsH_{\alpha,B_{0}}}\,ds. \end{split} \tag{5.3}\]
Proof.: This result originates from [27], and the authors of [12] provided an independent proof of the key formula. To keep the paper self-contained and for the convenience of the reader, we follow the idea of [12] and provide the details of the proof.
The starting point of the proof is the subordination formula
\[e^{-y\sqrt{x}}=\frac{y}{2\sqrt{\pi}}\int_{0}^{\infty}e^{-sx-\frac{y^{2}}{4s}}s ^{-\frac{3}{2}}ds,\quad x,y>0. \tag{5.4}\]
Indeed, letting \(|\xi|=\sqrt{x}\), we have
\[e^{-y\sqrt{x}} =e^{-y|\xi|}=\int_{\mathbb{R}}e^{-2\pi it\cdot\xi}\int_{\mathbb{R}}e^{2\pi it\cdot\eta}e^{-y|\eta|}\,d\eta\,dt\] \[=\int_{\mathbb{R}}e^{-2\pi it\cdot\xi}\Big{(}\int_{-\infty}^{0}e^{2\pi it\cdot\eta}e^{y\eta}\,d\eta+\int_{0}^{\infty}e^{2\pi it\cdot\eta}e^{-y\eta}\,d\eta\Big{)}\,dt\] \[=2\int_{\mathbb{R}}e^{-2\pi it\cdot\xi}\frac{y}{y^{2}+(2\pi t)^{2}}\,dt=2\int_{\mathbb{R}}e^{-2\pi iyt\cdot\xi}\frac{1}{1+(2\pi t)^{2}}\,dt\] \[=2\int_{\mathbb{R}}e^{-2\pi iyt\cdot\xi}\int_{0}^{\infty}e^{-r(1+(2\pi t)^{2})}\,dr\,dt\] \[=2\int_{0}^{\infty}e^{-r}\int_{\mathbb{R}}e^{-2\pi iyt\cdot\xi}e^{-r(2\pi t)^{2}}\,dt\,dr\] \[=2\int_{0}^{\infty}\frac{e^{-r}e^{-\frac{|\xi|^{2}y^{2}}{4r}}}{\sqrt{4\pi r}}\,dr=2\int_{0}^{\infty}\frac{e^{-r}e^{-\frac{xy^{2}}{4r}}}{\sqrt{4\pi r}}\,dr,\quad s=\frac{y^{2}}{4r},\xi=\sqrt{x},\] \[=\frac{y}{2\sqrt{\pi}}\int_{0}^{\infty}e^{-\frac{y^{2}}{4s}}e^{-xs}s^{-\frac{3}{2}}\,ds\]
where we use
\[\alpha^{-\frac{n}{2}}e^{-\pi|\xi|^{2}/\alpha}=\int_{\mathbb{R}^{n}}e^{-2\pi ix\cdot\xi}e^{-\pi\alpha|x|^{2}}dx.\]
To obtain \(e^{it\sqrt{x}}\), we extend (5.4) by setting \(y=\epsilon-it\) with \(\epsilon>0\)
\[\begin{split} e^{it\sqrt{x}}&=\lim_{\epsilon\to 0^{+}}e^{-( \epsilon-it)\sqrt{x}}\\ &=\lim_{\epsilon\to 0}\frac{\epsilon-it}{2\sqrt{\pi}}\int_{0}^{ \infty}e^{-xs}e^{\frac{(it-\epsilon)^{2}}{4s}}s^{-\frac{3}{2}}ds,\quad s=r( \epsilon-it)\\ &=\lim_{\epsilon\to 0}\frac{\sqrt{\epsilon-it}}{2\sqrt{\pi}}\int_{0}^{ \infty}e^{rx(it-\epsilon)}e^{\frac{it-\epsilon}{4r}}r^{-\frac{3}{2}}dr\\ &=\lim_{\epsilon\to 0}\frac{\sqrt{\epsilon-it}}{2\sqrt{\pi}}\int_{0} ^{\infty}e^{itrx}e^{-\epsilon rx}e^{\frac{it}{4r}}e^{-\frac{\epsilon}{4r}}r^{- \frac{3}{2}}dr\\ &=\lim_{\epsilon\to 0}\frac{\sqrt{\epsilon-it}}{2\sqrt{\pi}}I_{ \epsilon,\epsilon x}(tx,t),\end{split} \tag{5.5}\]
where
\[I_{\epsilon,\delta}(a,t):=\int_{0}^{\infty}e^{ira}e^{-\delta r}e^{\frac{it}{4r }}e^{-\frac{\epsilon}{4r}}r^{-\frac{3}{2}}dr.\]
By the dominated convergence theorem, we have
\[e^{it\sqrt{x}}=\lim_{\epsilon\to 0}\frac{\sqrt{\epsilon-it}}{2\sqrt{\pi}}I_{ \epsilon,\epsilon x}(tx,t)=\sqrt{\frac{t}{4\pi}}e^{-\frac{\pi}{4}i}\lim_{ \epsilon\to 0}I_{\epsilon,\epsilon x}(tx,t).\]
Thus it suffices to consider the oscillatory integral
\[\lim_{\epsilon\to 0}I_{\epsilon,\epsilon x}(a,t)=I_{0,0}(a,t)=\int_{0}^{ \infty}e^{ira}e^{\frac{it}{4r}}r^{-\frac{3}{2}}dr. \tag{5.6}\]
**Lemma 5.2**.: _Let_
\[I(a,t)=\int_{0}^{\infty}e^{ira}e^{\frac{it}{4r}}r^{-\frac{3}{2}}dr.\]
_Then we can write_
\[I(a,t)=\tilde{\rho}(a,t)+\int_{0}^{\infty}e^{ira}e^{\frac{it}{4r}}\tilde{\chi}(r)\,dr, \tag{5.7}\]
_where \(\tilde{\chi}\in C_{0}^{\infty}(\mathbb{R})\) with \(\operatorname{supp}\tilde{\chi}\subset[\frac{1}{16},4]\), and \(\tilde{\rho}(a,t)\) satisfies_
\[\big{|}\partial_{a}^{\alpha}\partial_{t}^{\beta}\tilde{\rho}(a,t)\big{|}\leq C _{N,\alpha,\beta}(a+t)^{-N},\quad\frac{1}{4}\leq\frac{a}{t}\leq 4,\,t\geq 1, \forall N\geq 1. \tag{5.8}\]
We now assume this lemma to prove (5.1). By (5.5) and (5.6) and noticing
\[I(a,t)=2^{\frac{j}{2}}I(2^{-j}a,2^{j}t),\]
we have that
\[\varphi(2^{-j}\sqrt{x})e^{it\sqrt{x}}=\sqrt{\frac{t}{4\pi}}e^{-\frac{\pi}{4}i }\varphi(2^{-j}\sqrt{x})2^{\frac{j}{2}}I\big{(}\frac{tx}{2^{j}},2^{j}t\big{)}.\]
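The scaling identity \(I(a,t)=2^{\frac{j}{2}}I(2^{-j}a,2^{j}t)\) used above can be checked by the change of variables \(r=2^{-j}u\) in the definition of \(I\):
\[I(a,t)=\int_{0}^{\infty}e^{ira}e^{\frac{it}{4r}}r^{-\frac{3}{2}}dr=\int_{0}^{\infty}e^{iu(2^{-j}a)}e^{\frac{i2^{j}t}{4u}}\big(2^{-j}u\big)^{-\frac{3}{2}}2^{-j}\,du=2^{\frac{j}{2}}I(2^{-j}a,2^{j}t).\]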
By the support of \(\varphi\), one has \(2^{2j-2}\leq x\leq 2^{2j+2}\), hence \(\frac{1}{4}\leq\frac{tx}{2^{j}}/2^{j}t=x/2^{2j}\leq 4\). Note the condition \(2^{j}t\geq 1\). Therefore, by using this lemma, we prove
\[\varphi(2^{-j}\sqrt{x})e^{it\sqrt{x}}\] \[= \frac{1}{\sqrt{4\pi}}e^{-\frac{\pi}{4}i}\big{(}2^{j}t\big{)}^{ \frac{1}{2}}\varphi(2^{-j}\sqrt{x})\Big{(}\tilde{\rho}\big{(}\frac{tx}{2^{j}},2^{j}t\big{)}+\int_{0}^{\infty}\tilde{\chi}(s)e^{\frac{i2^{j}t}{4s}}e^{i2^{-j }txs}\,ds\Big{)}.\]
We need to consider this expression when \(2^{j}t\geq 1\). To this end, let \(\phi\in C^{\infty}([0,+\infty))\) satisfy \(\phi(t)=1\) if \(t\geq 1\) and \(\phi(t)=0\) if \(0\leq t\leq\frac{1}{2}\), and set
\[\rho\big{(}\frac{tx}{2^{j}},2^{j}t\big{)}=e^{-\frac{\pi}{4}i}\big{(}2^{j}t \big{)}^{\frac{1}{2}}\varphi(2^{-j}\sqrt{x})\tilde{\rho}\big{(}\frac{tx}{2^{j }},2^{j}t\big{)}\phi(2^{j}t).\]
This together with (5.8) shows
\[\big{|}\partial_{a}^{\alpha}\partial_{t}^{\beta}\rho(a,t)\big{|}\leq C_{N, \alpha,\beta}(1+(a+t))^{-N},\quad\forall N\geq 0.\]
which implies \(\rho(a,t)\in\mathcal{S}(\mathbb{R}_{+}\times\mathbb{R}_{+})\). Set
\[\chi\big{(}s,2^{j}t\big{)}=e^{-\frac{\pi}{4}i}\tilde{\chi}\big{(}s\big{)}\phi (2^{j}t),\]
then \(\chi\) satisfies (5.2). Then we finally write
\[\varphi(2^{-j}\sqrt{x})e^{it\sqrt{x}}\] \[= \rho\big{(}\frac{tx}{2^{j}},2^{j}t\big{)}+\big{(}2^{j}t\big{)}^{ \frac{1}{2}}\varphi(2^{-j}\sqrt{x})\int_{0}^{\infty}\chi(s,2^{j}t)e^{\frac{i2 ^{j}t}{4s}}e^{i2^{-j}txs}\,ds,\]
which proves (5.1) as desired.
_The proof of Lemma 5.2._ To prove (5.7), we divide the integral into three pieces. Let \(\beta(r)\) be a function in \(C^{\infty}(\mathbb{R})\) compactly supported in \([\frac{1}{2},2]\) such that
\[1=\sum_{j\in\mathbb{Z}}\beta_{j}(r),\quad\beta_{j}(r)=\beta(2^{-j}r).\]
Corresponding to \(\beta_{j}\), we decompose
\[I(a,t)=\sum_{j\in\mathbb{Z}}I_{j}(a,t)=I_{l}(a,t)+I_{m}(a,t)+I_{h}(a,t)\]
where
\[I_{l}(a,t)=\sum_{j\leq-5}I_{j}(a,t), I_{m}(a,t)=\sum_{-4\leq j\leq 1}I_{j}(a,t),\quad I_{h}(a,t)=\sum_{j\geq 2 }I_{j}(a,t),\] \[I_{j}(a,t)=\int_{0}^{\infty}e^{ira}e^{\frac{it}{4r}}\beta_{j}(r)r ^{-\frac{3}{2}}dr.\]
Define the phase function \(\phi_{a,t}(r)=ra+\frac{t}{4r}\), then
\[\phi^{\prime}_{a,t}(r)=a-\frac{t}{4r^{2}}.\]
Define
\[\tilde{\rho}(a,t)=I_{l}(a,t)+I_{h}(a,t),\]
we aim to prove (5.8). We first consider \(I_{h}(a,t)\).
\[\big{|}\partial_{a}^{\alpha}\partial_{t}^{\beta}I_{h}(a,t)\big{|} =\sum_{j\geq 2}\Big{|}\int_{0}^{\infty}e^{ira}e^{\frac{it}{4r}} \beta_{j}(r)r^{-\frac{3}{2}+\alpha}\big{(}\frac{1}{4r}\big{)}^{\beta}dr\Big{|}\] \[\leq C\sum_{j\geq 2}\int_{0}^{\infty}\Big{|}\frac{d}{dr}\Big{[} \Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)}\frac{d}{dr}\Big{)}^{N-1}\Big{(}\frac {1}{\phi^{\prime}_{a,t}(r)}\beta_{j}(r)r^{-\frac{3}{2}+\alpha-\beta}\Big{)} \Big{]}\Big{|}\,dr.\]
Since \(\operatorname{supp}\beta_{j}\subset[2^{j-1},2^{j+1}]\) with \(j\geq 2\) implies \(r\in[2,\infty)\), and by the assumption \(\frac{a}{t}\geq\frac{1}{4}\), we see that
\[\big{|}\phi^{\prime}_{a,t}(r)\big{|}=\big{|}a-\frac{t}{4r^{2}}\big{|}\geq\frac {a+t}{16}.\]
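Indeed, for \(r\geq 2\) and \(a\geq\frac{t}{4}\),
\[a-\frac{t}{4r^{2}}\geq a-\frac{t}{16}\geq\frac{a+t}{16},\]
where the last inequality is equivalent to \(15a\geq 2t\), which holds because \(a\geq\frac{t}{4}\).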
Choosing \(N>\alpha\), we notice that
\[\Big{|}\Big{(}\frac{d}{dr}\Big{)}^{K}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)} \Big{)}\Big{|}\leq C_{K}(a+t)^{-1}r^{-K},\quad t,r\in[1,+\infty)\]
to obtain
\[\Big{|}\frac{d}{dr}\Big{[}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)}\frac{d}{dr} \Big{)}^{N-1}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)}\beta_{j}(r)r^{-\frac{3}{ 2}+\alpha-\beta}\Big{)}\Big{]}\Big{|}\leq C_{N}(a+t)^{-N}2^{-\frac{3}{2}j}.\]
Therefore, we prove
\[\big{|}\partial_{a}^{\alpha}\partial_{t}^{\beta}I_{h}(a,t)\big{|}\leq C_{N}(a +t)^{-N}\sum_{j\geq 2}2^{-\frac{1}{2}j}\,\leq C_{N}(a+t)^{-N},\quad N\geq\alpha.\]
Next we consider \(I_{l}(a,t)\).
\[\big{|}\partial_{a}^{\alpha}\partial_{t}^{\beta}I_{l}(a,t)\big{|}=\sum_{j\leq- 5}\Big{|}\int_{0}^{\infty}e^{ira}e^{\frac{it}{4r}}\beta_{j}(r)r^{-\frac{3}{2}+ \alpha}\big{(}\frac{1}{4r}\big{)}^{\beta}dr\Big{|}.\]
\[\sum_{j\leq-5}\Big{|}\int_{0}^{\infty}e^{\frac{it}{4r}}e^{ira} \beta_{j}(r)r^{-\frac{3}{2}+\alpha-\beta}dr\Big{|}\] \[\leq C_{N}\sum_{j\leq-5}\int_{0}^{\infty}\Big{|}\frac{d}{dr}\Big{[} \Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)}\frac{d}{dr}\Big{)}^{N-1}\Big{(}\frac {1}{\phi^{\prime}_{a,t}(r)}\beta_{j}(r)r^{-\frac{3}{2}+\alpha-\beta}\Big{)} \Big{]}\Big{|}\,dr.\]
Since \(\operatorname{supp}\beta_{j}\subset[2^{j-1},2^{j+1}]\) with \(j\leq-5\) implies \(r\in(0,\frac{1}{16}]\), and by the assumption \(\frac{a}{t}\leq 4\), we see that
\[\big{|}\phi^{\prime}_{a,t}(r)\big{|}=\big{|}a-\frac{t}{4r^{2}}\big{|}=\frac{|4r ^{2}a-t|}{4r^{2}}\geq\frac{a+t}{32r^{2}}.\]
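Indeed, for \(0<r\leq\frac{1}{16}\) and \(a\leq 4t\) we have \(4r^{2}a\leq\frac{a}{64}\leq\frac{t}{16}\), so that
\[|4r^{2}a-t|=t-4r^{2}a\geq\frac{15t}{16}\geq\frac{a+t}{8},\]
and dividing by \(4r^{2}\) gives the stated lower bound.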
Choosing \(N>\alpha\), we notice that
\[\Big{|}\Big{(}\frac{d}{dr}\Big{)}^{K}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)} \Big{)}\Big{|}\leq C_{K}(a+t)^{-1}r^{2-K},\quad t\in[1,+\infty),r\in(0,\frac{1}{ 16}]\]
to obtain
\[\Big{|}\frac{d}{dr}\Big{[}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)} \frac{d}{dr}\Big{)}^{N-1}\Big{(}\frac{1}{\phi^{\prime}_{a,t}(r)}\beta_{j}(r)r^ {-\frac{3}{2}+\alpha-\beta}\Big{)}\Big{]}\Big{|}\] \[\leq C_{N}(a+t)^{-N}2^{j\left(-\frac{3}{2}+\alpha-\beta+N\right)}.\]
Therefore, for large enough \(N\) such that \(-\frac{1}{2}+\alpha-\beta+N>0\), we prove
\[\big{|}\partial_{a}^{\alpha}\partial_{t}^{\beta}I_{l}(a,t)\big{|} \leq C_{N}(a+t)^{-N}\sum_{j\leq-5}2^{j\left(-\frac{1}{2}+\alpha-\beta+N\right)}\] \[\leq C_{N}(a+t)^{-N},\]
where we use the assumption that \(\frac{a}{t}\leq 4\) and \(t\geq 1\). In sum, we have proved (5.8). Now let
\[\tilde{\chi}(r)=\sum_{j=-4}^{1}\beta_{j}(r)r^{-\frac{3}{2}},\]
then \(\tilde{\chi}\in C_{0}^{\infty}(\mathbb{R})\) and \(\operatorname{supp}\tilde{\chi}\subset[\frac{1}{16},4]\). Hence we have
\[I_{m}(a,t)=\int_{0}^{\infty}e^{ira}e^{\frac{it}{4r}}\tilde{\chi}(r)dr.\]
Therefore, we complete the proof of Lemma 5.2.
### Decay estimates for the microlocalized half-wave propagator
In this subsection, we mainly prove the following results
**Proposition 5.3**.: _Let \(2^{-j}|t|\leq\frac{\pi}{8B_{0}}\) and \(\varphi\) be in (1.6), then_
\[\begin{split}\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})& e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\\ &\lesssim 2^{2j}\big{(}1+2^{j}|t|\big{)}^{-\frac{1}{2}}\|\varphi(2^{-j} \sqrt{H_{\alpha,B_{0}}})f\|_{L^{1}(\mathbb{R}^{2})}.\end{split} \tag{5.9}\]
_In particular, for \(0<t<T\) with any finite \(T\), there exists a constant \(C_{T}\) depending on \(T\) such that_
\[\begin{split}\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})& e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\\ &\leq C_{T}2^{2j}\big{(}1+2^{j}|t|\big{)}^{-\frac{1}{2}}\|\varphi (2^{-j}\sqrt{H_{\alpha,B_{0}}})f\|_{L^{1}(\mathbb{R}^{2})}.\end{split} \tag{5.10}\]
**Remark 5.4**.: The finite \(T\) can be chosen beyond \(\frac{\pi}{B_{0}}\). Once (5.10) is proved, (1.12) follows:
\[\begin{split}&\big{\|}e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{ \infty}(\mathbb{R}^{2})}\leq\sum_{j\in\mathbb{Z}}\big{\|}\varphi(2^{-j}\sqrt{H _{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R} ^{2})}\\ &\leq C_{T}|t|^{-\frac{1}{2}}\sum_{j\in\mathbb{Z}}2^{\frac{3}{2}j }\|\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\|_{L^{1}(\mathbb{R}^{2})}\leq C_{T} |t|^{-\frac{1}{2}}\|f\|_{\mathcal{B}^{3/2}_{1,1}(\mathbb{R}^{2})}.\end{split}\]
We estimate the microlocalized half-wave propagator
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|} _{L^{\infty}(\mathbb{R}^{2})}\]
by considering two cases: \(|t|2^{j}\geq 1\) and \(|t|2^{j}\lesssim 1\). In the following argument, we can choose \(\tilde{\varphi}\in C_{c}^{\infty}((0,+\infty))\) such that \(\tilde{\varphi}(\lambda)=1\) if \(\lambda\in\operatorname{supp}\varphi\) and \(\tilde{\varphi}\varphi=\varphi\). Since \(\tilde{\varphi}\) has the same properties as \(\varphi\), we drop the tilde on \(\tilde{\varphi}\) for brevity, without risk of confusion. Without loss of generality, in the following argument, we assume \(t>0\).
**Case 1: \(t2^{j}\lesssim 1\).** We remark that we consider \(t2^{j}\lesssim 1\) rather than \(t2^{j}\leq 1\); this will be used to extend the time interval. By the spectral theorem, one has
\[\|e^{it\sqrt{H_{\alpha,B_{0}}}}\|_{L^{2}(\mathbb{R}^{2})\to L^{2}( \mathbb{R}^{2})}\leq C.\]
Indeed, by the functional calculus, for \(f\in L^{2}\), we can write
\[e^{it\sqrt{H_{\alpha,B_{0}}}}f=\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}e^{it\sqrt{\lambda_{k,m}}}c_{k,m}\tilde{V}_{k,m}( x).\]
where
\[c_{k,m}=\int_{\mathbb{R}^{2}}f(y)\overline{\tilde{V}_{k,m}(y)}dy.\]
Then
\[\|e^{it\sqrt{H_{\alpha,B_{0}}}}f\|_{L^{2}(\mathbb{R}^{2})}=\Big{(}\sum_{ \begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}\big{|}e^{it\sqrt{\lambda_{k,m}}}c_{k,m}\big{|}^{ 2}\Big{)}^{1/2}=\Big{(}\sum_{\begin{subarray}{c}k\in\mathbb{Z},\\ m\in\mathbb{N}\end{subarray}}\big{|}c_{k,m}\big{|}^{2}\Big{)}^{1/2}=\|f\|_{L^{2} (\mathbb{R}^{2})}.\]
Together with this, we use the Bernstein inequality (4.1) to prove
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\] \[\lesssim 2^{j}\|e^{it\sqrt{H_{\alpha,B_{0}}}}\varphi(2^{-j} \sqrt{H_{\alpha,B_{0}}})f\|_{L^{2}(\mathbb{R}^{2})}\] \[\lesssim 2^{j}\|\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\|_{L^{2}( \mathbb{R}^{2})}\lesssim 2^{2j}\|\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\|_{L^{1}( \mathbb{R}^{2})}.\]
Since in this case \(0<t\lesssim 2^{-j}\), we have
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})} \tag{5.11}\] \[\quad\lesssim 2^{2j}(1+2^{j}t)^{-N}\|\varphi(2^{-j}\sqrt{H_{ \alpha,B_{0}}})f\|_{L^{1}(\mathbb{R}^{2})},\quad\forall N\geq 0.\]
**Case 2: \(t2^{j}\geq 1\).** In this case, we can use (5.3) to obtain the microlocalized half-wave propagator
\[\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}\] \[=\rho\big{(}\frac{tH_{\alpha,B_{0}}}{2^{j}},2^{j}t\big{)}+\varphi (2^{-j}\sqrt{H_{\alpha,B_{0}}})\big{(}2^{j}t\big{)}^{\frac{1}{2}}\int_{0}^{ \infty}\chi(s,2^{j}t)e^{\frac{i2^{j}t}{4s}}e^{i2^{-j}tsH_{\alpha,B_{0}}}\,ds.\]
We first use the spectral theorem and the Bernstein inequality again to estimate
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})\rho\big{(}\frac{tH_{\alpha,B_{0 }}}{2^{j}},2^{j}t\big{)}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}.\]
Indeed, since \(\rho\in\mathcal{S}(\mathbb{R}\times\mathbb{R})\), then
\[\big{|}\rho\big{(}\frac{t\lambda_{k,m}}{2^{j}},2^{j}t\big{)}\big{|}\leq C(1+2^ {j}t)^{-N},\quad\forall N\geq 0.\]
Therefore, we use the Bernstein inequality and the spectral theorem to show
\[\begin{split}&\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})\rho \big{(}\frac{tH_{\alpha,B_{0}}}{2^{j}},2^{j}t\big{)}f\big{\|}_{L^{\infty}( \mathbb{R}^{2})}\\ &\lesssim 2^{j}\big{\|}\rho\big{(}\frac{tH_{\alpha,B_{0}}}{2^{j}},2^{j }t\big{)}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\big{\|}_{L^{2}(\mathbb{R}^{2} )}\\ &\lesssim 2^{j}(1+2^{j}t)^{-N}\Big{\|}\varphi(2^{-j}\sqrt{H_{ \alpha,B_{0}}})f\Big{\|}_{L^{2}(\mathbb{R}^{2})}\\ &\lesssim 2^{2j}(1+2^{j}t)^{-N}\Big{\|}\varphi(2^{-j}\sqrt{H_{ \alpha,B_{0}}})f\Big{\|}_{L^{1}(\mathbb{R}^{2})}.\end{split}\]
Next we use the dispersive estimate for the Schrodinger propagator (see [36, Theorem 1.1])
\[\big{\|}e^{itH_{\alpha,B_{0}}}f\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\leq C|\sin(tB_{0})|^{-1}\big{\|}f\big{\|}_{L^{1}(\mathbb{R}^{2})},\quad t\neq\frac{k\pi}{B_{0}},\,k\in\mathbb{Z},\]
to estimate
\[\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})\big{(}2^{j}t\big{)}^{\frac{1}{2 }}\int_{0}^{\infty}\chi(s,2^{j}t)e^{\frac{i2^{j}t}{4s}}e^{i2^{-j}tsH_{\alpha,B_ {0}}}f\,ds\big{\|}_{L^{\infty}(\mathbb{R}^{2})}.\]
For \(0<t<T_{0}<\frac{\pi}{2B_{0}}\), we have \(\sin(tB_{0})\sim tB_{0}\), and
\[\begin{split}&\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}}) \big{(}2^{j}t\big{)}^{\frac{1}{2}}\int_{0}^{\infty}\chi(s,2^{j}t)e^{\frac{i2^{ j}t}{4s}}e^{i2^{-j}tsH_{\alpha,B_{0}}}f\,ds\big{\|}_{L^{\infty}(\mathbb{R}^{2})} \\ &\lesssim\big{(}2^{j}t\big{)}^{\frac{1}{2}}\int_{0}^{\infty}\chi (s,2^{j}t)|\sin(2^{-j}tsB_{0})|^{-1}\,ds\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha, B_{0}}})f\big{\|}_{L^{1}(\mathbb{R}^{2})}.\end{split}\]
Since \(s\in[\frac{1}{16},4]\) (the compact support of \(\chi\) in \(s\)) and \(B_{0}>0\), if \(2^{-j}t\leq\frac{\pi}{8B_{0}}\), then \(2^{-j}tsB_{0}\in(0,\frac{\pi}{2}]\), so that \(|\sin(2^{-j}tsB_{0})|^{-1}\lesssim(2^{-j}tsB_{0})^{-1}\lesssim(2^{-j}t)^{-1}\) and hence
\[\begin{split}&\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})\big{(}2^{j}t\big{)}^{\frac{1}{2}}\int_{0}^{\infty}\chi(s,2^{j}t)e^{\frac{i2^{j}t}{4s}}e^{i2^{-j}tsH_{\alpha,B_{0}}}f\,ds\big{\|}_{L^{\infty}(\mathbb{R}^{2})}\\ &\lesssim\big{(}2^{j}t\big{)}^{\frac{1}{2}}\big{(}2^{-j}t\big{)}^{-1}\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\big{\|}_{L^{1}(\mathbb{R}^{2})}=2^{2j}\big{(}2^{j}t\big{)}^{-\frac{1}{2}}\big{\|}\varphi(2^{-j}\sqrt{H_{\alpha,B_{0}}})f\big{\|}_{L^{1}(\mathbb{R}^{2})}.\end{split} \tag{5.12}\]
Collecting (5.11) and (5.12) gives (5.9). To prove (5.10), we consider \(0<t<T\). For any \(T>0\), there exists \(j_{0}\in\mathbb{Z}_{+}\) such that \(2^{-j_{0}}T\leq\frac{\pi}{8B_{0}}\). For \(j\leq j_{0}\), we have \(2^{j}t\leq 2^{j_{0}}T\lesssim_{T}1\), so (5.10) follows from the first case. For \(j\geq j_{0}\), if \(2^{j}t\lesssim 1\), one still has (5.10) from the first case; otherwise, i.e. if \(2^{j}t\geq 1\), one has (5.10) from the second case, since \(2^{-j}t\leq 2^{-j_{0}}T\leq\frac{\pi}{8B_{0}}\) for \(j\geq j_{0}\) and \(0<t\leq T\).
## 6. Strichartz estimate
In this section, we prove the Strichartz estimates (1.13) in Theorem 1.4 by using (5.10). To this end, we need a variant of the abstract Keel-Tao Strichartz estimate theorem [23].
**Proposition 6.1**.: _Let \((X,\mathcal{M},\mu)\) be a \(\sigma\)-finite measure space and \(U:I=[0,T]\to B(L^{2}(X,\mathcal{M},\mu))\) be a weakly measurable map satisfying, for some constants \(C\) (which may depend on \(T\)), \(\alpha\geq 0\), and \(\sigma,h>0\),_
\[\begin{split}\|U(t)\|_{L^{2}\to L^{2}}&\leq C,\quad t \in\mathbb{R},\\ \|U(t)U(s)^{*}f\|_{L^{\infty}}&\leq Ch^{-\alpha}(h+| t-s|)^{-\sigma}\|f\|_{L^{1}}.\end{split} \tag{6.1}\]
_Then for every pair \(q,p\in[1,\infty]\) such that \((q,p,\sigma)\neq(2,\infty,1)\) and_
\[\frac{1}{q}+\frac{\sigma}{p}\leq\frac{\sigma}{2},\quad q\geq 2,\]
_there exists a constant \(\tilde{C}\) only depending on \(C\), \(\sigma\), \(q\) and \(p\) such that_
\[\Big{(}\int_{I}\|U(t)u_{0}\|_{L^{p}}^{q}dt\Big{)}^{\frac{1}{q}}\leq\tilde{C}\Lambda(h)\|u_{0}\|_{L^{2}}\]
_where \(\Lambda(h)=h^{-(\alpha+\sigma)(\frac{1}{2}-\frac{1}{p})+\frac{1}{q}}\)._
Proof.: This is an analogue of the semiclassical Strichartz estimates for Schrodinger in [25, 38]. We refer to [37] for the proof.
Now we prove the Strichartz estimates (1.13). Recall \(\varphi\) in (1.6) and the Littlewood-Paley frequency cutoff \(\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})\). For each \(k\in\mathbb{Z}\), we define
\[u_{k}(t,\cdot)=\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})u(t,\cdot).\]
where \(u(t,x)\) is the solution of (1.1). Then, for each \(k\in\mathbb{Z}\), \(u_{k}(t,x)\) solves the Cauchy problem
\[\partial_{t}^{2}u_{k}+H_{\alpha,B_{0}}u_{k}=0,\quad u_{k}(0)=f_{k}(z),\ \partial_{t}u_{k}(0)=g_{k}(z),\]
where \(f_{k}=\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})u_{0}\) and \(g_{k}=\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})u_{1}\). Since \((q,p)\in\Lambda_{s}^{W}\) in Definition 1.3, we have \(q,p\geq 2\). Thus, by using the square-function estimates (4.2) and the Minkowski inequality, we obtain
\[\|u(t,x)\|_{L^{q}(I;L^{p}(\mathbb{R}^{2}))}\lesssim\Big{(}\sum_{k\in\mathbb{Z }}\|u_{k}(t,x)\|_{L^{q}(I;L^{p}(\mathbb{R}^{2}))}^{2}\Big{)}^{\frac{1}{2}}, \tag{6.2}\]
where \(I=[0,T]\). Denote the half-wave propagator \(U(t)=e^{it\sqrt{H_{\alpha,B_{0}}}}\), then we write
\[u_{k}(t,z)=\frac{U(t)+U(-t)}{2}f_{k}+\frac{U(t)-U(-t)}{2i\sqrt{H_{\alpha,B_{0} }}}g_{k}. \tag{6.3}\]
By using (6.2) and (6.3), we complete the proof of (1.13), after taking the summation in \(k\in\mathbb{Z}\), provided we can prove the following proposition.
**Proposition 6.2**.: _Let \(f=\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})f\) for \(\varphi_{k}\) in (1.6) and \(k\in\mathbb{Z}\). Then_
\[\|U(t)f\|_{L^{q}(I;L^{p}(\mathbb{R}^{2}))}\leq C_{T}2^{ks}\|f\|_{L^{2}(\mathbb{ R}^{2})}, \tag{6.4}\]
_where the admissible pair \((q,p)\in[2,+\infty]\times[2,+\infty)\) and \(s\) satisfy (1.10) and (1.11)._
Proof.: Since \(f=\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})f\), we have
\[U(t)f=\varphi_{k}(\sqrt{H_{\alpha,B_{0}}})e^{it\sqrt{H_{\alpha,B_{0}}}}f:=U_{k }f.\]
By using the spectral theorem, we see
\[\|U_{k}(t)f\|_{L^{2}(\mathbb{R}^{2})}\leq C\|f\|_{L^{2}(\mathbb{R}^{2})}.\]
By using (5.10), we obtain
\[\|U_{k}(t)U_{k}^{*}(s)f\|_{L^{\infty}(\mathbb{R}^{2})} =\|U_{k}(t-s)f\|_{L^{\infty}(\mathbb{R}^{2})}\] \[\leq C_{T}2^{\frac{3}{2}k}\big{(}2^{-k}+|t-s|\big{)}^{-\frac{1}{2}} \|f\|_{L^{1}(\mathbb{R}^{2})},\]
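Here we rewrote the bound from (5.10) using the elementary identity
\[2^{2k}\big(1+2^{k}|t-s|\big)^{-\frac{1}{2}}=2^{\frac{3}{2}k}\big(2^{-k}+|t-s|\big)^{-\frac{1}{2}},\]
which puts it exactly in the form required by (6.1) with \(h=2^{-k}\).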
Then the estimates (6.1) for \(U_{k}(t)\) hold for \(\alpha=3/2\), \(\sigma=1/2\) and \(h=2^{-k}\). Hence, Proposition 6.1 gives
\[\|U(t)f\|_{L^{q}(I;L^{p}(\mathbb{R}^{2}))}=\|U_{k}(t)f\|_{L^{q}(I;L^{p}( \mathbb{R}^{2}))}\leq C_{T}2^{k[2(\frac{1}{2}-\frac{1}{p})-\frac{1}{q}]}\|f\|_ {L^{2}(\mathbb{R}^{2})}.\]
which implies (6.4) since \(s=2(\frac{1}{2}-\frac{1}{p})-\frac{1}{q}\).
|
2309.07891 | HandNeRF: Learning to Reconstruct Hand-Object Interaction Scene from a
Single RGB Image | This paper presents a method to learn hand-object interaction prior for
reconstructing a 3D hand-object scene from a single RGB image. The inference as
well as training-data generation for 3D hand-object scene reconstruction is
challenging due to the depth ambiguity of a single image and occlusions by the
hand and object. We turn this challenge into an opportunity by utilizing the
hand shape to constrain the possible relative configuration of the hand and
object geometry. We design a generalizable implicit function, HandNeRF, that
explicitly encodes the correlation of the 3D hand shape features and 2D object
features to predict the hand and object scene geometry. With experiments on
real-world datasets, we show that HandNeRF is able to reconstruct hand-object
scenes of novel grasp configurations more accurately than comparable methods.
Moreover, we demonstrate that object reconstruction from HandNeRF ensures more
accurate execution of downstream tasks, such as grasping and motion planning
for robotic hand-over and manipulation. Homepage:
https://samsunglabs.github.io/HandNeRF-project-page/ | Hongsuk Choi, Nikhil Chavan-Dafle, Jiacheng Yuan, Volkan Isler, Hyunsoo Park | 2023-09-14T17:42:08Z | http://arxiv.org/abs/2309.07891v5 | # HandNeRF: Learning to Reconstruct Hand-Object Interaction Scene
###### Abstract
This paper presents a new method to learn hand-object interaction prior for reconstructing a 3D hand-object scene from a single RGB image. The inference as well as training-data generation for 3D hand-object scene reconstruction is challenging due to the depth ambiguity of a single image and occlusions by the hand and object. We turn this challenge into an opportunity by utilizing the hand shape to constrain the possible relative configuration of the hand and object geometry. We design a generalizable implicit function, HandNeRF, that explicitly encodes the correlation of the 3D hand shape features and 2D object features to predict the hand and object scene geometry. With experiments on real-world datasets, we show that HandNeRF can reconstruct hand-object scenes of novel grasp configurations more accurately than comparable methods. Moreover, we demonstrate that object reconstruction from HandNeRF ensures more accurate execution of downstream tasks, such as grasping and motion planning for robotic hand-over and manipulation. The code will be released here: [https://github.com/SamsungLabs/HandNeRF](https://github.com/SamsungLabs/HandNeRF)
## I Introduction
The understanding of 3D hand-object interactions, i.e., semantic reconstruction of hand and object geometry, is key to applications such as human-to-robot object handover, and augmented and virtual reality. Most of the current methods are primarily based on template-based approaches, where a known 3D CAD model of a hand and an object is assumed. The major focus is predicting the transformation that fits the known 3D CAD model to input observation [1, 2, 3]. Even with these assumptions, the hand and object reconstruction from a single RGB image are both difficult tasks due to depth ambiguity, partial observation, and mutual occlusions.
The 3D hand reconstruction methods have seen significant advances [4, 5, 6] due to large-scale hand datasets and automated reliable 3D hand annotations [7, 8, 9, 10, 11]. In contrast, the progress in reconstruction of grasped objects from a single RGB image is relatively limited due to lack of data. Generating a 3D CAD model of large set of object and labeling 6D poses in hand-object interaction scenes are labor-intensive and challenging. The sparsity of views in realworld data collection settings makes the labeling ambiguous, often requiring manual initialization and post-processing for refining 6D object pose annotations [9, 12].
In this paper, we present a new method, named HandNeRF, that estimates a semantic neural radiance field of a hand-object interaction scene from a single RGB image and without using an object template. HandNeRF predicts the density (occupancy), color, and semantic label (hand, object, or background) for points in the 3D space which can be used for 3D semantic reconstruction and novel view synthesis. The key technical contribution of HandNeRF is that it alleviates the ill-posed 2D to 3D reconstruction problem by utilizing the hand shape to constrain the possible relative configuration of hand and object geometry. In particular, HandNeRF explicitly learns the correlation between hand and object geometry to regularize their semantic reconstruction.
HandNeRF is trained on multiple hand-object interaction scenes to learn the correlation between hand and object geometry. Each scene has synchronized sparse view RGB images, 3D hand mesh annotation, and 2D semantic segmentation. At inference time, a single RGB image with a novel grasp configuration is given. Fig. 1 illustrates HandNeRF, which is trained with sparse view RGB images and generates high-quality 3D reconstruction and rendering of images from an unseen single RGB input. Note that we do not use depth information in the whole process, which is a much more unconstrained setting for both training and testing.
We evaluate HandNeRF on realworld datasets including
Fig. 1: Given a single RGB image of a hand-object interaction scene, HandNeRF predicts the hand and object’s density, color, and semantics, which can be converted to reconstruction of 3D hand and object meshes and rendered to novel view images (RGB, depth, and semantic segmentation). HandNeRF learns the correlation between hand and object geometry from different types of hand-object interactions, supervised by sparse view images. HandNeRF is tested on a novel scene with an unseen hand-object interaction.
DexYCB [9] and HO-3D v3 [13] in terms of novel view synthesis and 3D reconstruction. We compare with the state-of-the-art template-free baselines [14, 15, 16], which we adapt to the task of reconstructing hand-object interaction without 3D object ground truth during training. Following the previous works [15, 17], we first keep the object used in training and testing the same, but the grasp configuration at testing is chosen to be significantly different from those during training. We further evaluate the generalization capability of HandNeRF by testing the model trained on 15 DexYCB objects on 4 unseen DexYCB objects. By learning the hand-object interaction prior with the explicit hand and object correlation, HandNeRF outperforms the baselines in generalization to novel hand grasps, which entail unseen occlusions and unseen object poses, and novel object shapes. Furthermore, we present qualitative results demonstrating HandNeRF's ability to reconstruct objects using in-house data. The annotation process for this data is fully automated in a casual environment, which uses only 7 sparse view RGB cameras, without the need for 3D CAD model generation or 6D object pose labeling. Finally, we experimentally demonstrate that HandNeRF enables more accurate execution of a downstream task of grasping for robotic hand-over and collision-free motion planning.
## II Related Work
Our work, HandNeRF, lies at the intersection of understanding 3D hand-object interaction and implicit neural representations. In this section, we first review the current approaches for 3D hand-object interaction reconstruction from a single RGB camera. Then, we discuss recent methods for sparse view-specific implicit neural representations, relevant to our work.
**3D reconstruction of hand-object interaction:** The study of 3D hand-object interaction understanding [12, 18, 19, 20] concerns the semantic reconstruction of hand and object geometry. In the context of this task, the existing methods for hand and object reconstruction are primarily template-based, where the template denotes a known 3D CAD model of a hand or an object. 3D hand reconstruction focuses on predicting mesh parameters, such as MANO [21], and has seen significant advances due to large-scale datasets [7, 9, 10] and the success of deep learning-based approaches [22, 6, 23]. In contrast, 3D object reconstruction is approached as 6D pose estimation [1, 2, 3], which predicts the transformation that fits the known 3D CAD model to the input observation.
The template-based approach for object reconstruction has two main limitations regarding collection of training data in the real world. First, it is costly and labor-intensive to obtain every object's 3D CAD model, requiring 3D laser scanning or a dense multi-view camera setup. Second, labeling 6D object poses in hand-object scenes is challenging due to hand occlusions and becomes more ambiguous if the captured views are not dense enough. In contrast, for training HandNeRF we require only a few sparse RGB views of hand-object interaction scenes and hand-pose annotations which can be automated [10, 11].
Recently, [19, 24, 15] proposed methods that reconstruct a hand-held object without a known template. The work of Hasson et al. [19] jointly estimated the MANO hand parameters and a genus-0 object mesh by leveraging AtlasNet [25]. Karunratanakul et al. [24] characterized the surface of the hand and object with a signed distance field. Ye et al. [15] conditioned object reconstruction on the hand articulation and also predicted a signed distance field of an object. While these methods are template-free at inference time, they still require 3D object meshes for training. Therefore, they suffer from the same data collection problems as the template-based methods.
**Implicit neural representation from sparse view RGB images:** The sparse view-specific NeRF (Neural Radiance Field) representations have gained attention in object reconstruction [26, 27, 14, 28] and 3D human body reconstruction [16, 29, 30, 31]. Without any 3D annotation, they reconstruct a plausible 3D scene when optimized over a single scene only with sparse views. These methods address the limitations of multi-view reconstruction approaches such as vanilla NeRF [32] and SfM (Structure from Motion), which require a dense capture setup and fail when given sparse views [33]. These representations are explored for generalization by learning object appearance and geometry priors from multiple scenes. When tested on novel scenes with unseen object poses or unseen objects, a partial 3D reconstruction is achieved, although with blurry textures and noisy geometry. This limited performance is inevitable due to the sparsity of input views, but the practical applications of these methods are significant. Nevertheless, scenes with a single view or hand-held objects are less studied.
Our work is most relevant to the work of Choi et al. [16], MonoNHR. It reconstructs a neural radiance field of a clothed human body from a single RGB image without ground truth 3D scans of clothed humans, by conditioning on a human body mesh [34]. While the task and approach are analogous to ours, MonoNHR does not explicitly encode correlation between the object (clothes) and hand (body).
## III Method
The motivation for HandNeRF is to tackle the challenges of 3D scene reconstruction from a 2D RGB image, such as depth uncertainties and partial observation. HandNeRF achieves this by learning a hand-object interaction feature that correlates the hand and object geometry, given a 3D hand shape and a 2D object segmentation.
### _Modeling Hand-object Interaction_
Consider a point on a 3D object, \(\mathbf{x}_{o}\in\mathds{R}^{3}\) where its occupancy or density is \(\sigma\in[0,1]\), i.e., one if occupied, and zero otherwise. The problem of 3D reconstruction of the object can be cast as learning a function that predicts the density given the location and associated 3D feature \(\mathbf{f}_{o}\):
\[f(\mathbf{x}_{o},\mathbf{f}_{o})=\sigma, \tag{1}\]
where \(f\) is an implicit function whose zero-level set defines the surface of the object. Despite its success in representing objects [14] and humans [35], Equation (1) has a limited capability to express complex scenes such as hand-object interaction scenes.
We extend Equation (1) by incorporating the interactions between the object and hand. Consider a 3D hand mesh \(\mathcal{M}=\{\mathbf{m}_{i}\}\) that is made of a set of faces, where \(\mathbf{m}_{i}\) is the \(i^{\mathrm{th}}\) face of the mesh. Each face in the mesh is associated with a 3D feature \(\mathbf{f}_{h}\). We marginalize the density of the object over the density predicted by the hand mesh:
\[f(\mathbf{x}_{o},\mathbf{f}_{o})=\sum_{\mathbf{x}_{h}\in\mathcal{M}}f( \mathbf{x}_{o},\mathbf{f}_{o}|\mathbf{x}_{h},\mathbf{f}_{h})f(\mathbf{x}_{h}, \mathbf{f}_{h}), \tag{2}\]
where \(\mathbf{x}_{h}\) is the centroid of the vertices of the face \(\mathbf{m}_{i}\), \(f(\mathbf{x}_{o},\mathbf{f}_{o}|\mathbf{x}_{h},\mathbf{f}_{h})\) is the conditional density given the hand pose and its feature, and \(f(\mathbf{x}_{h},\mathbf{f}_{h})\in\{0,1\}\) is the hand occupancy provided by 3D hand mesh estimation.
Learning \(f(\mathbf{x}_{o},\mathbf{f}_{o}|\mathbf{x}_{h},\mathbf{f}_{h})\) is challenging due to the quadratic complexity of the pairwise relationships between all possible pairs of hand and object points \((\mathbf{x}_{h},\mathbf{x}_{o})\). Instead, we propose to learn an interaction feature \(\mathcal{F}\), a correlation between \(\mathbf{f}_{o}\) and \(\mathbf{f}_{h}\), through a series of 3D convolutions:
\[\mathcal{F}=\phi_{n}\circ\cdots\circ\phi_{1}\circ\mathcal{V}, \tag{3}\]
where \(\mathcal{F}\in\mathds{R}^{w\times h\times d\times c}\) is the volume of the interaction features with \(w\) width, \(h\) height, \(d\) depth, \(c\) feature dimension, and \(\phi_{1},\cdots,\phi_{n}\) are the 3D convolutional filters. The interaction feature \(\mathcal{F}|_{\mathbf{x}_{o}}\) evaluated at an object point \(\mathbf{x}_{o}\) is expected to encode how hand surface points contribute to predicting the occupancy of the point \(\mathbf{x}_{o}\) of the object. The input to the 3D CNN is \(\mathcal{V}\in\mathds{R}^{w\times h\times d\times c^{\prime}}\), which is the feature volume with the \(c^{\prime}\) feature dimension that includes both hand and object features:
\[\mathcal{V}_{\mathbf{x}}=\left\{\begin{array}{ll}\mathbf{f}_{h}&\mathrm{if}\quad\mathbf{x}\in\{\overline{\mathbf{m}}_{i}\}\\ \mathbf{f}_{o}&\mathrm{else\ if}\quad\Pi\mathbf{x}\in\mathcal{O}\\ \mathbf{0}&\mathrm{otherwise}\end{array}\right., \tag{4}\]
where \(\mathcal{V}_{\mathbf{x}}\) is the feature at \(\mathbf{x}\), \(\{\overline{\mathbf{m}}_{i}\}\) is a set of the centroids of the hand mesh faces, \(\Pi\mathbf{x}\) is the camera projection of \(\mathbf{x}\) to the input image, and \(\mathcal{O}\) is the 2D input object mask.
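The construction of \(\mathcal{V}\) in Equation (4) can be illustrated with a minimal PyTorch sketch; the function and variable names (e.g., `build_feature_volume`, `hand_feats`) and the grid parameters are illustrative assumptions rather than our exact implementation.

```python
import torch

def build_feature_volume(hand_xyz, hand_feats, obj_xyz, obj_feats,
                         grid_min, voxel_size=0.005, dims=(64, 64, 64)):
    """Rasterize hand-face centroids and in-mask object points into the dense
    feature volume V of Eq. (4); voxels covered by neither source stay zero."""
    c = hand_feats.shape[1]
    vol = torch.zeros(*dims, c)

    def scatter(xyz, feats):
        idx = ((xyz - grid_min) / voxel_size).long()
        keep = ((idx >= 0) & (idx < torch.tensor(dims))).all(dim=1)
        idx, feats = idx[keep], feats[keep]
        vol[idx[:, 0], idx[:, 1], idx[:, 2]] = feats

    scatter(obj_xyz, obj_feats)    # object points whose projections fall inside the mask O
    scatter(hand_xyz, hand_feats)  # hand centroids written last, so f_h takes precedence
    return vol
```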
With the interaction feature \(\mathcal{F}\), we extend Equation (1) to include the color, \(\mathbf{c}\in\mathds{R}^{3}\), and semantic label \(\mathbf{l}\in[0,1]^{L}\) where \(L=3\) is the number of semantic classes (i.e., hand, object, and background):
\[f(\mathbf{x}_{o},\mathbf{d},\mathbf{f}_{\mathrm{2D}},\mathcal{F}|_{\mathbf{x} _{o}})=(\sigma,\mathbf{c},\mathbf{l}), \tag{5}\]
where \(\mathbf{d}\) is the rendering viewing direction, and \(\mathbf{f}_{\mathrm{2D}}\) is the pixel-aligned image feature of \(\mathbf{x}_{o}\). With the prediction of the volume density, color radiance, and semantic label, we render each pixel with its label by integrating the density field. Please refer to Semantic-NeRF [36] for more technical detail of semantic neural rendering.
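For concreteness, the per-ray compositing can be sketched as follows; this is a minimal PyTorch sketch of the standard alpha compositing used in semantic neural rendering, with illustrative tensor shapes and names.

```python
import torch
import torch.nn.functional as F

def render_ray(sigma, rgb, sem_logits, deltas):
    """Composite per-sample predictions along one ray.
    sigma: (S,) densities; rgb: (S, 3); sem_logits: (S, L); deltas: (S,) step sizes."""
    alpha = 1.0 - torch.exp(-sigma * deltas)                          # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)  # transmittance
    w = alpha * trans                                                 # rendering weights
    color = (w[:, None] * rgb).sum(dim=0)                             # rendered pixel color
    label = (w[:, None] * F.softmax(sem_logits, dim=-1)).sum(dim=0)   # rendered semantics
    depth = (w * torch.cumsum(deltas, dim=0)).sum()                   # approximate depth
    return color, label, depth
```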
Fig. 2 shows the impact of the interaction feature \(\mathcal{F}\). Our method and MonoNHR [16] both use 3D CNNs to encode features volumetrically based on the estimated 3D hand mesh. Unlike MonoNHR, ours explicitly learns hand-object interactions as elaborated above, enabling robust object geometry reconstruction even for unobserved and occluded surfaces.
### _Implementation of HandNeRF_
We learn the representation of the hand-object interaction by minimizing the following loss:
\[\mathcal{L}=\sum_{\mathbf{p}\in\mathcal{R}}\left(\left\|\hat{C}(\mathbf{p})- C(\mathbf{p})\right\|_{2}^{2}-\sum_{i=1}^{L}L_{i}(\mathbf{p})\log(\hat{L}_{i}( \mathbf{p}))\right)\]
where \(\mathcal{R}\) is a set of pixels in multiview images, \(\hat{C}(\mathbf{p})\) and \(C(\mathbf{p})\) are the predicted and ground truth color of pixel \(\mathbf{p}\), respectively, and \(\hat{L}_{i}(\mathbf{p})\) and \(L_{i}(\mathbf{p})\) are the predicted and ground truth semantic label at pixel \(\mathbf{p}\).
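A minimal sketch of this objective over a batch of sampled pixels is given below (PyTorch; it assumes the predicted semantics are already probabilities, as produced by the renderer, and averages over the batch).

```python
import torch
import torch.nn.functional as F

def handnerf_loss(pred_rgb, gt_rgb, pred_sem, gt_sem):
    """Squared color error plus semantic cross-entropy per sampled pixel.
    pred_rgb/gt_rgb: (N, 3); pred_sem: (N, L) probabilities;
    gt_sem: (N,) integer labels in {hand, object, background}."""
    color_loss = ((pred_rgb - gt_rgb) ** 2).sum(dim=-1).mean()
    sem_loss = F.nll_loss(torch.log(pred_sem.clamp_min(1e-10)), gt_sem)
    return color_loss + sem_loss
```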
We design a novel network called _HandNeRF_ that predicts a semantic neural radiance field from a single RGB image, as shown in Fig. 3. It is composed of ResNet-18 [37] for feature extraction, sparse 3D convolution layers [38] for volume feature encoding, and linear layers for the neural field estimation. During training, the estimated semantic neural radiance field is supervised by rendering it to the sparse views.
**Input Features:** We deproject a 2D image feature extracted from the image to the points in the 3D volume \(\mathcal{V}\) to compose the 3D hand feature \(\mathbf{f}_{h}\) and the 2D object feature \(\mathbf{f}_{o}\) in Equation (4). The 3D hand feature is made of three components: \(\mathbf{f}_{h}=\begin{bmatrix}\mathbf{h}^{\mathsf{T}}&\phi(\overline{\mathbf{m}}_{i})^{\mathsf{T}}&\psi(i)^{\mathsf{T}}\end{bmatrix}^{\mathsf{T}}\), where \(\mathbf{h}\) encodes the visual context of hand-object interaction and is obtained from the 2D image feature at the projection of \(\overline{\mathbf{m}}_{i}\), \(\phi(\overline{\mathbf{m}}_{i})\) is a positional encoding of the centroid's coordinates, and \(\psi(i)\) is a positional encoding of the face index. \(\psi(i)\) semantically differentiates a 3D hand point \(\mathbf{x}_{h}\) from other points that may be empty, belong to a different hand face, or lie on the object. The 2D object feature is designed in a similar fashion: \(\mathbf{f}_{o}=\begin{bmatrix}\mathbf{o}^{\mathsf{T}}&\phi(\mathbf{x}_{o})^{\mathsf{T}}&\mathbf{e}^{\mathsf{T}}\end{bmatrix}^{\mathsf{T}}\), where \(\mathbf{o}\) is the 2D image feature at the projection of \(\mathbf{x}_{o}\), and \(\mathbf{e}\) is a constant value for all \(\mathbf{x}_{o}\), where \(\mathbf{x}_{o}\in\{\mathbf{x}\mid\Pi\mathbf{x}\in\mathcal{O}\text{ and }\mathbf{x}\notin\{\overline{\mathbf{m}}_{i}\}\}\).
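The composition of \(\mathbf{f}_{h}\) can be sketched as follows; the number of frequency bands and the normalization of the face index are illustrative assumptions, not the exact configuration.

```python
import math
import torch

def positional_encoding(x, num_freqs=6):
    """Sin/cos encoding applied elementwise; x has shape (N, d)."""
    freqs = (2.0 ** torch.arange(num_freqs, dtype=torch.float32)) * math.pi  # (F,)
    ang = x[..., None] * freqs                                               # (N, d, F)
    return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1).flatten(1)

def hand_point_feature(h, centroid_xyz, face_idx, num_faces=1538):
    """f_h = [h ; phi(centroid) ; psi(face index)] for each hand face centroid.
    h: (N, C) image feature sampled at the centroid projections."""
    phi = positional_encoding(centroid_xyz)                            # (N, 3*2F)
    psi = positional_encoding(face_idx.float()[:, None] / num_faces)   # (N, 2F)
    return torch.cat([h, phi, psi], dim=-1)
```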
**3D CNN Design:** We correlate the 3D hand feature \(\mathbf{f}_{h}\) and the 2D object feature \(\mathbf{f}_{o}\) with a sparse 3D CNN [38] that takes the feature volume \(\mathcal{V}\) as input, to learn the interaction feature. \(\mathcal{V}\) rasterizes 3D point coordinates in the neural radiance field with a voxel size of 5mm\(\times\)5mm\(\times\)5mm. Before the rasterization, the 3D coordinates of object points are perturbed by random Gaussian noise during training for augmentation.
Fig. 2: We visualize object reconstruction with the hand estimation from HandOccNet [5]. Using explicit hand-object interaction features, HandNeRF generates more accurate reconstruction.
The sparse 3D CNN produces multi-scale feature volumes, which conceptually add up to the interaction feature volume \(\mathcal{F}\) by concatenation along the feature channel dimension. In practice, we keep the feature volumes separated and extract the interaction feature \(\mathcal{F}|_{\mathbf{x}}\) of a query point \(\mathbf{x}\) per volume with tri-linear interpolation.
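A minimal sketch of this per-volume query with trilinear interpolation is shown below (PyTorch `grid_sample`; the coordinate normalization and volume layout are illustrative assumptions).

```python
import torch
import torch.nn.functional as F

def query_interaction_feature(volumes, query_xyz, grid_min, grid_size):
    """Interpolate every multi-scale feature volume at the query points and
    concatenate the results, giving F|_x for each query.
    volumes: list of (1, C_k, D, H, W) tensors; query_xyz: (N, 3) world coords."""
    norm = 2.0 * (query_xyz - grid_min) / grid_size - 1.0          # map to [-1, 1]
    grid = norm[None, :, None, None, :]                            # (1, N, 1, 1, 3)
    feats = [F.grid_sample(v, grid, mode='bilinear', align_corners=True)
             .reshape(v.shape[1], -1).t() for v in volumes]        # each (N, C_k)
    return torch.cat(feats, dim=-1)
```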
## IV Experiments
In this section, we first validate the design of the HandNeRF method by conducting detailed ablation studies. Then, we evaluate against the state-of-the-art baselines [14, 15, 16], which are adapted to be trained on sparse view images without an object template and to be tested on a single image. We adapted IHOI [15] to training with sparse view images, instead of the template-based object annotation used in the original paper, and named it IHOINeRF. For IHOINeRF, training with semantic labels fails to converge, so reconstruction accuracy cannot be measured separately for the hand and object.
**Metrics:** To assess the rendering quality, we use four metrics computed against the ground truth images: peak signal-to-noise ratio (PSNR), semantic segmentation intersection over union (IoU), structural similarity index (SSIM), and LPIPS [39]. For the 3D reconstruction accuracy, we compare 3D distances with the ground truth by converting the reconstructed neural radiance field to a 3D mesh using the Marching Cubes algorithm [40]. F-scores at 5mm and 10mm thresholds, and Chamfer distance (CD) in millimeters are used. We evaluate the hand and object separately using 3D segmentation from the predicted semantics.
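A minimal sketch of the 3D metrics on point sets sampled from the predicted and ground-truth meshes is given below (NumPy; brute-force nearest neighbours, which is sufficient for a few thousand sampled points).

```python
import numpy as np

def chamfer_and_fscore(pred_pts, gt_pts, tau=0.005):
    """Symmetric Chamfer distance and F-score at threshold tau (5mm if the
    points are in meters); pred_pts: (N, 3), gt_pts: (M, 3)."""
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    d_pred_to_gt = d.min(axis=1)                 # accuracy distances
    d_gt_to_pred = d.min(axis=0)                 # completeness distances
    chamfer = d_pred_to_gt.mean() + d_gt_to_pred.mean()
    precision = (d_pred_to_gt < tau).mean()
    recall = (d_gt_to_pred < tau).mean()
    fscore = 2 * precision * recall / max(precision + recall, 1e-10)
    return chamfer, fscore
```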
**Datasets:** We use the DexYCB [9] and HO-3D v3 [13] datasets for comparison. In DexYCB, a hand performing object handover is captured from 8 views, and 5 sequences per object are recorded, where each sequence shows a distinct grasp pattern. Per object, we keep 4 sequences for training and 1 sequence for testing to validate generalization to novel hand grasps. In HO-3D v3, an object grasped in a hand is captured from 5 views and 1 sequence per object is recorded, where the grasping hand pose changes over time during the sequence. For every object, we split the data into training and testing sets such that the testing set has significantly different grasping hand poses than those in the training set.
### _Ablation Study_
In Table I, we summarize our ablation study to measure the impact of our design choices. To focus on evaluating generalization to novel grasp configurations, we train each method per object and average the metrics in the table. We use 4 objects from the DexYCB dataset with distinct shapes: 'Cracker Box', 'Banana', 'Power Drill', and 'Coffee Can'. At inference time, all methods use the same 3D hand mesh and 2D object segmentation provided in DexYCB.
**Effect of explicit interaction encoding:** Our main hypothesis is that explicitly learning the correlation between hand and object geometry can regularize the 3D reconstruction of the grasped object given the 3D hand shape. We compare two methods to validate this hypothesis: (M2: \(\mathbf{f}_{\mathrm{2D}}\)) PixelNeRF [14] adapted to a single input image, which uses the 2D image feature without the 3D hand feature, and (M3: \(\mathbf{f}_{h},\mathbf{f}_{\mathrm{o}}\)) a method that uses the 3D hand and 2D object feature without the 2D image feature. As shown in Table I, by exploiting the 3D hand feature, M3 successfully imposes constraints on the relative geometries of hand and object, and provides
\begin{table}
\begin{tabular}{l|c|c c c|c c c|c c c|c c c} \hline Method: features & Architect. & PSNR\(\uparrow\) & IoU\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & F\({}_{\mathrm{w}}\)-5 \(\uparrow\) & F\({}_{\mathrm{w}}\)-10 \(\uparrow\) & CD\({}_{\mathrm{w}}\)\(\downarrow\) & F\({}_{\mathrm{o}}\)-5 \(\uparrow\) & F\({}_{\mathrm{o}}\)-10 \(\uparrow\) & CD\({}_{\mathrm{o}}\)\(\downarrow\) & F\({}_{\mathrm{h}}\)-5 \(\uparrow\) & F\({}_{\mathrm{h}}\)-10 \(\uparrow\) & CD\({}_{\mathrm{o}}\)\(\downarrow\) \\ \hline M1: \(\mathbf{f}_{h},\mathbf{f}_{h}\)+\(\mathbf{f}_{\mathrm{2D}}\) & Transf. & 19.40 & 0.61 & 0.63 & 0.30 & 0.36 & 0.64 & 0.85 & 0.32 & 0.56 & 1.57 & 0.27 & 0.55 & 1.42 \\ M2: \(\mathbf{f}_{\mathrm{2D}}\) & & & 19.09 & 0.61 & 0.60 & 0.31 & 0.30 & 0.56 & 1.17 & 0.28 & 0.47 & 2.90 & 0.19 & 0.49 & 1.56 \\ M3: \(\mathbf{f}_{h},\mathbf{f}_{\mathrm{2D}}\) & & & 20.11 & 0.77 & 0.65 & 0.27 & 0.47 & 0.78 & 0.30 & 0.41 & 0.68 & 0.62 & **0.54** & **0.92** & 0.12 \\ M4: \(\mathbf{f}_{h}\)+\(\mathbf{f}_{\mathrm{2D}}\) & & & 20.31 & 0.72 & 0.68 & 0.26 & 0.40 & 0.68 & 0.79 & 0.30 & 0.53 & 1.73 & **0.54** & **0.92** & **0.09** \\
**M5 (ours): \(\mathbf{f}_{h},\mathbf{f}_{\mathrm{o}}\)+\(\mathbf{f}_{\mathrm{2D}}\)** & & & **21.66** & **0.79** & **0.70** & **0.24** & **0.47** & **0.79** & **0.27** & **0.43** & **0.70** & **0.56** & 0.53 & 0.91 & 0.10 \\ \hline \end{tabular}
\end{table} TABLE I: Ablation study. Our model M5 provides the highest rendering quality, F-scores, and lowest Chamfer Distances (CD) for novel hand-object interaction scenes’ object geometry. Subscripts \({}_{w}\), \({}_{o}\), and \({}_{h}\) indicate whole, object, and hand evaluation respectively.
Fig. 3: HandNeRF takes a single RGB image and predicts the volume density, color radiance, and semantic label of each query point in a neural field. Different from the comparable works of Ye et al. [15] and Choi et al. [16] that implicitly learn the interaction between hand and object, it explicitly encodes the correlation between hand and object features in 3D space, which provides more accurate 3D reconstruction and novel view synthesis.
better generalization to novel hand-object interactions. The results support that our learned interaction feature \(\mathcal{F}\), which explicitly encodes hand-object correlations, is effective for inferring 3D object geometry without requiring its 3D ground truth during training. The benefit of the 3D feature is more pronounced for our method (M5: \(\mathbf{f}_{h},\mathbf{f}_{o}+\mathbf{f}_{2\mathrm{D}}\)) that leverages both the 2D image feature and the 3D feature.
**Significance of 2D object feature:** HandNeRF differs from the existing approaches [15, 16] by using an explicit representation of an object with respect to a 3D hand. To verify the effectiveness of the 2D object feature, we compare two methods: (M4: \(\mathbf{f}_{h}+\mathbf{f}_{2\mathrm{D}}\)) a method that implicitly learns the hand-object interactions, similar to MonoNHR [16], and (M5: \(\mathbf{f}_{h},\mathbf{f}_{o}+\mathbf{f}_{2\mathrm{D}}\)) our method that explicitly models the
\begin{table}
\begin{tabular}{l|c|c c c c|c c c|c c c|c c c} \hline Method & Dataset & PSNR\(\uparrow\) & IoU\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & \(\mathbf{F}_{\mathrm{w}}\)-\(5\uparrow\) & \(\mathbf{F}_{\mathrm{w}}\)-\(5\uparrow\) & \(\mathbf{F}_{\mathrm{w}}\)-\(10\uparrow\) & \(\mathbf{\mathrm{CD}}_{\mathrm{w}}\) & \(\downarrow\) & \(\mathbf{F}_{\mathrm{o}}\)-\(5\uparrow\) & \(\mathbf{F}_{\mathrm{o}}\)-\(10\uparrow\) & \(\mathbf{\mathrm{CD}}_{\mathrm{o}}\) & \(\downarrow\) & \(\mathbf{F}_{\mathrm{o}}\)-\(5\uparrow\) & \(\mathbf{F}_{\mathrm{h}}\)-\(10\uparrow\) & \(\mathbf{\mathrm{CD}}_{\mathrm{h}}\)\(\downarrow\) \\ \hline \hline PixelNeRF & & 19.09 & 0.61 & 0.60 & 0.31 & 0.30 & 0.56 & 1.17 & 0.28 & 0.47 & 2.90 & 0.19 & 0.49 & 1.56 \\ IHOINeRF & & 18.49 & - & 0.60 & 0.31 & 0.31 & 0.60 & 1.15 & \(=\) & \(=\) & \(=\) & \(=\) & \(=\) & \(=\) \\ IHOINeRF\(\uparrow\) & & 19.82 & - & 0.64 & 0.27 & 0.38 & 0.69 & 0.54 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ MonoNHR & DexYCB & 19.36 & 0.63 & 0.64 & 0.30 & 0.37 & 0.62 & 3.19 & 0.25 & 0.43 & 8.59 & 0.49 & 0.89 & 0.11 \\ MonoNHR\(\uparrow\) & & 19.66 & 0.68 & 0.66 & 0.29 & 0.40 & 0.64 & 3.05 & 0.26 & 0.45 & 8.58 & **0.55** & **0.92** & **0.08** \\ HandNeRF & & 21.19 & 0.75 & 0.68 & 0.25 & 0.44 & 0.77 & 0.31 & 0.42 & 0.68 & 0.59 & 0.46 & 0.88 & 0.12 \\ HandNeRF\(\uparrow\) & & **21.66** & **0.79** & **0.70** & **0.24** & **0.47** & **0.79** & **0.27** & **0.43** & **0.70** & **0.56** & 0.53 & 0.91 & 0.10 \\ \hline PixelNeRF & & 18.82 & 0.69 & 0.65 & 0.23 & 0.38 & 0.69 & 0.75 & 0.36 & 0.58 & 1.14 & 0.29 & 0.68 & 0.41 \\ IHOINeRF & & 18.55 & - & 0.65 & 0.23 & 0.28 & 0.56 & 1.15 & \(=\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ IHOINeRF\(\uparrow\) & & 19.40 & - & 0.68 & 0.21 & 0.41 & 0.73 & 0.65 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ MonoNHR & HO3D v3 & 16.98 & 0.66 & 0.60 & 0.21 & 0.28 & 0.53 & 2.09 & 0.19 & 0.37 & 3.49 & 0.33 & 0.65 & 0.64 \\ MonoNHR\(\uparrow\) & & 19.34 & 0.75 & 0.70 & 0.22 & 0.45 & 0.74 & 0.97 & 0.36 & 0.59 & 1.50 & 0.52 & 0.92 & **0.08** \\ HandNeRF & & 18.04 & 0.68 & 0.63 & 0.23 & 0.38 & 0.70 & 0.35 & 0.40 & 0.66 & 0.41 & 0.45 & 0.51 & 0.78 \\ HandNeRF\(\uparrow\) & & **20.54** & **0.82** & **0.74** & **0.18** & **0.51** & **0.83** & **0.19** & **0.47** & **0.74** & **0.31** & **0.54** & **0.94** & **0.08** \\ \hline \end{tabular}
\end{table} TABLE II: Comparison with state-of-the-art baselines on DexYCB and HO-3D v3. Subscripts \(w\), \(o\), and \(h\) indicate whole, object, and hand evaluation, respectively. \(\dagger\) indicates use of ground truth 3D hand meshes for inputs, otherwise HandOccNet [5]’s estimation is used. MPJPEs (mean per joint position error) of the estimation are 12mm and 34mm in DexYCB and HO3D v3, respectively.
Fig. 4: Qualitative results of novel view synthesis (image, depth, and semantic segmentation) and 3D mesh on DexYCB and HO3D v3. Ground truth hand meshes are used as input.
Fig. 5: Qualitative results of novel view synthesis (image, depth, and semantic segmentation) and 3D mesh on DexYCB [9] and HO3D v3 [41], given hand mesh estimation of HandOccNet [5]. The bottom results for scissors are using ground truth hand mesh for reference.
interactions through the 2D object feature. As shown in Table I, M4 tends to overfit to hand reconstruction and produces poor results for object reconstruction. This implies that without the explicitly defined 2D object feature and its correlation with the hand pose, a strong prior coming from the given hand pose information dominates the prediction while ignoring object information from an input image.
**Effect of 3D CNN:** Learning the hand-object interaction from Equation (2) is challenging due to the complex quadratic pairwise relationship, requiring a large amount of data. Instead, we approximate the quadratic relationship in 3D space using a 3D CNN in Equation (3). We compare against a method (M1) that directly learns all the pairwise relationships between hand and object points using a Transformer [43]. As shown in Table I, using the 3D CNN (M5) outperforms M1 in all metrics. Considering the larger gap between training and testing PSNR of M1 (e.g., M1: 4.55 vs. M5: 3.6), the result indicates that our method based on a 3D CNN is resilient to overfitting. Moreover, the model complexity of M1 is over 10 times that of ours, comparing the number of model parameters (M1: 27.1K vs. M5: 2.1K).
### _Comparison with State-of-the-art Methods_
We first evaluate the generalization to novel grasps on the objects seen during training. We assess the generalization to novel object shape by training on 15 DexYCB objects and testing on 4 unseen ones. Finally, we demonstrate the use of reconstruction for grasp planning for robotic handover.
**Generalization to novel grasps:** Table II and Fig. 4 present quantitative and qualitative evaluation on DexYCB and
\begin{table}
\begin{tabular}{l|l|c} \hline Input & Reconstruction & Grasp proposal success ratio\(\uparrow\) \\ \hline \multirow{3}{*}{RGB} & PixelNeRF & 0.46 \\ & HOINeRF & 0.42 \\ & MonoNHR & 0.36 \\ & HandNeRF & **0.63** \\ \hline RGBD & - & 0.26 \\ \hline GT mesh & - & 0.77 \\ \hline \end{tabular}
\end{table} TABLE IV: Downstream grasping: We compare Contact-GraspNet [42] grasp quality on HandNeRF versus baseline reconstructions for DexYCB handover scenes. HandOccNet hand estimation is used for the methods.
Fig. 8: Qualitative results of Contact-GraspNet [42]’s grasp proposals on reconstructed meshes of HandNeRF and the baselines. The ground truth MANO hand mesh, which is used as input and grasp collision filtering, is also visualized.
\begin{table}
\begin{tabular}{l|c c c c|c c c|c c c|c c c} \hline Method & PSNR\(\uparrow\) & IoU\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & F\(\text{-}5\uparrow\) F\(\text{-}10\uparrow\) & CD\(\text{-}\downarrow\) & F\(\text{-}5\uparrow\) F\(\text{-}10\uparrow\) & CD\(\text{-}\downarrow\) & F\(\text{-}5\uparrow\) F\(\text{-}10\uparrow\) & CD\(\text{-}\downarrow\) \\ \hline PixelNeRF & 18.85 & 0.58 & 0.58 & 0.34 & 0.21 & 0.47 & 1.09 & 0.22 & 0.44 & 1.31 & 0.10 & 0.31 & 2.09 \\ & HOINeRF & 17.89 & - & 0.58 & 0.34 & 0.21 & 0.45 & 571.55 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \hline & HOINeRF\(\uparrow\) & 19.94 & - & 0.64 & 0.30 & 0.43 & 0.67 & 1.03 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ MonoNHR & 17.14 & 0.45 & 0.53 & 0.37 & 0.27 & 0.51 & 1.70 & 0.17 & 0.33 & 54.89 & 0.40 & 0.73 & 0.73 \\ MonoNHR\(\uparrow\) & 19.27 & 0.67 & 0.63 & 0.31 & 0.47 & 0.71 & 1.22 & 0.41 & 0.62 & 1.77 & **0.54** & **0.82** & **0.50** \\ HandNeRF & 18.85 & 0.56 & 0.55 & 0.33 & 0.30 & 0.61 & 0.62 & 0.25 & 0.49 & 0.85 & 0.36 & 0.69 & 0.88 \\ HandNeRF\(\uparrow\) & **20.83** & **0.72** & **0.66** & **0.27** & **0.51** & **0.75** & **0.72** & **0.46** & **0.68** & **1.11** & 0.51 & 0.80 & 0.52 \\ \hline \end{tabular}
\end{table} TABLE III: Comparison with state-of-the-art baselines on unseen objects of DexYCB [9]. Subscripts \(w\), \(o\), and \(h\) indicate whole, object, and hand evaluation, respectively. The mesh estimate inputs are from HandOccNet [5]. \(\uparrow\) indicates using ground truth 3D hand meshes.
Fig. 6: HandNeRF generalizes to significantly different grasp configurations at the inference time on in-house data. The reconstructed object meshes are visualized with the input hand mesh.
Fig. 7: Qualitative results of generalization to novel object shapes. The reconstructed 3D object mesh is visualized along with the ground truth MANO hand mesh, which is used as input.
HO3D v3. Given the ground truth grasping hand shape, HandNeRF shows the highest rendering quality scores and the highest F-scores, and the lowest CD (mm) for the whole and object geometry of the novel hand-object interaction scenes. For example, HandNeRF achieves approximately 1.5 times higher F-scores and significantly lower CD for object reconstruction than those of PixelNeRF [14] and MonoNHR [16]. This demonstrates HandNeRF's effective hand-object interaction priors compared to the baselines.
We also demonstrate HandNeRF's robustness to erroneous input hand meshes in Fig. 5, which is quantitatively verified in Table II. When using the hand pose estimation from HandOccNet [5], instead of the ground truth, small pose errors in the input hand mesh significantly impact IHOINeRF and MonoNHR outputs. These methods fail to recover half the scissors, implying limitations of implicit interaction encoding. In contrast, given inaccurate 3D hand input, HandNeRF retains reasonable reconstruction quality and renders more accurate novel views far from the input.
We further qualitatively demonstrate HandNeRF's generalization capability on in-house data in Fig. 6. Only 7 RGB cameras are used for data collection, and the annotation of 3D hand meshes and 2D object segmentation is fully automated. Without ground truth 3D object geometry during training, HandNeRF exhibits good generalization to significantly different grasping poses and reasonably reconstructs the grasped object from a single RGB image, leveraging the learned hand-object interaction prior.
**Generalization to novel object shape:** As shown in Table III and Fig. 7, HandNeRF consistently outperforms baselines on most metrics, especially reconstructing robust object meshes despite substantial shape dissimilarity between training and test sets. Due to depth ambiguity in a single 2D RGB image, baselines fail to recover overall object shape. For instance, the 'Banana' is only partially reconstructed as its length is unclear from the input. These results demonstrate HandNeRF's superior generalization, likely due to the explicit hand and object geometry encoding effectively regularizing plausible novel object geometry.
**Application to grasp planning for handover:** We evaluate grasp proposals from Contact-GraspNet [42] on the reconstructed meshes of HandNeRF and the baselines [14, 15, 16], the RGBD point cloud of the input image, and the ground truth meshes. Grasps colliding with the hand mesh are filtered out before evaluation. We measure the ratio of successful grasp proposals, where a grasp is counted as successful if it envelops a part of the ground truth object mesh without colliding with it. Unseen scenes of two DexYCB objects ('Banana', 'Power Drill') are used, as Contact-GraspNet performed reliably on their ground truth meshes.
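The success criterion can be summarized with the following sketch, where `envelops` and `collides` stand for caller-supplied geometric tests (hypothetical helpers, e.g., backed by a collision-checking library); it illustrates the protocol rather than our evaluation code.

```python
def grasp_success_ratio(grasps, object_mesh, hand_mesh, envelops, collides):
    """Fraction of grasp proposals that close around part of the ground truth
    object mesh without hitting it, after removing hand-colliding grasps."""
    valid = [g for g in grasps if not collides(g, hand_mesh)]   # pre-filter hand collisions
    if not valid:
        return 0.0
    ok = [g for g in valid
          if envelops(g, object_mesh) and not collides(g, object_mesh)]
    return len(ok) / len(valid)
```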
HandNeRF's object reconstruction enables a 1.5 times higher grasp proposal success ratio compared to baselines, as shown in Table IV. Without depth information, HandNeRF achieves a 63% grasp success ratio, approaching the 77% achieved by Contact-GraspNet using ground truth meshes and far exceeding the 26% from the input image pointcloud. Fig. 8 visually demonstrates how HandNeRF's more accurate reconstruction increases successful grasp proposals. Although the surface is locally coarse, HandNeRF's reconstructed global geometry, including the unobserved regions, enables more accurate grasp planning.
## V Limitation and Future work
The main practical limitation of our method is that it depends strongly on the hand mesh estimation of off-the-shelf methods. Despite the advances of recent methods [5, 6, 23], when the hand is severely occluded by the object, the estimated mesh is not accurate enough for inferring further correlation between the hand and object geometry. In such cases, the wrongly estimated hand information can even hurt the object reconstruction. In the future, we will explore integrating hand mesh estimation into our system along with uncertainty modeling to adjust the hand mesh's impact on the final output.
Despite outperforming the baselines, our synthesized RGB images are still blurry when rendered from views significantly different from the input view. Inspired by recent progress on 3D scene generation via language grounding [44], another avenue for future research will be to leverage self-supervised perceptual supervision, such as CLIP [45] feature consistency and object coherency.
## VI Conclusion
This work investigates representation learning for hand-object interactions from a single RGB image. We propose HandNeRF, a method that predicts the semantic neural radiance field of the interaction scenes. The key novelty is the utilization of hand shape to constrain the relative 3D configuration of hands and objects, encoding their correlation explicitly. Unlike existing works, HandNeRF does not require object templates for training and testing, avoiding expensive 3D labeling. Instead, it is supervised with sparse view RGB images, where conventional multi-view reconstruction methods, such as SfM (Structure from Motion), do not apply. HandNeRF outperforms state-of-the-art baselines in rendering and reconstruction on real-world data. Further, we demonstrate improved performance on downstream tasks resulting from HandNeRF's more accurate object meshes, both quantitatively and qualitatively.
|
2302.14221 | Gröbner-Shirshov bases and linear bases for free multi-operated
algebras over algebras with applications to differential Rota-Baxter algebras
and integro-differential algebras | Much recent attention has been attracted to the operated algebra, since it
unifies various notions such as the differential algebra and the Rota-Baxter
algebra. An $\Omega$-operated algebra is an (associative) algebra equipped
with a set $\Omega$ of linear operators which might satisfy certain operator
identities such as the Leibniz rule. A free $\Omega$-operated algebra $B$ can
be generated on an algebra $A$ similar to a free algebra generated on a set. If
$A$ has a Gr\"{o}bner-Shirshov basis $G$ and if the linear operators $\Omega$
satisfy a set $\Phi$ of operator identities, it is natural to ask when the
union $G\cup \Phi$ is a Gr\"{o}bner-Shirshov basis of $B$. A previous work
answers this question affirmatively under a mild condition, and thereby obtains
a canonical linear basis of $B$.
In this paper, we answer this question in the general case of multiple linear
operators. As applications we get operated Gr\"{o}bner-Shirshov bases for free
differential Rota-Baxter algebras and free integro-differential algebras over
algebras as well as their linear bases. One of the key technical difficulties
is to introduce new monomial orders for the case of two operators, which might
be of independent interest. | Zuan Liu, Zihao Qi, Yufei Qin, Guodong Zhou | 2023-02-28T00:54:36Z | http://arxiv.org/abs/2302.14221v3 | Grobner-Shirshov bases and linear bases for free multi-operated algebras over algebras with applications to differential rota-Baxter algebras and integro-differential algebras
###### Abstract.
Quite much recent studies has been attracted to the operated algebra since it unifies various notions such as the differential algebra and the Rota-Baxter algebra. An \(\Omega\)-operated algebra is a an (associative) algebra equipped with a set \(\Omega\) of linear operators which might satisfy certain operator identities such as the Leibniz rule. A free \(\Omega\)-operated algebra \(B\) can be generated on an algebra \(A\) similar to a free algebra generated on a set. If \(A\) has a Grobner-Shirshov basis \(G\) and if the linear operators \(\Omega\) satisfy a set \(\Phi\) of operator identities, it is natural to ask when the union \(G\cup\Phi\) is a Grobner-Shirshov basis of \(B\). A previous work answers this question affirmatively under a mild condition, and thereby obtains a canonical linear basis of \(B\).
In this paper, we answer this question in the general case of multiple linear operators. As applications we get operated Grobner-Shirshov bases for free differential Rota-Baxter algebras and free integro-differential algebras over algebras as well as their linear bases. One of the key technical difficulties is to introduce new monomial orders for the case of two operators, which might be of independent interest.
2010 Mathematics Subject Classification: 16Z10 03C05 08B20 12H05 16S10 17B38
###### Contents
* 1 Introduction
* 1.1 Operated GS basis theory: from a single operator to multiple operators
* 4.2 Case of nonunital algebras with \(\lambda=0\)
* 4.3 Case of unital algebras
* 4.4 Differential Rota-Baxter algebras vs integro-differential algebras
## Introduction
This paper extends the results of [17] to algebras endowed with several operators, with applications to differential Rota-Baxter algebras and integro-differential algebras.
### Operated GS basis theory: from a single operator to multiple operators
Since its introduction by Shirshov [20] and Buchberger [4] in the sixties of the last century, Grobner-Shirshov (=GS) basis theory has become one of the main tools of computational algebra; see for instance [10, 1, 3]. In order to deal with algebras endowed with operators, Guo and his coauthors introduced a GS basis theory in a series of papers [11, 23, 15, 6] (see also [2]), with the goal of attacking Rota's program [19] to classify "interesting" operators on algebras. Guo et al. considered operators satisfying certain polynomial identities, hence called operated polynomial identities (OPIs) [11, 23, 15, 6]. Via GS basis theory and the somewhat equivalent theory of rewriting systems, they could define when OPIs are GS. They are mainly interested in two classes of OPIs: differential type OPIs and Rota-Baxter type OPIs, which are carefully studied in [15, 23, 6]. For the state of the art, we refer the reader to the survey paper [8]; for recent developments, see [22, 12, 17, 21].
In these papers [11, 23, 15, 6], the operated GS theory and hence Rota's classification program have been carried out only for algebras endowed with a single operator. It would be very interesting to carry out Rota's program further for the general case of multiple linear operators.
The paper [2] contains a first step of this program by developing the GS basis theory in this generalised setup. We will review and update the GS basis theory in the multi-operated setup in Section 2.
Another direction is to generalise from operated algebras over a base field to operated algebras over a base ring. While previous papers [17, 18] considered this aspect for the single operator case, this paper deals with it for the case of multiple linear operators. In particular, some new monomial orders for the two operator case will be constructed which enable us to study operated GS bases for free operated algebras generated by algebras, while it seems that the monomial orders that appeared in previous papers cannot be applied directly when the base ring is no longer a field.
### Free operated algebras over algebras
Recently, there has been a need to develop free operated algebras satisfying some OPIs over a fixed algebra and to construct GS bases and linear bases for these free algebras as long as a GS basis is known for the given algebra. Ebrahimi-Fard and Guo [5] used rooted trees and forests to give explicit constructions of free noncommutative Rota-Baxter algebras on modules and sets; Lei and Guo [16] constructed the linear basis of free Nijenhuis algebras over associative algebras; Guo and Li [12] gave a linear basis of the free differential algebra over associative algebras by introducing the notion of differential GS bases.
In a previous paper [17], the authors considered a question which can be roughly stated as follows:
**Question 0.1**.: Given a (unital or nonunital) algebra \(A\) with a GS basis \(G\) and a set \(\Phi\) of OPIs, assume that these OPIs \(\Phi\) are GS in the sense of [2, 15, 23, 6]. Let \(B\) be the free operated algebra satisfying \(\Phi\) over \(A\). When will \(\Phi\cup G\) be a GS basis for \(B\)?
They answer this question in the affirmative under a mild condition in [17, Theorem 5.9]. When this condition is satisfied, \(\Phi\cup G\) is a GS basis for \(B\) and as a consequence, we also get a linear basis of \(B\). This result has been applied to all Rota-Baxter type OPIs, a class of differential type OPIs, averaging OPIs and Reynolds OPI in [17]. It was also applied to differential type OPIs by introducing some new monomial orders [18].
In this paper, we consider a similar question for multi-operated algebras.
Let \(\Omega\) be a nonempty set which will be the index set of operators. Algebras endowed with operators indexed by \(\Omega\) will be called \(\Omega\)-algebras. OPIs can be extended to the multi-operated setup and one can introduce the notion of \(\Omega\)-GS for OPIs.
**Question 0.2**.: Let \(\Phi\) be a set of OPIs of a set of operators indexed by \(\Omega\). Let \(A\) be a (unital) algebra together with a GS basis \(G\). Assume that these OPIs \(\Phi\) are GS in the sense of Section 2. Let \(B\) be the free \(\Omega\)-algebra over \(A\) such that the operators satisfy \(\Phi\). When will \(\Phi\cup G\) be an \(\Omega\)-GS basis for \(B\)?
We extend the main result of [17] to multi-operated cases; see Theorem 2.12 for unital algebras and Theorem 2.13 for nonunital algebras.
### Differential Rota-Baxter algebras and integro-differential algebras
The main motivation of this paper comes, in fact, from differential Rota-Baxter algebras and integro-differential algebras.
Differential Rota-Baxter algebras were introduced by Guo and Keigher [13] which reflect the relation between the differential operator and the integral operator as in the First Fundamental Theorem of Calculus. Free differential Rota-Baxter algebras were constructed by using various tools including angularly decorated rooted forests and GS basis theory [13, 2].
Integro-differential algebras (of zero weight) were defined for the algebraic study of boundary problems for linear systems of linear ordinary differential equations. Guo, Regensburger and Rosenkranz [14] introduced integro-differential algebras with weight. Free objects and their linear bases were constructed by using GS basis theory [14, 9, 7].
The main goal of this paper is to study free differential Rota-Baxter algebras and free integro-differential algebras over algebras from the viewpoint of operated GS basis theory. In particular, when the base algebra is reduced to \(\mathbf{k}\), our results also give GS bases and linear bases for free differential Rota-Baxter algebras and free integro-differential algebras.
However, the original monomial orders used in [2, 14, 9, 7] do not satisfy the hypothesis in Theorems 2.12 and 2.13 for free multi-operated algebras over algebras, and we have to introduce a new monomial order \(\leq_{\text{PD}}\) (resp. \(\leq_{\text{uPD}}\)) to overcome the problem; see Section 1.3.
In contrast to the use of different monomial orders when dealing with free differential Rota-Baxter algebras and free integro-differential algebras in [2] and [7] respectively, we will demonstrate that our monomial order \(\leq_{\text{PD}}\) can be applied to both types of algebras simultaneously, as we shall see in Sections 3 and 4. Moreover, since the case of unital algebras was not discussed in [2], this aspect is addressed in Subsection 3.3 by using our monomial order \(\leq_{\text{uPD}}\).
### Outline of the paper
This paper is organized as follows.
The first section contains a reminder on free objects in the multi-operated setting and on the construction of free \(\Omega\)-semigroups and related structures, and introduces some new monomial orders for the case of two operators, which will be the key technical tool of this paper.
In the second section, we recall the theory of GS bases for the multi-operated setting. After introducing OPIs, GS property for OPIs and \(\Omega\)-GS bases for multi-operated algebras are defined; after giving some facts about free multi-operated \(\Phi\)-algebras on algebras, answers to Question 0.2 are presented.
In the third section, multi-operated GS bases and linear bases for free differential Rota-Baxter algebras on algebras are studied and the fourth section contains our investigation for free integro-differential algebras on algebras.
**Notation:** Throughout this paper, \(\mathbf{k}\) denotes a base field. All the vector spaces and algebras are over \(\mathbf{k}\).
## 1. New monomial orders on free multi-operated semigroups and monoids
In this section, we recall free objects in the multi-operated setting and the construction of free \(\Omega\)-semigroups and related structures, and define two new monomial orders \(\leq_{\text{PD}}\) and \(\leq_{\text{uPD}}\) on free multi-operated semigroups and monoids. The main results of this paper will depend heavily on these new monomial orders.
For a set \(Z\), denote by \(\mathbf{k}Z\) (resp. \(\mathcal{S}(Z)\), \(\mathcal{M}(Z)\)) the free \(\mathbf{k}\)-vector space (resp. free semigroup, free monoid) generated by \(Z\). Denote the category of sets (resp. semigroups, monoids) by \(\mathfrak{Set}\) (resp. \(\mathfrak{Sem}\), \(\mathfrak{Mon}\)). Denote the categories of \(\mathbf{k}\)-algebras and unital \(\mathbf{k}\)-algebras by \(\mathfrak{Alg}\) and \(\mathfrak{uAlg}\) respectively.
Throughout this section, let \(\Omega\) be a nonempty set which will be the index set of operators.
### Free objects in the multi-operated setup
**Definition 1.1**.: An operated set with an operator index set \(\Omega\) or simply an \(\Omega\)-set is a set \(S\) endowed with a family of maps \(P_{\omega}:S\to S\) indexed by \(\omega\in\Omega\). The morphisms between \(\Omega\)-sets can be defined in the obvious way. Denote the category of \(\Omega\)-sets by \(\Omega\)-\(\mathfrak{S}\mathfrak{e}\mathfrak{t}\).
Similarly, we can define \(\Omega\)-semigroups and \(\Omega\)-monoids. Their categories are denoted by \(\Omega\)-\(\mathfrak{Sem}\) and \(\Omega\)-\(\mathfrak{Mon}\) respectively.
\(\Omega\)-vector spaces and nonunital or unital \(\Omega\)-algebras can be defined in a similar way, by asking, moreover, that all the operators are \(\mathbf{k}\)-linear maps. Denote the categories of \(\Omega\)-vector spaces (resp. nonunital \(\Omega\)-algebras, unital \(\Omega\)-algebras) by \(\Omega\)-\(\mathfrak{Vect}\) (resp. \(\Omega\)-\(\mathfrak{Alg}\), \(\Omega\)-\(\mathfrak{uAlg}\)) with the obvious morphisms.
As in [17], there exists the following diagram of functors:
In this diagram, all functors from right to left, from below to above and from southwest to northeast are the obvious forgetful functors. The other functors are free object functors which are left adjoint to the forgetful functors.
Our notations for free object functors are analogous to those in [17]. For instance, \(\mathcal{F}_{\mathfrak{Alg}}^{\Omega\text{-}\mathfrak{Alg}}\) denotes the free object functor from the category of algebras to that of nonunital \(\Omega\)-algebras.
We could give similar constructions of these free object functors as in Sections 1-3 of [17]. However, as we don't need the details, we will not repeat them. The curious readers could consult [17] and extend the constructions in [17] without essential difficulties.
### Free multi-operated semigroups and monoids
Now we explain the construction of the free \(\Omega\)-semigroup generated by a set \(Z\).
For \(\omega\in\Omega\), denote by \([Z]_{\omega}\) the set of all formal elements \([z]_{\omega}\), \(z\in Z\) and put \([Z]_{\Omega}=\sqcup_{\omega\in\Omega}\,[Z]_{\omega}\). The inclusion into the first component \(Z\hookrightarrow Z\sqcup[Z]_{\Omega}\) induces an injective semigroup homomorphism
\[i_{0,1}:\,\mathfrak{S}_{\Omega,0}(Z):=\mathcal{S}(Z)\hookrightarrow\mathfrak{ S}_{\Omega,1}(Z):=\mathcal{S}(Z\sqcup[Z]_{\Omega}).\]
For \(n\geq 2\), assume that we have constructed \(\mathfrak{S}_{\Omega,n-2}(Z)\) and \(\mathfrak{S}_{\Omega,n-1}(Z)=\mathcal{S}(Z\sqcup[\mathfrak{S}_{\Omega,n-2}(Z )]_{\Omega})\) endowed with an injective homomorphism of semigroups \(i_{n-2,n-1}:\,\mathfrak{S}_{\Omega,n-2}(Z)\hookrightarrow\mathfrak{S}_{ \Omega,n-1}(Z).\) We define the semigroup
\[\mathfrak{S}_{\Omega,n}(Z):=\mathcal{S}(Z\sqcup\lfloor\mathfrak{S}_{\Omega,n- 1}(Z)\rfloor_{\Omega})\]
and the natural injection
\[\operatorname{Id}_{Z}\sqcup\lfloor i_{n-2,n-1}\rfloor_{\Omega}:Z\sqcup \lfloor\mathfrak{S}_{\Omega,n-2}(Z)\rfloor_{\Omega}\hookrightarrow Z\sqcup \lfloor\mathfrak{S}_{\Omega,n-1}(Z)\rfloor_{\Omega}\]
induces an injective semigroup homomorphism
\[i_{n-1,n}:\,\mathfrak{S}_{\Omega,n-1}(Z)=\mathcal{S}(Z\sqcup\lfloor\mathfrak{ S}_{\Omega,n-2}(Z)\rfloor_{\Omega})\hookrightarrow\mathfrak{S}_{\Omega,n}(Z)= \mathcal{S}(Z\sqcup\lfloor\mathfrak{S}_{\Omega,n-1}(Z)\rfloor_{\Omega}).\]
Define \(\mathfrak{S}_{\Omega}(Z)=\varinjlim\mathfrak{S}_{\Omega,n}(Z)\); the maps sending \(u\in\mathfrak{S}_{\Omega,n}(Z)\) to \(\lfloor u\rfloor_{\omega}\in\mathfrak{S}_{\Omega,n+1}(Z)\) induce a family of operators \(\mathcal{P}_{\omega},\omega\in\Omega\), on \(\mathfrak{S}_{\Omega}(Z)\).
The construction of the free \(\Omega\)-monoid \(\mathfrak{M}_{\Omega}(Z)\) over a set \(Z\) is similar, by just replacing \(\mathcal{S}(Z)\) by \(\mathcal{M}(Z)\) everywhere in the construction.
**Remark 1.2**.: We will use another construction of \(\mathfrak{M}_{\Omega}(Z)\). In fact, add some symbols \(\lfloor 1\rfloor_{\Omega}=\{\lfloor 1\rfloor_{\omega},\omega\in\Omega\}\) to \(Z\) and form \(\mathfrak{S}_{\Omega}(Z\sqcup\lfloor 1\rfloor_{\Omega})\), then \(\mathfrak{M}_{\Omega}(Z)\) can be obtained from \(\mathfrak{S}_{\Omega}(Z\sqcup\lfloor 1\rfloor_{\Omega})\) by just adding the empty word \(1\).
It is easy to see that \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\)(resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)) is the free nonunital (resp. unital) \(\Omega\)-algebra generated by \(Z\).
### Monomial orders
In this subsection, we introduce some new monomial orders on free \(\Omega\)-semigroups and free \(\Omega\)-monoids. We only consider the case of two operators, say \(\Omega=\{P,D\}\) as the main examples in mind are differential Rota-Baxter algebras and integro-differential algebras following the convention from [7].
We first recall the definitions of well orders and monomial orders.
**Definition 1.3**.: Let \(Z\) be a nonempty set.
1. A preorder \(\leq\) is a binary relation on \(Z\) that is reflexive and transitive, that is, for all \(x,y,z\in Z\), we have 1. \(x\leq x\); and 2. if \(x\leq y,y\leq z\), then \(x\leq z\). In the presence of a preorder \(\leq\), we denote \(x=_{Z}y\) if \(x\leq y\) and \(x\geq y\); if \(x\leq y\) but \(x\neq y\), we write \(x<y\) or \(y>x\).
2. A pre-linear order \(\leq\) on \(Z\) is a preorder \(\leq\) such that either \(x\leq y\) or \(x\geq y\) for all \(x,y\in Z\).
3. A linear order or a total order \(\leq\) on \(Z\) is a pre-linear order \(\leq\) such that \(\leq\) is antisymmetric, that is, \(x\leq y\) and \(y\leq x\) imply \(x=y\).
4. A preorder \(\leq\) on \(Z\) is said to satisfy the descending chain condition, if for each descending chain \(x_{1}\geq x_{2}\geq x_{3}\geq\cdots\), there exists \(N\geq 1\) such that \(x_{N}=_{Z}x_{N+1}=_{Z}\cdots\). A linear order satisfying the descending chain condition is called a well order.
Before giving the definition of monomial orders, we need to introduce the following notions generalising the case of one operator.
**Definition 1.4**.: Let \(Z\) be a set and \(\star\) a symbol not in \(Z\).
1. Define \(\mathfrak{M}_{\Omega}^{\star}(Z)\) to be the subset of \(\mathfrak{M}_{\Omega}(Z\cup\star)\) consisting of elements with \(\star\) occurring only once.
2. For \(q\in\mathfrak{M}_{\Omega}^{\star}(Z)\) and \(u\in\mathfrak{M}_{\Omega}(Z)\), we define \(q|_{u}\in\mathfrak{M}_{\Omega}(Z)\) to be the element obtained by replacing the symbol \(\star\) in \(q\) by \(u\). In this case, we say \(u\) is a subword of \(q|_{u}\).
3. For \(q\in\mathfrak{M}_{\Omega}^{\star}(Z)\) and \(s=\sum_{i}c_{i}u_{i}\in\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with \(c_{i}\in\mathbf{k}\) and \(u_{i}\in\mathfrak{M}_{\Omega}(Z)\), we define \[q|_{s}:=\sum_{i}c_{i}q|_{u_{i}}.\]
4. Define \(\mathfrak{S}_{\Omega}^{\star}(Z)\) to be the subset of \(\mathfrak{S}_{\Omega}(Z\cup\star)\) consisting of elements with \(\star\) occurring only once. It is easy to see \(\mathfrak{S}_{\Omega}^{\star}(Z)\) is a subset of \(\mathfrak{M}_{\Omega}^{\star}(Z)\), so we also have notations in (a)-(c) for \(\mathfrak{S}_{\Omega}^{\star}(Z)\) by restriction.
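To illustrate the substitution \(q|_{u}\), take \(\Omega=\{P,D\}\) and \(x,y\in Z\) (a small worked example; the specific words are chosen only for illustration):
\[q=x\lfloor\star\rfloor_{P}\in\mathfrak{M}_{\Omega}^{\star}(Z),\qquad u=y\lfloor x\rfloor_{D}\in\mathfrak{M}_{\Omega}(Z)\quad\Longrightarrow\quad q|_{u}=x\big\lfloor y\lfloor x\rfloor_{D}\big\rfloor_{P},\]
so that \(u\) is a subword of \(q|_{u}\).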
**Definition 1.5**.: Let \(Z\) be a set.
1. A monomial order on \(\mathcal{S}(Z)\) is a well-order \(\leq\) on \(\mathcal{S}(Z)\) such that \[u<v\Rightarrow uw<vw\text{ and }wu<wv\text{ for any }u,v,w\in\mathcal{S}(Z);\]
2. a monomial order on \(\mathcal{M}(Z)\) is a well-order \(\leq\) on \(\mathcal{M}(Z)\) such that \[u<v\Rightarrow wuz<wvz\text{ for any }u,v,w,z\in\mathcal{M}(Z);\]
3. a monomial order on \(\mathfrak{S}_{\Omega}(Z)\) is a well-order \(\leq\) on \(\mathfrak{S}_{\Omega}(Z)\) such that \[u<v\Rightarrow q|_{u}<q|_{v}\quad\text{for all }u,v\in\mathfrak{S}_{\Omega}(Z)\text{ and }q\in\mathfrak{S}_{\Omega}^{\star}(Z);\]
4. a monomial order on \(\mathfrak{M}_{\Omega}(Z)\) is a well-order \(\leq\) on \(\mathfrak{M}_{\Omega}(Z)\) such that \[u<v\Rightarrow q|_{u}<q|_{v}\quad\text{for all }u,v\in\mathfrak{M}_{\Omega}(Z)\text{ and }q\in\mathfrak{M}_{\Omega}^{\star}(Z).\]
Let us recall some known preorders.
**Definition 1.6**.: For two elements \(u,v\in\mathfrak{S}_{\Omega}(Z)\),
1. define \[u\leq_{\mathrm{D}}v\Leftrightarrow\deg_{D}(u)\leq\deg_{D}(v),\] where the \(D\)-degree \(\deg_{D}(u)\) of \(u\) is the number of occurrences of \(\lfloor\ \rfloor_{D}\) in \(u\);
2. define \[u\leq_{\mathrm{P}}v\Leftrightarrow\deg_{P}(u)\leq\deg_{P}(v),\] where the \(P\)-degree \(\deg_{P}(u)\) of \(u\) is the number of occurrences of \(\lfloor\ \rfloor_{P}\) in \(u\);
3. define \[u\leq_{\mathrm{dZ}}v\Leftrightarrow\deg_{Z}(u)\leq\deg_{Z}(v),\] where the \(Z\)-degree \(\deg_{Z}(u)\) is the number of elements of \(Z\) occurring in \(u\), counted with repetitions.
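As a quick illustration of these degrees (with arbitrary \(x,y,z\in Z\)), take \(u=\lfloor x\lfloor y\rfloor_{D}\rfloor_{P}z\in\mathfrak{S}_{\Omega}(Z)\): then \(\deg_{D}(u)=1\), \(\deg_{P}(u)=1\) and \(\deg_{Z}(u)=3\).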
**Definition 1.7**.: Let \(Z\) be a set endowed with a well order \(\leq_{Z}\). Introduce the degree-lexicographical order \(\leq_{\mathrm{dlex}}\) on \(\mathcal{S}(Z)\) by imposing, for any \(u\neq v\in\mathcal{S}(Z)\), \(u<_{\mathrm{dlex}}v\) if
1. either \(\deg_{Z}(u)<\deg_{Z}(v)\), or
2. \(\deg_{Z}(u)=\deg_{Z}(v)\), and \(u=mu_{i}n\), \(v=mv_{i}n^{\prime}\) for some \(m,n,n^{\prime}\in\mathcal{M}(Z)\) and \(u_{i},v_{i}\in Z\) with \(u_{i}<_{Z}v_{i}\).
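For instance (a small illustration of ours), if \(x<_{Z}y\), then \(x<_{\mathrm{dlex}}xy\) by the degree condition (1), while \(xyx<_{\mathrm{dlex}}xyy\) by condition (2) with \(m=xy\), \(u_{i}=x\), \(v_{i}=y\) and \(n=n^{\prime}=1\).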
It is obvious that the degree-lexicographic order \(\leq_{\mathrm{dlex}}\) on \(\mathcal{S}(Z)\) is a well order.
We now define a preorder \(\leq_{\mathrm{Dlex}}\) on \(\mathfrak{S}_{\Omega}(Z)\), by the following recursion process:
1. For \(u,v\in\mathfrak{S}_{\Omega,0}(Z)=\mathcal{S}(Z)\), define \[u\leq_{\mathrm{Dlex}_{0}}v\Leftrightarrow u\leq_{\mathrm{dlex}}v.\]
2. Assume that we have constructed a well order \(\leq_{\mathrm{Dlex}_{n}}\) on \(\mathfrak{S}_{\Omega,n}(Z)\) for some \(n\geq 0\), extending all the \(\leq_{\mathrm{Dlex}_{i}}\) for \(0\leq i\leq n-1\). The well order \(\leq_{\mathrm{Dlex}_{n}}\) on \(\mathfrak{S}_{\Omega,n}(Z)\) induces a well order on \(\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{P}\) (resp. \(\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{D}\)), by imposing \(\lfloor u\rfloor_{P}\leq\lfloor v\rfloor_{P}\) (resp. \(\lfloor u\rfloor_{D}\leq\lfloor v\rfloor_{D}\)) whenever \(u\leq_{\mathrm{Dlex}_{n}}v\) in \(\mathfrak{S}_{\Omega,n}(Z)\). By setting \(u<v<w\) for all \(u\in Z\), \(v\in\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{D}\), and \(w\in\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{P}\), we obtain a well order on \(Z\sqcup\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{P}\sqcup\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{D}\). Let \(\leq_{\mathrm{Dlex}_{n+1}}\) be the degree-lexicographic order on \(\mathfrak{S}_{\Omega,n+1}(Z)=\mathcal{S}(Z\sqcup\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{P}\sqcup\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{D})\) induced by that on \(Z\sqcup\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{P}\sqcup\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{D}\).
Obviously \(\leq_{\mathrm{Dlex}_{n+1}}\) extends \(\leq_{\mathrm{Dlex}_{n}}\). By a limit process, we get a preorder on \(\mathfrak{S}_{\Omega}(Z)\) which will be denoted by \(\leq_{\mathrm{Dlex}}\). As is readily seen, \(\leq_{\mathrm{Dlex}}\) is a linear order.
**Remark 1.8**.: It is easy to see that the above construction of \(\leq_{\mathrm{Dlex}}\) can be extended to the case of more than two operators.
In fact, for a given well order \(\leq_{\Omega}\) on the index set \(\Omega\), the defining process of \(\leq_{\mathrm{Dlex}}\) on \(\mathfrak{S}_{\Omega}(Z)\) is the same as above, except for one detail in the second step, where we need to put \(u<v<w\) for all \(u\in Z\), \(v\in\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{\omega_{1}}\) and \(w\in\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{\omega_{2}}\) with \(\omega_{1}<_{\Omega}\omega_{2}\) in \(\Omega\).
**Definition 1.9**.: For any \(u\in\mathfrak{S}_{\Omega}(Z)\), let \(u_{1},\ldots,u_{n}\in Z\) be all the elements occurring in \(u\) from left to right. If a right half bracket \(\rfloor_{D}\) locates in the gap between \(u_{i}\) and \(u_{i+1}\), where \(1\leq i<n\), the GD-degree of this right half bracket is defined to be \(n-i\); if there is a right half bracket \(\rfloor_{D}\) appearing on the right of \(u_{n}\), we define the GD-degree of this half bracket to be \(0\). We define the GD-degree of \(u\), denoted by \(\deg_{GD}(u)\), to be the sum of the GD-degrees of all the half right brackets in \(u\).
For example, the GD-degrees of the half right brackets in \(u=\lfloor x\rfloor_{D}\lfloor y\rfloor_{D}\) with \(x,y\in Z\) are respectively \(1\) and \(0\) from left to right, so \(\deg_{GD}(u)=1\) by definition.
For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), define the GD-degree order \(\leq_{\mathrm{GD}}\) by
\[u\leq_{\mathrm{GD}}v\Leftrightarrow\deg_{GD}(u)\leq\deg_{GD}(v).\]
**Definition 1.10**.: For any \(u\in\mathfrak{S}_{\Omega}(Z)\), let \(u_{1},\ldots,u_{n}\in Z\) be all the elements occurring in \(u\) from left to right. If there are \(i\) elements in \(Z\) contained in a bracket \(\lfloor\ \rfloor_{P}\), the GP-degree of this bracket is defined to be \(n-i\). We denote by \(\deg_{GP}(u)\) the sum of the GP-degree of all the brackets \(\lfloor\ \rfloor_{P}\) in \(u\).
For example, the GP-degrees of the brackets \(\lfloor\ \rfloor_{P}\) in \(u=\lfloor xy\rfloor_{P}\lfloor z\rfloor_{P}\) with \(x,y,z\in Z\) are respectively \(1\) and \(2\) from left to right, so \(\deg_{GP}(u)=3\) by definition.
For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), define the GP-degree order \(\leq_{\mathrm{GP}}\) by
\[u\leq_{\mathrm{GP}}v\Leftrightarrow\deg_{GP}(u)\leq\deg_{GP}(v).\]
It is easy to obtain the following lemma whose proof is thus omitted.
**Lemma 1.11**.: _The orders \(\leq_{\mathrm{D}},\ \leq_{\mathrm{P}},\ \leq_{\mathrm{dZ}},\ \leq_{\mathrm{GD}}\) and \(\leq_{\mathrm{GP}}\) are pre-linear orders satisfying the descending chain condition._
Combining all the orders above, we can now construct an order \(\leq_{\mathrm{PD}}\) on \(\mathfrak{S}_{\Omega}(Z)\):
\[u\leq_{\mathrm{PD}}v\Leftrightarrow\left\{\begin{array}{l}u\leq_{\mathrm{D} }v,\text{or}\\ u=_{\mathrm{D}}v\text{ and }u\leq_{\mathrm{P}}v,\text{or}\\ u=_{\mathrm{D}}v,u=_{\mathrm{P}}v\text{ and }u\leq_{\mathrm{dZ}}v,\text{or}\\ u=_{\mathrm{D}}v,u=_{\mathrm{P}}v,u=_{\mathrm{dZ}}v\text{ and }u\leq_{\mathrm{GD}}v,\text{or}\\ u=_{\mathrm{D}}v,u=_{\mathrm{P}}v,u=_{\mathrm{dZ}}v,u=_{\mathrm{GD}}v\text{ and }u\leq_{\mathrm{GP}}v,\text{or}\\ u=_{\mathrm{D}}v,u=_{\mathrm{P}}v,u=_{\mathrm{dZ}}v,u=_{\mathrm{GD}}v,u=_{ \mathrm{GP}}v\text{ and }u\leq_{\mathrm{Dlex}}v.\end{array}\right.\]
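As a sanity check of how the successive comparisons operate (this small example is only illustrative), take \(u=\lfloor x\rfloor_{D}y\) and \(v=x\lfloor y\rfloor_{D}\) with \(x,y\in Z\). They agree on \(\deg_{D}\), \(\deg_{P}\) and \(\deg_{Z}\), but \(\deg_{GD}(u)=1>0=\deg_{GD}(v)\), so \(v<_{\mathrm{PD}}u\) and the lexicographic tie-break \(\leq_{\mathrm{Dlex}}\) is never reached.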
To prove that the \(\leq_{\mathrm{PD}}\) is a well-order, we need some preparation.
**Definition 1.12**.:
1. Given some preorders \(\leq_{\alpha_{1}},\ldots,\leq_{\alpha_{k}}\) on a set \(Z\) with \(k\geq 2\), introduce another preorder \(\leq_{\alpha_{1},\ldots,\alpha_{k}}\) by imposing recursively \[u\leq_{\alpha_{1},\ldots,\alpha_{k}}v\Leftrightarrow\left\{\begin{array}{l}u <_{\alpha_{1}}v,\text{ or}\\ u=_{\alpha_{1}}v\text{ and }u\leq_{\alpha_{2},\ldots,\alpha_{k}}v.\end{array}\right.\]
2. Let \(k\geq 2\) and let \(\leq_{\alpha_{i}}\) be a pre-linear order on \(Z_{i},\ 1\leq i\leq k\). Define the lexicographical product order \(\leq_{\mathrm{clex}}\) on the cartesian product \(Z_{1}\times Z_{2}\times\cdots\times Z_{k}\) by defining \[(x_{1},\cdots,x_{k})\leq_{\mathrm{clex}}(y_{1},\cdots,y_{k})\Leftrightarrow \left\{\begin{array}{l}x_{1}<_{\alpha_{1}}y_{1},\text{or}\\ x_{1}=_{Z_{1}}y_{1}\text{ and }(x_{2},\cdots,x_{k})\leq_{\mathrm{clex}}(y_{2}, \cdots,y_{k})\,,\end{array}\right.\] where \((x_{2},\cdots,x_{k})\leq_{\mathrm{clex}}(y_{2},\cdots,y_{k})\) is defined by induction, with the convention that \(\leq_{\mathrm{clex}}\) is the trivial relation when \(k=1\).
**Lemma 1.13** ([18, Lemma 1.7]).:
1. _For_ \(k\geq 2\)_, let_ \(\leq_{\alpha_{1}},\ldots,\leq_{\alpha_{k-1}}\) _be pre-linear orders on_ \(Z\)_, and_ \(\leq_{\alpha_{k}}\) _a linear order on_ \(Z\)_. Then_ \(\leq_{\alpha_{1},\ldots,\alpha_{k}}\) _is a linear order on_ \(Z\)_._
2. _Let_ \(\leq_{\alpha_{i}}\) _be a well order on_ \(Z_{i}\)_,_ \(1\leq i\leq k\)_. Then the lexicographical product order_ \(\leq_{\mathrm{clex}}\) _is a well order on the cartesian product_ \(Z_{1}\times Z_{2}\times\cdots\times Z_{k}\)_._
**Proposition 1.14**.: _The order \(\leq_{\mathrm{PD}}\) is a well order on \(\mathfrak{S}_{\Omega}(Z)\)._
Proof.: Since \(\leq_{\rm Dlex}\) is a linear order, so is \(\leq_{\rm PD}\) by Lemma 1.11 and Lemma 1.13(a).
It suffices to verify that \(\leq_{\rm PD}\) satisfies the descending chain condition. Let
\[v_{1}\geq_{\rm PD}v_{2}\geq_{\rm PD}v_{3}\geq_{\rm PD}\cdots\in\mathfrak{S}_{ \Omega}(Z)\]
be a descending chain. By Lemma 1.11, there exist \(N\geq 1\) such that
\[\deg_{D}(v_{N})=\deg_{D}(v_{N+1})=\deg_{D}(v_{N+2})=\cdots=:k,\] \[\deg_{P}(v_{N})=\deg_{P}(v_{N+1})=\deg_{P}(v_{N+2})=\cdots=:p,\] \[\deg_{Z}(v_{N})=\deg_{Z}(v_{N+1})=\deg_{Z}(v_{N+2})=\cdots\] \[\deg_{GD}(v_{N})=\deg_{GD}(v_{N+1})=\deg_{GD}(v_{N+2})=\cdots,\]
and
\[\deg_{GP}(v_{N})=\deg_{GP}(v_{N+1})=\deg_{GP}(v_{N+2})=\cdots.\]
Thus all \(v_{i}\) with \(i\geq N\) belong to \(\mathfrak{S}_{\Omega,k+p}(Z)\). The restriction of the order \(\leq_{\rm Dlex}\) to \(\mathfrak{S}_{\Omega,k+p}(Z)\) is equal to the well order \(\leq_{\rm Dlex_{k+p}}\), which by definition satisfies the descending chain condition, so the chain \(v_{1}\geq_{\rm PD}v_{2}\geq_{\rm PD}v_{3}\geq_{\rm PD}\cdots\) stabilizes after finitely many steps.
**Definition 1.15** ([23, Definition 5.6]).: A preorder \(\leq_{\alpha}\) on \(\mathfrak{S}_{\Omega}(Z)\) is called bracket compatible (resp. left compatible, right compatible) if
\[u\leq_{\alpha}v\Rightarrow\lfloor u\rfloor_{D}\leq_{\alpha}\lfloor v\rfloor_{D} \text{ and }\lfloor u\rfloor_{P}\leq_{\alpha}\lfloor v\rfloor_{P},\text{ (resp. }wu\leq_{\alpha}wv,\text{ }uw\leq_{\alpha}vw,\text{ }\text{ for all }w\in\mathfrak{S}_{\Omega}(Z))\]
**Lemma 1.16** ([23, Lemma 5.7]).: _A well order \(\leq\) is a monomial order on \(\mathfrak{S}_{\Omega}(Z)\) if and only if \(\leq\) is bracket compatible, left compatible and right compatible._
Now we can prove the main result of this section which is the main technical point of this paper.
**Theorem 1.17**.: _The well order \(\leq_{\rm PD}\) is a monomial order on \(\mathfrak{S}_{\Omega}(Z)\)._
Proof.: Let \(u\leq_{\rm PD}v\). It is obvious that the preorders \(\leq_{\rm D}\), \(\leq_{\rm P}\) and \(\leq_{\rm dZ}\) are bracket compatible, left compatible and right compatible. This settles the three cases \(u<_{\rm D}v\); \(u=_{\rm D}v\) and \(u<_{\rm P}v\); \(u=_{\rm D}v\), \(u=_{\rm P}v\) and \(u<_{\rm dZ}v\).
If \(u=_{\rm D}v\), \(u=_{\rm P}v\), \(u=_{\rm dZ}v\) and \(u<_{\rm GD}v\), then obviously \(\lfloor u\rfloor_{D}<_{\rm GD}\lfloor v\rfloor_{D}\), \(\lfloor u\rfloor_{P}<_{\rm GD}\lfloor v\rfloor_{P}\), \(uw<_{\rm GD}vw\) and \(wu<_{\rm GD}wv\) for \(w\in\mathfrak{S}_{\Omega}(Z)\). So \(\lfloor u\rfloor_{D}<_{\rm PD}\lfloor v\rfloor_{D}\), \(\lfloor u\rfloor_{P}<_{\rm PD}\lfloor v\rfloor_{P}\), \(uw<_{\rm PD}vw\) and \(wu<_{\rm PD}wv\).
The case where \(u=_{\rm D}v\), \(u=_{\rm P}v\), \(u=_{\rm dZ}v\), \(u=_{\rm GD}v\) and \(u<_{\rm GP}v\) is similar to the above one.
It remains to consider the case where \(u=_{\rm D}v\), \(u=_{\rm P}v\), \(u=_{\rm dZ}v\), \(u=_{\rm GD}v\), \(u=_{\rm GP}v\) and \(u<_{\rm Dlex}v\). Choose \(n\) large enough so that \(u,v\in\mathfrak{S}_{\Omega,n}(Z)\); then \(u\leq_{\rm Dlex_{n}}v\). Since the restriction of \(\leq_{\rm Dlex_{n+1}}\) to \(\lfloor\mathfrak{S}_{\Omega,n}(Z)\rfloor_{D}\) is induced by \(\leq_{\rm Dlex_{n}}\), we get \(\lfloor u\rfloor_{D}\leq_{\rm Dlex}\lfloor v\rfloor_{D}\), hence \(\lfloor u\rfloor_{D}\leq_{\rm PD}\lfloor v\rfloor_{D}\). Similarly \(\lfloor u\rfloor_{P}\leq_{\rm PD}\lfloor v\rfloor_{P}\). Let \(w\in\mathfrak{S}_{\Omega,m}(Z)\) and set \(r=\max\{m,n\}\). One obtains \(uw\leq_{\rm Dlex_{r}}vw\) and \(wu\leq_{\rm Dlex_{r}}wv\), so \(uw\leq_{\rm PD}vw\) and \(wu\leq_{\rm PD}wv\).
We are done.
Let us now move to the unital case, extending \(\leq_{\rm PD}\) from \(\mathfrak{S}_{\Omega}(Z)\) to \(\mathfrak{M}_{\Omega}(Z)\) by using Remark 1.2.
**Definition 1.18**.: Let \(Z\) be a set with a well order. Let \(\dagger_{P}\) (resp. \(\dagger_{D}\)) be a symbol which is understood to be \(\lfloor 1\rfloor_{P}\) (resp. \(\lfloor 1\rfloor_{D}\)) and write \(Z^{\prime}=Z\sqcup\{\dagger_{P},\dagger_{D}\}\). Consider the free operated semigroup \(\mathfrak{S}_{\Omega}(Z^{\prime})\) over the set \(Z^{\prime}\). The well order on \(Z\) extends to a well order \(\leq\) on \(Z^{\prime}\) by setting \(\dagger_{P}>z>\dagger_{D}\) for any \(z\in Z\). Besides, we impose \(\deg_{P}(\dagger_{P})=1\) and \(\deg_{GP}(\dagger_{P})=0\). Then the monomial order \(\leq_{\rm PD}\) on \(\mathfrak{S}_{\Omega}(Z^{\prime})\) induces a well order \(\leq_{\rm uPD}\) on \(\mathfrak{M}_{\Omega}(Z)=\mathfrak{S}_{\Omega}(Z^{\prime})\sqcup\{1\}\) (in which \(\lfloor 1\rfloor_{P}\) and \(\lfloor 1\rfloor_{D}\) are identified with \(\dagger_{P}\) and \(\dagger_{D}\) respectively), by setting \(u>_{\rm uPD}1\) for any \(u\in\mathfrak{S}_{\Omega}(Z^{\prime})\).
**Theorem 1.19**.: _The well order \(\leq_{\mathrm{uPD}}\) is a monomial order on \(\mathfrak{M}_{\Omega}(Z)\)._
Proof.: Obviously, the well order \(\leq_{\mathrm{uPD}}\) is bracket compatible on \(\mathfrak{M}_{\Omega}(Z)\backslash\{1\}\). Let \(x\in\mathfrak{M}_{\Omega}(Z)\backslash\{1\}\). By definition, \(x>_{\mathrm{uPD}}1\). We have \(\lfloor x\rfloor_{P}>_{\mathrm{Dlex}}\lfloor 1\rfloor_{P}\), which implies \(\lfloor x\rfloor_{P}>_{\mathrm{uPD}}\dagger_{P}\). It is easy to see that \(\lfloor x\rfloor_{D}>_{\mathrm{uPD}}x>_{\mathrm{uPD}}\dagger_{D}\). Thus \(\leq_{\mathrm{uPD}}\) is bracket compatible.
Clearly, \(\leq_{\mathrm{uPD}}\) is left and right compatible.
We record several important conclusions which will be useful later.
**Proposition 1.20**.: _For any \(u,v\in\mathfrak{M}_{\Omega}(Z)\backslash\{1\}\), we have_
1. \([u]_{P}[1]_{P}>_{\mathrm{uPD}}[u[1]_{P}]_{P}\geq_{\mathrm{uPD}}[\lfloor u\rfloor_{P}]_{P}\),
2. \([1]_{P}[v]_{P}>_{\mathrm{uPD}}[\lfloor v\rfloor_{P}]_{P}\geq_{\mathrm{uPD}}[ \lfloor 1\rfloor_{P}v]_{P}\)_,_
3. \([1]_{P}[1]_{P}>_{\mathrm{uPD}}[\lfloor 1\rfloor_{P}]_{P}\)_,_
4. \([1]_{P}[v]_{D}>_{\mathrm{uPD}}[\lfloor 1\rfloor_{P}v]_{D}\),
5. \([u]_{D}[1]_{P}>_{\mathrm{uPD}}[u[1]_{P}]_{D}\).
Proof.: Let \(u,v\in\mathfrak{M}_{\Omega}(Z)\backslash\{1\}=\mathfrak{S}_{\Omega}(Z^{\prime})\).
1. It is easy to see that \([\lfloor u\rfloor_{P}]_{P}\) has the lowest \(\deg_{Z^{\prime}}\) among \([u]_{P}[1]_{P},[u[1]_{P}]_{P},[\lfloor u\rfloor_{P}]_{P}\), and we also have \(\deg_{GP}([u]_{P}[1]_{P})>\deg_{GP}([u[1]_{P}]_{P})\).
2. It is similar to \((a)\).
3. It follows from \(\deg_{Z^{\prime}}([1]_{P}[1]_{P})>\deg_{Z^{\prime}}([\lfloor 1\rfloor_{P}]_{P})\).
4. It can be deduced from \([1]_{P}[v]_{D}>_{\mathrm{Dlex}}[\lfloor 1\rfloor_{P}v]_{D}\) by Definition 1.7.
5. It holds because \(\deg_{GD}([u]_{D}[1]_{P})>\deg_{GD}([u[1]_{P}]_{D})\).
## 2. Operator polynomial identities and multi-operated GS bases
In this section, we extend the theory of operated GS bases due to [2, 15, 23, 6] from the case of a single operator to the case of multiple operators. The presentation is essentially contained in [7].
### Operator polynomial identities
In this subsection, we give some basic notions and facts related to operator polynomial identities. Throughout this section, \(X\) denotes a set.
**Definition 2.1**.: We call an element \(\phi(x_{1},\ldots,x_{n})\in\mathbf{k}\mathfrak{S}_{\Omega}(X)\) (resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(X)\)) with \(n\geq 1,x_{1},\ldots,x_{n}\in X\) an operated polynomial identity (aka OPI).
_From now on, we always assume that OPIs are multilinear, that is, they are linear in each \(x_{i}\)._
**Definition 2.2**.: Let \(\phi(x_{1},\ldots,x_{n})\) be an OPI. A (unital) \(\Omega\)-algebra \(A=(A,\{P_{\omega}\}_{\omega\in\Omega})\) is said to satisfy the OPI \(\phi(x_{1},\ldots,x_{n})\) if \(\phi(r_{1},\ldots,r_{n})=0\), for all \(r_{1},\ldots,r_{n}\in A.\) In this case, \((A,\{P_{\omega}\}_{\omega\in\Omega})\) is called a (unital) \(\phi\)-algebra.
Generally, for a family \(\Phi\) of OPIs, we call a (unital) \(\Omega\)-algebra \((A,\{P_{\omega}\}_{\omega\in\Omega})\) a (unital) \(\Phi\)-algebra if it is a (unital) \(\phi\)-algebra for any \(\phi\in\Phi\). Denote the category of \(\Phi\)-algebras (resp. unital \(\Phi\)-algebras) by \(\Phi\text{-}\mathfrak{Alg}_{\Omega}\) (resp. \(\Phi\text{-}\mathfrak{uAlg}_{\Omega}\)).
**Definition 2.3**.: An \(\Omega\)-ideal of an \(\Omega\)-algebra is an ideal of the underlying associative algebra closed under the action of the operators. The \(\Omega\)-ideal of an \(\Omega\)-algebra (resp. unital \(\Omega\)-algebra) generated by a subset \(S\subseteq A\) is denoted by \(\langle S\rangle_{\Omega\text{-}\mathfrak{Alg}}\) (resp. \(\langle S\rangle_{\Omega\text{-}\mathfrak{uAlg}}\)).
Obviously the quotient of an \(\Omega\)-algebra (resp. unital \(\Omega\)-algebra) by an \(\Omega\)-ideal is naturally an \(\Omega\)-algebra (resp. \(\Omega\)-unital algebra).
From now on, \(\Phi\) denotes a family of OPIs in \(\mathbf{k}\mathfrak{S}_{\Omega}(X)\) or \(\mathbf{k}\mathfrak{M}_{\Omega}(X)\). For a set \(Z\) and a subset \(Y\) of \(\mathfrak{M}_{\Omega}(Z)\), introduce the subset \(S_{\Phi}(Y)\subseteq\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) to be
\[S_{\Phi}(Y):=\{\phi(u_{1},\ldots,u_{k})\mid u_{1},\ldots,u_{k}\in Y,\ \phi(x_{1},\ldots,x_{k})\in\Phi\}.\]
### Multi-operated GS bases for \(\Phi\)-algebras
In this subsection, operated GS basis theory is extended to algebras with multiple operators following closely [2].
**Definition 2.4**.: Let \(Z\) be a set, \(\leq\) a linear order on \(\mathfrak{M}_{\Omega}(Z)\) and \(f\in\mathbf{k}\mathfrak{M}_{\Omega}(Z)\).
* Let \(f\notin\mathbf{k}\). The leading monomial of \(f\), denoted by \(\bar{f}\), is the largest monomial appearing in \(f\). The leading coefficient of \(f\), denoted by \(c_{f}\), is the coefficient of \(\bar{f}\) in \(f\). We call \(f\) monic with respect to \(\leq\) if \(c_{f}=1\).
* Let \(f\in\mathbf{k}\) (including the case \(f=0\)). We define the leading monomial of \(f\) to be \(1\) and the leading coefficient of \(f\) to be \(c_{f}=f\).
* A subset \(S\subseteq\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) is called monicized with respect to \(\leq\), if each nonzero element of \(S\) has leading coefficient \(1\).
Obviously, each subset \(S\subseteq\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) can be monicized by dividing each nonzero element by its leading coefficient.
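To illustrate the notion of leading monomial with respect to \(\leq_{\mathrm{PD}}\) (a small computation of ours), consider \(f=2\lfloor x\rfloor_{P}\lfloor y\rfloor_{P}-\lfloor x\lfloor y\rfloor_{P}\rfloor_{P}\) with \(x,y\in Z\): both monomials have the same \(\deg_{D}\), \(\deg_{P}\), \(\deg_{Z}\) and \(\deg_{GD}\), while \(\deg_{GP}(\lfloor x\rfloor_{P}\lfloor y\rfloor_{P})=2>1=\deg_{GP}(\lfloor x\lfloor y\rfloor_{P}\rfloor_{P})\), so \(\bar{f}=\lfloor x\rfloor_{P}\lfloor y\rfloor_{P}\), \(c_{f}=2\), and \(\tfrac{1}{2}f\) is monic.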
We need another notation. Let \(Z\) be a set. For \(u\in\mathfrak{M}_{\Omega}(Z)\) with \(u\neq 1\), as \(u\) can be uniquely written as a product \(u_{1}\cdots u_{n}\) with \(u_{i}\in Z\cup\lfloor\mathfrak{M}_{\Omega}(Z)\rfloor_{\Omega}\) for \(1\leq i\leq n\), call \(n\) the breadth of \(u\), denoted by \(|u|\); for \(u=1\), we define \(|u|=0\).
**Definition 2.5**.: Let \(\leq\) be a monomial order on \(\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathfrak{M}_{\Omega}(Z)\)) and \(f,g\in\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)) be monic.
* If there are \(p,u,v\in\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathfrak{M}_{\Omega}(Z)\)) such that \(p=\bar{f}u=v\bar{g}\) with \(\max\{|\bar{f}|,|\bar{g}|\}<|p|<|\bar{f}|+|\bar{g}|\), we call \[(f,g)_{p}^{u,v}:=fu-vg\] the intersection composition of \(f\) and \(g\) with respect to \(p\).
* If there are \(p\in\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathfrak{M}_{\Omega}(Z)\)) and \(q\in\mathfrak{S}_{\Omega}^{\star}(Z)\) (resp. \(\mathfrak{M}_{\Omega}^{\star}(Z)\)) such that \(p=\bar{f}=q|_{\bar{g}}\), we call \[(f,g)_{p}^{q}:=f-q|_{g}\] the inclusion composition of \(f\) and \(g\) with respect to \(p\).
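For concreteness, here is a toy intersection composition involving no operators (our own illustration): with \(x<_{Z}y<_{Z}z\), let \(f=xy-z\) and \(g=yx-z\), so that \(\bar{f}=xy\) and \(\bar{g}=yx\) under \(\leq_{\mathrm{PD}}\). Taking \(p=xyx=\bar{f}x=x\bar{g}\), we have \(\max\{|\bar{f}|,|\bar{g}|\}=2<|p|=3<4=|\bar{f}|+|\bar{g}|\), and \[(f,g)_{p}^{x,x}=fx-xg=(xy-z)x-x(yx-z)=xz-zx.\]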
**Definition 2.6**.: Let \(Z\) be a set and \(\leq\) a monomial order on \(\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathfrak{M}_{\Omega}(Z)\)). Let \(\mathcal{G}\subseteq\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)).
* An element \(f\in\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)) is called trivial modulo \((\mathcal{G},p)\) for \(p\in\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathfrak{M}_{\Omega}(Z)\)) if \[f=\sum_{i}c_{i}q_{i}|_{s_{i}}\ \text{with}\ q_{i}|_{\overline{s_{i}}}<p\text{, where }c_{i}\in\mathbf{k},\ q_{i}\in\mathfrak{S}_{\Omega}^{\star}(Z)\ \text{(resp. }\mathfrak{M}_{\Omega}^{\star}(Z)\text{) and }s_{i}\in\mathcal{G}.\] If this is the case, we write \[f\equiv 0\ \text{mod}\ (\mathcal{G},p).\] In general, for any \(u,v\in\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)), \(u\equiv v\ \text{mod}\ (\mathcal{G},p)\) means that \(u-v=\sum c_{i}q_{i}|_{s_{i}}\), with \(q_{i}|_{\overline{s_{i}}}<p\), where \(c_{i}\in\mathbf{k},\ q_{i}\in\mathfrak{S}_{\Omega}^{\star}(Z)\) (resp. \(\mathfrak{M}_{\Omega}^{\star}(Z)\)) and \(s_{i}\in\mathcal{G}\).
2. The subset \(\mathcal{G}\) is called a GS basis in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) (resp. \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)) with respect to \(\leq\) if, for all pairs \(f,g\in\mathcal{G}\) monicized with respect to \(\leq\), every intersection composition of the form \((f,g)_{p}^{u,v}\) is trivial modulo \((\mathcal{G},p)\), and every inclusion composition of the form \((f,g)_{p}^{q}\) is trivial modulo \((\mathcal{G},p)\).
_To distinguish from usual GS bases for associative algebras, from now on, we shall rename GS bases in multi-operated contexts by \(\Omega\)-GS bases._
**Theorem 2.7**.: _(Composition-Diamond Lemma) Let \(Z\) be a set, \(\leq\) a monomial order on \(\mathfrak{M}_{\Omega}(Z)\) and \(\mathcal{G}\subseteq\mathbf{k}\mathfrak{M}_{\Omega}(Z)\). Then the following conditions are equivalent:_
1. \(\mathcal{G}\) _is an_ \(\Omega\)_-GS basis in_ \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\)_._
2. _Denote_ \[\mathrm{Irr}(\mathcal{G}):=\mathfrak{M}_{\Omega}(Z)\setminus\left\{q|_{ \overline{s}}\mid s\in\mathcal{G},\ \ q\in\mathfrak{M}_{\Omega}^{\star}(Z)\right\}.\] _As a_ \(\mathbf{k}\)_-space,_ \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)=\mathbf{k}\mathrm{Irr}(\mathcal{G})\oplus \langle\mathcal{G}\rangle_{\Omega\cdot\mathfrak{M}_{\Omega}}\) _and_ \(\mathrm{Irr}(\mathcal{G})\) _is a_ \(\mathbf{k}\)_-basis of_ \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)/\left\langle\mathcal{G}\right\rangle_{ \Omega\cdot\mathfrak{M}_{\Omega}}\)_._
**Theorem 2.8**.: _(Composition-Diamond Lemma) Let \(Z\) be a set, \(\leq\) a monomial order on \(\mathfrak{S}_{\Omega}(Z)\) and \(\mathcal{G}\subseteq\mathbf{k}\mathfrak{S}_{\Omega}(Z)\). Then the following conditions are equivalent:_
1. \(\mathcal{G}\) _is an_ \(\Omega\)_-GS basis in_ \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\)_._
2. _Denote_ \[\mathrm{Irr}(\mathcal{G}):=\mathfrak{S}_{\Omega}(Z)\setminus\left\{q|_{ \overline{s}}\mid s\in\mathcal{G},\ \ q\in\mathfrak{S}_{\Omega}^{\star}(Z)\right\}.\] _As a_ \(\mathbf{k}\)_-space,_ \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)=\mathbf{k}\mathrm{Irr}(\mathcal{G})\oplus \langle\mathcal{G}\rangle_{\Omega\cdot\mathfrak{M}_{\Omega}}\) _and_ \(\mathrm{Irr}(\mathcal{G})\) _is a_ \(\mathbf{k}\)_-basis of_ \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)/\left\langle\mathcal{G}\right\rangle_{ \Omega\cdot\mathfrak{M}_{\Omega}}\)_._
**Definition 2.9** ([6, Definition 2.21(a)]).:
1. Let \(\Phi\subseteq\mathbf{k}\mathfrak{S}_{\Omega}(X)\) be a family of OPIs. Let \(Z\) be a set and \(\leq\) a monomial order on \(\mathfrak{S}_{\Omega}(Z)\). We call \(\Phi\)\(\Omega\)-GS on \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq\) if \(S_{\Phi}(\mathfrak{S}_{\Omega}(Z))\) is an \(\Omega\)-GS basis in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq\).
2. Let \(\Phi\subseteq\mathbf{k}\mathfrak{M}_{\Omega}(X)\) be a family of OPIs. Let \(Z\) be a set and \(\leq\) a monomial order on \(\mathfrak{M}_{\Omega}(Z)\). We call \(\Phi\)\(\Omega\)-GS on \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq\) if \(S_{\Phi}(\mathfrak{M}_{\Omega}(Z))\) is an \(\Omega\)-GS basis in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq\).
### Multi-operated GS basis for free \(\Phi\)-algebras over algebras
In this subsection, we consider multi-operated GS basis for free \(\Phi\)-algebras over algebras and generalise the main result of [17] to multi-operated cases.
We will use the following results without proof, as they are the multi-operated counterparts of [17, Proposition 4.8].
**Proposition 2.10**.:
1. _Let_ \(\Phi\subset\mathbf{k}\mathfrak{S}_{\Omega}(X)\) _and_ \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) _an algebra. Then_ \[\mathcal{F}_{\mathfrak{Alg}}^{\Phi\text{-}\mathfrak{Alg}}(A):=\mathbf{k}\mathfrak{S}_{\Omega}(Z)/\left\langle S_{\Phi}(\mathfrak{S}_{\Omega}(Z))\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\] _is the free_ \(\Phi\)_-algebra generated by_ \(A\)_._
2. _Let_ \(\Phi\subset\mathbf{k}\mathfrak{M}_{\Omega}(X)\) _and_ \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) _a unital algebra. Then_ \[\mathcal{F}_{\mathfrak{uAlg}}^{\Phi\text{-}\mathfrak{uAlg}}(A):=\mathbf{k}\mathfrak{M}_{\Omega}(Z)/\left\langle S_{\Phi}(\mathfrak{M}_{\Omega}(Z))\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}\] _is the free unital_ \(\Phi\)_-algebra over_ \(A\)_._
As in [17], we consider the following question:
**Question 2.11**.: Let \(A\) be a (unital) algebra together with a Grobner-Shirshov basis \(G\). Assume that a set \(\Phi\) of operated polynomial identities is \(\Omega\)-GS in the sense of Definition 2.9. Considering the free (unital) \(\Phi\)-algebra \(B\) over \(A\), when will the union "\(\Phi\cup G\)" be an \(\Omega\)-GS basis for \(B\)?
It is somewhat surprising that the answer to the corresponding question given in [17] can be generalised to the multi-operated case without much modification.
**Theorem 2.12**.: _Let \(X\) be a set and \(\Phi\subseteq\mathbf{k}\mathfrak{M}_{\Omega}(X)\) a system of OPIs. Let \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) be a unital algebra with generating set \(Z\). Assume that \(\Phi\) is \(\Omega\)-GS on \(Z\) with respect to a monomial order \(\leq\) in \(\mathfrak{M}_{\Omega}(Z)\) and that \(G\) is a GS basis of \(I_{A}\) in \(\mathbf{k}\mathcal{M}(Z)\) with respect to the restriction of \(\leq\) to \(\mathcal{M}(Z)\)._
_Suppose that the leading monomial of any OPI \(\phi(x_{1},\ldots,x_{n})\in\Phi\) has no subword in \(\mathcal{M}(X)\backslash X\), and that \(\phi(u_{1},\ldots,u_{n})\) vanishes or its leading monomial is \(\overline{\phi}(u_{1},\ldots,u_{n})\) for all \(u_{1},\ldots,u_{n}\in\mathfrak{M}_{\Omega}(Z)\). Then \(S_{\Phi}(\mathfrak{M}_{\Omega}(Z))\cup G\) is an \(\Omega\)-GS basis of \(\langle S_{\Phi}(\mathfrak{M}_{\Omega}(Z))\cup I_{A}\rangle_{\Omega\text{-}\mathfrak{uAlg}}\) in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq\)._
Proof.: The proof of [17, Theorem 5.9] carries verbatim over multi-operated case, because it reveals that the key point is that the leading monomial of any OPI \(\phi(x_{1},\ldots,x_{n})\in\Phi\) has no subword in \(\mathcal{M}(X)\backslash X\).
For details, see the proof of [17, Theorem 5.9].
There exists a nonunital version of the above result, which is also a multi-operated version of [18, Theorem 2.15].
**Theorem 2.13**.: _Let \(X\) be a set and \(\Phi\subseteq\mathbf{k}\mathfrak{S}_{\Omega}(X)\) a system of OPIs. Let \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) be an algebra with generating set \(Z\). Assume that \(\Phi\) is \(\Omega\)-GS on \(Z\) with respect to a monomial order \(\leq\) in \(\mathfrak{S}_{\Omega}(Z)\) and that \(G\) is a GS basis of \(I_{A}\) in \(\mathbf{k}\mathcal{S}(Z)\) with respect to the restriction of \(\leq\) to \(\mathcal{S}(Z)\)._
_Suppose that the leading monomial of any OPI \(\phi(x_{1},\ldots,x_{n})\in\Phi\) has no subword in \(\mathcal{S}(X)\backslash X\), and that for all \(u_{1},\ldots,u_{n}\in\mathfrak{S}_{\Omega}(Z)\), \(\phi(u_{1},\ldots,u_{n})\) vanishes or its leading monomial is \(\overline{\phi}(u_{1},\ldots,u_{n})\). Then \(S_{\Phi}(\mathfrak{S}_{\Omega}(Z))\cup G\) is an \(\Omega\)-GS basis of \(\langle S_{\Phi}(\mathfrak{S}_{\Omega}(Z))\cup I_{A}\rangle_{\Omega\text{-}\mathfrak{Alg}}\) in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq\)._
## 3. Free differential Rota-Baxter algebras over algebras
In this section, we apply Theorems 2.12 and 2.13 to differential Rota-Baxter algebras.
From now on, let \(\Omega=\{D,P\}\), fix a set \(X=\{x,y\}\) with two elements such that variables in OPIs will take values in \(X\). When talking about algebras or reductions of OPIs, fix a set \(Z\) and we understand that variables in OPIs will be replaced by elements of \(\mathfrak{S}_{\Omega}(Z)\) or \(\mathfrak{M}_{\Omega}(Z)\).
We first recall the definition of differential Rota-Baxter algebras. We use \(D(\ )\) and \(P(\ )\) instead of the linear operators \(\lfloor\ \rfloor_{D}\) and \(\lfloor\ \rfloor_{P}\).
**Definition 3.1** ([7, Definition 2.1]).: Let \(\lambda\in\mathbf{k}\) be fixed.
* A (unital) differential \(\mathbf{k}\)-algebra of weight \(\lambda\) (also called a (unital) \(\lambda\)-differential \(\mathbf{k}\)-algebra) is a (unital) associative \(\mathbf{k}\)-algebra \(R\) together with a linear operator \(D:R\to R\) such that \[D(uv)=D(u)v+uD(v)+\lambda D(u)D(v)\quad\text{ for all }u,v\in R;\] when \(R\) has a unity \(1\), it is asked that \(D(1)=0\).
* A Rota-Baxter \(\mathbf{k}\)-algebra of weight \(\lambda\) is an associative \(\mathbf{k}\)-algebra \(R\) together with a linear operator \(P:R\to R\) such that \[P(u)P(v)=P(uP(v))+P(P(u)v)+\lambda P(uv)\quad\text{ for all }u,v\in R.\]
* A (unital) differential Rota-Baxter \(\mathbf{k}\)-algebra of weight \(\lambda\) (also called a (unital) \(\lambda\)-differential Rota-Baxter \(\mathbf{k}\)-algebra) is a (unital) differential k-algebra \((R,D)\) of weight \(\lambda\) and a Rota-Baxter operator \(P\) of weight \(\lambda\) such that \[D\circ P=\ \mathrm{id}\.\]
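Before proceeding, we record a standard illustration (only a sanity check; we assume here that \(\mathbf{k}\) has characteristic \(0\)): let \(R=\mathbf{k}[x]\), \(D=\frac{d}{dx}\) and \(P(f)(x)=\int_{0}^{x}f(t)\,dt\). Then \((R,D,P)\) is a unital differential Rota-Baxter algebra of weight \(0\): the Leibniz rule of weight \(0\) holds and \(D(1)=0\), integration by parts yields \(P(u)P(v)=P(uP(v))+P(P(u)v)\), which is the Rota-Baxter identity of weight \(0\), and the fundamental theorem of calculus gives \(D\circ P=\mathrm{id}\); note that \(P\circ D\neq\mathrm{id}\), since \((P\circ D)(u)=u-u(0)\).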
When we consider free differential Rota-Baxter algebras over algebras, it turns out that the traditional order (see [2]) does not meet the conditions of Theorems 2.12 and 2.13. This is the motivation for the new monomial orders \(\leq_{\mathrm{PD}}\) and \(\leq_{\mathrm{uPD}}\) introduced in Section 1.3.
### Case of nonunital algebras with \(\lambda\neq 0\)
Assume in this subsection that \(\lambda\neq 0\). Denote
1. \(\phi_{1}(x,y)=P(x)P(y)-P(xP(y))-P(P(x)y)-\lambda P(xy)\),
2. \(\phi_{2}(x,y)=D(x)D(y)+\lambda^{-1}D(x)y+\lambda^{-1}xD(y)-\lambda^{-1}D(xy)\),
3. \(\phi_{3}(x)=D(P(x))-x\).
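Let us note in passing (a one-line check) that for \(\lambda\neq 0\) the identity \(\phi_{2}=0\) is nothing but the weight-\(\lambda\) Leibniz rule: multiplying by \(\lambda\), \[\lambda\,\phi_{2}(x,y)=\lambda D(x)D(y)+D(x)y+xD(y)-D(xy),\] so a \(\mathbf{k}\)-algebra with operators \(D,P\) satisfies \(\phi_{1}=\phi_{2}=\phi_{3}=0\) if and only if it is a \(\lambda\)-differential Rota-Baxter algebra.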
We first consider nonunital free differential Rota-Baxter algebras on algebras.
**Proposition 3.2**.: _For any \(u,v\in\mathfrak{S}_{\Omega}(Z)\), the leading monomials of \(\phi_{1}(u,v)\), \(\phi_{2}(u,v)\) and \(\phi_{3}(u)\) under \(\leq_{\mathrm{PD}}\) are respectively \(P(u)P(v),D(u)D(v)\) and \(D(P(u))\)._
Proof.: Let \(u_{1},\cdots,u_{n}\) and \(v_{1},\cdots,v_{m}\) be all the elements of \(Z\) occurring in \(u\) and \(v\) from left to right.
For \(\phi_{1}(u,v)=P(u)P(v)-P(uP(v))-P(P(u)v)-\lambda P(uv)\), the \(P\)-degree of \(P(uv)\) is smaller than that of the other three terms, while \(\deg_{D}\), \(\deg_{Z}\) and \(\deg_{GD}\) of these three terms coincide. Moreover, one can see
\[\deg_{GP}(P(u)P(v))-\deg_{GP}(P(uP(v)))=m>0,\] \[\deg_{GP}(P(u)P(v))-\deg_{GP}(P(P(u)v))=n>0,\]
so the leading monomial of \(\phi_{1}(u,v)\) is \(P(u)P(v)\).
The statements about \(\phi_{2}(u,v)\) and \(\phi_{3}(u)\) are obvious by comparing \(\deg_{D}\).
Now let
\[\Phi_{\mathsf{DRB}}{}^{\prime}:=\{\phi_{1}(x,y),\phi_{2}(x,y),\phi_{3}(x)\}\,.\]
However, \(\Phi_{\mathsf{DRB}}{}^{\prime}\) is not \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\).
**Example 3.3**.: For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[f = \phi_{2}(P(u),v)=D(P(u))D(v)+\lambda^{-1}D(P(u))v+\lambda^{-1}P( u)D(v)-\lambda^{-1}D(P(u)v),\] \[g = \phi_{3}(u)=D(P(u))-u,\] \[q = \star D(v),\] \[p = D(P(u))D(v)=\bar{f}=\,q|_{\bar{g}}\,.\]
Then
\[(f,g)_{p}^{q}=f-\,q|_{g}\equiv\lambda^{-1}(P(u)D(v)-D(P(u)v)+uv+\lambda uD(v)).\]
Let
\[\phi_{4}(x,y)=P(x)D(y)-D(P(x)y)+xy+\lambda xD(y).\]
It is clear that the leading monomial of \(\phi_{4}(u,v)\) is \(P(u)D(v)\) with respect to \(\leq_{\mathrm{PD}}\) which cannot be reduced further.
**Example 3.4**.: For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[f = \phi_{2}(u,P(v))=D(u)D(P(v))+\lambda^{-1}D(u)P(v)+\lambda^{-1}uD (P(v))-\lambda^{-1}D(uP(v)),\] \[g = \phi_{3}(v)=D(P(v))-v,\] \[q = D(u)\star,\] \[p = D(u)D(P(v))=\bar{f}=\,q|_{\bar{g}}\,.\]
Then
\[(f,g)_{p}^{q}=f-\,q|_{g}\equiv\lambda^{-1}(D(u)P(v)-D(uP(v))+uv+\lambda D(u)v).\]
Let
\[\phi_{5}(x,y)=D(x)P(y)-D(xP(y))+xy+\lambda D(x)y.\]
It is clear that the leading monomial of \(\phi_{5}(u,v)\) is \(D(u)P(v)\) with respect to \(\leq_{\mathrm{PD}}\) which cannot be reduced further.
Now denote \(\Phi_{\mathrm{DRB}}\) to be the set of the following OPIs:
1. \(\phi_{1}(x,y)=P(x)P(y)-P(xP(y))-P(P(x)y)-\lambda P(xy)\),
2. \(\phi_{2}(x,y)=D(x)D(y)+\lambda^{-1}D(x)y+\lambda^{-1}xD(y)-\lambda^{-1}D(xy)\),
3. \(\phi_{3}(x)=D(P(x))-x\),
4. \(\phi_{4}(x,y)=P(x)D(y)-D(P(x)y)+xy+\lambda xD(y)\),
5. \(\phi_{5}(x,y)=D(x)P(y)-D(xP(y))+xy+\lambda D(x)y\).
It is obvious that \(\left\langle S_{\Phi_{\mathsf{DRB}}{}^{\prime}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}=\left\langle S_{\Phi_{\mathsf{DRB}}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\) for each set \(Z\).
Next we will show that \(\Phi_{\mathrm{DRB}}\) is \(\Omega\)-GS with respect to \(\leq_{\mathrm{PD}}\). Before that, we need the following lemma to simplify our proof.
**Lemma 3.5**.: _Let \(\phi(x_{1},\ldots,x_{n})\) and \(\psi(y_{1},\ldots,y_{m})\) be two OPIs. Let \(Z\) be a set. Suppose that, for any \(u_{1},\ldots,u_{n},v_{1},\ldots,v_{m}\in\mathfrak{S}_{\Omega}(Z)\), the leading monomial of \(\phi(u_{1},\ldots,u_{n})\) is \(\bar{\phi}(u_{1},\ldots,u_{n})\) and leading monomial of \(\psi(v_{1},\ldots,v_{m})\) is \(\bar{\psi}(v_{1},\ldots,v_{m})\)._
_Now write \(f=\phi(u_{1},\ldots,u_{n})\) and \(g=\psi(v_{1},\ldots,v_{m})\) for fixed \(u_{1},\ldots,u_{n}\), \(v_{1},\ldots,v_{m}\in\mathfrak{S}_{\Omega}(Z)\). If there exist \(i\) \((1\leq i\leq n)\) and \(r\in\mathfrak{S}_{\Omega}{}^{\star}(Z)\) such that \(u_{i}=r|_{\bar{g}}\), then the inclusion composition \((f,g)_{p}^{q}=f-q|_{g}\), with \(p=\bar{f}\) and \(q=\bar{\phi}(u_{1},\ldots,u_{i-1},r,u_{i+1},\ldots,u_{n})\), is trivial modulo \((S_{\{\phi,\psi\}}(Z),p)\). We call this type of inclusion composition a complete inclusion composition._
Proof.: The assertion follows from
\[(f,g)_{p}^{q} =f-q|_{g}\] \[=(\phi-\bar{\phi})(u_{1},\ldots,u_{i-1},r|_{\bar{g}},u_{i+1}, \ldots,u_{n})-\bar{\phi}(u_{1},\ldots,u_{i-1},r|_{g-\bar{g}},u_{i+1},\ldots,u_{ n})\] \[=(\phi-\bar{\phi})(u_{1},\ldots,u_{i-1},r|_{g},u_{i+1},\ldots,u_{ n})-\phi(u_{1},\ldots,u_{i-1},r|_{g-\bar{g}},u_{i+1},\ldots,u_{n}).\]
**Remark 3.6**.: Lemma 3.5 extends [6, Theorem 4.1(b)] to the case of multiple operators.
Now we can prove \(\Phi_{\mathrm{DRB}}\) is \(\Omega\)-GS.
**Theorem 3.7**.: \(\Phi_{\mathrm{DRB}}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
Proof.: We write \(i\wedge j\) for the composition of the OPIs \(\phi_{i}\) and \(\phi_{j}\), meaning that \(\phi_{i}\) lies on the left and \(\phi_{j}\) on the right for an intersection composition, or that \(\phi_{j}\) is included in \(\phi_{i}\) for an inclusion composition. The ambiguities of all possible compositions in \(\Phi_{\mathrm{DRB}}\) are listed below: for arbitrary \(u,v,w\in\mathfrak{S}_{\Omega}(Z)\) and \(q\in\mathfrak{S}_{\Omega}{}^{\star}(Z)\),
\[\begin{array}{ll}1\wedge 1&\frac{P(u)P(v)P(w)}{P\left(q|_{P(u)P(v)} \right)P\left(w\right)}\\ 1\wedge 2&\frac{P\left(u\right)P(v)P(w)}{P\left(q|_{D(u)D(v)}\right)P\left(w \right)},\quad P\left(u\right)P\left(q|_{D(u)D(v)}\right),\\ 1\wedge 3&P\left(q|_{D(P(u))}\right)P\left(v\right),\quad P\left(u\right)P\left(q| _{D(P(v))}\right),\\ 1\wedge 4&\frac{P(u)P(v)D(w)}{P\left(q|_{D(u)D(v)}\right)P\left(w \right)},\quad P\left(u\right)P\left(q|_{P(v)D(w)}\right),\\ 1\wedge 5&\frac{P\left(u\right)P(v)P(w)}{P\left(q|_{D(u)P(v)}\right)P\left(w \right)},\quad P\left(u\right)P\left(q|_{D(v)P(w)}\right),\\ 2\wedge 1&D\left(q|_{P(u)(P(v))}D\left(w\right),\quad D\left(u\right)D\left(q| _{P(v)P(w)}\right),\\ 2\wedge 2&\frac{D(u)D(v)D(w)}{P\left(q|_{D(u)D(v)}\right)D\left(w \right)},\quad D\left(u\right)D\left(q|_{D(v)D(w)}\right),\end{array}\]
\[\begin{split} 2\wedge 3&\frac{D(P(u))D(v)}{D\left(q|_{P(u) }D(v)\right)},\quad D\left(u\right)D\left(q|_{D(P(u))}\right)D\left(v\right), \quad D\left(u\right)D\left(q|_{D(P(v))}\right),\\ 2\wedge 4&\frac{D(P(u))D(v)}{D\left(q|_{P(u)}D(v) \right)}D\left(w\right),\quad D\left(u\right)D\left(q|_{D(P(v))}\right),\\ 2\wedge 5&\frac{D(u)D(v)P(w)}{D\left(q|_{D(u)P(v)} \right)}D\left(w\right),\quad D\left(u\right)D\left(q|_{D(v)P(w)}\right),\\ 3\wedge 1&\frac{D\left(P\left(q|_{D(u)D(v)} \right)\right)}{D\left(P\left(q|_{P(u)P(v)}\right)\right)},\\ 3\wedge 2&D\left(P\left(q|_{D(u)D(v)}\right) \right),\\ 3\wedge 3&D\left(P\left(q|_{D(P(u))}\right) \right),\\ 3\wedge 4&D\left(P\left(q|_{D(u)D(v)}\right) \right),\\ 3\wedge 5&D\left(P\left(q|_{D(u)P(v)}\right) \right),\\ 4\wedge 1&P\left(q|_{P(u)P(v)}\right)D\left(w\right),\quad P\left(u\right)D\left(q|_{P(v)P(w)}\right),\\ 4\wedge 2&\frac{P(u)D(v)D(w)}{D\left(q|_{D(u)D(v) }\right)}D\left(w\right),\quad P\left(u\right)D\left(q|_{D(v)D(w)}\right),\\ 4\wedge 3&\frac{P(u)D(P(v))}{D\left(q|_{D(v)} \right)}D\left(v\right),\quad P\left(u\right)D\left(q|_{D(P(v))}\right),\\ 4\wedge 4&\frac{P(u)D(v)D(v)}{D\left(q|_{P(u)D(v) }\right)}D\left(w\right),\quad P\left(u\right)D\left(q|_{P(v)D(w)}\right),\\ 4\wedge 5&\frac{P(u)D(v)P(w)}{D\left(u\right)P(v)},\quad P\left(u\right)D\left(q|_{D(v)P(w)}\right),\\ 5\wedge 1&\frac{D(u)P(v)P(w)}{D\left(q|_{D(u)P(v) }\right)}D\left(w\right),\quad D\left(u\right)P\left(q|_{D(v)P(w)}\right),\\ 5\wedge 2&\frac{D(u)D(v)P(w)}{D\left(q|_{D(u)D(v) }\right)}P\left(w\right),\quad D\left(u\right)P\left(q|_{D(v)D(w)}\right),\\ 5\wedge 3&\frac{D(P(u))P(v)}{D\left(q|_{D(P(u) )}\right)}P\left(v\right),\quad D\left(u\right)P\left(q|_{D(P(v))}\right),\\ 5\wedge 4&\frac{D(u)P(v)D(w)}{D\left(q|_{D(u)D(v) }\right)}P\left(w\right),\quad D\left(u\right)P\left(q|_{D(v)D(w)}\right),\\ 5\wedge 5&\frac{D(u)P(v)P(w)}{D\left(q|_{D(u)P(v) }\right)}P\left(w\right),\quad D\left(u\right)P\left(q|_{D(v)P(w)}\right). \end{split}\]
Notice that all compositions above but the underlined ones can be dealt with by Lemma 3.5. It remains to consider the underlined compositions. We only give the complete proof for the case \(5\wedge 1\), the other cases being similar. For the case \(5\wedge 1\), write \(f=\phi_{5}(u,v)\), \(g=\phi_{1}(v,w)\) and \(p=D(u)P(v)P(w)\). So we have
\[(f,g)_{p}^{P(w),D(u)} =-D(uP(v))P(w)+uvP(w)+\lambda D(u)vP(w)\] \[\quad+D(u)P(vP(w))+D(u)P(P(v)w)+\lambda D(u)P(vw)\] \[\equiv-D(uP(v)P(w))+uP(v)w+\lambda D(uP(v))w+uvP(w)+\lambda D(u)vP (w)\] \[\quad+D(uP(P(v)w))-uP(v)w-\lambda D(u)P(v)w\] \[\quad+D(uP(vP(w)))-uvP(w)-\lambda D(u)vP(w)\] \[\quad+\lambda D(uP(vw))-\lambda uvw-\lambda^{2}D(u)vw\] \[\equiv-D(uP(v)P(w))+D(uP(vP(w)))+D(uP(P(v)w))+\lambda D(uP(vw))\] \[\quad-\lambda D(u)P(v)w+\lambda D(uP(v))w-\lambda uvw-\lambda^{2 }D(u)vw\] \[=-D(u\phi_{1}(v,w))-\lambda\phi_{5}(u,v)w\] \[\equiv 0\ \mathrm{mod}\ \left(S_{\phi_{\mathrm{DRB}}}(Z),p\right).\]
We are done.
**Theorem 3.8**.: _Let \(Z\) be a set, \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}_{\mathfrak{Alg}}^{\Phi_{\mathsf{DRB}}\text{-}\mathfrak{Alg}}(A)=\mathbf{k}\mathfrak{S}_{\Omega}(Z)/\left\langle S_{\Phi_{\mathsf{DRB}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{Alg}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi_{\mathsf{DRB}}}(Z)\cup G\) is an operated GS basis of \(\langle S_{\Phi_{\mathsf{DRB}}}(Z)\cup I_{A}\rangle_{\Omega\text{-}\mathfrak{Alg}}\) in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
Proof.: Since the leading monomials of the OPIs in \(\Phi_{\mathsf{DRB}}\) have no subword in \(\mathcal{S}(X)\backslash X\), the result follows immediately from Theorem 3.7 and Theorem 2.13.
As a consequence, we obtain a linear basis.
**Theorem 3.9**.: _Let \(Z\) be a set, \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\rm{dlex}}\). Then the set \({\rm{Irr}}(S_{\Phi_{\sf{DRB}}}(Z)\cup G)\) which is by definition the complement of_
\[\{q|_{\bar{s}},q|_{P(u)P(v)},q|_{D(u)D(v)},q|_{D(P(u))},q|_{P(u)D(v)},q|_{D(u) P(v)},\,s\in G,q\in\mathfrak{S}_{\Omega}^{\star}(Z),u,v\in\mathfrak{S}_{ \Omega}(Z)\}\]
_in \(\mathfrak{S}_{\Omega}(Z)\) is a linear basis of the free nonunital \(\lambda\)-differential Rota-Baxter algebra \(\mathcal{F}_{\mathfrak{Alg}}^{\Phi_{\mathsf{DRB}}\text{-}\mathfrak{Alg}}(A)\) over \(A\)._
Proof.: It can be induced directly by Theorem 2.8.
**Remark 3.10**.: Since the monomial order used in [2] does not satisfy the conditions of Theorem 2.13, we have to make use of a new monomial order while treating free differential Rota-Baxter algebras over an algebra. In fact, since the leading monomials are different, even for free differential Rota-Baxter algebras over a field, our monomial order will provide new operated GS basis and linear basis.
### Case of nonunital algebras with \(\lambda=0\)
Now we consider nonunital free differential Rota-Baxter algebras on algebras with \(\lambda=0\). This case can be studied similarly to the case \(\lambda\neq 0\), so we omit the details in this subsection.
Denote \(\phi_{1}(x,y)\) with \(\lambda=0\) by \(\phi_{1}^{0}(x,y)\). Let
\[\phi_{2}^{0}(x,y)=D(x)y+xD(y)-D(xy).\]
We also write \(\phi_{3}^{0}(x)=\phi_{3}(x)\) for convenience.
**Proposition 3.11**.: _For any \(u,v\in\mathfrak{S}_{\Omega}(Z)\), the leading monomials of \(\phi_{1}^{0}(u,v)\), \(\phi_{2}^{0}(u,v)\) and \(\phi_{3}^{0}(u)\) with respect to \(\leq_{\rm{PD}}\) are \(P(u)P(v),D(u)v\) and \(D(P(u))\) respectively._
Let
\[\Phi_{\sf{DRB}}^{0}{}^{\prime}:=\left\{\,\phi_{1}^{0}(x,y),\phi_{2}^{0}(x,y), \phi_{3}^{0}(x)\right\}.\]
By the following example, one can see that \(\Phi_{\sf{DRB}}^{0}{}^{\prime}\) is not \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\rm{PD}}\).
**Example 3.12**.: For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[\begin{array}{rcl}f&=&\phi_{2}^{0}(P(u),v)=D(P(u))v+P(u)D(v)-D(P(u)v),\\ g&=&\phi_{3}^{0}(u)=D(P(u))-u,\\ q&=&\star v,\\ p&=&D(P(u))v=\bar{f}=\,q|_{\bar{g}}\,.\end{array}\]
Then
\[(f,g)_{p}^{q}=f-\,q|_{g}\equiv P(u)D(v)-D(P(u)v)+uv.\]
Let
\[\phi_{4}^{0}(x,y)=P(x)D(y)-D(P(x)y)+xy.\]
It is clear that the leading monomial of \(\phi_{4}^{0}(u,v)\) with \(u,v\in\mathfrak{S}_{\Omega}(Z)\) is \(P(u)D(v)\) with respect to \(\leq_{\rm{PD}}\) which cannot be reduced further.
Now denote \(\Phi_{\sf{DRB}}^{0}\) to be the set of the following OPIs:
1. \(\phi_{1}^{0}(x,y)=P(x)P(y)-P(xP(y))-P(P(x)y)\),
2. \(\phi_{2}^{0}(x,y)=D(x)y+xD(y)-D(xy)\),
3. \(\phi_{3}^{0}(x)=D(P(x))-x\),
4. \(\phi_{4}^{0}(x,y)=P(x)D(y)-D(P(x)y)+xy\).
It is obvious that \(\left\langle S_{\Phi_{\mathsf{DRB}}^{0}{}^{\prime}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}=\left\langle S_{\Phi_{\mathsf{DRB}}^{0}}(Z)\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\) for an arbitrary set \(Z\).
Similar to the case \(\lambda\neq 0\), it can be proved that \(\Phi_{\mathsf{DRB}}^{0}\) is \(\Omega\)-GS with respect to \(\leq_{\mathrm{PD}}\).
**Remark 3.13**.: Note that \(\phi_{4}^{0}(x,y)\) is just \(\phi_{4}(x,y)\) with \(\lambda=0\), and for \(u,v\in\mathfrak{S}_{\Omega}(Z)\)
\[\phi_{2}^{0}(u,P(v))=D(u)P(v)+uD(P(v))-D(uP(v))\equiv D(u)P(v)+uv-D(uP(v)),\]
which is exactly \(\phi_{5}(u,v)\) with \(\lambda=0\). So \(\phi_{5}(x,y)\) (\(\lambda=0\)) does not appear in \(\Phi_{\mathsf{DRB}}^{0}\).
**Theorem 3.14**.: \(\Phi_{\mathsf{DRB}}^{0}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
Proof.: As in the proof of Theorem 3.7, we write \(i\wedge j\) the composition of OPIs of \(\phi_{i}\) and \(\phi_{j}\). There are two kinds of ambiguities of all possible compositions in \(\Phi_{\mathsf{DRB}}^{0}\). Since \(\phi_{1}^{0}(x,y)\), \(\phi_{3}^{0}(x)\), and \(\phi_{4}^{0}(x,y)\) have the same leading monomials as in the case \(\lambda\neq 0\), the corresponding ambiguities \(i\wedge j\) with \(i,j\in\{1,3,4\}\) are the same in the proof of Theorem 3.7. Since \(\phi_{2}^{0}(x,y)\) has a different leading monomial, the ambiguities of the case \(i\wedge j\) with \(i=2\) or \(j=2\) are the following: for arbitrary \(u,v,w\in\mathfrak{S}_{\Omega}(Z),q\in\mathfrak{S}_{\Omega}{}^{\star}(Z)\) and \(s\in\mathfrak{S}_{\Omega}(Z)\) or \(\emptyset\),
\[\begin{array}{ll}1\wedge 2&P\left(q|_{D(uv)}\right)P\left(w\right),\quad P \left(u\right)P\left(q|_{D(v)w}\right);\\ 2\wedge 1&D\left(q|_{P(u)P(v)}\right)w,\quad D\left(u\right)q|_{P(u)P(v)};\\ 2\wedge 2&\dfrac{D(u)sD(v)w}{D\left(q|_{D(u)}\right)}w,\quad D\left(u\right)q|_{D (v)w};\\ 2\wedge 3&\dfrac{D\left(P(u)\right)v}{D\left(q|_{D(P(u))}\right)}v,\quad D \left(u\right)q|_{D(P(v))};\\ 2\wedge 4&\dfrac{D(u)sP(v)D(w)}{D\left(P\left(q|_{D(u)P}\right)\right)}w,\quad D \left(u\right)q|_{P(v)D(w)};\\ 3\wedge 2&\dfrac{P(u)D(v)w}{D\left(P\left(q|_{D(u)v}\right)\right)};\\ 4\wedge 2&\dfrac{P(u)D(v)w}{D\left(q|_{D(u)v}\right)D\left(w\right)}.\end{array}\]
Almost all the cases can be treated similarly as in the proof of Theorem 3.7, except a slight difference in the case \(2\wedge 2\). In fact, let \(f=\phi_{2}^{0}(u,sD(v))\), \(g=\phi_{2}^{0}(v,w)\) and \(p=D(u)sD(v)w\). So we have
\[(f,g)_{p}^{w,D(u)s} =uD(sD(v))w-D(usD(v))w-D(u)svD(w)+D(u)sD(vw)\] \[\equiv-usD(v)D(w)+uD(sD(v)w)+usD(v)D(w)-D(usD(v)w)\] \[\quad+uD(svD(w))-D(usvD(w))-uD(sD(vw))+D(usD(vw))\] \[=uD(s\phi_{2}^{0}(v,w))-D(us\phi_{2}^{0}(v,w))\] \[\equiv 0\ \mathrm{mod}\left(S_{\Phi_{\mathsf{DRB}}^{0}}(Z),p\right).\]
We are done.
**Theorem 3.15**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}_{\mathfrak{Alg}}^{\Phi_{\mathsf{DRB}}^{0}\text{-}\mathfrak{Alg}}(A)=\mathbf{k}\mathfrak{S}_{\Omega}(Z)/\left\langle S_{\Phi_{\mathsf{DRB}}^{0}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{Alg}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi_{\mathsf{DRB}}^{0}}(Z)\cup G\) is an operated GS basis of \(\left\langle S_{\Phi_{\mathsf{DRB}}^{0}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{Alg}}\) in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
As a consequence, we obtain a linear basis.
**Theorem 3.16**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then the set \(\mathrm{Irr}(S_{\Phi_{\mathsf{DRB}}^{0}}(Z)\cup G)\), which is by definition the complement of_
\[\{q|_{\bar{s}},q|_{P(u)P(v)},q|_{D(u)v},q|_{D(P(u))},q|_{P(u)D(v)},\,s\in G,q\in\mathfrak{S}_{\Omega}^{\star}(Z),u,v\in\mathfrak{S}_{\Omega}(Z)\}\]
_in \(\mathfrak{S}_{\Omega}(Z)\) is a linear basis of the free nonunital \(0\)-differential Rota-Baxter algebra \(\mathcal{F}_{\mathfrak{Alg}}^{\Phi_{\mathsf{DRB}}^{0}\text{-}\mathfrak{Alg}}(A)\) over \(A\)._
### Case of unital algebras
Now we consider unital differential Rota-Baxter algebras. Since the proofs are similar to those in the previous subsections, we omit most of them. The study is again divided into the cases \(\lambda\neq 0\) and \(\lambda=0\).
When \(\lambda\neq 0\), since unital differential Rota-Baxter algebras have the condition \(D(1)=0\), put \(\Phi_{\mathsf{UDRB}}\) to be the union of \(\Phi_{\mathsf{DRB}}\) with \(\{D(1)\}\), but by abuse of notation, in \(\Phi_{\mathsf{DRB}}\), \(x,y\) take their values in \(\mathfrak{M}_{\Omega}(Z)\) instead of \(\mathfrak{S}_{\Omega}(Z)\).
**Remark 3.17**.: We have:
\[\left\{\begin{array}{ll}\phi_{2}(u,v)\equiv 0&\text{when $u=1$ or $v=1$;}\\ \phi_{4}(u,v)\equiv-D(P(u))+u=-\phi_{3}(u)&\text{when $v=1$;}\\ \phi_{5}(u,v)\equiv-D(P(v))+v=-\phi_{3}(v)&\text{when $u=1$.}\end{array}\right.\]
So the addition of the unity \(1\) does not produce new OPIs. Moreover, it is clear that, except for the above cases, the leading monomials of the OPIs in \(\Phi_{\mathsf{DRB}}\) are the same with respect to \(\leq_{\mathrm{PD}}\) and \(\leq_{\mathrm{uPD}}\), by Proposition 1.20.
With similar proofs as in Subsection 3.1, we can prove the following results.
**Theorem 3.18**.: \(\Phi_{\mathsf{UDRB}}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathsf{uPD}}\)._
**Theorem 3.19**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}_{\mathfrak{uAlg}}^{\Phi_{\mathsf{UDRB}}\text{-}\mathfrak{uAlg}}(A)=\mathbf{k}\mathfrak{M}_{\Omega}(Z)/\left\langle S_{\Phi_{\mathsf{UDRB}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi_{\mathsf{UDRB}}}(Z)\cup G\) is an operated GS basis of \(\left\langle S_{\Phi_{\mathsf{UDRB}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}\) in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 3.20**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then the set \(\mathrm{Irr}(S_{\Phi_{\mathsf{UDRB}}}(Z)\cup G)\) which is by definition the complement of_
\[\{q|_{\bar{s}},q|_{P(u)P(v)},q|_{D(u)D(v)},q|_{D(P(u))},q|_{P(u)D(v)},q|_{D(u)P(v)},q|_{D(1)},\,s\in G,q\in\mathfrak{M}_{\Omega}^{\star}(Z),u,v\in\mathfrak{M}_{\Omega}(Z)\}\]
_in \(\mathfrak{M}_{\Omega}(Z)\) is a linear basis of the free unital \(\lambda\)-differential Rota-Baxter algebra \(\mathcal{F}_{\mathfrak{uAlg}}^{\Phi_{\mathsf{UDRB}}\text{-}\mathfrak{uAlg}}(A)\) over \(A\)._
When \(\lambda=0\), denote \(\Phi_{\mathsf{UDRB}}^{0}:=\Phi_{\mathsf{DRB}}^{0}\) (again by abuse of notation, \(\Phi_{\mathsf{DRB}}^{0}\) is understood that \(u,v\) take their values in \(\mathfrak{M}_{\Omega}(X)\) instead of \(\mathfrak{S}_{\Omega}(X)\)).
**Remark 3.21**.: In \(\Phi_{\mathsf{uDRB}}^{0}\), we have
\[\phi_{2}^{0}(1,1)=D(1)+D(1)-D(1)=D(1),\]
so it is not necessary to add \(D(1)\) into \(\Phi_{\mathsf{uDRB}}^{0}\).
Note that \(\phi_{4}^{0}(u,1)\equiv-D(P(u))+u=-\phi_{3}^{0}(u)\), so adding the unity \(1\) will not induce any new OPI.
By using similar proofs in Subsection 3.2, one can show the following results.
**Theorem 3.22**.: \(\Phi^{0}_{\mathsf{UDRB}}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 3.23**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}_{\mathfrak{uAlg}}^{\Phi^{0}_{\mathsf{UDRB}}\text{-}\mathfrak{uAlg}}(A)=\mathbf{k}\mathfrak{M}_{\Omega}(Z)/\left\langle S_{\Phi^{0}_{\mathsf{UDRB}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi^{0}_{\mathsf{UDRB}}}(Z)\cup G\) is an operated GS basis of \(\left\langle S_{\Phi^{0}_{\mathsf{UDRB}}}(Z)\cup I_{A}\right\rangle_{\Omega\text{-}\mathfrak{uAlg}}\) in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 3.24**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then the set \(\mathrm{Irr}(S_{\Phi^{0}_{\mathsf{UDRB}}}(Z)\cup G)\) which is by definition the complement of_
\[\{q|_{\bar{s}},q|_{P(u)P(v)},q|_{D(u)v},q|_{D(P(u))},q|_{P(u)D(v)},s\in G,q\in \mathfrak{M}^{\star}_{\Omega}(Z),u,v\in\mathfrak{M}_{\Omega}(Z)\}\]
_in \(\mathfrak{M}_{\Omega}(Z)\) is a linear basis of the free unital \(0\)-differential Rota-Baxter algebra \(\mathcal{F}_{\mathfrak{uAlg}}^{\Phi^{0}_{\mathsf{UDRB}}\text{-}\mathfrak{uAlg}}(A)\) over \(A\)._
So far, we have completed the study of differential Rota-Baxter algebras.
## 4. Free integro-differential algebras over algebras
In this section, we carry out the study of GS bases of free integro-differential algebras over algebras. It turns out that integro-differential algebras can be investigated by a method similar to that used for differential Rota-Baxter algebras, but the details are more involved.
We first recall the definition of integro-differential algebras.
**Definition 4.1**.: Let \(\lambda\in\mathbf{k}\). An integro-differential \(\mathbf{k}\)-algebra of weight \(\lambda\) (also called a \(\lambda\)-integro-differential \(\mathbf{k}\)-algebra) is a differential \(\mathbf{k}\)-algebra \((R,d)\) of weight \(\lambda\) with a linear operator \(P:R\to R\) which satisfies (c) in Definition 3.1:
\[D\circ P=\ \mathrm{id}\,\]
and such that
\[\begin{array}{ll}P(D(u)P(v))=uP(v)-P(uv)-\lambda P(D(u)v)&\text{for all $u,v\in R$},\\ P(P(u)D(v))=P(u)v-P(uv)-\lambda P(uD(v))&\text{for all $u,v\in R$}.\end{array}\]
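As a concrete sanity check (our own illustration, not part of the original development), the classical calculus pair \(D=\frac{d}{dx}\) and \(P=\int_{0}^{x}\) acting on polynomials is an integro-differential algebra of weight \(\lambda=0\); the following SymPy sketch verifies the two identities above, together with the weight-\(0\) Rota-Baxter identity, on sample polynomials.

```python
# Sketch (illustration only): check the weight-0 integro-differential axioms for the
# classical model D = d/dx and P = definite integration from 0 to x, on polynomials.
import sympy as sp

x = sp.symbols('x')
D = lambda f: sp.diff(f, x)
P = lambda f: sp.integrate(f, (x, 0, x))

u = 3*x**2 + x + 1          # arbitrary test polynomials
v = x**3 - 2*x

assert sp.simplify(D(P(u)) - u) == 0                        # D o P = id
assert sp.expand(P(D(u)*P(v)) - (u*P(v) - P(u*v))) == 0     # first identity, lambda = 0
assert sp.expand(P(P(u)*D(v)) - (P(u)*v - P(u*v))) == 0     # second identity, lambda = 0
assert sp.expand(P(u)*P(v) - P(u*P(v)) - P(P(u)*v)) == 0    # Rota-Baxter identity phi_1, lambda = 0
print("all identities hold on the sample polynomials")
```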
Recall that
1. \(\phi_{1}(x,y)=P(x)P(y)-P(xP(y))-P(P(x)y)-\lambda P(xy)\),
2. \(\phi_{2}(x,y)=D(x)D(y)+\lambda^{-1}D(x)y+\lambda^{-1}xD(y)-\lambda^{-1}D(xy)\),
3. \(\phi_{3}(x)=D(P(x))-x\),
4. \(\phi_{4}(x,y)=P(x)D(y)-D(P(x)y)+xy+\lambda xD(y)\),
5. \(\phi_{5}(x,y)=D(x)P(y)-D(xP(y))+xy+\lambda D(x)y\),
and denote
1. \(\phi_{6}(x,y)=P(D(x)P(y))-xP(y)+P(xy)+\lambda P(D(x)y)\),
2. \(\phi_{7}(x,y)=P(P(x)D(y))-P(x)y+P(xy)+\lambda P(xD(y))\).
Notice that for \(u,v\in\mathfrak{S}_{\Omega}(Z)\), since \(P(D(u)P(v))\) (resp. \(P(P(u)D(v))\)) has the largest \(P\)-degree in \(\phi_{6}(u,v)\) (resp. \(\phi_{7}(u,v)\)), the leading monomial of \(\phi_{6}(u,v)\) (resp. \(\phi_{7}(u,v)\)) with respect to \(\leq_{\mathrm{PD}}\) is \(P(D(u)P(v))\) (resp. \(P(P(u)D(v))\)).
### Case of nonunital algebras with \(\lambda\neq 0\)
Assume in this subsection that \(\lambda\neq 0\). We first consider nonunital free integro-differential \(\mathbf{k}\)-algebras over algebras.
According to the definition of integro-differential algebras, define
\[\Phi_{\mathsf{ID}}{}^{\prime}:=\left\{\ \phi_{2}(x,y),\phi_{3}(x),\phi_{6}(x,y), \phi_{7}(x,y)\ \right\}.\]
By Example 3.3, Example 3.4, Example 4.2 and Example 4.3, \(\Phi_{\mathsf{ID}}{}^{\prime}\) is not \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(X)\) with respect to \(\leq_{\mathrm{PD}}\).
**Example 4.2**.: For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[\begin{array}{rcl}f&=&\phi_{7}(u,v)=P(P(u)D(v))-P(u)v+P(uv)+\lambda P(uD(v)), \\ g&=&\phi_{4}(u,v)=P(u)D(v)-D(P(u)v)+uv+\lambda uD(v),\\ q&=&P(\star),\\ p&=&P(P(u)D(v))=\bar{f}=\left.q\right|_{\bar{g}}.\end{array}\]
Then
\[(f,g)_{p}^{q}=f-\left.q\right|_{g}\equiv-P(D(P(u)v))+P(u)v.\]
Let
\[\phi_{8}(x,y)=P(D(P(x)y))-P(x)y.\]
It is clear that the leading monomial of \(\phi_{8}(u,v)\) is \(P(D(P(u)v))\) with respect to \(\leq_{\mathrm{PD}}\) which cannot be reduced further.
**Example 4.3**.: For \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[\begin{array}{rcl}f&=&\phi_{6}(u,v)=P(D(u)P(v))-uP(v)+P(uv)+\lambda P(D(u)v), \\ g&=&\phi_{5}(u,v)=D(u)P(v)-D(uP(v))+uv+\lambda D(u)v,\\ q&=&P(\star),\\ p&=&P(D(u)P(v))=\bar{f}=\left.q\right|_{\bar{g}}.\end{array}\]
Then
\[(f,g)_{p}^{q}=f-\left.q\right|_{g}\equiv-P(D(uP(v)))+uP(v).\]
Let
\[\phi_{9}(x,y)=P(D(xP(y)))-xP(y).\]
It is clear that the leading monomial of \(\phi_{9}(u,v)\) is \(P(D(uP(v)))\) with respect to \(\leq_{\mathrm{PD}}\) which cannot be reduced further.
**Remark 4.4**.: Note that the OPI \(\phi_{1}(x,y)\) can be induced by \(\phi_{3}(x)\) and \(\phi_{6}(x,y)\). So an integro-differential algebra can be seen as a differential Rota-Baxter algebra. Explicitly, for \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[\begin{array}{rcl}f&=&\phi_{6}(P(u),v)=P(D(P(u))P(v))-P(u)P(v)+P(P(u)v)+ \lambda P(D(P(u))v),\\ g&=&\phi_{3}(u)=D(P(u))-u,\\ q&=&P(\star P(v)),\\ p&=&P(D(P(u))P(v))=\bar{f}=\left.q\right|_{\bar{g}}.\end{array}\]
Then
\[(f,g)_{p}^{q}=f-\left.q\right|_{g}\equiv P(u)P(v)-P(uP(v))-P(P(u)v)-\lambda P (uv)=\phi_{1}(u,v).\]
Now denote \(\Phi_{\mathsf{ID}}\) to be the set of the following OPIs:
1. \(\phi_{1}(x,y)=P(x)P(y)-P(xP(y))-P(P(x)y)-\lambda P(xy),\)
2. \(\phi_{2}(x,y)=D(x)D(y)+\lambda^{-1}D(x)y+\lambda^{-1}xD(y)-\lambda^{-1}D(xy),\)
3. \(\phi_{3}(x)=D(P(x))-x\),
4. \(\phi_{4}(x,y)=P(x)D(y)-D(P(x)y)+xy+\lambda xD(y)\),
5. \(\phi_{5}(x,y)=D(x)P(y)-D(xP(y))+xy+\lambda D(x)y\),
6. \(\phi_{8}(x,y)=P(D(P(x)y))-P(x)y\),
7. \(\phi_{9}(x,y)=P(D(xP(y)))-xP(y)\).
Notice that \(\Phi_{\text{ID}}=\Phi_{\text{DRB}}\cup\{\phi_{8}(x,y),\phi_{9}(x,y)\}\).
**Proposition 4.5**.: \(\left\langle S_{\Phi_{\text{ID}}}{}^{\prime}(Z)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}_{\mathfrak{I}}}}}=\left\langle S_{\Phi_{\text{ID}}}(Z)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}_{\mathfrak{I}}}}}\) _for each set \(Z\)._
Proof.: We first prove \(\left\langle S_{\Phi_{\text{ID}}}(Z)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}}}}\subseteq\left\langle S_{\Phi_{\text{ID}}}{}^{\prime}(Z)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}_{\mathfrak{I}}}}}\), which follows from
\[\left\{\begin{array}{l}\phi_{1}(u,v)\in\left\langle\phi_{3}(u),\phi_{6}(u,v)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}_{\mathfrak{I}}}}}\text{ by Remark 4.4,}\\ \phi_{4}(u,v)\in\left\langle\phi_{2}(u,v),\phi_{3}(u)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}_{\mathfrak{I}}}}}\text{ by Example 3.3,}\\ \phi_{5}(u,v)\in\left\langle\phi_{2}(u,v),\phi_{3}(u)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}_{\mathfrak{I}}}}}\text{ by Example 3.4,}\\ \phi_{8}(u,v)\in\left\langle\phi_{4}(u,v),\phi_{7}(u,v)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}}}}\text{ by Example 4.2,}\\ \phi_{9}(u,v)\in\left\langle\phi_{5}(u,v),\phi_{6}(u,v)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}}}}\text{ by Example 4.3,}\end{array}\right.\]
where \(u,v\in\mathfrak{S}_{\Omega}(Z)\).
Next we show \(\left\langle S_{\Phi_{\text{ID}}}{}^{\prime}(Z)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}}}}\subseteq\left\langle S_{\Phi_{\text{ID}}}(Z)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}}}}\). Note that
\[P(\phi_{4}(u,v)) =P(P(u)D(v))-P(D(P(u)v))+P(uv)+\lambda P(uD(v))\] \[=P(P(u)D(v))-P(u)v+P(uv)+\lambda uD(v)-P(D(P(u)v))+P(u)v\] \[=\phi_{7}(u,v)-\phi_{8}(u,v),\]
and
\[P(\phi_{5}(u,v)) =P(D(u)P(v))-P(D(uP(v)))+P(uv)+\lambda P(D(u)v)\] \[=P(D(u)P(v))-uP(v)+P(uv)+\lambda D(u)v-P(D(uP(v)))+uP(v)\] \[=\phi_{6}(u,v)-\phi_{9}(u,v).\]
So we have
\[\left\{\begin{array}{l}\phi_{6}(u,v)\in\left\langle\phi_{5}(u,v),\phi_{9}(u, v)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}_{ \mathfrak{I}}}}},\\ \phi_{7}(u,v)\in\left\langle\phi_{4}(u,v),\phi_{8}(u,v)\right\rangle_{\Omega \cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}}}}.\end{array}\right.\]
This proves \(\left\langle S_{\Phi_{\text{ID}}}{}^{\prime}(Z)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}}}}\subseteq\left\langle S_{\Phi_{\text{ID}}}(Z)\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathfrak{I}_{\mathfrak{I}}}}\).
We are done.
Now we can prove \(\Phi_{\text{ID}}\) is \(\Omega\)-GS.
**Theorem 4.6**.: \(\Phi_{\text{ID}}\) _is \(\Omega\)-GS in \(\mathbf{k}\widetilde{\Sigma}_{\Omega}(Z)\) with respect to \(\leq_{\text{PD}}\)._
Proof.: Since the ambiguities \(i\wedge j\) with \(i,j=1,2,3,4,5\) in \(\Phi_{\text{ID}}\) are the same as in Theorem 3.7, we only need to consider the ambiguities involving \(\phi_{8}\) and \(\phi_{9}\). The cases that cannot be dealt with directly by Lemma 3.5 are listed below: for arbitrary \(u,v,w\in\mathfrak{S}_{\Omega}(Z)\), \(q\in\mathfrak{S}_{\Omega}^{\star}(Z)\) and \(s\in\mathfrak{S}_{\Omega}(Z)\) or \(s=\emptyset\),
\[\begin{array}{ll}1\wedge 8&P\left(D\left(P\left(u\right)v\right)\right)P\left(w \right),&P\left(u\right)P\left(D\left(P\left(v\right)w\right)\right),\\ 3\wedge 8&D\left(P\left(D\left(P\left(u\right)v\right)\right)\right),\\ 4\wedge 8&P\left(D\left(P(u)v\right)\right)D\left(w\right),\\ 5\wedge 8&D\left(u\right)P\left(D\left(P\left(v\right)w\right)\right),\\ 1\wedge 9&P\left(D\left(uP(v)\right)\right)P\left(w\right),&P\left(u\right)P\left(D \left(vP(w)\right)\right),\\ 3\wedge 9&D\Big{(}P\big{(}D\left(uP(v)\right)\big{)}\Big{)},\\ 4\wedge 9&P\big{(}D\left(uP(v)\right)\big{)}D\left(w\right),\\ 5\wedge 9&D\left(u\right)P\left(D\left(vP\left(w\right)\right)\right),\end{array}\]
\[\begin{array}{ll}8\wedge 1&P\big{(}D\left(P(u)P(v)s\right)\big{)},\\ 8\wedge 4&P\big{(}D\left(P(u)D(v)s\right)\big{)},\\ 9\wedge 1&P\big{(}D\left(sP(u)P(v)\right)\big{)},\\ 9\wedge 5&P\big{(}D\left(sD(u)P(v)\right)\big{)},\\ 8\wedge 8&P\big{(}D\Big{(}P\big{(}D\left(P(u)v\right)w\big{)}\Big{)},\\ 8\wedge 9&P\big{(}D\Big{(}P\big{(}D\left(uP(v)\right)\big{)}w\big{)}\Big{)},\\ 9\wedge 8&P\big{(}D\Big{(}uP(D\left(P(v)w\right)\big{)}\big{)}\big{)}.\end{array}\]
All these compositions can be treated similarly as in the proof of Theorem 3.7. We only give the complete proof for the case \(8\wedge 1\). Take \(f=\phi_{8}(u,P(v)s)\), \(g=\phi_{1}(u,v)\), \(p=P(D(P(u)P(v)s))\) and \(q=P(D(\star s))\). Then we have
\[\begin{array}{rl}(f,g)_{p}^{q}&=-P(u)P(v)s+P(D(P(uP(v))s))+P(D(P(P(u)v)s))+ \lambda P(D(P(uv)s))\\ &\equiv-P(uP(v))s-P(P(u)v)s-\lambda P(uv)s+P(uP(v))s+P(P(u)v)s+\lambda P(uv)s \\ &\equiv 0\ \mathrm{mod}\left(S_{\Phi_{\mathrm{D}}}(Z),p\right).\end{array}\]
We are done.
**Theorem 4.7**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{ \mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{ \mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}}_{\mathfrak{M}}_{ \mathfrak{M}}_{\mathfrak{M}}}}}}}}}}}}}^{\Phi_{\mathrm{D}}\cdot\mathfrak{M}_{ \mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{ \mathfrak{M}}{\mathfrak{M}_{\mathfrak{M}}}}}}}}}^{\Phi_{\mathrm{D}}\cdot \mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{ \mathfrak{M}}{\mathfrak{M}_{\mathfrak{M}}}}}}}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi_{\mathrm{D}}}(Z)\cup G\) is an operated GS basis of \(\big{\langle}S_{\Phi_{\mathrm{D}}}(Z)\cup I_{A}\big{\rangle}_{\Omega\cdot \mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{ \mathfrak{M}_{\mathfrak{M}}}}}}}}\) in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
Proof.: Since the leading monomial in \(\Phi_{\mathrm{ID}}\) has no subword in \(\mathcal{S}(X)\backslash X\), the result follows immediately from Theorem 4.6 and Theorem 2.13.
As a consequence, we obtain a linear basis.
**Theorem 4.8**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then a linear basis of the free nonunital \(\lambda\)-integro-differential algebra \(\mathcal{F}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{ \mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}}}}}}}}^{\Phi_{\mathrm{D}\cdot\mathfrak{ M}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}_{\mathfrak{M}}}}}}}(A)\) over \(A\) is given by the set \(\mathrm{Irr}(S_{\Phi_{\mathrm{D}}}(Z)\cup G)\), which is by definition the complement in \(\mathfrak{S}_{\Omega}(Z)\) of the subset consisting of \(q|_{w}\) where \(w\) runs through_
\[\bar{s},P(u)P(v),D(u)D(v),D(P(u)),P(u)D(v),D(u)P(v),P(D(P(u)v)),P(D(uP(v)))\]
_for arbitrary \(s\in G\), \(q\in\mathfrak{S}_{\Omega}^{\star}(Z),u,v\in\mathfrak{S}_{\Omega}(Z)\)._
Proof.: It can be induced directly from Theorem 2.8.
**Remark 4.9**.: Since the monomial order \(\leq_{\mathrm{PD}}\) is different from that used in [7], our operated GS basis and linear basis are different from theirs. The reason is that the monomial order in [7] does not satisfy the condition of Theorem 2.13, thus cannot enable us to discuss free integro-differential algebras over algebras.
**Remark 4.10**.: Define a new OPI \(\phi_{10}(x)=P(D(x))-x\), and let
\[\Phi_{\mathrm{ID}}=\{\,\phi_{1}(x,y),\phi_{2}(x,y),\phi_{3}(x),\phi_{10}(x)\,\}.\]
A \(\Phi_{\mathrm{ID}}\)-algebra is just a nonunital \(\lambda\)-integro-differential algebra with the operators \(P\) and \(D\) being the inverse operator of each other, so we call such an operated algebra an invertible integro-differential algebra. One can show that \(\Phi_{\mathrm{ID}}\cup\{\phi_{4}(x,y),\phi_{5}(x,y)\}\) is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\).
### Case of nonunital algebras with \(\lambda=0\)
Now we consider nonunital free integro-differential algebras on algebras with \(\lambda=0\). This case can be studied similarly as the case \(\lambda\neq 0\), so we omit the details in this subsection.
As in Subsection 3.2, for an OPI \(\phi\), we denote by \(\phi^{0}\) the identity \(\phi\) with \(\lambda=0\), and for convenience we also write \(\phi^{0}=\phi\) when \(\lambda\) does not appear in \(\phi\). Let
\[\Phi^{0}_{\mathsf{ID}}{}^{\prime}:=\left\{\,\phi^{0}_{2}(x,y),\phi^{0}_{3}(x), \phi^{0}_{6}(x,y),\phi^{0}_{7}(x,y)\right\}.\]
Again, \(\Phi^{0}_{\mathsf{ID}}{}^{\prime}\) is not \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\).
**Remark 4.11**.: By Example 4.2, we can get \(\phi^{0}_{8}(u,v)\) from \(\phi^{0}_{4}(u,v)\) and \(\phi^{0}_{7}(u,v)\).
One can not obtain \(\phi^{0}_{9}(u,v)\) from \(S_{\Phi^{0}_{\mathsf{ID}}}{}^{\prime}(Z)\) as in Example 4.3, since \(\phi_{5}\) does not belong to \(\Phi^{0}_{\mathsf{ID}}{}^{\prime}\). However, we can still generate \(\phi^{0}_{9}(u,v)\) as follows: for \(u,v\in\mathfrak{S}_{\Omega}(Z)\), let
\[\begin{array}{rcl}f&=&\phi^{0}_{6}(u,v)=P(D(u)P(v))-uP(v)+P(uv),\\ g&=&\phi^{0}_{2}(u,P(v))=D(u)P(v)+uD(P(v))-D(uP(v)),\\ q&=&P(\star),\\ w&=&P(D(u)P(v))=\bar{f}=\,q|_{\bar{g}}\,.\end{array}\]
Then
\[(f,g)_{w}=f-\,q|_{g}\equiv P(D(uP(v)))-uP(v)=\phi^{0}_{9}(u,v).\]
Now denote \(\Phi^{0}_{\mathsf{ID}}\) to be the set of the following OPIs:
1. \(\phi^{0}_{1}(x,y)=P(x)P(y)-P(xP(y))-P(P(x)y)\),
2. \(\phi^{0}_{2}(x,y)=D(x)y+xD(y)-D(xy)\),
3. \(\phi^{0}_{3}(x)=D(P(x))-x\),
4. \(\phi^{0}_{4}(x,y)=P(x)D(y)-D(P(x)y)+xy\),
5. \(\phi^{0}_{8}(x,y)=P(D(P(x)y))-P(x)y\),
6. \(\phi^{0}_{9}(x,y)=P(D(xP(y)))-xP(y)\).
As in the previous subsection, one can prove the following results.
**Proposition 4.12**.: \(\left\langle S_{\Phi^{0}_{\mathsf{ID}}}{}^{\prime}(Z)\right\rangle_{\Omega \cdot\mathfrak{M}_{\mathsf{IB}}}=\left\langle S_{\Phi^{0}_{\mathsf{ID}}}(Z) \right\rangle_{\Omega\cdot\mathfrak{M}_{\mathsf{IB}}}\) _for any set \(Z\)._
**Theorem 4.13**.: \(\Phi^{0}_{\mathsf{ID}}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
**Theorem 4.14**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathbf{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}^{\Phi^{0}_{\mathsf{ID}}\cdot\mathfrak{M}_{\mathsf{IB}}}_{\mathsf{IB }}(A)=\mathbf{k}\mathfrak{S}_{\Omega}(Z)/\left\langle S_{\Phi^{0}_{\mathsf{ID} }}(Z)\cup I_{A}\right\rangle_{\Omega\cdot\mathfrak{M}_{\mathsf{IB}}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi^{0}_{\mathsf{ID}}}(Z)\cup G\) is an operated GS basis of \(\left\langle S_{\Phi^{0}_{\mathsf{ID}}}(Z)\cup I_{A}\right\rangle_{\Omega\cdot \mathfrak{M}_{\mathsf{IB}}}\) in \(\mathbf{k}\mathfrak{S}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{PD}}\)._
**Theorem 4.15**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathbf{S}(Z)/I_{A}\) a nonunital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then the set \(\mathrm{Irr}(S_{\Phi^{0}_{\mathsf{ID}}}(Z)\cup G)\) which is by definition the complement of_
\[\{q|_{s},q|_{P(u)P(v)},q|_{D(u)v},q|_{D(P(u))},q|_{P(u)D(v)},q|_{P(D(P(u)v))},q| _{P(D(uP(v)))},s\in G,q\in\mathfrak{S}^{\star}_{\Omega}(Z),u,v\in\mathfrak{S} _{\Omega}(Z)\}\]
_in \(\mathfrak{S}_{\Omega}(Z)\) is a linear basis of the free nonunital \(0\)-integro-differential algebra \(\mathcal{F}^{\Phi^{0}_{\mathsf{ID}}\cdot\mathfrak{M}_{\mathsf{IB}}}_{\mathsf{IB }}(A)\) over \(A\)._
### Case of unital algebras
Now we consider unital integro-differential algebras. Since the proofs are similar to those in the previous subsections, we omit most of them. The study is again divided into the cases \(\lambda\neq 0\) and \(\lambda=0\).
When \(\lambda\neq 0\), since unital integro-differential algebras have the condition \(D(1)=0\), we put \(\Phi_{\mathrm{ulD}}:=\Phi_{\mathrm{lD}}\cup\{D(1)\}\).
**Theorem 4.16**.: \(\Phi_{\mathrm{ulD}}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 4.17**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}_{\mathrm{uHTl}_{\mathrm{uHTl}_{\mathrm{u}}}}^{\Phi_{\mathrm{ulD}} \cdot\mathrm{uHTl}_{\mathrm{u}}}(A)=\mathbf{k}\mathfrak{M}_{\Omega}(Z)/ \left\langle S_{\Phi_{\mathrm{ulD}}}(Z)\cup I_{A}\right\rangle_{\Omega\cdot \mathrm{uHTl}_{\mathrm{u}}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi_{\mathrm{ulD}}}(Z)\cup G\) is an operated GS basis of \(\left\langle S_{\Phi_{\mathrm{ulD}}}(Z)\cup I_{A}\right\rangle_{\Omega\cdot \mathrm{uHTl}_{\mathrm{u}}}\) in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 4.18**.: _Let \(Z\) be a set, \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then a linear basis of the free unital \(\lambda\)-integro-differential algebra \(\mathcal{F}_{\mathrm{uHTl}_{\mathrm{uHTl}_{\mathrm{u}}}}^{\Phi_{\mathrm{ulD}} \cdot\mathrm{uHTl}_{\mathrm{u}}}(A)\) over \(A\) is given by the set \(\mathrm{Irr}(S_{\Phi_{\mathrm{ulD}}}(Z)\cup G)\), which is by definition the complement in \(\mathfrak{M}_{\Omega}(Z)\) of the subset consisting of \(q|_{w}\) where \(w\) runs through_
\[\bar{s},P(u)P(v),D(u)D(v),D(P(u)),P(u)D(v),D(u)P(v),P(D(P(u)v)),P(D(uP(v))),D(1)\]
_for arbitrary \(s\in G,q\in\mathfrak{M}_{\Omega}^{\star}(Z),u,v\in\mathfrak{M}_{\Omega}(Z)\)._
When \(\lambda=0\), denote \(\Phi_{\mathrm{ulD}}^{0}:=\Phi_{\mathrm{lD}}^{0}\).
**Theorem 4.19**.: \(\Phi_{\mathrm{ulD}}^{0}\) _is \(\Omega\)-GS in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 4.20**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra. Then we have:_
\[\mathcal{F}_{\mathrm{uHTl}_{\mathrm{u}}}^{\Phi_{\mathrm{ulD}}^{0}\cdot\mathrm{ uHTl}_{\mathrm{u}}}(A)=\mathbf{k}\mathfrak{M}_{\Omega}(Z)/\left\langle S_{\Phi_{ \mathrm{ulD}}^{0}}(Z)\cup I_{A}\right\rangle_{\Omega\cdot\mathrm{uHTl}_{ \mathrm{u}}}.\]
_Moreover, assume \(I_{A}\) has a GS basis \(G\) with respect to the degree-lexicographical order \(\leq_{\mathrm{dlex}}\). Then \(S_{\Phi_{\mathrm{ulD}}^{0}}(Z)\cup G\) is an operated GS basis of \(\left\langle S_{\Phi_{\mathrm{ulD}}^{0}}(Z)\cup I_{A}\right\rangle_{\Omega \cdot\mathrm{uHTl}_{\mathrm{u}}}\) in \(\mathbf{k}\mathfrak{M}_{\Omega}(Z)\) with respect to \(\leq_{\mathrm{uPD}}\)._
**Theorem 4.21**.: _Let \(Z\) be a set and \(A=\mathbf{k}\mathcal{M}(Z)/I_{A}\) a unital \(\mathbf{k}\)-algebra with a GS basis \(G\) with respect to \(\leq_{\mathrm{dlex}}\). Then the set \(\mathrm{Irr}(S_{\Phi_{\mathrm{ulD}}^{0}}(Z)\cup G)\) which is by definition the complement of_
\[\{q|_{\bar{s}},q|_{P(u)P(v)},q|_{D(u)v},q|_{D(P(u))},q|_{P(u)D(v)},q|_{P(D(P(u)v))},q|_{P(D(uP(v)))},\ s\in G,q\in\mathfrak{M}_{\Omega}^{\star}(Z),u,v\in\mathfrak{M}_{\Omega}(Z)\}\]
_in \(\mathfrak{M}_{\Omega}(Z)\) is a linear basis of the free unital \(0\)-integro-differential algebra \(\mathcal{F}_{\mathrm{uHTl}_{\mathrm{u}}}^{\Phi_{\mathrm{ulD}}^{0}\cdot\mathrm{ uHTl}_{\mathrm{u}}}(A)\) over \(A\)._
### Differential Rota-Baxter algebras vs integro-differential algebras
Since integro-differential algebras have one more defining relation than differential Rota-Baxter algebras, by Proposition 2.10, the free integro-differential algebra over an algebra \(A\) is in general a quotient of the free differential Rota-Baxter algebra over \(A\). However, by using the descriptions of \(\Phi_{\mathrm{DRB}}\) and \(\Phi_{\mathrm{ID}}\) and Theorems 3.7 and 4.6, we can also show that the former is a differential Rota-Baxter subalgebra of the latter.
**Theorem 4.22**.: _The free nonunital \(\lambda\)-integro-differential algebra \(\mathcal{F}_{\mathfrak{M}_{\mathrm{lg}}}^{\Phi_{\mathrm{ID}}\cdot\mathrm{uHTl}_{\mathrm{u}}}(A)\) over an algebra \(A\) is a differential Rota-Baxter subalgebra of the free nonunital \(\lambda\)-differential Rota-Baxter algebra \(\mathcal{F}_{\mathfrak{M}_{\mathrm{lg}}}^{\Phi_{\mathrm{DRB}}\cdot\mathrm{uHTl}_{\mathrm{u}}}(A)\) over \(A\)._
Proof.: We have the observation mentioned before
\[\Phi_{\text{ID}}=\Phi_{\text{DRB}}\cup\{\phi_{8}(x,y),\phi_{9}(x,y)\}.\]
That is to say, the operated Gröbner-Shirshov basis of the free nonunital \(\lambda\)-differential Rota-Baxter algebra \(\mathcal{F}_{\mathfrak{nil}_{\mathfrak{g}}}^{\Phi_{\text{DRB}}\cdot\mathfrak{nil}_{\mathfrak{g}}}(A)\) over an algebra \(A\) is a subset of that of the free nonunital \(\lambda\)-integro-differential algebra \(\mathcal{F}_{\mathfrak{nil}_{\mathfrak{g}}}^{\Phi_{\text{ID}}\cdot\mathfrak{nil}_{\mathfrak{g}}}(A)\) over \(A\). So, by the Diamond Lemma, \(\mathcal{F}_{\mathfrak{nil}_{\mathfrak{g}}}^{\Phi_{\text{ID}}\cdot\mathfrak{nil}_{\mathfrak{g}}}(A)\) is a subspace of \(\mathcal{F}_{\mathfrak{nil}_{\mathfrak{g}}}^{\Phi_{\text{DRB}}\cdot\mathfrak{nil}_{\mathfrak{g}}}(A)\). It is then clear that \(\mathcal{F}_{\mathfrak{nil}_{\mathfrak{g}}}^{\Phi_{\text{ID}}\cdot\mathfrak{nil}_{\mathfrak{g}}}(A)\) is also a differential Rota-Baxter subalgebra of \(\mathcal{F}_{\mathfrak{nil}_{\mathfrak{g}}}^{\Phi_{\text{DRB}}\cdot\mathfrak{nil}_{\mathfrak{g}}}(A)\).
**Remark 4.23**.: Gao and Guo [7] also studied GS bases of the free integro-differential algebras and free differential Rota-Baxter algebra both generated by sets, and they deduced that the free integro-differential algebra generated by a set is a subalgebra of the free differential Rota-Baxter algebra generated by the same set. Theorem 4.22 proves an analogous fact for these free algebras generated by algebras. However, our method is completely different from theirs.
**Remark 4.24**.: By using the descriptions of \(\Phi_{\text{DRB}}^{0}\) and \(\Phi_{\text{ID}}^{0}\) (resp. \(\Phi_{\text{uDRB}}\) and \(\Phi_{\text{uID}}\), \(\Phi_{\text{uDRB}}^{0}\) and \(\Phi_{\text{uID}}^{0}\)) and Theorems 3.14 and 4.13 (resp. Theorems 3.18 and 4.16, Theorems 3.22 and 4.19), the analogue of Theorem 4.22 holds in both the unital and nonunital cases for any \(\lambda\) (zero or nonzero).
**Acknowledgements:** The authors were supported by NSFC (No. 11771085, 12071137) and by STCSM (22DZ2229014).
|
2309.06231 | Steady-state selection in multi-species driven diffusive systems | We introduce a general method to determine the large scale non-equilibrium
steady-state properties of one-dimensional multi-species driven diffusive
systems with open boundaries, generalizing thus the max-min current principle
known for systems with a single type of particles. This method is based on the
solution of the Riemann problem of the associated system of conservation laws.
We demonstrate that the effective density of a reservoir depends not only on
the corresponding boundary hopping rates but also on the dynamics of the entire
system, emphasizing the interplay between bulk and reservoirs. We highlight the
role of Riemann variables in establishing the phase diagram of such systems. We
apply our method to three models of multi-species interacting particle systems
and compare the theoretical predictions with numerical simulations. | Luigi Cantini, Ali Zahra | 2023-09-12T13:49:56Z | http://arxiv.org/abs/2309.06231v1 | # Steady-state selection in multi-species driven diffusive systems
###### Abstract
We introduce a general method to determine the large scale non-equilibrium steady-state properties of one-dimensional multi-species driven diffusive systems with open boundaries, generalizing thus the max-min current principle known for systems with a single type of particles. This method is based on the solution of the Riemann problem of the associated system of conservation laws. We demonstrate that the effective density of a reservoir depends not only on the corresponding boundary hopping rates but also on the dynamics of the entire system, emphasizing the interplay between bulk and reservoirs. We highlight the role of Riemann variables in establishing the phase diagram of such systems. We apply our method to three models of multi-species interacting particle systems and compare the theoretical predictions with numerical simulations.
Driven diffusive systems appear in various areas across physics, chemistry, and theoretical biology [1; 2; 3] and are widely regarded as a fundamental playground for understanding the behavior of complex systems away from thermal equilibrium [4]. A classic illustration of such systems involves particles moving on a lattice subject to hard-core exclusion. The introduction of a bias in their movement, simulating the influence of an external driving force, builds up macroscopic currents in the stationary state. A particularly relevant setting consists in putting a one-dimensional system in contact with boundary particle reservoirs, the interplay between boundary dynamics and bulk driving leading to genuinely out-of-equilibrium phenomena such as boundary-induced phase transitions [5]. In this case, when the system contains a single species of particles, a simple general principle known as the max-min current principle [5; 6; 7; 8] allows one to determine the phase diagram for the steady-state current and particle density as a function of the boundary reservoir densities. Despite the success of this principle in treating one-dimensional open boundary problems, its generalization to systems containing several different species of particles has been a long-standing challenge [9; 10; 11; 12].
The goal of the present paper is to put forward a scheme that permits one to determine the steady-state average particle densities and currents of one-dimensional multi-species driven systems with open boundaries. Such a scheme is based essentially on the sole knowledge of the bulk hydrodynamic behavior of the model. As a starting point, similarly to the max-min principle, one supposes the boundary densities to be known. In a system with \(n\) different particle species, these are denoted by \(\mathbf{\rho}^{L}=\{\rho_{1}^{L},\rho_{2}^{L},\ldots,\rho_{n}^{L}\}\) for the left boundary and \(\mathbf{\rho}^{R}=\{\rho_{1}^{R},\rho_{2}^{R},\ldots,\rho_{n}^{R}\}\) for the right boundary. Then the bulk density is determined by the solution of the associated Riemann problem at the origin (\(\mathrm{RP}_{0}\))
\[(\mathbf{\rho}^{L},\mathbf{\rho}^{R})\xrightarrow{\mathrm{RP}_{0}}\mathbf{\rho}^{B}. \tag{1}\]
As a first argument in support of this claim, we shall show that this principle is equivalent to Krug's max-min current principle when applied to the case of a single-species model. We shall moreover present a further heuristic justification of it, based on a vanishing viscosity regularization of the associated conservation laws, which applies to the general multi-species case.
By itself the principle (1) is not enough to determine the bulk densities since one has at the same time to make sense of the boundary densities. If one supposes that the boundary currents are functions of the boundary densities alone, then current conservation through the entire systems provides the missing conditions to completely determine both bulk and boundary densities. We apply this scheme to three models, where we have access to the particle currents as functions of the particle densities (which is necessary in order to solve numerically the associated Riemann problem): 2-TASEP with arbitrary bulk hopping rates, hierarchical 2-ASEP and a 3-TASEP. In all these three model we find good agreement with numerical simulations.
## I The scheme
The large scale behavior of driven diffusive system consisting of \(n\) species of particles is generally governed by a system of conservation laws
\[\partial_{t}\mathbf{\rho}+\partial_{x}\mathbf{J}=0 \tag{2}\]
where the \(n\) locally conserved quantities are the coarse-grained particle densities \(\mathbf{\rho}(x,t)=(\rho_{1}(x,t),...,\rho_{n}(x,t))\), with associated currents \(\mathbf{J}(\mathbf{\rho})=(J_{1}(\mathbf{\rho}),..,J_{n}(\mathbf{\rho}))\). When the system is defined on a finite interval \(x\in[0,L]\) and coupled to two reservoirs with densities \(\mathbf{\rho}^{L}\) and \(\mathbf{\rho}^{R}\) the system reaches in the limit \(t\to\infty\) a steady state with uniform bulk densities \(\mathbf{\rho}^{B}(\mathbf{\rho}^{L},\mathbf{\rho}^{R})\). We claim that for \(L\to\infty\), these bulk densities are determined by solving a Riemann problem. Such a problem is formulated on an infinite line \(x\in\mathbb{R}\) with an initial condition consisting of two regions of uniform densities, on the left and on the
right of the origin \(x=0\)
\[\mathbf{\rho}(x,0)=\mathbf{\rho}^{L}\mathds{1}_{x<0}(x)+\mathbf{\rho}^{R}\mathds{1}_{x>0}(x) \quad x\in\mathbb{R}.\]
The solution of the Riemann problem is invariant under the rescaling \((x,t)\rightarrow(\lambda x,\lambda t)\) and therefore takes the form \(\mathbf{\rho}(x,t)=\mathbf{\rho}(\frac{x}{t})\). In particular, for \(t>0\), \(\mathbf{\rho}(0,t)\) is independent of time, so we define \(\mathbf{\rho}|_{0}(\mathbf{\rho}^{L},\mathbf{\rho}^{R}):=\mathbf{\rho}(0,t)\) and we call it _the solution to the Riemann problem at the origin_.
\[\boxed{\mathbf{\rho}^{B}(\mathbf{\rho}^{L},\mathbf{\rho}^{R})=\mathbf{\rho}|_{0}(\mathbf{\rho}^{L},\mathbf{\rho}^{R})} \tag{3}\]
The exact meaning of the boundary conditions is a mathematically subtle issue [13; 14; 15]. We define them in an operative way as the densities of the first and last sites of the lattice, meaning that the two boundary sites can be conceptually considered as part of their nearby reservoirs. Let us be more specific about the boundary dynamics we shall consider. At each boundary a particle can either enter or exit the system, or it can change its own species. If we identify empty sites with particles of a species \(0\), the dynamics is fully encoded in the rates \(\mathbf{\nu}^{L}=\{\nu^{L}_{i,j},0\leq i\neq j\leq n\}\) at the left and \(\mathbf{\nu}^{R}=\{\nu^{R}_{i,j},0\leq i\neq j\leq n\}\) at the right boundary
\[j\xrightarrow{\nu^{L}_{i,j}}i\qquad i\xrightarrow{\nu^{R}_{i,j}}j\]
The boundary densities \(\mathbf{\rho}^{L}\) and \(\mathbf{\rho}^{R}\), as well as the bulk ones are then functions of the boundary rates.
Since the boundary hopping rates are independent of the rest of the system, we can write the current on a given boundary as a function of the density of that boundary only
\[J^{L}_{i}(\mathbf{\rho}^{L}) =\sum_{j=1}^{n}\rho_{j}\nu^{L}_{ij}-\rho_{i}\sum_{j=1}^{n}\nu^{L}_ {ji} \tag{4}\] \[J^{R}_{i}(\mathbf{\rho}^{R}) =\rho_{i}\sum_{j=1}^{n}\nu^{R}_{ij}-\sum_{j=1}^{n}\rho_{j}\nu^{R}_ {ji}\]
In the steady state, we have
\[\mathbf{J}^{L}(\mathbf{\rho}^{L})=\mathbf{J}(\mathbf{\rho}^{B})=\mathbf{J}^{R}(\mathbf{\rho}^{R}) \tag{5}\]
In conclusion, eqs.(3,5) provide a system of equation enabling to determine the bulk and boundary densities of the system.
### Reformulation of the Max-Min Current Principle
A first argument in favor of the principle eq.(3) is the fact that in the case of a single species of particle it coincides with Krug's max-min current principle. According to this principle, the steady-state current is obtained as: [5; 7; 8; 16]
\[j=\begin{cases}\max_{\rho\in[\rho^{R},\rho^{L}]}J(\rho)&\text{if }\rho^{L}>\rho^{R}\\ \min_{\rho\in[\rho^{L},\rho^{R}]}J(\rho)&\text{if }\rho^{L}<\rho^{R}\end{cases} \tag{6}\]
Let's compare this result with what one would obtain by applying eq.(3). Let's start with the case where \(\rho^{R}>\rho^{L}\), which corresponds to a minimum current phase. When considering the associated Riemann problem we can assume the current \(J\) to be a convex function of the density in the interval \([\rho^{L},\rho^{R}]\), otherwise one has to replace it with its convex hull in the interval \([\rho^{L},\rho^{R}]\)[17]. The solution to the Riemann problem can be expressed as a function of \(u=\frac{x}{t}\):
\[\rho(u)=\rho^{L}\mathbf{1}_{u<v(\rho^{L})}+\rho^{R}\mathbf{1}_{u>v(\rho^{R})}+v^{-1}(u )\mathbf{1}_{v(\rho^{L})<u<v(\rho^{R})} \tag{7}\]
where \(v(\rho):=\frac{dJ}{d\rho}\). To compare the solution at zero with the density predicted by the minimum current phase, we can identify three cases:
1) If \(v(\rho^{L})>0\), then the solution at zero has a value of \(\rho^{L}\), and simultaneously, the minimum \(\min_{\rho\in[\rho^{L},\rho^{R}]}(J(\rho))\) is reached at \(\rho^{L}\). In this case, the bulk has the same density as the left boundary, which we refer to as the _left-induced phase_.
2) If \(v(\rho^{R})<0\), then the solution at zero has a value of \(\rho^{R}\), and simultaneously, the minimum \(\min_{\rho\in[\rho^{L},\rho^{R}]}(J(\rho))\) is attained at \(\rho^{R}\). This is referred to as a _right-induced phase_.
3) If neither of the two previous statements is true, there exists, due to the monotonicity of the derivative, a unique value \(\rho^{B}\in[\rho^{L},\rho^{R}]\) for which \(v(\rho^{B})=0\). This value corresponds to both the Riemann solution at zero and the minimum \(\min_{\rho\in[\rho^{L},\rho^{R}]}(J(\rho))\). We refer to this situation as the _bulk-induced phase_.
When \(\rho^{R}<\rho^{L}\), a similar reasoning can be applied, but we replace \(J(\rho)\) with its concave hull over the interval \([\rho^{R},\rho^{L}]\). So we conclude that the max-min current principle and the eq.(3) give the same answer.
As an example, in the case of a single-species TASEP, we have \(v(\rho^{B})=1-2\rho^{B}\). When \(v>0\), we have \(\rho^{B}<\frac{1}{2}\), which corresponds to the low-density phase, and the bulk is left-induced. The high-density regime corresponds to a right-induced bulk density. The maximal current phase, where \(\rho=\frac{1}{2}\), is not induced from either the left or the right.
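For a quick numerical illustration (ours, not taken from the original text), the rule (6) can be evaluated on a grid of densities; with \(J(\rho)=\rho(1-\rho)\) for the single-species TASEP, the three cases below reproduce the left-induced, right-induced and bulk-induced (maximal current) densities just discussed.

```python
# Sketch: max-min current principle (6) for a single-species TASEP, J(rho) = rho(1 - rho).
import numpy as np

def J(rho):
    return rho * (1.0 - rho)

def bulk_density(rho_L, rho_R, n_grid=10001):
    grid = np.linspace(min(rho_L, rho_R), max(rho_L, rho_R), n_grid)
    if rho_L > rho_R:                        # maximal-current branch of (6)
        return grid[np.argmax(J(grid))]
    return grid[np.argmin(J(grid))]          # minimal-current branch of (6)

print(bulk_density(0.2, 0.1))   # ~0.2: low density, left-induced
print(bulk_density(0.9, 0.8))   # ~0.8: high density, right-induced
print(bulk_density(0.9, 0.1))   # ~0.5: maximal-current phase
```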
### Multiple Conserved Quantities
In this section we shall provide a plausibility argument for eq.(3). It will be by no means a proof of that equation, but more support will come from the comparison with simulations, discussed in the next section. Our argument is based on a vanishing viscosity approach. This involves adding a diffusive component to the current such that
the total current, which remains constant in the steady state, is given by:
\[\mathbf{J}^{total}=\mathbf{J}(\mathbf{\rho})-\epsilon D(\mathbf{\rho})\frac{\partial\mathbf{\rho}}{ \partial x} \tag{8}\]
Here, \(\epsilon>0\) and \(D(\mathbf{\rho})\) is a positive-definite matrix. Since the conservation laws become locally scalar in the directions of the eigenvectors of the Jacobian \(\frac{\partial J_{i}}{\partial\rho_{j}}\) we assume that this property extends to the viscous case, implying that \(D(\mathbf{\rho})\) commutes with the Jacobian. This assumption ensures a mathematically stable regularization scheme for the boundary problem.
For the rest of the argument we shall assume that the conservation laws eq.(2) admit \(n\) independent Riemann variables \(\mathbf{\xi}=(\xi_{1},\ldots,\xi_{n})\). These are functions of the densities \(\mathbf{\xi}(\mathbf{\rho})\), that "diagonalize" the conservation equations eq.(2), in the sense
\[\partial_{t}\xi_{i}(x,t)+v_{i}(\mathbf{\xi})\partial_{x}\xi_{i}(x,t)=0,\]
where it can be shown that the speeds \(v_{k}\) are the eigenvalues of the Jacobian matrix \(\frac{\partial J_{i}}{\partial\rho_{j}}(\mathbf{\rho})\). We remark that the existence of the Riemann variables is ensured for \(n=1,2\) (for \(n=1\) the Riemann variable is the density itself). Now, rewriting eq.(8) in terms of the Riemann variables we get the ordinary differential equation:
\[\frac{\partial\mathbf{\xi}}{\partial x}=\epsilon^{-1}M^{-1}D^{-1}(J(\mathbf{\xi})-J^ {total}):=F(\mathbf{\xi}) \tag{9}\]
where \(M_{ij}=\frac{\partial\rho_{i}}{\partial\xi_{j}}\). In the limit \(\epsilon\to 0\) we have as expected \(J(\mathbf{\xi})=J^{total}\) throughout the system, with the possible exception of microscopic regions close to the boundaries. This means that the bulk value \(\mathbf{\xi}^{B}\) represents a stationary point of the ODE (9), \(F(\mathbf{\xi}^{B})=0\). In order to determine the relation between the bulk and boundary values of each Riemann variable, we linearize the ODE around the stationary point. It is not difficult to show that the Jacobian matrix \(\frac{\partial F}{\partial\xi}\) is diagonal at the stationary point, \(\frac{\partial F_{i}}{\partial\xi_{j}}(\mathbf{\xi}^{B})=\epsilon^{-1}d_{i}^{-1}v_{i}\delta_{ij}\), where \(d_{i}>0\) are the eigenvalues of the diffusion matrix \(D\). An illustrative example of the field associated to the ODE for a two-component system is shown in figure 2.
* When \(v_{i}<0\), then \(\xi_{i}(x)\) experiences exponential decay towards the stationary bulk value. The decay rate is given by \(\mu_{i}=\epsilon^{-1}d_{i}^{-1}v_{i}\). In this scenario, the bulk stationary value is attained on the left side after a boundary layer of typical size \(1/\mu_{i}\), which is proportional to \(\epsilon\). On the right boundary, the system simply extends the bulk behavior, indicating a right-induced phase.
* When \(v_{i}>0\), using a similar argument we can infer that \(\xi_{i}\) is induced from the left, and the boundary layer is located on the right.
* When \(v_{i}=0\), the size of the boundary layer diverges for finite \(\epsilon\). The flow of the ODE in the direction of the associated eigenvector indeed ceases to be exponential and becomes polynomial instead. The bulk is therefore not induced by either boundary; however, it belongs to the manifold \(v_{i}(\mathbf{\xi})=0\). We say that we are in a bulk-induced phase for \(\xi_{i}\).
This is the same result one would obtain by considering the solution of Riemann problem at the origin. Let's point out that the idea of looking at the signs of eigenvalues governing the phase transition in multi-species driven diffusive systems has already been discussed in [18], however without reference to the Riemann variables.
## II Application to multi-components interacting particles systems
In this section we consider three different driven diffusive systems. The first two contain each two species of particles. More specifically the first one is the 2-TASEP introduced in [19; 20], while the second one is a hierarchical 2-species ASEP. The third model is a particular case of 3-species TASEP. For all this models we compare numerical simulations with the predictions of the system of equations eq.(3) and eq.(5).
This system of equations cannot be solved analytically therefore we make use of an iterative procedure: we begin by selecting random initial densities for the boundaries. Then, we determine the bulk density using equation 3, which provides information about the current. Subsequently, we calculate the boundary densities by inverting equation 4. We continue this iteration process between the boundaries and the bulk until convergence is achieved. However, it is worth noticing that this algorithm may encounter cyclic trajectories. To prevent this issue, we introduce a damping parameter \(\gamma\), which should be chosen sufficiently small. The updated equation becomes: \(\mathbf{x}^{n+1}=\gamma\mathbf{f}(\mathbf{x}^{n})+(1-\gamma)\mathbf{x}^{n}\) Here, \(\mathbf{x}^{n}\) represents the set of variables after the \(n\)-th iteration, and \(\mathbf{f}\) represents the set of functions governing the iterations.
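A minimal sketch of this damped iteration is given below; the three callables are placeholders for the model-specific ingredients (the Riemann solver of eq. (3), the bulk current-density relation, and the inversion of the boundary relations (4)), so the function names are ours and purely illustrative.

```python
# Sketch of the damped fixed-point iteration described above (illustration only).
import numpy as np

def steady_state(riemann_at_origin,    # (rho_L, rho_R) -> rho_B, the solution of eq. (3)
                 bulk_current,         # rho -> J(rho), bulk current-density relation
                 boundary_density,     # (J, nu, side) -> boundary density inverting eq. (4)
                 nu_L, nu_R, n_species,
                 gamma=0.01, n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    rho_L = rng.uniform(0.0, 1.0 / n_species, size=n_species)   # random initial boundary densities
    rho_R = rng.uniform(0.0, 1.0 / n_species, size=n_species)
    rho_B = None
    for _ in range(n_iter):
        rho_B = riemann_at_origin(rho_L, rho_R)                  # bulk density from eq. (3)
        J_B = bulk_current(rho_B)                                # stationary current
        new_L = np.asarray(boundary_density(J_B, nu_L, "left"))  # current matching, eq. (5)
        new_R = np.asarray(boundary_density(J_B, nu_R, "right"))
        rho_L = gamma * new_L + (1.0 - gamma) * rho_L            # damped update x^{n+1}
        rho_R = gamma * new_R + (1.0 - gamma) * rho_R
    return rho_L, rho_B, rho_R
```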
### 2-TASEP with arbitrary hopping rates
This first model is a two-species generalization of the TASEP. It consists of two types of particles, denoted by \(\bullet\) and \(\circ\) (empty sites are denoted by \(*\)). The hopping rates in the bulk are:
\[\bullet\,*\xrightarrow{\beta}*\,\bullet\qquad*\,\circ\xrightarrow{\alpha}\circ\,*\qquad\bullet\,\circ\xrightarrow{1}\circ\,\bullet\]
while the only non vanishing boundary rates we consider are \(\nu_{\bullet\bullet}^{L/R},\nu_{\epsilon\circ}^{L/R},\nu_{\bullet}^{L/R}\). The currents for this model have been calculated in [21] and used in [22] in order to study its hydrodynamic behavior and in particular to solve the corresponding Riemann problem. Let's recall the expression of the currents:
\[J_{\circ}(\rho_{\circ},\rho_{\bullet}) =z_{\alpha}(z_{\beta}-1)+\rho_{\circ}(z_{\alpha}-z_{\beta}) \tag{10}\] \[J_{\bullet}(\rho_{\circ},\rho_{\bullet}) =z_{\beta}(1-z_{\alpha})+\rho_{\bullet}(z_{\alpha}-z_{\beta}) \tag{11}\]
where \(z_{\alpha}\in[0,\min(1,\alpha)]\) and \(z_{\beta}\in[0,\min(1,\beta)]\) are solution of the saddle point equations
\[\frac{\rho_{\circ}}{z_{\alpha}}+\frac{\rho_{\bullet}}{z_{\alpha}-1 }+\frac{1-\rho_{\circ}-\rho_{\bullet}}{z_{\alpha}-\alpha} =0 \tag{12}\] \[\frac{\rho_{\bullet}}{z_{\beta}}+\frac{\rho_{\circ}}{z_{\beta}-1 }+\frac{1-\rho_{\circ}-\rho_{\bullet}}{z_{\beta}-\beta} =0. \tag{13}\]
The variables \(z_{\alpha},z_{\beta}\) happen to be the Riemann variables for this model [22]. In figure 1 (left) we reported two examples of simulations of the 2-TASEP on a lattice of size \(L=100\) and with different values of the model parameters. We see that the numerical result agrees very well with the theoretical prediction obtained through the iterative solution of eqs.(3,5). The convergence of the iterative procedure is reported on the right of the same figure.
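Concretely (our own sketch, not code from the paper), after clearing denominators the saddle-point equations (12)-(13) become quadratic in \(z\), so \(z_{\alpha}\) and \(z_{\beta}\) can be bracketed and found by bisection on the stated intervals; the currents then follow from (10)-(11). The sketch assumes strictly positive densities with \(\rho_{\circ}+\rho_{\bullet}<1\).

```python
# Sketch: solve the saddle-point equations (12)-(13) and evaluate the currents (10)-(11)
# of the 2-TASEP (illustration only; assumes 0 < rho_o, rho_b and rho_o + rho_b < 1).

def _z_root(r1, r2, r3, a):
    # root in [0, min(1, a)] of  r1*(z-1)*(z-a) + r2*z*(z-a) + r3*z*(z-1) = 0,
    # i.e. eq. (12) (or (13)) multiplied by z*(z-1)*(z-a); found by bisection
    g = lambda z: r1*(z - 1.0)*(z - a) + r2*z*(z - a) + r3*z*(z - 1.0)
    lo, hi = 0.0, min(1.0, a)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def two_tasep_currents(rho_o, rho_b, alpha, beta):
    """Currents (J_circ, J_bullet) of the 2-TASEP at densities (rho_o, rho_b)."""
    rho_e = 1.0 - rho_o - rho_b                    # density of empty sites
    z_a = _z_root(rho_o, rho_b, rho_e, alpha)      # eq. (12)
    z_b = _z_root(rho_b, rho_o, rho_e, beta)       # eq. (13)
    J_o = z_a * (z_b - 1.0) + rho_o * (z_a - z_b)  # eq. (10)
    J_b = z_b * (1.0 - z_a) + rho_b * (z_a - z_b)  # eq. (11)
    return J_o, J_b
```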
#### iii.1.1 Phase diagram
Following the discussion in Section I.2 we partition the phase space of the bulk densities of this model in phases, characterized by the sign of the functions \(v_{k}(\mathbf{z}^{B})\). This a priori results in 9 phases for a two-component system, however, hyperbolicity of the corresponding conservation laws implies that some phases are forbidden as illustrated in the following table
\[\begin{array}{c|c|c|c} & v_{\alpha}<0 & v_{\alpha}=0 & v_{\alpha}>0\\ \hline\hline v_{\beta}<0 & RR & BR & LR\\ \hline v_{\beta}=0 & \times & BB & LB\\ \hline v_{\beta}>0 & \times & \times & LL\\ \end{array}\]
In the preceding table the first letter represents the state of \(z_{\alpha}\): L: left induced, R: right induced, B: bulk induced. The second letter is for the state of \(z_{\beta}\). The symbol \(\times\) is for a forbidden phase. See figure 2 for the result of this partitioning for the values \(\alpha=0.8,\beta=0.9\) of the bulk exchange rates.
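The table can be read off programmatically from the signs of the two characteristic velocities; the small helper below (ours, for illustration) returns the phase label, or `None` for the combinations excluded by hyperbolicity.

```python
# Sketch: phase label from the signs of the velocities (L: left, R: right, B: bulk induced).
def phase_label(v_alpha, v_beta, tol=1e-10):
    def letter(v):
        if v > tol:
            return "L"      # left-induced Riemann variable
        if v < -tol:
            return "R"      # right-induced Riemann variable
        return "B"          # bulk-induced Riemann variable
    label = letter(v_alpha) + letter(v_beta)
    forbidden = {"RB", "RL", "BL"}    # combinations excluded by hyperbolicity
    return None if label in forbidden else label
```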
Numerical evidence for this diagram is reported in figure 3, where the results of simulations are shown together with theoretical predictions with varying parameter \(\nu^{L}_{\bullet\circ}\) and all the other parameters fixed. We notice that \(z_{\beta}^{B}\) coincides with \(z_{\beta}^{L}\) within the region where \(v_{\beta}<0\), and they split in the region where \(v_{\beta}=0\). At the same time \(z_{\alpha}^{L}\) coincides with \(z_{\alpha}^{B}\) for both regions since \(v_{\alpha}<0\).
### 2-species ASEP and 3-species TASEP
We have considered two other models for which we have access to the exact expressions of the hydrodynamic currents as functions of the densities.
The first model, a 2-species ASEP, contains two species
Figure 1: On the left, two examples of Monte-Carlo simulation of the density profile for 2-TASEP (continuous lines) along with the corresponding Riemann variables (dashed lines) for a lattice of size \(L=100\). The horizontal segments represent the predicted values. On the right, the evolution of densities for the iterative algorithm with damping \(\gamma=0.01\) (up to \(1000\) iterations). Parameters values for top diagrams: \(\alpha=0.5\), \(\beta=1.5\), \((\nu^{R}_{\bullet\circ},\nu^{R}_{\bullet\circ},\nu^{R}_{\bullet\bullet})=(0.2,0.08,0.07)\), \((\nu^{L}_{\circ},\nu^{L}_{\circ},\nu^{L}_{\bullet\bullet})=(0.24,0.04,0.12)\). For the bottom diagrams: \(\alpha=0.4\), \(\beta=0.7\), \((\nu^{R}_{\bullet\circ},\nu^{R}_{\bullet\circ},\nu^{R}_{\bullet\bullet})=(0.5,0.1,0.8)\), \((\nu^{L}_{\bullet\circ},\nu^{L}_{\bullet\circ},\nu^{L}_{\bullet\bullet})=(0.1,0.2,0.5)\).
Figure 3: Bulk and boundary densities (left) and the corresponding Riemann variables (right) of 2-TASEP as a function of the \(\nu^{L}_{\bullet\bullet}\). The crosses represent the numerical simulations, while the lines are the theoretical predictions. For the green shaded region \(v_{\beta}>0\), while for the yellow shaded section \(v_{\beta}=0\) (in both regions \(v_{\alpha}<0\)).
Figure 2: Phase diagram of a 2-species TASEP (\(\alpha=0.8,\beta=0.9\)). The signs on the left correspond to the velocities \(v_{\alpha}\) and \(v_{\beta}\) in order. On the right, we have an example of the ODE flow exhibiting a sink singularity in the left-induced phase and a saddle point in the mixed-induced phase.
of particles and the following bulk exchange rates:
\[\nu_{ij}=\begin{cases}1&\text{if}\quad\ i>j\\ q&\text{if}\quad\ i<j\end{cases} \tag{14}\]
where we have chosen the following order on the species: \(\bullet>*>\circ\).
Although the stationary measure for a uniform state is not a product measure, it is straightforward to write the current-density relations, since the dynamics of the \(\bullet\) and of the \(\circ\) particles decouple in the bulk:
\[\begin{split}& J_{\bullet}=(1-q)\rho_{\bullet}(1-\rho_{\bullet}) \\ & J_{\circ}=(q-1)\rho_{\circ}(1-\rho_{\circ}).\end{split} \tag{15}\]
From these equations it is immediate that the densities are also Riemann variables for this model. However, the dynamics of the two species cannot in general be decoupled on the boundaries, making the max-min principle not applicable in this case.
The last model we have considered, a 3-species TASEP, contains particles with labels \((1,2,3,4)\), where the type 4 can be seen as empty sites, and bulk hopping rates:
\[ij\xrightarrow{\nu_{ij}}ji\qquad\nu_{ij}=\begin{cases}0&\text{if}\quad i>j \\ \nu_{12}&\text{if}\quad(i,j)=(1,2)\\ \nu_{34}&\text{if}\quad(i,j)=(3,4)\\ 1&\text{otherwise}\end{cases} \tag{16}\]
The particle currents of this model can be derived from those of the 2-TASEP, \(J_{\circ/\bullet}(\rho_{\circ},\rho_{\bullet},\alpha,\beta)\), by making some particle identifications. Firstly, the particles 4 and 3 can be seen as \(\circ\), 1 as \(\bullet\) and 2 as \(*\), for \(\alpha=1,\beta=\nu_{12}\). Secondly, 1 and 2 can be seen as \(\bullet\), 3 as \(*\) and 4 as \(\circ\) with \(\alpha=\nu_{34},\beta=1\). Using densities of particles of species \(1,2\) and 4 as independent variables one finds
\[\begin{split}& J_{1}=J_{\bullet}(1-\rho_{1}-\rho_{2},\rho_{1},1, \nu_{12})\\ & J_{2}=J_{\bullet}(\rho_{4},\rho_{1}+\rho_{2},\nu_{34},1)-J_{1} \\ & J_{4}=J_{\circ}(\rho_{4},\rho_{1}+\rho_{2},\nu_{34},1).\end{split} \tag{17}\]
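In code (an illustrative sketch, reusing the `two_tasep_currents` helper sketched earlier and subject to the same caveats), these identifications read:

```python
# Sketch: 3-TASEP currents (17) via the 2-TASEP identifications (illustration only;
# reuses two_tasep_currents from the earlier sketch).
def three_tasep_currents(rho1, rho2, rho4, nu12, nu34):
    # species 3,4 -> 'circ', 1 -> 'bullet', 2 -> empty, with (alpha, beta) = (1, nu12)
    _, J1 = two_tasep_currents(rho_o=1.0 - rho1 - rho2, rho_b=rho1, alpha=1.0, beta=nu12)
    # species 1,2 -> 'bullet', 3 -> empty, 4 -> 'circ', with (alpha, beta) = (nu34, 1)
    J4, J12 = two_tasep_currents(rho_o=rho4, rho_b=rho1 + rho2, alpha=nu34, beta=1.0)
    J2 = J12 - J1                                   # eq. (17)
    return J1, J2, J4
```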
In figures 4 and 5 we report the results for the bulk and boundary densities of these models, obtained through simulations of a system of size \(L=100\), along with the theoretical predictions. One boundary parameter is varied (\(\nu_{\bullet*}^{L}\) in the 2-ASEP and \(\nu_{12}^{L}\) in the 3-TASEP) while all the other parameters are kept fixed. Similarly to the case of the 2-TASEP seen in the previous section, we find good agreement.
### Conclusion
In conclusion, this paper introduces a method which allows one to determine the steady-state average particle densities and currents of one-dimensional multi-species driven systems with open boundaries. The method, rooted in the bulk hydrodynamic behavior of the model, extends the max-min principle applicable to single-species models [5; 6; 7; 8]. By comparing our method's predictions with numerical simulations across three models, we observed good agreement. Our analysis of bulk hydrodynamic conservation laws enables us to predict the phase diagram, which becomes more intelligible when considering the behavior of the Riemann variables of the model (when they exist).
The major open question pertains to the method's domain of validity, particularly in establishing precise definitions of boundary densities for more general boundary conditions. The heuristic argument in favor of our method rests on the existence of a complete set of Riemann variables in bulk dynamics. Therefore, exploring models with more than two species, lacking this completeness, and subjecting our method to such models, presents an intriguing avenue for future research.
###### Acknowledgements.
We thank Gunter Schütz for useful discussions. The work of A. Zahra has been partially funded by the ERC Starting Grant 101042293 (HEPIQ) and completed while he was a member of LPTM.
|
2301.13778 | Differentially Private Distributed Bayesian Linear Regression with MCMC | We propose a novel Bayesian inference framework for distributed
differentially private linear regression. We consider a distributed setting
where multiple parties hold parts of the data and share certain summary
statistics of their portions in privacy-preserving noise. We develop a novel
generative statistical model for privately shared statistics, which exploits a
useful distributional relation between the summary statistics of linear
regression. Bayesian estimation of the regression coefficients is conducted
mainly using Markov chain Monte Carlo algorithms, while we also provide a fast
version to perform Bayesian estimation in one iteration. The proposed methods
have computational advantages over their competitors. We provide numerical
results on both real and simulated data, which demonstrate that the proposed
algorithms provide well-rounded estimation and prediction. | Barış Alparslan, Sinan Yıldırım, Ş. İlker Birbil | 2023-01-31T17:27:05Z | http://arxiv.org/abs/2301.13778v2 | # Differentially Private Distributed Bayesian
###### Abstract
We propose a novel Bayesian inference framework for distributed differentially private linear regression. We consider a distributed setting where multiple parties hold parts of the data and share certain summary statistics of their portions in privacy-preserving noise. We develop a novel generative statistical model for privately shared statistics, which exploits a useful distributional relation between the summary statistics of linear regression. Bayesian estimation of the regression coefficients is conducted mainly using Markov chain Monte Carlo algorithms, while we also provide a fast version to perform Bayesian estimation in one iteration. The proposed methods have computational advantages over their competitors. We provide numerical results on both real and simulated data, which demonstrate that the proposed algorithms provide well-rounded estimation and prediction.
**Keywords:** Differential privacy, linear regression, distributed learning, MCMC
## 1 Introduction
Linear regression is a mathematical method that lies at the core of statistical research. Many researchers have been working on linear regression since the 19th century, and hence, many well-known solution methods exist. On a separate note, privacy-preserving statistical learning has gained popularity and importance in recent years, with _differential privacy_ prevailing as the most commonly used definition for privacy (Dwork, 2006; Dwork et al., 2014; Dankar and El Emam, 2013). As a result, there is a recent but growing interest in differentially private linear regression.
Many works in the data privacy literature do not mainly focus on regression but are motivated by or can be applied to regression. As an example, differentially private empirical risk minimisation (Chaudhuri et al., 2009; Bassily et al., 2014; Abadi et al., 2016; Kuru et al., 2022) can be applied to regression once it is cast as a data-driven optimisation problem. Many general-purpose Bayesian differentially private estimation methods can also be used in regression problems. Williams and Mcsherry (2010) is one of the first works that considered a hierarchical model for the privatised data and Bayesian estimation for the model parameters. Zhang et al. (2016) analyse several differential privacy mechanisms for posterior sampling and suggest using these mechanisms also
for linear regression. Dimitrakakis et al. (2017) developed a posterior sampling query algorithm to combine differential privacy and Bayesian inference. Contrary to those one-sample approaches, general-purpose differentially private Markov chain Monte Carlo (MCMC) algorithms, which aim to identify the posterior distribution via iterative sampling, can also be applied to regression (Wang et al., 2015; Foulds et al., 2016; Wang et al., 2015; Yildirim and Ermis, 2019; Heikkila et al., 2019; Gong, 2022; Alparslan and Yildirim, 2022; Ju et al., 2022).
Several works in the literature are somewhat more directly related to differentially private regression. Zhang et al. (2012) suggested a functional mechanism method, which is based on perturbing polynomial objective functions with privacy-preserving noise. As an alternative, Dwork et al. (2014b); Wang (2018) considered perturbation of summary statistics. Alabi et al. (2022) provide a technical discussion on different point estimation methods for differentially private simple linear regression, that is when we have a single feature. Ferrando et al. (2022) present a method to compute confidence intervals for the coefficients of linear regression. Cai et al. (2021) study the rates of convergence for parameter estimation with differential privacy via output perturbation, where a non-private estimator is perturbed. All those works consider point estimation of the linear regression parameters.
In this paper, we focus on differentially private distributed Bayesian inference for the parameters of linear regression. We use a novel hierarchical model that relies on a distributional relationship (Proposition 1) between the summary statistics of linear regression, which, to the best of our knowledge, has not been exploited so far. We propose Bayesian inference algorithms that take perturbations of summary statistics as observations. The general inferential tool we pick in this paper is MCMC, a well-known framework for iterative sampling from posterior distributions. As we shall see, the proposed MCMC algorithms in this paper already have lower computational complexities per iteration than their closest competitors in Bernstein and Sheldon (2019). Additionally, we also propose much faster Bayesian estimation methods that perform estimation in one iteration. Finally, we assume a distributed setting where the total dataset is shared among multiple parties (data nodes), who want to collaborate for the inference of a common parameter, see _e.g._, Heikkila et al. (2017) for such a setting. The non-distributed setting is just a special case (single data holder) for our methodology.
This paper has connections with several works in the literature, yet it has significant differences from each of those, as we shall explain below.
For the privacy-preserving mechanism, we consider adding noise to summary statistics of linear regression, similarly to Wang (2018); Bernstein and Sheldon (2019). The adaSSP framework of Wang (2018) motivates the fast Bayesian estimation methods developed in this paper. However, adaSSP is a point estimation method while we aim for a posterior distribution. The latter work, Bernstein and Sheldon (2019), is particularly related to this paper as they also study Bayesian linear regression with differential privacy using perturbed statistics of data. However, there are some important differences between our work and that of Bernstein and Sheldon (2019). These differences stem from the choice of summary statistics and the consequent hierarchical structure used for modelling linear regression. Those modelling differences lead to significant differences in the inference methods as well as significant computational advantages for our methods. Specifically, the computational complexity of our methods is \(\mathcal{O}(d^{3})\), where \(d\) is the number of features. This order is much less than the \(\mathcal{O}(d^{6})\) of Bernstein and Sheldon (2019). Finally, neither Wang (2018) nor Bernstein and Sheldon (2019) has considered a distributed learning setting like we do in
this paper, although both works can be adapted to the distributed setting with moderate modifications.
Foulds et al. (2016); Heikkila et al. (2017) are other differentially private Bayesian inference methods that target posterior distributions of perturbed summary statistics of sensitive data. The one by Heikkila et al. (2017) is particularly interesting because they consider a distributed setting and present linear regression as their showcase example. However, we differ from those works in the way we model the perturbed statistics and in the choice of inference methods. Specifically, Foulds et al. (2016); Heikkila et al. (2017) treat the perturbed statistics as if they were not perturbed, while we incorporate the effect of perturbation in our model.
Recently, Alparslan and Yildirim (2022) and Ju et al. (2022) employ data augmentation for modelling sensitive and privatised data and propose MCMC for Bayesian inference, the latter work having linear regression as a major application. Their methods have \(\mathcal{O}(n)\) complexity per iteration in general where \(n\) is the number of instances in the data set, which can be slow when \(n\) is large. In contrast, our methods are scalable in data size since their computational complexities do not depend on \(n\). We note that Alparslan and Yildirim (2022, Section 4.2) also present an MCMC method scalable with \(n\) that exploits the approximate normality of additive summary statistics. However, a direct application of that would lead to an algorithm with \(\mathcal{O}(d^{6})\) computational complexity (per iteration), like in Bernstein and Sheldon (2019).
The paper is organised as follows: In Section 2 we review differential privacy. In Section 3 we lay out the hierarchical model for differentially private distributed linear regression with perturbed summary statistics. In Section 4, we present and discuss the aspects of the proposed inference algorithms. In Section 5, we provide numerical experiments. We conclude in Section 6.
Notation:Matrices and vectors are shown in bold-face notation. For a matrix \(\mathbf{A}\), its transpose, trace, and determinant (whenever they exist) are \(\mathbf{A}^{T}\), \(\operatorname{tr}(\mathbf{A})\), and \(|\mathbf{A}|\), respectively. For any sequence \(\{a_{i}\}_{i\geq 0}\), we let \(a_{i:j}=(a_{i},\dots,a_{j})\). We write \(x\sim P\) to mean the random variable \(x\) has distribution \(P\). \(\mathcal{N}(\mathbf{m},\mathbf{\Sigma})\) stands for the multivariate normal distribution with mean \(\mathbf{m}\) and covariance \(\mathbf{\Sigma}\). Wishart and inverse-Wishart distributions with scale matrix \(\Lambda\) and \(\kappa\) degrees of freedom are shown as \(\mathcal{W}(\mathbf{\Lambda},\kappa)\) and \(\mathcal{IW}(\mathbf{\Lambda},\kappa)\), respectively. \(\mathcal{IG}(a,b)\) stands for the inverse-gamma distribution with shape and scale parameters \(a\) and \(b\). We augment those notations with \(\mathbf{x}\) to denote the respective probability density functions (pdf), _e.g._, as \(\mathcal{N}(\mathbf{x};\mathbf{m},\mathbf{\Sigma})\).
## 2 Differential Privacy
Differential privacy (Dwork, 2006, 2008) concerns randomised algorithms that run on sensitive, or usually private, data. A randomised algorithm takes an input data set \(D\in\mathcal{D}\) and returns a random output in \(\mathcal{O}\), where the randomness is intrinsic to the algorithm. A differentially private algorithm constrains the difference between the probability distributions of the outputs obtained from neighbouring data sets. We say two data sets are neighbours if they differ by one individual's piece of data.
**Definition 1** (Differential privacy).: A randomised algorithm \(M:\mathcal{D}\mapsto\mathcal{O}\) is \((\epsilon,\delta)\)-differentially private (DP) if for any pair of neighbouring data sets \(D,D^{\prime}\in\mathcal{D}\) and for any subset \(O\subseteq\mathcal{O}\) of the output domain, it satisfies
\[\mathbb{P}[M(D)\in O]\leq e^{\epsilon}\mathbb{P}[M(D^{\prime})\in O]+\delta.\]
The definition implies that smaller \((\epsilon,\delta)\) leads to more privacy.
Privacy-preserving algorithms often use noise-adding mechanisms. A popular noise-adding mechanism is the _Gaussian mechanism_(Dwork et al., 2006), which perturbs a function \(f:\mathcal{D}\mapsto\mathbb{R}^{k}\) of the sensitive data, for some \(k\geq 1\), with a random noise drawn from the Gaussian distribution. The amount of the added noise depends on the \(L_{2}\)-_sensitivity_ of the function, given by
\[\Delta_{f}=\max_{\text{neighbouring }D_{1},D_{2}\in\mathcal{D}}\lVert f(D_{1})-f(D_{2})\rVert_{2}.\]
An \((\epsilon,\delta)\)-DP Gaussian mechanism returns
\[f(D)+\Delta_{f}\sigma(\epsilon,\delta)\mathbf{v},\quad\mathbf{v}\sim\mathcal{N}(\mathbf{ 0},\mathbf{I}_{k}) \tag{1}\]
upon taking \(D\) as the input, where the quantity \(\sigma(\epsilon,\delta)\) ensures \((\epsilon,\delta)\)-DP. In this work, we take \(\sigma(\epsilon,\delta)\) as the analytical solution given in Balle and Wang (2018, Algorithm 1) due to its tightness. The Gaussian mechanism is also central to other forms of privacy, such as zero-concentrated DP (Bun and Steinke, 2016) and Gaussian DP (Dong et al., 2022).
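As a minimal sketch of the mechanism in (1), one can implement the release as below; the function names are illustrative, and we substitute the classical calibration \(\sigma(\epsilon,\delta)=\sqrt{2\ln(1.25/\delta)}/\epsilon\) (valid for \(\epsilon<1\)) as a simple stand-in for the tighter analytical calibration of Balle and Wang (2018).

```python
import numpy as np

def classical_gaussian_sigma(epsilon, delta):
    # Classical calibration sigma(eps, delta) = sqrt(2 ln(1.25/delta)) / eps, valid for eps < 1;
    # used here only as a stand-in for the tighter analytical calibration of Balle and Wang (2018).
    return np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def gaussian_mechanism(f_value, sensitivity, epsilon, delta, rng=None):
    # Release f(D) + Delta_f * sigma(eps, delta) * v with v ~ N(0, I), as in (1).
    rng = np.random.default_rng() if rng is None else rng
    f_value = np.asarray(f_value, dtype=float)
    sigma = sensitivity * classical_gaussian_sigma(epsilon, delta)
    return f_value + sigma * rng.standard_normal(f_value.shape)
```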
In this paper, we consider \((\epsilon,\delta)\)-DP as the type of privacy and the Gaussian mechanism to generate noisy observations. Moreover, the proposed methods in this paper never use the sensitive data once given the noisy observations generated using the Gaussian mechanism, hence exploiting the _post-processing_ property of differential privacy (Dwork and Roth, 2014).
**Theorem 1** (Post-processing).: _Let \(M:\mathcal{D}\mapsto\mathcal{O}\) be \((\epsilon,\delta)\)-DP and let \(f:\mathcal{O}\to\mathcal{O}^{\prime}\) be another mapping that is independent of \(D\) given \(M(D)\). Then \(f_{M}:\mathcal{D}\mapsto\mathcal{O}^{\prime}\) with \(f_{M}(D)=f(M(D))\) is \((\epsilon,\delta)\)-DP._
## 3 Differentially Private Distributed Linear Regression
In this section, we present a new hierarchical model for differentially private distributed linear regression. For ease of exposition, we first present a model with a single data holder, then generalise the model for the distributed setting.
### Basic Model and Privacy Setup
Suppose we have a sequence of random variables \(\{(\mathbf{x}_{i},y_{i}):i=1,\ldots,n\}\), where \(\mathbf{x}_{i}\in\mathcal{X}\subseteq\mathbb{R}^{d\times 1}\) are the feature vectors and \(y_{i}\in\mathcal{Y}\subseteq\mathbb{R}\) is the \(i\)'th response variable. We consider the normal linear regression to model the dependency between \(\mathbf{x}_{i}\) and \(y_{i}\). Specifically,
\[y_{i}=\mathbf{x}_{i}^{T}\mathbf{\theta}+e_{i},\quad e_{i}\stackrel{{\text {i.i.d.}}}{{\sim}}\mathcal{N}(0,\sigma_{y}^{2}),\quad i=1,\ldots,n,\]
where \(\mathbf{\theta}\in\mathbb{R}^{d}\) is the vector of the linear regression coefficients. We assume that the feature vectors \(\mathbf{x}_{i}\)'s are i.i.d. with distribution \(P_{x}\). Below, we will particularly focus on the case when \(P_{x}\) can be assumed to be a normal distribution. However, we will also present algorithms for general \(P_{x}\).
In matrix notation, the above can shortly be expressed as
\[\mathbf{y}=\mathbf{X}\mathbf{\theta}+\mathbf{e},\quad\mathbf{e}\sim\mathcal{N}(\mathbf{0},\sigma_{y}^{ 2}\mathbf{I}_{n}),\]
where \(\mathbf{X}=\begin{bmatrix}\mathbf{x}_{1}^{T}&\ldots&\mathbf{x}_{n}^{T}\end{bmatrix}^{T}\) is the so-called design matrix, \(\mathbf{y}=\begin{bmatrix}y_{1}&\ldots&y_{n}\end{bmatrix}^{T}\). Additionally, we also define the summary statistics of \(\mathbf{X}\) and \(\mathbf{y}\) given by
\[\mathbf{S}:=\mathbf{X}^{T}\mathbf{X},\quad\mathbf{z}:=\mathbf{X}^{T}\mathbf{y},\]
respectively. We assume a setup where \(\mathbf{S}\) and \(\mathbf{z}\) are privately released: the noisy summary statistics \(\hat{\mathbf{S}}\) and \(\hat{\mathbf{z}}\) are constructed as
\[\hat{\mathbf{S}} =\mathbf{S}+\sigma_{s}\mathbf{M}, \tag{2}\] \[\hat{\mathbf{z}} =\mathbf{z}+\sigma_{z}\mathbf{v},\quad\mathbf{v}\sim\mathcal{N}(\mathbf{0},\mathbf{I }_{d}), \tag{3}\]
where \(\mathbf{M}\) is a \(d\times d\) symmetric matrix with its upper triangular elements drawn from \(\mathcal{N}(0,1)\). Dwork et al. (2014) arrange \(\sigma_{s}\) and \(\sigma_{z}\) so that both (2) and (3) are \((\epsilon/2,\delta/2)\) differentially private, leading to \((\epsilon,\delta)\)-DP overall. Differently from Dwork et al. (2014), we set
\[\sigma_{s}=\sigma_{z}=\Delta_{sz}\sigma(\epsilon,\delta),\]
where \(\sigma(\epsilon,\delta)\) is given in Balle and Wang (2018, Algorithm 1), and \(\Delta_{sz}\) is the overall \(L_{2}\) sensitivity of \([\mathbf{S},\mathbf{z}]\), given by
\[\Delta_{sz}=\sqrt{\|X\|^{4}+\|X\|^{2}\|Y\|^{2}}\]
with \(\|X\|=\max_{\mathbf{x}\in\mathcal{X}}\|\mathbf{x}\|_{2}\) and \(\|Y\|=\max_{y\in\mathcal{Y}}|y|\).
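For concreteness, a single data holder could construct the perturbed statistics in (2)-(3) along the following lines; this is only a sketch with illustrative names, and \(\sigma(\epsilon,\delta)\) is assumed to be supplied by the chosen calibration.

```python
import numpy as np

def release_noisy_statistics(X, y, x_bound, y_bound, sigma_eps_delta, rng=None):
    """Perturb S = X^T X and z = X^T y as in (2)-(3).
    x_bound = max ||x||_2 over the feature domain, y_bound = max |y| over the response domain."""
    rng = np.random.default_rng() if rng is None else rng
    S, z = X.T @ X, X.T @ y
    d = S.shape[0]
    sigma = np.sqrt(x_bound**4 + x_bound**2 * y_bound**2) * sigma_eps_delta  # Delta_sz * sigma(eps, delta)
    M = np.zeros((d, d))
    iu = np.triu_indices(d)
    M[iu] = rng.standard_normal(len(iu[0]))          # upper triangle ~ N(0, 1)
    M = M + M.T - np.diag(np.diag(M))                # symmetrise, counting the diagonal once
    return S + sigma * M, z + sigma * rng.standard_normal(d)
```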
Based on the above relations, we shall present a hierarchical model that enables Bayesian inference of \(\mathbf{\theta}\) given \(\hat{\mathbf{S}}\) and \(\hat{\mathbf{z}}\). One important element of our modelling approach is the following result that establishes the conditional distribution of \(\mathbf{z}\) given \(\mathbf{S}\), \(\mathbf{\theta}\), and \(\sigma_{y}^{2}\).
**Proposition 1**.: For the normal linear regression model, we have
\[\mathbf{z}|\mathbf{S},\mathbf{\theta},\sigma_{y}^{2}\sim\mathcal{N}(\mathbf{S}\mathbf{\theta},\bm {S}\sigma_{y}^{2}).\]
Proof.: First, note that,
\[\mathbb{E}[\mathbf{z}|\mathbf{X},\mathbf{\theta},\sigma_{y}^{2}] =\mathbb{E}[\mathbf{X}^{T}\mathbf{X}\mathbf{\theta}+\mathbf{X}^{T}\mathbf{e}]=\mathbf{S} \mathbf{\theta}, \tag{4}\] \[\mathrm{Cov}(\mathbf{z}|\mathbf{X},\mathbf{\theta},\sigma_{y}^{2}) =\mathbf{X}^{T}\mathbf{X}\sigma_{y}^{2}=\mathbf{S}\sigma_{y}^{2}, \tag{5}\]
and observe that both moments depend on \(\mathbf{X}\) through its statistic \(\mathbf{S}\). Therefore, the conditional density of \(\mathbf{z}\) given \(\mathbf{S}\), \(\mathbf{\theta}\), and \(\sigma_{y}^{2}\) is
\[p(\mathbf{z}|\mathbf{X},\mathbf{\theta},\sigma_{y}^{2})=\mathcal{N}(\mathbf{z};\mathbf{S}\mathbf{ \theta},\mathbf{S}\sigma_{y}^{2}).\]
Next, define the function \(f:\mathbb{R}^{n\times d}\mapsto[0,\infty)\) with \(f(\mathbf{X})=p(\mathbf{z}|\mathbf{X},\mathbf{\theta},\sigma_{y}^{2})\) and let \(\mathcal{C}_{\mathbf{S}}=\{\mathbf{X}:\mathbf{X}^{T}\mathbf{X}=\mathbf{S}\}\). Since \(f\) is constant over \(\mathcal{C}_{\mathbf{S}}\), that is, it depends on \(\mathbf{X}\) only through \(\mathbf{S}\), conditioning on \(\mathbf{S}\) we can write
\[p(\mathbf{z}|\mathbf{S},\mathbf{\theta},\sigma_{y}^{2})=\mathbb{E}_{P_{x}}\left[f(\mathbf{X})\mid\mathbf{X}\in\mathcal{C}_{\mathbf{S}}\right]=\mathcal{N}(\mathbf{z};\mathbf{S}\mathbf{\theta},\mathbf{S}\sigma_{y}^{2}),\]
where the second equality follows from the moment calculations in (4) and (5) above. This concludes the proof.
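As an informal numerical check of the moment calculations (4)-(5), one can fix any design matrix, repeatedly redraw the regression noise, and compare the empirical moments of \(\mathbf{z}=\mathbf{X}^{T}\mathbf{y}\) with \(\mathbf{S}\mathbf{\theta}\) and \(\sigma_{y}^{2}\mathbf{S}\); the small script below is purely illustrative and not part of the methodology.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma_y = 500, 3, 0.7
theta = rng.standard_normal(d)
X = rng.standard_normal((n, d))          # any fixed design; S = X^T X is then fixed as well
S = X.T @ X

# z = X^T y for many independent draws of e ~ N(0, sigma_y^2 I)
Z = np.stack([X.T @ (X @ theta + sigma_y * rng.standard_normal(n)) for _ in range(20000)])

print(np.abs(Z.mean(axis=0) - S @ theta).max())                 # small relative to |S theta|
print(np.abs(np.cov(Z, rowvar=False) - sigma_y**2 * S).max())   # small relative to sigma_y^2 S, up to MC error
```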
Finally, we assign prior distributions for \(\mathbf{\theta}\), \(\sigma_{y}^{2}\) as
\[\mathbf{\theta}\sim\mathcal{N}(\mathbf{m},\mathbf{C}),\quad\sigma_{y}^{2}\sim\mathcal{IG} (a,b). \tag{6}\]
At this point, it is worth discussing some important modelling differences between our work and Bernstein and Sheldon (2019). In Bernstein and Sheldon (2019), the central limit theorem (CLT) is applied to \(\big{[}\mathbf{S},\mathbf{z},\mathbf{y}^{T}\mathbf{y}\big{]}\), leading to a normality assumption for the whole vector. In contrast, we use the _exact_ conditional distribution \(p(\mathbf{z}|\mathbf{S},\mathbf{\theta},\sigma_{y}^{2})\) thanks to Proposition 1. Moreover, unlike Bernstein and Sheldon (2019), we do _not_ require a noisy version of \(\mathbf{y}^{T}\mathbf{y}\), and hence have a slight advantage of using less privacy-preserving noise. In summary, our model has a different hierarchical structure and requires less privacy-preserving noise.
### Distributed Setting
Next, we extend our model to the distributed setting, where the total data are shared among \(J\geq 1\) data holders as
\[(\mathbf{X},\mathbf{y})=\{(\mathbf{X}_{j},\mathbf{y}_{j});j=1,\ldots,J\}. \tag{7}\]
We let \(n_{j}\) be the number of rows in \(\mathbf{X}_{j}\), so that \(n=n_{1}+\ldots+n_{J}\). Each data holder \(j\) shares their own summary statistics \(\mathbf{S}_{j}=\mathbf{X}_{j}^{T}\mathbf{X}_{j}\), \(\mathbf{z}_{j}=\mathbf{X}_{j}^{T}\mathbf{y}_{j}\) with privacy-preserving noise
\[\begin{split}\hat{\mathbf{S}}_{j}&=\mathbf{S}_{j}+\sigma_{s}\mathbf{M}_{j},\\ \hat{\mathbf{z}}_{j}&=\mathbf{z}_{j}+\sigma_{z}\mathbf{v}_{j},\quad\mathbf{v}_{j}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d}).\end{split} \tag{8}\]
Note that, to preserve a given \((\epsilon,\delta)\)-DP overall, each party must provide that level of privacy for their data, hence \(\sigma_{s}\) and \(\sigma_{z}\) are the same as before. The hierarchical structure of the overall model (specified for normally distributed \(\mathbf{x}_{i}\)'s) is shown in Figure 1.
The distributed setting deserves separate consideration than the single data holder case for a couple of reasons: Firstly, the node-specific observations \((\hat{\mathbf{S}}_{1},\hat{\mathbf{z}}_{1}),\ldots,(\hat{\mathbf{S}}_{J},\hat{\mathbf{z}}_{J})\) are altogether statistically _more informative_ on \(\theta\) than their aggregates \(\sum_{j=1}^{J}\hat{\mathbf{S}}_{j}\) and \(\sum_{j=1}^{J}\hat{\mathbf{z}}_{j}\). This is because the aggregate versions are _not_ sufficient statistics of the node-specific observations \((\hat{\mathbf{S}}_{1},\hat{\mathbf{z}}_{1}),\ldots,(\hat{\mathbf{S}}_{J},\hat{\mathbf{z}}_{J})\) with respect to \(\mathbf{\theta}\) (even when \(\sigma_{y}^{2}\) is known.) Therefore, when the node-specific observations are available, one should not, in principle, trivially aggregate them and apply an inference method designed for \(J=1\) using those aggregates.
Figure 1: Differentially private distributed linear regression model (specified for normally distributed \(\mathbf{x}_{i}\)'s).
Secondly, the partitioning of data as in (7) can be relevant to data privacy applications even _outside_ the distributed learning framework, rendering the methodology in Section 4 useful in a broader sense. For example, batches of \((\mathbf{x},y)\)-type of data may be donated to a common data collector as in (8). At this point, a particular and interesting relation exists with pan-privacy applications (Dwork et al., 2010). Imagine that sensitive data from individuals are collected sequentially in time, and the data holder is concerned about possible intrusions into the memory where the sensitive data are stored. Then, one possible way to ensure the privacy of the data against such possible intrusions, which is the promise of pan-privacy, is to store the noisy statistics of every new batch of data and erase the original sensitive data. Then, at any time the data collector has data of the form \((\hat{\mathbf{S}}_{1},\hat{\mathbf{z}}_{1}),\ldots,(\hat{\mathbf{S}}_{J},\hat{\mathbf{z}}_{J})\), each pair corresponding to a batch. As a result, inference algorithms as in Section 4 can be applied.
## 4 Algorithms for Bayesian Inference
Bayesian inference targets the posterior distribution of the latent variables of the model, in particular \(\mathbf{\theta}\), given the observations \(\hat{\mathbf{S}}_{1:J}\) and \(\hat{\mathbf{z}}_{1:J}\). We present several Bayesian inference algorithms for the hierarchical model described in the previous section. In addition to other concerns like computational budget, the choice among those approaches mainly depends on the specification of \(P_{x}\) as the distribution of \(\mathbf{S}\) directly depends on it. In this paper, we have considered the following two cases and devised algorithms for each of them:
1. In some cases it may be adequate to specify \(P_{x}=\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{x})\). This leads to \(\mathbf{S}|\mathbf{\Sigma}_{x}\sim\mathcal{W}(\mathbf{\Sigma}_{x},n)\). Further, to account for the uncertainty about the covariance \(\mathbf{\Sigma}_{x}\), one can treat it as a random variable with \(\mathbf{\Sigma}_{x}\sim\mathcal{IW}(\mathbf{\Lambda},\kappa)\). Figure 1 shows the hierarchical structure of the distributed setting with those specifications. We defer discussing the conflict between the normality and boundedness assumptions to Remark 1 towards the end of Section 4.1.
2. As the second case, we assume a general (non-normal) \(P_{x}\). A normal approximation, based on the CLT, could be considered for the distribution of \(\mathbf{S}\) (Wilson and Ghahramani, 2011). However, this would require the knowledge (or accurate estimation) of up to the fourth moments of \(P_{x}\) as well as expensive computations for sampling \(\mathbf{S}\). We circumvent those difficulties by plugging in a point estimate of \(\mathbf{S}\) given \(\hat{\mathbf{S}}\) and using it during the sampling process as if it were the true \(\mathbf{S}\) itself. Then, we develop two different algorithms for inference of \(\mathbf{\theta}\), one being an MCMC algorithm and the other providing a closed-form solution for the posterior of \(\mathbf{\theta}\) following a rough point-wise estimation of \(\sigma_{y}^{2}\). Note that these algorithms with fixed \(\mathbf{S}\) do not require a distribution for \(\mathbf{x}\).
Next, we provide the details of our approaches and the resulting algorithms.
### Normally Distributed Features
In this section, we present an MCMC algorithm for Bayesian inference for the differentially private distributed linear regression model when \(P_{x}=\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{x})\) and \(\mathbf{\Sigma}_{x}\sim\mathcal{IW}(\Lambda,\kappa)\). The latent variables involved in this variant are \(\mathbf{\theta},\mathbf{\Sigma}_{x},\sigma_{y}^{2},\mathbf{S}_{1:J},\mathbf{z}_{1:J}\). Their posterior distribution given \(\hat{\mathbf{S}}_{1:J},\hat{\mathbf{z}}_{1:J}\) can be written as
\[p(\mathbf{\theta},\sigma_{y}^{2},\mathbf{\Sigma}_{x},\mathbf{z}_{1:J},\mathbf{S}_{1:J}|\hat{\mathbf{z}}_{1:J},\hat{\mathbf{S}}_{1:J})\propto p(\mathbf{\theta})p(\sigma_{y}^{2})p(\mathbf{\Sigma}_{x})\prod_{j=1}^{J}p(\mathbf{z}_{j}|\mathbf{\theta},\sigma_{y}^{2},\mathbf{S}_{j})p(\mathbf{S}_{j}|\mathbf{\Sigma}_{x})p(\hat{\mathbf{S}}_{j}|\mathbf{S}_{j})p(\hat{\mathbf{z}}_{j}|\mathbf{z}_{j}). \tag{9}\]
One could design an MCMC algorithm for this posterior distribution that updates \(\mathbf{\theta}\), \(\sigma_{y}^{2}\), \(\mathbf{\Sigma}_{x}\), \(\mathbf{z}_{1:J}\), \(\mathbf{S}_{1:J}\) in turn based on their full conditional distributions. However, such an algorithm suffers from poor convergence because of a high posterior correlation between \(\mathbf{\theta}\) and \(\mathbf{z}_{1:J}\) (as verified in our numerical studies). It is well known that highly correlated variables result in poor convergence if they are updated one conditional on the other. To alleviate that problem, we work with the reduced model where \(\mathbf{z}_{1:J}\) are integrated out. The reduced model has \(\mathbf{\theta},\mathbf{\Sigma}_{x},\sigma_{y}^{2}\) as its latent variables, whose joint posterior distribution can be written as
\[p(\mathbf{\theta},\sigma_{y}^{2},\mathbf{\Sigma}_{x},\mathbf{S}_{1:J}|\hat{\mathbf{z}}_{1:J},\hat{\mathbf{S}}_{1:J})\propto p(\mathbf{\theta})p(\sigma_{y}^{2})p(\mathbf{\Sigma}_{x})\prod_{j=1}^{J}p(\mathbf{S}_{j}|\mathbf{\Sigma}_{x})p(\hat{\mathbf{S}}_{j}|\mathbf{S}_{j})p(\hat{\mathbf{z}}_{j}|\mathbf{S}_{j},\mathbf{\theta},\sigma_{y}^{2}), \tag{10}\]
where \(p(\hat{\mathbf{z}}_{j}|\mathbf{S}_{j},\mathbf{\theta},\sigma_{y}^{2})=\mathcal{N}(\hat{\mathbf{z}}_{j};\mathbf{S}_{j}\mathbf{\theta},\sigma_{y}^{2}\mathbf{S}_{j}+\sigma_{z}^{2}\mathbf{I}_{d})\), which follows from Proposition 1 combined with (3).
We would like to sample from the posterior distribution in (10) via MCMC that updates \(\mathbf{\theta}\), \(\sigma_{y}^{2}\), \(\mathbf{\Sigma}_{x}\), \(\mathbf{S}_{1:J}\) in turn based on their full conditional distributions. The variables \(\mathbf{\theta}\) and \(\mathbf{\Sigma}_{x}\) enjoy closed-form full conditional distributions (see Appendix A for the derivations):
\[\mathbf{\Sigma}_{x}|\mathbf{S}_{1:J},\hat{\mathbf{S}}_{1:J},\hat{\mathbf{z}}_{1:J} \sim\mathcal{IW}\left(\mathbf{\Lambda}+\sum_{j=1}^{J}\mathbf{S}_{j},\kappa +n\right), \tag{11}\] \[\mathbf{\theta}|\sigma_{y}^{2},\hat{\mathbf{z}},\mathbf{S}_{1:J} \sim\mathcal{N}(\mathbf{m}_{p},\mathbf{\Sigma}_{p}), \tag{12}\]
where the posterior moments for \(\mathbf{\theta}\) are
\[\mathbf{\Sigma}_{p}^{-1}=\sum_{j=1}^{J}\mathbf{S}_{j}(\sigma_{y}^{2}\mathbf{S}_{j}+\sigma _{z}^{2}I)^{-1}\mathbf{S}_{j}+\mathbf{C}^{-1},\quad\mathbf{m}_{p}=\mathbf{\Sigma}_{p}\left( \sum_{j=1}^{J}\mathbf{S}_{j}(\sigma_{y}^{2}\mathbf{S}_{j}+\sigma_{z}^{2}I)^{-1}\hat{ \mathbf{z}}_{j}+\mathbf{C}^{-1}\mathbf{m}\right).\]
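In code, the moments above can be accumulated node by node at \(\mathcal{O}(Jd^{3})\) cost; the following is a sketch with illustrative variable names.

```python
import numpy as np

def theta_full_conditional(S_list, z_hat_list, sigma_y2, sigma_z2, m, C):
    """Moments (m_p, Sigma_p) of the full conditional of theta in (12)."""
    d = m.shape[0]
    prec = np.linalg.inv(C)               # accumulates Sigma_p^{-1}, starting from C^{-1}
    rhs = np.linalg.solve(C, m)           # accumulates C^{-1} m + sum_j S_j (.)^{-1} z_hat_j
    for S_j, z_hat_j in zip(S_list, z_hat_list):
        M_j = sigma_y2 * S_j + sigma_z2 * np.eye(d)
        prec += S_j @ np.linalg.solve(M_j, S_j)
        rhs += S_j @ np.linalg.solve(M_j, z_hat_j)
    Sigma_p = np.linalg.inv(prec)
    return Sigma_p @ rhs, Sigma_p
```

A draw of \(\mathbf{\theta}\) is then obtained with, e.g., `np.random.default_rng().multivariate_normal(m_p, Sigma_p)`.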
The full-conditional distributions of \(\mathbf{S}_{1:J}\) and \(\sigma_{y}^{2}\) have no closed form; hence we design Metropolis-Hastings (MH) moves to update them. For \(\sigma_{y}^{2}\), one can simply use a random-walk MH move targeting \(p(\sigma_{y}^{2}|\mathbf{\theta},\mathbf{S}_{1:J},\hat{\mathbf{z}}_{1:J})\). For \(\mathbf{S}_{1:J}\), their full conditional distribution can be factorised as
\[p(\mathbf{S}_{1:J}|\hat{\mathbf{S}}_{1:J},\hat{\mathbf{z}}_{1:J},\mathbf{\Sigma}_{x},\sigma_{y }^{2},\mathbf{\theta})=\prod_{j=1}^{J}p(\mathbf{S}_{j}|\hat{\mathbf{S}}_{j},\hat{\mathbf{z}}_ {j},\mathbf{\Sigma}_{x},\sigma_{y}^{2},\mathbf{\theta}),\]
where each factor is given by
\[p(\mathbf{S}_{j}|\hat{\mathbf{S}}_{j},\hat{\mathbf{z}}_{j},\mathbf{\Sigma}_{x},\sigma_{y}^{2},\mathbf{\theta})\propto p(\hat{\mathbf{z}}_{j}|\mathbf{S}_{j},\mathbf{\theta},\sigma_{y}^{2}) p(\mathbf{S}_{j}|\mathbf{\Sigma}_{x})p(\hat{\mathbf{S}}_{j}|\mathbf{S}_{j}).\]
Thanks to that factorised form, each \(\mathbf{S}_{j}\) can be updated with an MH move independently and in parallel. For the MH algorithm to update one \(\mathbf{S}_{j}\), we propose a new value from a Wishart distribution as \(\mathbf{S}_{j}^{\prime}\sim\mathcal{W}(\mathbf{S}_{j}/\alpha,\alpha)\), which has mean \(\mathbf{S}_{j}\) and variance determined by \(\alpha\). In our experiments, we adjust \(\alpha\) using ideas from the adaptive MCMC framework (Andrieu and Thoms, 2008) to target an acceptance rate of around \(0.2\).
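A sketch of one such MH update is given below; the function and argument names are illustrative, the Wishart prior term uses the per-node sample size \(n_{j}\), the Gaussian likelihood of \(\hat{\mathbf{S}}_{j}\) is written over the upper triangle consistently with (2), and \(\alpha\) is assumed to exceed \(d-1\) so that the Wishart proposal is well defined.

```python
import numpy as np
from scipy.stats import multivariate_normal, wishart

def log_target_Sj(S, S_hat, z_hat, Sigma_x, theta, sigma_y2, sigma_z2, sigma_s, n_j):
    # log p(z_hat_j | S_j, theta, sigma_y^2) + log p(S_j | Sigma_x) + log p(S_hat_j | S_j), up to a constant
    d = S.shape[0]
    lp = multivariate_normal.logpdf(z_hat, mean=S @ theta, cov=sigma_y2 * S + sigma_z2 * np.eye(d))
    lp += wishart.logpdf(S, df=n_j, scale=Sigma_x)
    iu = np.triu_indices(d)
    lp += -0.5 * np.sum(((S_hat - S)[iu] / sigma_s) ** 2)
    return lp

def mh_update_Sj(S, alpha, log_target, rng=None):
    # One Metropolis-Hastings step with proposal S' ~ W(S / alpha, alpha), which has mean S.
    rng = np.random.default_rng() if rng is None else rng
    S_prop = wishart.rvs(df=alpha, scale=S / alpha, random_state=rng)
    log_ratio = (log_target(S_prop) - log_target(S)
                 + wishart.logpdf(S, df=alpha, scale=S_prop / alpha)     # q(S | S')
                 - wishart.logpdf(S_prop, df=alpha, scale=S / alpha))    # q(S' | S)
    return S_prop if np.log(rng.uniform()) < log_ratio else S
```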
Algorithm 1 presents the overall MCMC algorithm for the hierarchical model for differentially private distributed Bayesian linear regression when \(P_{x}\) is a normal distribution with a random covariance matrix having an inverse-Wishart distribution. We call this algorithm MCMC-normalX.
```
Input: Current values of \(\mathbf{S}_{1:J}\), \(\mathbf{\theta}\), \(\sigma_{y}^{2}\), \(\mathbf{\Sigma}_{x}\); observations \(\hat{\mathbf{S}}_{1:J}\), \(\hat{\mathbf{z}}_{1:J}\); noise variances \(\sigma_{s}^{2}\), \(\sigma_{z}^{2}\); proposal parameters \(\alpha\), \(\sigma_{q}^{2}\); hyperparameters \(a\), \(b\), \(\kappa\), \(\mathbf{\Lambda}\), \(\mathbf{m}\), \(\mathbf{C}\). Output: New sample of \(\mathbf{\Sigma}_{x},\mathbf{S}_{1:J},\sigma_{y}^{2},\mathbf{\theta}\).
1 Sample \(\mathbf{\Sigma}_{x}\) using (11).
2 for \(j=1,2,\ldots,J\) do
3   Update \(\mathbf{S}_{j}\) via an MH move targeting \(p(\mathbf{S}_{j}|\hat{\mathbf{S}}_{j},\hat{\mathbf{z}}_{j},\mathbf{\Sigma}_{x},\sigma_{y}^{2},\mathbf{\theta})\).
4 Sample \(\mathbf{\theta}\) using (12).
5 Update \(\sigma_{y}^{2}\) via an MH move targeting \(p(\sigma_{y}^{2}|\mathbf{\theta},\mathbf{S}_{1:J},\hat{\mathbf{z}}_{1:J})\).
```
**Algorithm 1** MCMC-normalX - one iteration
_Remark 1_.: Admittedly, a potential concern is a conflict between the normality and boundedness assumptions (both for \(\mathbf{x}\) and \(y\)). However, we also note that the collected data often happen to have some natural boundaries (which can be exploited to determine the sensitivity of the shared statistics), and yet the normal distribution is still used for modelling and subsequent inference mainly for sake of tractability. With the normality assumption, one can implement computationally efficient algorithms at the expense of minor modelling inaccuracies. While we acknowledge the methodologies in Alparslan and Yildirim (2022, Section 4.2) and Ju et al. (2022) that can correctly incorporate the effect of truncation into inference, we remark that those methods pay the price of exactness by having \(\mathcal{O}(n)\) computational complexity per iteration.
### Features with a General Distribution
The normality assumption for \(\mathbf{x}_{i}\)'s in Section 4.1 may not be adequate for some data sets. Moreover, when \(d\) is large, updating \(\mathbf{S}_{j}\)'s can be the bottleneck of MCMC-normalX in Algorithm 1 in terms of computation time and convergence. We propose two algorithms to address both of those concerns. As it turns out, those algorithms provide accurate estimations even for the case of normally distributed features; see Section 5.1.
Our approach for \(\mathbf{x}_{i}\)'s with a general distribution is based on estimating \(\mathbf{S}_{j}\)'s from the beginning, using some principled estimation method, and fixing \(\mathbf{S}_{j}\)'s to those estimates during the whole course of the inference procedure. In that way, we obtain a faster MCMC algorithm at the expense of targeting an approximate posterior distribution. Moreover, we have observed in our experiments that this variant is quite competitive in terms of accuracy, especially when the total number of nodes \(J\) increases. We call this variant MCMC-fixedS and present it in Algorithm 2.
As for estimating \(\mathbf{S}_{j}\)'s, one could simply consider taking the privately shared \(\hat{\mathbf{S}}_{j}\) as an estimator for \(\mathbf{S}_{j}\), but \(\hat{\mathbf{S}}_{j}\) is not necessarily a positive (semi-)definite matrix. Instead, we propose the nearest positive semi-definite matrix to \(\hat{\mathbf{S}}_{j}\) in terms of the Frobenius norm as the estimator of \(\mathbf{S}_{j}\). (The nearest positive _definite_ matrix to \(\hat{\mathbf{S}}_{j}\) does not exist.) To find the nearest positive semi-definite matrix, we follow Higham (1988) and apply the following procedure for each \(j=1,\ldots,J\): (i) Calculate the eigendecomposition \(\hat{\mathbf{S}}_{j}=\mathbf{E}\mathbf{D}\mathbf{E}^{T}\), where \(\mathbf{E}\) is a matrix of eigenvectors, and \(\mathbf{D}\) is a diagonal matrix consisting of the eigenvalues \(\lambda_{i}\). (ii) The nearest symmetric positive semi-definite matrix is \(\widetilde{\mathbf{S}}_{j}=\mathbf{E}\mathbf{D}_{+}\mathbf{E}^{T}\), where \(\mathbf{D}_{+}\) is a diagonal matrix with \(\mathbf{D}_{+}(i,i)=\max\{\mathbf{D}(i,i),0\}\).
Note that \(\widetilde{\mathbf{S}}_{j}\) found above is the maximum likelihood estimator of \(\mathbf{S}_{j}\) given \(\hat{\mathbf{S}}_{j}\) (over the set of positive semi-definite matrices) since the conditional distribution of \(\hat{\mathbf{S}}_{j}\) given \(\mathbf{S}_{j}\) is a normal
distribution with mean \(\mathbf{S}_{j}\).
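The projection in steps (i)-(ii) amounts to clipping negative eigenvalues; a minimal sketch is given below.

```python
import numpy as np

def nearest_psd(S_hat):
    # Nearest symmetric positive semi-definite matrix in Frobenius norm (Higham, 1988):
    # symmetrise (S_hat is symmetric by construction) and clip negative eigenvalues to zero.
    A = 0.5 * (S_hat + S_hat.T)
    eigval, eigvec = np.linalg.eigh(A)
    return (eigvec * np.clip(eigval, 0.0, None)) @ eigvec.T
```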
```
Input: Current values of \(\mathbf{\theta}\), \(\sigma_{y}^{2}\); estimates \(\widetilde{\mathbf{S}}_{1:J}\); observations \(\hat{\mathbf{z}}_{1:J}\); noise variance \(\sigma_{z}^{2}\); hyperparameters \(a\), \(b\), \(\mathbf{m}\), \(\mathbf{C}\). Output: New sample of \(\sigma_{y}^{2},\mathbf{\theta}\).
1 Use \(\mathbf{S}_{1:J}=\widetilde{\mathbf{S}}_{1:J}\) throughout.
2 Sample \(\mathbf{\theta}\) using (12).
3 Update \(\sigma_{y}^{2}\) via an MH move targeting \(p(\sigma_{y}^{2}|\mathbf{\theta},\mathbf{S}_{1:J},\hat{\mathbf{z}}_{1:J})\).
```
**Algorithm 2**MCMC-fixedS - one iteration
```
Input: \(\hat{\mathbf{S}}_{1:J}\), \(\hat{\mathbf{z}}_{1:J}\); noise variance: \(\sigma_{z}^{2}\); estimate \(\tilde{\sigma}_{y}^{2}\) of \(\sigma_{y}^{2}\); hyperparameters: \(\mathbf{m}\), \(\mathbf{C}\). Output: Estimate \(\hat{\mathbf{\theta}}\).
1for\(j=1,2,\ldots J\)do
2 Calculate the estimate \(\widetilde{\mathbf{S}}_{j}\) for \(\mathbf{S}_{j}\) using \(\hat{\mathbf{S}}_{j}\).
3 Calculate \(\mathbf{\Sigma}_{j}=\widetilde{\mathbf{S}}_{j}(\tilde{\sigma}_{y}^{2}\widetilde{\mathbf{S }}_{j}+\sigma_{z}^{2}\mathbf{I})^{-1}\widetilde{\mathbf{S}}_{j}\).
4 Calculate \(\mathbf{m}_{j}=\widetilde{\mathbf{S}}_{j}(\tilde{\sigma}_{y}^{2}\widetilde{\mathbf{S}}_{j }+\sigma_{z}^{2}\mathbf{I})^{-1}\hat{\mathbf{z}}_{j}\).
5return Posterior moments of \(\mathbf{\theta}\): \(\mathbf{\Sigma}_{\text{post}}^{-1}=\sum_{j=1}^{J}\mathbf{\Sigma}_{j}+\mathbf{C}^{-1}\), \(\mathbf{m}_{\text{post}}=\mathbf{\Sigma}_{\text{post}}\left(\mathbf{C}^{-1}\mathbf{m}+\sum_{j= 1}^{J}\mathbf{m}_{j}\right)\).
```
**Algorithm 3**Bayes-fixedS-fast
MCMC-fixedS in Algorithm 2 is faster than MCMC-normalX in Algorithm 1, since it avoids the step to update \(\mathbf{S}_{j}\)'s, which constitutes the main computational burden of Algorithm 1. However, MCMC-fixedS can be made even faster by fixing \(\sigma_{y}^{2}\) also. As a crude estimator, we used \(\tilde{\sigma}_{y}^{2}=\|\mathcal{Y}\|/3\) throughout the experiments. When \(\sigma_{y}^{2}\) is fixed in addition to \(\mathbf{S}_{1:J}\), we end up with a non-iterative method where the posterior distribution of \(\mathbf{\theta}\) is calculated in closed form. We call the resulting algorithm Bayes-fixedS-fast and present it in Algorithm 3. Algorithm 3 simply returns the moments of the posterior distribution of \(\mathbf{\theta}\) given \(\widetilde{\mathbf{S}}_{j}\)'s, \(\hat{\mathbf{z}}_{j}\)'s, \(\tilde{\sigma}_{y}^{2}\), and the prior parameters for \(\mathbf{\theta}\).
### Computational Cost
All our methods described in this section require \(\mathcal{O}(d^{3})\) computation (per iteration for the iterative ones in Algorithms 1 and 2, or as a whole for the fast version in Algorithm 3) since they deal with \(d\times d\) matrices. In contrast, as Bernstein and Sheldon (2019) apply CLT to the vector \([\mathbf{S},\mathbf{z},\mathbf{y}^{T}\mathbf{y}]\), their methods deal with covariance matrices of size \((d^{2}+d+1)\)_explicitly_, which leads to \(\mathcal{O}(d^{6})\) computation per MCMC iteration. For even moderate \(d\), this computational difference becomes dramatic and the latter may be prohibitive. Moreover, the complexity of our methods does not depend on \(n\). This is in contrast to the \(\mathcal{O}(n)\) complexity of general-purpose methods, such as Alparslan and Yildirim (2022, Section 4.3) and Ju et al. (2022), that can be applied to linear regression.
### Extensions
We mention two other variants of our methodology, deferring the details to Appendix B.
Another solution for dealing with non-normal \(P_{x}\) could be to average the feature vectors in \(\mathbf{X}\) (and the corresponding response variables in \(\mathbf{y}\)), so that the averaged rows of \(\mathbf{X}\) can be modelled as approximately normal, due to CLT. This enables using the methods devised for normally distributed features. For the details of this approach, see Appendix B.1.
Secondly, if the features are normally distributed but the data are not centred, we need to include the intercept parameter, which corresponds to appending \(\mathbf{x}_{i}\) with a one from the left, and MCMC-normalX does not directly apply. In that case, we can modify the hierarchical model to accommodate the non-centralised features and the intercept parameter and still benefit from the sampling techniques involved in MCMC-normalX in Algorithm 1. Appendix B.2 contains the details of the modified hierarchical model.
## 5 Numerical Experiments
We present several numerical evaluations of the proposed methods, MCMC-normalX, MCMC-fixedS, and Bayes-fixedS-fast with simulated and real data. We compare our algorithms with two methods: adaSSP of Wang (2018) and the MCMC method of Bernstein and Sheldon (2019) for differentially private linear regression that we call MCMC-B&S. Note that adaSSP and MCMC-B&S are originally proposed for the non-distributed setting, that is, \(J=1\). For a comprehensive comparison, we have implemented their extensions for \(J\geq 1\). The details of those extensions are provided in Appendix C. In particular, we have carefully generalised the model in Bernstein and Sheldon (2019) for \(J\geq 1\) similarly as we have done for our model in Section 3.2. What we call MCMC-B&S is the adaptation of Bernstein and Sheldon (2019, Algorithm 1) for this generalised model (and \((\epsilon,\delta)\)-DP). The code to replicate all of the experiments in this section can be found at [https://github.com/sinanyildirim/Bayesian_DP_dist_LR.git](https://github.com/sinanyildirim/Bayesian_DP_dist_LR.git).
### Experiments with Simulated Data
We have considered two different configurations, \((n=10^{5},d=2)\) and \((n=10^{5},d=5)\), for the problem size. For each \((n,d)\), we have simulated the data as follows: We have generated \(\mathbf{\theta}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d})\), \(\mathbf{x}_{i}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{x})\) where \(\mathbf{\Sigma}_{x}\sim\mathcal{IW}(\mathbf{\Lambda},\kappa)\) with \(\kappa=d+1\) and selected the scale matrix randomly as \(\mathbf{\Lambda}=\mathbf{V}^{T}\mathbf{V}\), where \(\mathbf{V}\) is a \(d\times d\) matrix of i.i.d. variables from \(\mathcal{N}(0,1)\). The response variables \(\mathbf{y}\) have been generated with \(\sigma_{y}^{2}=1\). For inference, we have used the same \(\mathbf{\Lambda}\), \(\kappa\) as above and \(a=20\), \(b=0.5\), \(\mathbf{m}=\mathbf{0}_{d\times 1}\), \(\mathbf{C}=(a-1)/b\mathbf{I}_{d}\) for the other hyperparameters.
We have evaluated the methods at all combinations of \(J\in\{1,5,10\}\) and \(\epsilon\in\{0.1,0.2,0.5,1,2,5,10\}\). All the MCMC algorithms have been run for \(10^{4}\) iterations. For each \((J,\epsilon)\) pair, we have tried each method \(50\) times (each with different noisy observations) to obtain average performances.
For performance metrics, we have looked at the mean squared errors (MSE) of (i) the estimates \(\hat{\mathbf{\theta}}\), and (ii) the predictions \(\hat{y}(\mathbf{x}_{\text{test}})\) generated by the methods. For the Bayesian methods, \(\hat{\mathbf{\theta}}\) is taken as the mean posterior, which can be numerically estimated for the MCMC algorithms. For prediction performance, we have calculated \(\mathbb{E}[\hat{y}(\mathbf{x}_{\text{test}})-y_{\text{test}}]^{2}\). For the Bayesian methods, \(\hat{y}(\mathbf{x}_{\text{test}})\) is the posterior predictive expectation of \(y_{\text{test}}\) at \(\mathbf{x}_{\text{test}}\). For adaSSP, we simply take \(\hat{y}(\mathbf{x}_{\text{test}})=\mathbf{x}_{\text{test}}^{T}\hat{\mathbf{\theta}}\).
The results are summarised in Figure 2. We observe that MCMC-fixedS and Bayes-fixedS-fast outperform adaSSP and MCMC-B&S in almost all cases both in terms of estimation and prediction. Comparing the full-scale algorithms MCMC-normalX and MCMC-B&S (that involve updates of \(\mathbf{S}\)), we observe a clear advantage of MCMC-normalX at \(d=2\), but MCMC-B&S becomes more competitive at \(d=5\). This can be attributed to the fact that MCMC-B&S requires the extra statistic \(\mathbf{y}^{T}\mathbf{y}\), unlike MCMC-normalX, which causes MCMC-B&S to use more noisy statistics. This difference becomes more significant at small \(d\), where the relative effect of the presence of \(\mathbf{y}^{T}\mathbf{y}\) on the sensitivity is more significant. Finally, all methods improve as \(\epsilon\) grows, which is expected.
We also compare the computation times of the MCMC algorithms MCMC-normalX, MCMC-fixedS, and MCMC-B&S. Figure 3 shows the run-times of the algorithms vs \(d\). The drastic difference in computational loads explained in Section 4.3 is also visible in the figure. While MCMC-B&S may be improved in terms of accuracy as \(d\) increases, its \(\mathcal{O}(d^{6})\) cost dramatically slows it down.
Figure 3: Run times per iteration for MCMC algorithms
Figure 2: Averaged prediction and estimation performances (over 50 runs). Top row: \(n=10^{5},d=2\), Bottom row: \(n=10^{5},d=5\).
### Experiments with Real Data
For the real data case, we have used four different data sets from the UCI Machine Learning Repository. We have disregarded the columns including string data or key values (ID, name, date, _etc._), and we have considered the right-most column as \(\mathbf{y}\). The finalised data sets are summarised below.
For prediction, we have taken 80% of the data for training and the rest for testing. We present the average prediction performances (out of 50 runs) in Table 1 for each dataset and \(J\) with \(\epsilon=1\). We observe that the prediction performances of the compared methods are close, while MCMC-fixedS and Bayes-fixedS-fast are arguably the most stable ones. When \(J>1\) (the distributed data setting), those two methods beat adaSSP and MCMC-B&S more satisfactorily.
## 6 Conclusion
We propose a novel Bayesian inference framework, with MCMC being its main workhorse, for a differentially private distributed linear regression setting where the data is partitioned among the data holders. We provide several Bayesian inference algorithms suited to the developed hierarchical model for linear regression. Those algorithms can be preferred one over the other depending on the computational budget, model specifics, or how much we know about the underlying statistical facts of the data. We exploit the conditional structure between the summary statistics of linear regression, as given in Proposition 1, which leads to feasible algorithms with computational advantages over their competitors. The numerical experiments show that the proposed methods are competitive with their state-of-the-art alternatives in terms of accuracy.
The extensions mentioned in Section 4.4 indicate potential future directions. There is also room
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline **data sets** & \(n\) & \(d\) & **hyperlinks** \\ \hline power plant energy & 7655 & 4 & view link \\ bike sharing & 13904 & 14 & view link \\ air quality & 7486 & 12 & view link \\ 3d road & 347900 & 3 & view link \\ \hline \hline \end{tabular}
\end{table}
Table 1: Averaged prediction performances (over 50 runs) for the real datasets - \(\epsilon=1\)
for improvement of MCMC-normalX. We chose the most common MH moves to update \(\sigma_{y}^{2}\) and \(\mathbf{S}_{j}\)'s, without paying much attention to their efficiencies. Especially for large \(d\), more advanced techniques, such as those stemming from Hamiltonian Monte Carlo (Neal, 2001) or pseudo-marginal MCMC (Andrieu and Roberts, 2009), may be employed to facilitate the mixing of the algorithm.
## 7 Acknowledgement
The study was funded by the Scientific and Technological Research Council of Turkey (TUBITAK) ARDEB Grant No 120E534.
Supplementary material:The code to replicate the experiments in Section 5 can be found at [https://github.com/sinanyildirim/Bayesian_DP_dist_LR.git](https://github.com/sinanyildirim/Bayesian_DP_dist_LR.git).
|
2309.05738 | GRMHD simulations of accretion flows onto unequal-mass, precessing
massive binary black hole mergers | In this work, we use general relativistic magnetohydrodynamics simulations to
explore the effect of spin orientation on the dynamics of gas in the vicinity
of merging black holes. We present a suite of eight simulations of
unequal-mass, spinning black hole binaries embedded in magnetized clouds of
matter. Each binary evolution covers approximately 15 orbits before the
coalescence. The geometry of the accretion flows in the vicinity of the black
holes is significantly altered by the orientation of the individual spins with
respect to the orbital angular momentum, with the primary black hole dominating
the mass accretion rate $\dot{M}$. We observe quasiperiodic modulations of
$\dot{M}$ in most of the configurations, whose amplitude is dependent on the
orientation of the black hole spins. We find the presence of a relation between
the average amplitude of $\dot{M}$ and the spin precession parameter
$\chi_{\mathrm{p}}$ showing that spin misalignment systematically leads to
stronger modulation, whereas configurations with spins aligned to the orbital
angular momentum damp out the quasiperiodicity. This finding suggests a
possible signature imprinted in the accretion luminosity of precessing binaries
approaching merger and has possible consequences on future multimessenger
observations of massive binary black hole systems. | Federico Cattorini, Bruno Giacomazzo, Monica Colpi, Francesco Haardt | 2023-09-11T18:03:36Z | http://arxiv.org/abs/2309.05738v1 | # GRMHD simulations of accretion flows onto unequal-mass, precessing massive binary black hole mergers
###### Abstract
In this work, we use general relativistic magnetohydrodynamics simulations to explore the effect of spin orientation on the dynamics of gas in the vicinity of merging black holes. We present a suite of eight simulations of unequal-mass, spinning black hole binaries embedded in magnetized clouds of matter. Each binary evolution covers approximately 15 orbits before the coalescence. The geometry of the accretion flows in the vicinity of the black holes is significantly altered by the orientation of the individual spins with respect to the orbital angular momentum, with the primary black hole dominating the mass accretion rate \(\dot{M}\). We observe quasiperiodic modulations of \(\dot{M}\) in most of the configurations, whose amplitude is dependent on the orientation of the black hole spins. We find the presence of a relation between the average amplitude of \(\dot{M}\) and the spin precession parameter \(\chi_{p}\) showing that spin misalignment systematically leads to stronger modulation, whereas configurations with spins aligned to the orbital angular momentum damp out the quasiperiodicity. This finding suggests a possible signature imprinted in the accretion luminosity of precessing binaries approaching merger and has possible consequences on future multimessenger observations of massive binary black hole systems.
## I Introduction
Gravitational waves (GWs) generated by binaries of MBHs in the mass range \(M\)\(\sim\)\(10^{4}\)-\(10^{7}\)\(\mathrm{M_{\odot}}\) are expected to be detected by future space-based, low-frequency gravitational interferometers such as the Laser Interferometer Space Antenna [LISA, 1, 2, 3] and the Taiji Program in Space [4]. In the aftermath of a gas-rich galaxy merger, the late-inspiral and coalescence of MBBHs is expected to occur in gas-rich environments [3, 5, 6], making these sources particularly promising systems for multimessenger astrophysics. Concurrent (multimessenger) detection of gravitational and electromagnetic (EM) radiation generated by merging MBBHs yields the potential to resolve a number of astrophysical conundrums, e.g., providing new probes of the universe expansion [7, 8], and offering new opportunities to deepen our understanding of MBHs evolution in the context of large-scale structures [9]. Future observational endeavors searching for these systems rely on firm predictions for the EM emission arising before, during, and after the merger. Without accurate predictions, it will be challenging to know what to look for and where to search in the EM spectrum. Limited knowledge of the unique spectral and timing features of such signals would frustrate future multimessenger efforts. Thus, next-generation multimessenger astronomy relies on detailed theoretical understanding of the mechanisms that may give rise to EM signatures of MBBHs, aiming to determine distinctive features that will help distinguishing them from other events.
There are several proposed models about the gaseous environment in which a MBBH can be embedded. When the gas around the binary has enough angular momentum, the two MBHs are expected to be surrounded by a dense and cold, radiatively inefficient _circumbinary disk_ (CBD) with the black holes located in a central low-density region excavated by the binary tidal torques [10, 11, 12]. The CBD feeds mass to the black holes through tidal streams, which may form individual accretion disks ("mini-disks") around each MBH [13, 14, 15, 16]. Over the last decade, this scenario has been extensively studied by several theoretical groups adopting a variety of different numerical techniques [see, e.g., 17, for a review].
Since the balance between heating and cooling mechanisms in the gas near MBBHs can be significantly altered toward the merger, different scenarios for the accretion flow in the proximity of merging MBBHs are possible. If radiative cooling is inefficient, the gaseous environment would resemble a hot and tenuous radiatively inefficient accretion flow [RIAF, 18, 19]. In this scenario, most of the energy generated by accretion and turbulent stresses is stored in the gas, resulting in a geometrically thick accretion flow [20]. Furthermore, the binary tidal torques are incapable of creating a central cavity, because the ejected gas is replenished on a dynamical time scale. Therefore, the binary will find itself engulfed in a hot _gas cloud_ all the way down to the merger.
The gas cloud scenario has been modeled in relativistic simulations by several groups [21, 22, 23, 24]. In [25], we have explored the features of moderately magnetized gas cloud accretion onto equal-mass, aligned-spinning binaries by producing a set of nine ideal-GRMHD simulations covering a range of initially-uniform magnetized fluids with different initial magnetic-to-gas pressure ratio \(\beta_{0}^{-1}\). Our results have shown that aligned-spins have a suppressing effect on the mass accretion rate as large as \(\sim\)50%; also, we found that the peak Poynting luminosity reached shortly after merger is enhanced by up to a factor \(\sim\)2.5 for binaries of spinning black holes compared to nonspinning configurations. In the follow-up investigation [26], we produced five simulations of equal-mass binaries of spinning black holes characterized by diverse orientations of the spins relative to the orbital angular momentum. Notably, we found that the orientation of the individual spins during the
late-inspiral appears to be related to quasiperiodic modulations in the rate of mass accretion, leading to a "chirp" in the accretion rate that strongly resembles the gravitational one.
In this Paper, we present a suite of eight unequal-mass simulations of binary black holes immersed in a cloud of magnetized matter and consider a broader family of spin-misalignment. We aim to further explore the relation between the orientation of the individual spins and the quasiperiodic modulations in the mass accretion rate. The Paper is organized as follows. In Section II, we outline the numerical methods adopted in our simulations and provide the initial data for the metric and the MHD fields. Section III is devoted to our results on the effects of spin-misalignment on the dynamics of magnetized accretion flows and the link between spin-orientation and quasiperiodicities. Finally, in Section IV we summarize our conclusions. Throughout the Paper, we adopt geometric units and set \(c=G=M=1\), where \(M\) is the total mass of the system.
## II Numerical Methods and Initial Data
In the present Section, we outline the numerical setup adopted to solve numerically Einstein's field equations and the equations of ideal MHD in curved and dynamical spacetime. All our simulations have been carried out on adaptive-mesh refinement (AMR) grids provided by the Carpet driver [27] using the Einstein Toolkit1 framework [28; 29; 30], a powerful infrastructure for relativistic astrophysics and gravitational physics made up by several components (the so-called "thorns").
Footnote 1: [http://einsteintoolkit.org](http://einsteintoolkit.org)
We produced a suite of eight simulations of unequal-mass binaries (Table 1) starting on quasicircular orbits at an orbital coordinate separation \(a_{0}=12.5M\) and evolving across the late-inspiral and merger. Along the path to coalescence of an MBBH, the accretion of matter onto the binary components tends to drive their mass-ratio close to unity [31; 32]. For this reason, we have chosen to evolve equal-mass systems and unequal-mass systems with mass ratios close to unity. For all our models, the binary mass-ratio is \(q\equiv m_{-}/m_{+}=0.6\), where \(m_{-}\) (\(m_{+}\)) is the mass of the secondary (primary) black hole. We produced eight \(q=0.6\) simulations of BBHs with spin magnitude \(a=0.6\), covering a broader range of spin misalignments with the orbital angular momentum.
Six configurations consider binaries immersed in a magnetized plasma with an adiabatic index \(\Gamma=4/3\). In addition, we evolve two models with the same initial metric setup as configuration b@p6 and with adiabatic indices \(\Gamma=5/3\) and \(13/9\).
The fluid and magnetic-field configurations are equivalent to those by [26]: the binary systems are immersed in an initially uniform, radiation-dominated polytropic fluid (\(p_{0}=\kappa\rho_{0}^{\Gamma}\), with \(\rho_{0}=1,\ \kappa=0.2,\ \Gamma=4/3\)). The fluid is initially at rest relative to black holes, which is non-physical. Therefore, we must be careful to start our BBH at a large enough separation to allow the gas in the strong-field region to establish a quasi-equilibrium flow with the binary motion. The numerical explorations by [24] have investigated the dependence of the timing features of the evolving plasma on the initial binary separation and observed that binary configurations starting at initial separations of \(11.5M\), \(14.4M\), and \(16.3M\) show the same qualitative behavior. Motivated by this result, we choose to evolve our binaries starting from an initial separation \(12.5M\). We evolve our binaries over the last \(\sim\)15 orbits to merger and beyond.
The fluid in which the MBBHs are embedded is threaded by an initially uniform magnetic field parallel to the binary angular momentum, i.e., \(B^{i}=(0,0,B^{z})\). The magnetic field is assumed to be anchored to a distant circumbinary disk located outside the computational domain. This initial configuration of the magnetic field is analogous to that implemented in previous works [e.g., 21; 23; 24; 33; 34]. As in [26], we choose the initial magnetic-to-gas pressure ratio to be \(\beta_{0}^{-1}=0.31\).
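For reference, the implied initial field strength can be estimated with a back-of-the-envelope evaluation, under the common GRMHD convention \(p_{\rm mag}=b^{2}/2\) and assuming \(b^{2}\simeq B^{2}\) for the initially static fluid far from the holes (an order-of-magnitude estimate only, under these assumptions):
\[B_{0}\simeq\sqrt{2\,\beta_{0}^{-1}\,p_{0}}=\sqrt{2\times 0.31\times\kappa\rho_{0}^{\Gamma}}=\sqrt{2\times 0.31\times 0.2}\approx 0.35\]
in code units.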
Our simulations adopt a cubic domain given by \([-1024M,1024M]^{3}\) and employ AMR with \(N=11\) levels of refinement. The coarsest resolution is \(\Delta x_{c}=128M/7\), and the finest one is \(\Delta x_{f}=\Delta x_{c}\cdot 2^{1-N}=M/56\). We employ radiative (or Sommerfeld) boundary conditions for the metric and "outflow" boundary conditions [35] to the MHD variables.
### Spacetime metric evolution
We write the generic line element in the standard 3+1 form as:
\[ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=-\alpha^{2}dt^{2}+\gamma_{ij}\left(dx^{i}+\beta^{i}dt\right)\left(dx^{j}+\beta^{j}dt\right), \tag{1}\]
where \(\alpha\), \(\beta^{i}\), and \(\gamma_{ij}\) are the lapse function, shift, and spatial metric, respectively. We evolve \(\gamma_{ij}\) and the extrinsic curvature \(K_{ij}\) using the Kranc-based McLachlan [36; 37] code, which evolves Einstein's equations adopting the BSSN [38; 39; 40] formalism (the evolution and constraint equations for the spacetime fields in the BSSN form are summarized, e.g., in [41]). The metric evolution equations do not include matter source terms, since for all the simulations considered in this work we assume that the total mass of the fluid is negligible with respect to the mass of the two BHs, \(M_{\rm fluid}\ll M_{\rm BHs}\) (i.e., we evolve the Einstein equations in vacuum).
Our initial metric data are of the Bowen-York type [42], conditioned to satisfy the constraint equations using the TwoPunctures thorn [43]. We adopt the standard "moving puncture" gauge conditions [44; 45; 46]:
\[\partial_{t}\alpha=-F\alpha^{N}K \tag{2}\]
\[\partial_{t}\beta^{i}=\zeta B^{i} \tag{3}\]
\[\partial_{t}B^{i}=\partial_{0}\Gamma^{i}-\eta B^{i} \tag{4}\]
where we set \(F=2\), \(N=1\) (1+log condition), \(\zeta=3/4\), and \(\eta=8/3\). To generate the initial data we employ the NRPyPN module of NRPy+, a Python-based code generator for numerical relativity [47; 48]. In Table 1 we list the initial punctures
separation, masses, momenta, and spins for our eight binary models. In Fig. 1, we display the six different spin orientations considered in our models.
### General relativistic magnetohydrodynamics
We evolve the matter and magnetic fields with the IllinoisGRMHD thorn [35; 49], which solves the GRMHD equations assuming a perfect fluid stress-energy tensor for the matter in the ideal MHD limit, i.e., considering a perfectly conducting medium. The GRMHD equations are solved in a flux-conservative form via a high-resolution shock-capturing (HRSC) scheme [35]. The divergence-free property of the magnetic field is guaranteed with the evolution of the magnetic four-vector potential in the "generalized" Lorenz gauge introduced in [50]. To close the system of GRMHD equations, we must specify an EoS. Across the evolution, we adopt a \(\Gamma\)-law EOS of the form
\[P=(\Gamma-1)\rho\epsilon, \tag{5}\]
where \(\rho\) is the rest-mass density and \(\epsilon\) is the specific internal energy.
### Diagnostics
To follow the dynamics of accretion flows and the evolution of the magnetic fields, as well as their timing features and their relation with the gravitational signal, we employ a number of physical diagnostics.
#### ii.3.1 Gravitational radiation
One of the most important outputs of a simulation of merging black holes is the emitted gravitational radiation. A successful approach which can be employed to extract the outgoing GWs in a numerical relativity simulation is based on the calculation of the Weyl scalar \(\Psi_{4}\) in the Newman-Penrose formalism [51], which can be interpreted as a measure of the outgoing gravitational radiation [see, e.g., 52] as
\[\Psi_{4}=\ddot{h}_{+}-i\ddot{h}_{\times}. \tag{6}\]
We calculate the Weyl scalar \(\Psi_{4}\) with the thorn WeylScal4 [28]. Although relation (6) is only strictly valid in asymptotically flat spacetimes (i.e., at infinity), in numerical simulations \(\Psi_{4}\) is often extracted on spheres of large but finite radius. In all our simulations, it is extracted on spheres of radius \(800\,M\). The GW is then computed by integrating Eq. (6) in the frequency domain following the standard approach of Reisswig and Pollney [53].
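Schematically, the double time integration of Eq. (6) can be carried out in the frequency domain along the lines below; this is a simplified sketch in the spirit of the fixed-frequency integration of [53], assuming a \(\Psi_{4}(t)\) time series sampled at constant spacing, with the low-frequency cutoff \(f_{0}\) a free parameter chosen below the initial orbital GW frequency.

```python
import numpy as np

def strain_from_psi4(psi4_t, dt, f0):
    # h(f) = -psi4(f) / (2 pi f)^2, with |f| < f0 replaced by f0 to suppress low-frequency drifts;
    # the inverse transform returns the complex strain h_+ - i h_x.
    n = len(psi4_t)
    freqs = np.fft.fftfreq(n, d=dt)
    f_eff = np.where(np.abs(freqs) < f0, f0, np.abs(freqs))
    h_f = -np.fft.fft(psi4_t) / (2.0 * np.pi * f_eff) ** 2
    return np.fft.ifft(h_f)
```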
#### ii.3.2 Rest-mass accretion rate
A fundamental diagnostic of our simulations is the flux of rest-mass across the horizon of each BH. To study the mass accretion rate onto the BH horizons, we use the Outflow thorn [54], which computes the flow of rest-mass density across a given spherical surface (e.g., in our case, across each apparent horizon). This quantity is calculated via
\[\dot{M}=-\oint_{S}\sqrt{\gamma}Dv^{i}d\sigma_{i}, \tag{7}\]
Figure 1: Artistic impression of the initial configurations of the binary systems investigated in our simulations. The black holes are initially on the \(xy\)-plane at an orbital coordinate separation \(a_{0}=12.5M\). The arrows denote the six different orientations of the individual spins, ranging from aligned spins (fainter arrows) to spins nearly orthogonal to the orbital angular momentum (thicker arrows). The initial data for our configurations are given in Table 1.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Run & \(|p_{r}|\) & \(|p_{t}|\) & \(\hat{a}_{+}\) [\(m_{+}^{2}\)] & \(\hat{a}_{-}\) [\(m_{-}^{2}\)] \\ \hline b@p2 & 3.61e-4 & 7.53e-2 & (0.00, 0.00, 0.60) & (0.00, 0.00, 0.60) \\ \hline b@p512 & 3.62e-4 & 7.53e-2 & (-0.16, 0.00, 0.58) & (0.16, 0.00, 0.58) \\ \hline b@p3 & 3.67e-4 & 7.56e-2 & (-0.30, 0.00, 0.52) & (0.30, 0.00, 0.52) \\ \hline b@p4 & 3.74e-4 & 7.59e-2 & (-0.42, 0.00, 0.42) & (0.42, 0.00, 0.42) \\ \hline b@p6 & 3.85e-4 & 7.64e-2 & (-0.52, 0.00, 0.30) & (0.52, 0.00, 0.30) \\ \hline b@p12 & 3.99e-4 & 7.70e-2 & (-0.58, 0.00, 0.16) & (0.58, 0.00, 0.16) \\ \hline \hline \end{tabular}
\end{table}
Table 1: BBH initial data parameters in code units of the GRMHD runs: initial puncture separation \(a_{0}\), individual puncture masses \(m_{+}/M\) and \(m_{-}/M\), radial and tangential linear momentum components \(|p_{r}|\) and \(|p_{t}|\), and dimensionless spin vectors \(\hat{a}_{i}=(a_{i,x},a_{i,y},a_{i,z})\) of each BH. Data for model b@p6 refer to three different simulations, characterized by different values of the adiabatic index \(\Gamma\).
where \(D\equiv\rho\alpha u^{0}\) is the fluid density measured in the observer frame (i.e. \(\rho W\), where \(W\) is the Lorentz factor), and \(\sigma^{i}\) is the ordinary (flat) space directed area element of the surface enclosing the horizon.
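Schematically, a discretized version of Eq. (7) on a coordinate sphere of radius \(R\) reads as follows; this is only an illustrative sketch (in practice the flux is evaluated by the Outflow thorn on each apparent horizon), and the field arrays are assumed to be already interpolated onto a uniform \((\theta,\phi)\) grid.

```python
import numpy as np

def mdot_on_sphere(R, sqrt_gamma, D, v_x, v_y, v_z, theta, phi):
    # theta, phi: 2D arrays built with np.meshgrid(theta_1d, phi_1d, indexing='ij')
    nx = np.sin(theta) * np.cos(phi)
    ny = np.sin(theta) * np.sin(phi)
    nz = np.cos(theta)
    flux = sqrt_gamma * D * (v_x * nx + v_y * ny + v_z * nz)   # sqrt(gamma) D v^i n_i
    dtheta = theta[1, 0] - theta[0, 0]
    dphi = phi[0, 1] - phi[0, 0]
    return -np.sum(flux * R**2 * np.sin(theta)) * dtheta * dphi
```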
#### ii.3.3 Frequency analysis
A number of output diagnostics (e.g., the mass accretion rate) extracted from our simulations display quasiperiodic timing features. To examine these periodicities, we perform Fourier analysis on the time-series of the diagnostics (Section III.3). The data are prepared according to the following steps: (i) we crop the data excluding from the frequency analysis the first and the last orbit, in order to remove initial transients and late decrease prior to merger; (ii) we fit the data with a 10th-order polynomial and subtract the fit from the data to isolate the periodic behavior; (iii) we "zero-pad" the data in order to produce a smoother function in the frequency domain. We then proceed to estimate the power spectral density (PSD) with the signal.periodogram method from the scipy package.
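Steps (i)-(iii) can be summarized by the following sketch; the crop window and padding factor are illustrative choices, and a uniformly sampled time series is assumed.

```python
import numpy as np
from scipy import signal

def psd_of_diagnostic(t, x, pad_factor=4, poly_order=10):
    # (i) crop: drop the initial and final portions of the series (here a fixed fraction)
    keep = slice(len(t) // 16, -len(t) // 16)
    t, x = t[keep], x[keep]
    # (ii) subtract a 10th-order polynomial fit to isolate the quasiperiodic part
    x = x - np.polyval(np.polyfit(t, x, poly_order), t)
    # (iii) zero-pad to obtain a smoother estimate in the frequency domain
    x = np.concatenate([x, np.zeros((pad_factor - 1) * len(x))])
    freqs, psd = signal.periodogram(x, fs=1.0 / (t[1] - t[0]))
    return freqs, psd
```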
## III Results
In Table 2, we list a number of derived quantities for our six spin configurations. All models start at a separation \(a_{0}=12.5~{}M\). We observe that run b0p3 takes the longest time to merge (\(t_{\rm merger}/M=2958\)), followed by run b0p4 (\(t_{\rm merger}/M=2934\)). In general, the merger time is shorter for the configuration with spins (nearly) orthogonal to the orbital angular momentum \(L_{\rm orb}\) (run b0p12, \(t_{\rm merger}/M=2687\)). Also, run b0p12 yields a remnant BH with the smallest spin. Conversely, the remnant mass is largest for run b0p12 and becomes smaller as the spin orientation lines up with \(L_{\rm orb}\). The remnant BHs of all misaligned-spin configurations experience a recoil velocity of the order of \(\sim 10^{3}\) km s\({}^{-1}\).
### MHD fields evolution
The evolution of the rest-mass density for our unequal-mass models exhibits features resembling those of the equal-mass configuration UUMIS of [26]. One noticeable difference lies in the geometry of the disk-like overdensities that develop around the horizons. The plasma around the primary BH settles in wider structures that are tilted about the spin axis. On the top row of Fig. 2 we display 2D slices in the \(xz\)-plane of the rest-mass density \(\rho/\rho_{0}\) for model b0p12. The snapshots were taken approximately after five orbits (left), after 11 orbits (center), and at a time \(\sim 10^{3}~{}M\) after merger. We observe the development of thin, disk-like structures around the primary BHs, which correspond to the equatorial areas of powerful, magnetically-dominated funnels (protojets) that form near the horizons (bottom row of Fig. 2). The secondary BHs are enveloped by smaller overdensities and develop less powerful protojets. To investigate this feature, we track the evolution of the magnetic field strength close to each BH and find that the magnetic field in the vicinity of the primary is on average \(\sim\)80% larger than near the secondary. This general trend does not depend on the initial orientation of the individual spins.
The protojets that we observe in the equal-mass simulations from [26] are in general oriented along the spin axis of each BH for distances \(\lesssim 5~{}M\); at larger separations, they start to align with the orbital axis. The structure of the protojets in the unequal-mass models is more intricate: for distances \(\lesssim 5~{}M\), the funnels are generally oriented along the BH spin; for separations \(5\lesssim d/M\lesssim 50\) their structure is quite convoluted; finally, for distances \(\gtrsim 50~{}M\), they start to align with the orbital axis. After merger, we observe a magnetic field amplification of \(\sim\)two orders of magnitude. This is consistent with previous findings [23; 55] and with our results for equal-mass binary models, and appears to indicate a general trend of binary mergers in MMAFs: the magnetic field strength steadily grows as the binary approaches merger, due to the accretion of gas and the consequent piling up of magnetic field lines in the vicinity of the horizons.
In the right panels of Fig. 2, we observe that the remnant BH is in motion due to a recoil kick velocity. We observe that (i) the remnant is dragging the disk-like structure along, and (ii) the disk-like structure is clearly tilted about the rem
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Run & \(t_{\rm merger}\) [M] & \(N_{\rm orb}\) & \(M_{\rm rem}/M\) & \(\hat{a}_{\rm rem}\) [\(M_{\rm rem}^{2}\)] & \(\vec{v}_{\rm kick}\) [km s\({}^{-1}\)] & \(v_{\rm kick}\) [km s\({}^{-1}\)] \\ \hline b0p2 & 2861 & \(\sim\)15 & - & - & - & - \\ \hline b05p12 & 2909 & \(\sim\)16 & 0.936 & (0.022, 0.005, 0.847) & (35, -877, -1122) & 1424 \\ \hline b0p3 & 2958 & \(\sim\)16 & 0.938 & (0.046, 0.010, 0.819) & (361, 62, -701) & 791 \\ \hline b0p4 & 2934 & \(\sim\)16 & 0.942 & (0.068, 0.013, 0.805) & (-1027, -155, -381) & 1107 \\ \hline b0p6 & 2829 & \(\sim\)15 & 0.946 & (0.092, 0.017, 0.759) & (636, -337, -135) & 733 \\ \hline b0p12 & 2687 & \(\sim\)14 & 0.952 & (0.113, 0.012, 0.712) & (874, 560, 1031) & 1500 \\ \hline \hline \end{tabular}
\end{table}
Table 2: BBH derived quantities in code units of the six \(q=0.6\) binary configurations: merger time \(t_{\rm merger}\), total number of orbits \(N_{\rm orb}\), remnant’s mass \(M_{\rm rem}\), components of the remnant’s spin parameter \(\hat{a}_{\rm rem}\), remnant’s kick velocity \(\vec{v}_{\rm kick}\) and speed \(v_{\rm kick}\) in km s\({}^{-1}\). Data relative to model b0p2 are missing because of computational issues.
nant's spin axis. Similarly to the magnetically-dominated funnels emerging from the individual BHs across the inspiral, the protojets emerging from the remnant are directed along the spin direction for distances \(\lesssim 50\ M\). At larger distances, they start to align toward the \(z\)-axis. The recoil kick experienced by massive merged remnants might be associated with a potentially detectable transient, as the BH "shakes" the nuclear material and creates shocks which may give rise to an EM signal.
### Mass accretion rate
In Fig. 3, we display the total accretion rate \(\dot{M}\) for our six \(\Gamma=4/3\) models: the premerger value of \(\dot{M}\) is the sum of the individual accretion rates computed on each horizon, while the postmerger value is calculated on the remnant BH\({}^{2}\). The time of merger is denoted by a vertical dashed line. The accretion rate is scaled to consider a binary of total mass \(M=10^{6}\ \mathrm{M}_{\odot}\) (consistent with a binary system in the LISA band) embedded in a gas cloud with initial uniform density \(\rho_{0}=10^{-11}\ \mathrm{g}\ \mathrm{cm}^{-3}\). The units of time are \(M_{6}\) hours, where \(M_{6}\equiv M/10^{6}\mathrm{M}_{\odot}\).
Footnote 2: Due to computational issues, the postmerger value of the b0p2 model in the first moments after the coalescence is missing.
The time-averaged premerger value of \(\dot{M}\) is consistent across all models, but displays a distinct (albeit small) increase as the initial spin orientation shifts from fully aligned (run b0p2, top-left panel in Fig. 3) to nearly orthogonal to the \(z\)-axis (run b0p12, bottom-right panel).
More noticeably, the oscillatory features of \(\dot{M}\) grow more prominent: as the initial spins misalign relative to \(L_{\rm orb}\), the variability of the accretion rate exhibits sharper quasiperiodic features, and the amplitude of the modulations increases. In Fig. 3, we also plot the individual accretion rates onto the primary (red curve) and secondary (blue curve) black holes. The accretion rate is larger for the primary BH by a factor \(\sim(m_{+}/m_{-})^{2}\simeq 2.8\), consistent with Bondi-Hoyle-Lyttleton accretion [see, e.g., 56, for a review].
Figure 2: Top row: evolution of the normalized rest-mass density \(\rho/\rho_{0}\) on the \(xz\)-plane of the b0p12 configuration. Bottom row: evolution of the magnetic-to-gas pressure ratio \(\beta^{-1}\) on the \(xz\)-plane. Snapshots were taken approximately after five orbits (left), after 11 orbits (middle), and at a time equal to \(\sim\)1300 M after the merger.
#### Accretion rate variability and spin precession
We observe that the amplitude of the modulations in \(\dot{M}\) appears tightly connected with the inclination of the individual spins. Assuming that the accretion rate translates into detectable accretion-luminosity signals, this finding could in principle provide a new diagnostic to probe black hole spins observationally. If combined with gravitational wave detections, it could provide an additional means to characterize spin precession, and help distinguish the timing features that will emerge from the EM candidate counterparts to MBBH GW sources.
Spin precession is a key feature in the relativistic evolution of two compact objects [57] and its measurement has significant impact in both astrophysics and fundamental physics [58]. To date, the most commonly used quantity to characterize spin precession in GW data analysis is the so-called _effective precession parameter_ \(\chi_{\rm p}\) [59]. This parameter is widely used in state-of-the-art analyses of GW data to infer the occurrence of precession: a non-zero value of \(\chi_{\rm p}\) is considered a strong indication that orbital precession has been measured [60]. We employ the definition of \(\chi_{\rm p}\) given in [59] (sometimes called the "heuristic" spin parameter) and compute the value of \(\chi_{\rm p,heu}\) for four \(q=0.6,\Gamma=4/3\) models (i.e., configurations b0p3, b0p4, b0p6, and b0p12). We choose these four models as they display the clearest quasiperiodic pattern in \(\dot{M}\). After extracting the average value of \(\chi_{\rm p}\) for these four runs, we wish to connect it to the amplitudes of the modulations in the accretion rate. To this aim, we consider the premerger total ac
Figure 3: Total rest-mass accretion rate \(\dot{M}\) for our six \(q=0.6\), \(\Gamma=4/3\) models (black lines): the premerger value of \(\dot{M}\) is the sum of the individual accretion rates computed on each horizon, the postmerger value is computed on the remnant BH. The colored lines denote the premerger individual rest-mass accretion rates onto the primary (red) and secondary (blue) black holes. The time of merger is marked by black, vertical dashed lines.
Figure 4: Time-averaged amplitudes \(\overline{A}_{\dot{M}}\) of the accretion rate as a function of the heuristic spin precession parameter \(\chi_{\rm p,heu}\) for the four \(q=0.6\), \(\Gamma=4/3\) models b0p3, b0p4, b0p6, and b0p12. The straight line indicates our best fit. The bars denote the standard error.
cretion rate of the four given models and quantify the strength of the modulations by calculating the _average amplitude_ \(\overline{A}_{\dot{M}}\) of \(\dot{M}\). To calculate \(\overline{A}_{\dot{M}}\), we first crop our \(\dot{M}\) data to exclude the early stages of the inspiral and the last phases prior to the merger. This corresponds to considering data in the interval \(I=[2,6]\,M_{6}\) hours (see Fig. 3). Then, we fit the data set with a 10th-order polynomial and subtract the fit from the accretion rate in order to isolate the oscillatory behavior that we want to investigate. To calculate \(\overline{A}_{\dot{M}}\), we measure the average distance of the data from the mean and normalize it by the corresponding mean value of \(\dot{M}\) in the interval \(I\). The values of \(\overline{A}_{\dot{M}}\) can then be interpreted as percentage-level fluctuations. In Fig. 4, we plot the values of \(\overline{A}_{\dot{M}}\) against the heuristic spin precession parameter \(\chi_{\rm p,heu}\) (the error bars denote the standard errors). Remarkably, we observe a clear trend that connects models featuring a stronger spin precession with larger modulations of the accretion rate.
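A minimal sketch of this diagnostic is shown below; it assumes \(\dot{M}\) sampled on a time axis in \(M_{6}\) hours, and it reads "average distance of the data from the mean" as the mean absolute deviation of the detrended signal.

```python
import numpy as np

def average_amplitude(t, mdot, t_min=2.0, t_max=6.0, poly_order=10):
    """Average amplitude of the accretion-rate modulations in the interval I."""
    # crop out the early inspiral and the last phases prior to merger
    mask = (t >= t_min) & (t <= t_max)
    t_c, mdot_c = t[mask], mdot[mask]
    # remove the secular trend with a 10th-order polynomial fit
    trend = np.polyval(np.polyfit(t_c, mdot_c, poly_order), t_c)
    residual = mdot_c - trend
    # average deviation from the mean, normalized by the mean accretion rate
    return np.mean(np.abs(residual - residual.mean())) / np.mean(mdot_c)
```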
### Variability
In this section we pursue the investigation of the timing features of \(\dot{M}\). In addition, we follow the evolution of the mass \(M_{HS}\) enclosed within the Hill spheres of the individual BHs [16; 61]. We investigate periodicities of \(\dot{M}\) and \(M_{HS}\) by calculating the PSD of these quantities for the six \(q=0.6,\Gamma=4/3\) models. To account for the diverse features that may arise in unequal-mass systems, we consider separately the accretion rates and the mass enclosed within the Hill sphere of the individual BHs.
The left panels of Figs. 5-6 display the accretion rate and mass contained within the Hill sphere of the primary (\(\dot{M}^{+},~{}M^{+}_{HS}\)) and secondary (\(\dot{m}^{-},~{}m^{-}_{HS}\)) black holes as a function of time for each of our \(q=0.6,\Gamma=4/3\) configurations; on the right, we show the power spectral densities of the corresponding timeseries normalized by the average binary orbital frequency \(f_{\rm orb}\).
The frequency analysis of the mass enclosed within the Hill spheres reveals common features across all spin configurations: the PSD of \(M^{+}_{HS}\) exhibits a prominent peak at \(\sim\)2-2.2\(f_{\rm orb}\); conversely, the PSD of \(m^{-}_{HS}\) shows two clear, well-separated peaks falling at \(\sim\)2-2.2\(f_{\rm orb}\) and \(\sim\)4-4.2\(f_{\rm orb}\), respectively. The PSDs of the mass accretion rates display comparable features for configurations b0p3, b0p4, b0p6, and b0p12. For these models, the accretion rate onto each horizon shows a definite peak at \(\sim\)2\(f_{\rm orb}\). The time evolution of \(\dot{M}\) for the remaining configurations (b0p2 and b05p12) is more intricate: for both models, the PSD of the accretion rate onto the secondary BH exhibits a peak at \(\sim\)\(f_{\rm orb}\); instead, the PSD of the accretion rate onto the primary shows in both cases a number of peaks falling between \(\sim\)\(f_{\rm orb}\) and \(\sim\)4\(f_{\rm orb}\).
In general, we find that the modulations in the mass enclosed within the Hill spheres are consistent across all our binary configurations, and display similar features both for the primary and the secondary BH. The periodicities in the accretion rate and in the mass within the Hill spheres are correlated, and in most cases share the characteristic frequency of \(\sim\)2\(f_{\rm orb}\). This is not the case for models b0p2 and b05p12, for which no apparent relation holds between the two quantities. As pointed out by [16], one could expect a strong correlation between the two quantities, as cycles of increased rest-mass within the Hill spheres may lead directly to an increase in the mass accretion rate. This picture is consistent with our results for models b0p3-b0p12, but seems in contradiction with the analysis of models b0p2 and b05p12.
Such a discrepancy will motivate future investigations by our group, which will address a larger number of binary configurations in diverse gaseous environments.
## IV Summary and Conclusions
Observed periodicities in the EM light curves arising during the GW chirp might serve as a smoking gun to detect a counterpart of a MBBH merger. Over the last years, it has been suggested that the mass accretion rate onto inspiraling MBBHs can exhibit a quasiperiodic behavior. General relativistic simulations have shown that periodicity signatures in the accretion rate may arise due to a number of mechanisms. In the CBD scenario, streams of gas flung outwards by the binary may periodically hit and shock the inner edge of the disk [62]. Also, modulations due to the relativistic Doppler effect are expected to dominate the variability for mass ratios \(\leq\) 0.05 [63]. Modulations in the accretion rate onto MBBHs approaching merger may also occur in a hot and tenuous environment, in which the binary finds itself engulfed in a radiation-dominated gas all the way down to the merger [26]. Additional variability may be present if the binary is in motion relative to the gas cloud (Bondi-Hoyle-Lyttleton scenario). In this case, modulations in the accretion rate can arise from shocks that develop around each BH as it moves against the flow of the gas [64].
In this Paper, we presented GRMHD simulations of unequal-mass BBHs with misaligned spins and considered a broader family of spin misalignments to further investigate the link between spin orientation and quasiperiodic features in the accretion rate. We observed that the geometry of the accretion flow is significantly altered by the orientation of the individual spins with respect to the orbital angular momentum \(L_{\rm orb}\). Also, we found that binary systems with a larger spin precession parameter \(\chi_{\rm p}\) imprint a sharper variability on the mass accretion rate. Conversely, individual spins aligned (or nearly aligned) with the orbital angular momentum dampen the strength of the quasiperiodicity.
This result indicates that misaligned spins might imprint periodic variability on the EM light curve of MBBHs approaching merger. This would be of particular interest for future space-based gravitational-wave interferometers such as LISA [3], allowing a robust identification of the EM counterpart to a GW source.
To extend our study, we aim to explore a variety of choices for the parameters of the gas (magnetic-to-gas pressure ratio, equation of state) and of the binaries (mass ratio, individual spin magnitude and orientation, eccentricity). Furthermore, to investigate whether our result is sensitive to the initial gas distribution, a companion Paper [65] will focus on
Figure 5: Left: Accretion rate and mass contained within the Hill sphere of the primary (\(\dot{M}^{+}\), \(M_{HS}^{+}\)) and secondary (\(\dot{m}^{-}\), \(m_{HS}^{-}\)) black holes as a function of time for each of our \(q=0.6,\Gamma=4/3\) configurations. Right: Power spectral densities of the above time series normalized by the average binary orbital frequency \(f_{\rm orb}\).
Figure 6: (continued) Left: Accretion rate and mass contained within the Hill sphere of the primary (\(\dot{M}^{+}\), \(M^{+}_{HS}\)) and secondary (\(\dot{m}^{-}\), \(m^{-}_{HS}\)) black holes as a function of time for each of our \(q=0.6,\Gamma=4/3\) configurations. Right: Power spectral densities of the above time series normalized by the average binary orbital frequency \(f_{\rm orb}\).
the MHD features arising from the interplay of merging BBHs and thinner distributions of gas. More generally, the in-depth study of MBBHs in diverse gaseous environments is critical in establishing firm predictions of EM signals emerging from these systems; the need for a more robust theoretical characterization of these sources will motivate our future work.
###### Acknowledgements.
Numerical calculations were run on the MARCONI A3 cluster at CINECA (Bologna, Italy) with computational resources provided through a CINECA-INFN agreement (allocation INF22_teongrav), and on the EAGLE cluster at the Poznan Supercomputing and Networking Center (PSNC, Poland) with resources provided by the PRACE DECI-17 grant SSMBHB. MC acknowledges support by the 2017-NAZ-0418/PER grant.
## Appendix A Poynting luminosity
To get a measure of time dependence of the jet-like Poynting-driven EM power, we compute the Poynting luminosity \(L_{\rm Poynt}\). It is calculated as the surface integral across a two-sphere with radius \(R\rightarrow\infty\)
\[L_{\rm Poynt}\approx\lim_{R\rightarrow\infty}2R^{2}\sqrt{\frac{\pi}{3}}S^{z}_ {(1,0)}, \tag{10}\]
where \(S^{z}_{(1,0)}\) is the dominant \((l,m)=(1,0)\) spherical harmonic mode of the Poynting vector. In our simulations, we extract the Poynting vector at a distance \(R=30~{}M\) and evaluate \(L_{\rm Poynt}\) with Eq. (10) [see also 24, 25].
In our GRMHD simulations, the Poynting luminosity scales as [25]
\[L_{\rm Poynt}=\rho_{0}M^{2}F(t/M;\epsilon_{0},\zeta_{0}) \tag{11}\]
where \(\epsilon_{0}\) is the initial specific internal energy, \(\zeta_{0}\equiv u_{\rm mag}/u_{\rm fluid}\) the initial magnetic-to-fluid energy density ratio and \(F\) is a dimensionless function of time (for more details, see Section 3 in [24]).
Equation (11) is in code units, where \(c=G=1\). To convert this relation to cgs units, we need to multiply by a factor \(G^{2}/c\approx 1.48\times 10^{-25}~{}{\rm g^{-2}~{}cm^{5}~{}s^{-3}}\), and we obtain
\[\begin{split} L_{\rm Poynt}\left(t\right)=& 1.483\times 10^{-25}\left(\frac{\rho}{1~{}{ \rm g~{}cm^{-3}}}\right)\left(\frac{M}{1~{}{\rm g}}\right)^{2}\\ &\times F\left(t;\epsilon_{0},\zeta_{0}\right){\rm erg~{}s^{-1}} \end{split} \tag{12}\]
If we want to scale with our canonical density \(\rho_{0}=10^{-11}~{}{\rm g}\) cm\({}^{-3}\), and for a system of two BHs of \(M_{1}=M_{2}=10^{6}~{}{\rm M_{\odot}}\) (i.e., \(M\simeq 3.977\times 10^{39}~{}{\rm g}\)), we find
\[\begin{split} L_{\rm Poynt}\left(t\right)&=2.347 \times 10^{43}\rho_{-11}M_{6}^{2}F\left(t;\epsilon_{0},\zeta_{0}\right)~{}{ \rm erg~{}s^{-1}}\\ &=L_{0}~{}\rho_{-11}M_{6}^{2}F\left(t;\epsilon_{0},\zeta_{0} \right)~{}{\rm erg~{}s^{-1}}\end{split} \tag{13}\]
where \(\rho_{-11}\equiv\rho_{0}/(10^{-11}{\rm g~{}cm^{-3}})\) and \(M_{6}\equiv M/(10^{6}~{}{\rm M_{\odot}})\).
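As a quick numerical check of this conversion, the short sketch below evaluates the dimensional prefactor \(\rho_{0}M^{2}G^{2}/c\) in cgs units; the physical constants are standard CGS values, and the example inputs reproduce the normalization \(L_{0}\) quoted above to within the rounding of the constants.

```python
# Code-units -> cgs conversion of Eqs. (11)-(13); constants in CGS units.
G = 6.674e-8        # cm^3 g^-1 s^-2
c = 2.998e10        # cm s^-1

def poynting_prefactor_cgs(rho0_cgs, M_grams):
    """Dimensional prefactor rho0 * M^2 * G^2/c in erg/s.

    Multiplying it by the dimensionless light curve F(t/M) gives
    L_Poynt(t) in cgs units, as in Eq. (12).
    """
    return (G**2 / c) * rho0_cgs * M_grams**2

# Example: rho0 = 1e-11 g/cm^3 and M = 3.977e39 g give L0 ~ 2.35e43 erg/s,
# close to the normalization quoted in Eq. (13).
L0 = poynting_prefactor_cgs(1e-11, 3.977e39)
print(f"L0 = {L0:.3e} erg/s")
```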
Figure 7: Top: Poynting luminosity \(L_{\rm Poynt}\) extracted on a coordinate sphere of radius \(R=30M\) for each \(q=0.6,\Gamma=4/3\) model. The values of \(L_{\rm Poynt}\) are normalized to \(L_{0}\equiv 2.347\times 10^{43}\rho_{-11}M_{6}^{2}~{}{\rm erg~{}s^{-1}}\). Bottom: close-up of the top panel highlighting the features of \(L_{\rm Poynt}\) across the inspiral for different models.
In the following, we examine the features of the EM Poynting emission for our six \(q=0.6,\Gamma=4/3\) models. In Fig. 7 we display the Poynting luminosity as a function of time extracted on a sphere with radius \(R_{\rm ext}=30\ M\) (consistently with [24] and our previous works). On the left panel, we show the global evolution of \(L_{\rm Poynt}\) across the full duration of our simulations. On the right panel, we display a close-up that allows one to appreciate the features of \(L_{\rm Poynt}\) for different models across the inspiral.
In accordance with our results for equal-mass binaries, we verify that the postmerger peak values of \(L_{\rm Poynt}\) are consistent with the Blandford-Znajek mechanism [66] and exhibit a scaling with the square of the spin parameters of the remnants. During the inspiral, we observe a gradual increase of \(L_{\rm Poynt}\) as the spin of the BHs shifts from being nearly orthogonal to \(L_{\rm orb}\) (b0p12 model, pink curve in Fig. 7) to being aligned with it (b0p2 model, black curve).
## Appendix B Dependence on \(\Gamma\)
We compute the total rest-mass accretion rate for our three b0p6 models that are evolved with different values of the adiabatic index \(\Gamma\). By changing \(\Gamma\), we can examine accretion flows under a full range of conditions. We consider the values \(\Gamma=4/3,5/3\), indicative of a hot, relativistic gas and a nonrelativistic gas, respectively, and the value \(\Gamma=13/9\) as an in-between model. In Fig. 8, we plot the total rest-mass accretion rate for the three models: we observe that \(\dot{M}\) steadily decreases for higher values of \(\Gamma\). This is consistent with the fact that, for fixed values of the rest-mass density, fluids with larger values of \(\Gamma\) have a larger pressure that can more effectively balance the gravitational force. Furthermore, we find that a larger adiabatic index yields larger fluctuations in the accretion rate.
|
2309.06219 | Human Action Co-occurrence in Lifestyle Vlogs using Graph Link
Prediction | We introduce the task of automatic human action co-occurrence identification,
i.e., determine whether two human actions can co-occur in the same interval of
time. We create and make publicly available the ACE (Action Co-occurrencE)
dataset, consisting of a large graph of ~12k co-occurring pairs of visual
actions and their corresponding video clips. We describe graph link prediction
models that leverage visual and textual information to automatically infer if
two actions are co-occurring. We show that graphs are particularly well suited
to capture relations between human actions, and the learned graph
representations are effective for our task and capture novel and relevant
information across different data domains. The ACE dataset and the code
introduced in this paper are publicly available at
https://github.com/MichiganNLP/vlog_action_co-occurrence. | Oana Ignat, Santiago Castro, Weiji Li, Rada Mihalcea | 2023-09-12T13:38:44Z | http://arxiv.org/abs/2309.06219v3 | # Human Action Co-occurrence in Lifestyle Vlogs
###### Abstract
We introduce the task of automatic human action co-occurrence identification, i.e., determine whether two human actions can co-occur in the same interval of time. We create and make publicly available the ACE (Action Co-occurrencE) dataset, consisting of a large graph of \(\sim\)12k co-occurring pairs of visual actions and their corresponding video clips. We describe graph link prediction models that leverage visual and textual information to automatically infer if two actions are co-occurring. We show that graphs are particularly well suited to capture relations between human actions, and the learned graph representations are effective for our task and capture novel and relevant information across different data domains. The ACE dataset and the code introduced in this paper are publicly available at [https://github.com/MichiganNLP/vlog_action_co-occurrence](https://github.com/MichiganNLP/vlog_action_co-occurrence).
## 1 Introduction
Action understanding is a long-standing goal in the development of intelligent systems that can meaningfully interact with humans, with recent progress made in several fields including natural language processing (Fast et al., 2016; Wilson and Mihalcea, 2017, 2019), computer vision (Carreira and Zisserman, 2017; Shou et al., 2017; Tran et al., 2018; Chao et al., 2018; Girdhar et al., 2019; Feichtenhofer et al., 2019), data mining (Kato et al., 2018; Xu et al., 2019), and others. Many of the action understanding systems developed to date however rely mostly on pattern memorization and do not effectively understand the action, which makes them fragile and unable to adapt to new settings (Sigurdsson et al., 2017; Kong and Fu, 2018). As a step toward enabling systems to gain more in-depth knowledge about human actions, we propose a new action understanding task: learning which actions are likely to occur in the same time interval, i.e., human action co-occurrence in everyday life.
Most human actions are interconnected, as an action that ends is usually followed by the start of a related action and not a random one (e.g., after "waking up", one would "wash face" or "make breakfast" and not "sell books" or "go to bed"). We model this information through co-occurrence relations: in general, we expect that the actions "wake up", "wash face" and "make breakfast" co-occur in a short interval of time, while "wake up", "clean house" or "go to bed" do not.
The interconnection of human actions is very well depicted in lifestyle vlogs, where vloggers visually record their everyday routine consisting of the activities they perform during a regular day (Fouhey et al., 2018; Ignat et al., 2019, 2021). We collect a dataset of lifestyle vlogs from YouTube that reflect daily scenarios and are currently very challenging for systems to solve.
A natural way to model the connections between human actions is through a graph representation, where actions are represented as nodes, and their co-occurrences are represented as edges (Figure 1). An important advantage of this representation is that it reflects the _transitive property of co-occurrence_: if an action A ("wake up") co-occurs with an action B ("make breakfast"), which in turn co-occurs with action C ("wash face"), action A,
Figure 1: Human action co-occurrence in lifestyle vlogs: _two actions co-occur if they occur in the same interval of time (10 seconds) in a video_. The actions are represented as nodes in a graph, the co-occurrence relation between two actions is represented through a link between the actions, and the action co-occurrence identification task as a link prediction task.
and action C are more likely to co-occur with one another than, e.g., another action Z ("clean house") that does not co-occur with the actions A or B.
Another important advantage, which we leverage in this work, is that within this graph representation, the action co-occurrence identification task can be formulated as a link prediction problem. We apply simple but powerful topology heuristics and learning models that use the graph representation to capture novel and useful information about human actions, and we show that this formulation leads to significant improvements in action co-occurrence identification as compared to other models.
Our paper makes four main contributions. First, we formalize the new task of human action co-occurrence identification in online videos. Second, we introduce a new dataset, ACE, consisting of a large graph of co-occurring actions in online vlogs. Third, we propose several models to solve the task of human action co-occurrence, by using textual, visual, multi-modal, and graph-based action representations. Finally, we also show that our graph representations capture novel and relevant information, across different data domains, which leads to rich avenues for future work for improving action co-occurrence and ultimately making progress toward the broader goal of action understanding.
## 2 Related Work
There are three areas of research related to our work: human action co-occurrence, graph link prediction, and webly-supervised learning.
Human Action Co-occurrence.Recent work shows that action co-occurrence priors Kim et al. (2020, 2021) increase the performance of human-object interaction models and lead to more effective training, especially in long-tail classes. Unlike our work, they assume that the action co-occurrence information is provided and they do not attempt to learn it. To the best of our knowledge, we are the first to propose the task of learning human action co-occurrence in videos.
Human action co-occurrence identification is also related to learning the temporal order of actions in videos, which is used to construct the co-occurring action pairs. Misra et al. (2016) propose the task of temporal order verification, i.e., to determine whether a sequence of frames from a video is in the correct temporal order. Using this simple task and no semantic labels, they learn visual representations. In our work, we learn action representations using the information extracted from the action co-occurrence graph, which is a more general relation reflecting a shared context among the actions.
Link Prediction.Link prediction is a key problem for graph-structured data and is relevant for our graph formulation of action co-occurrence. The objective of link prediction is to predict whether two nodes in a graph are likely to be linked Liben-Nowell and Kleinberg (2007).
Link prediction approaches can be categorized into three main categories Kumar et al. (2020): similarity-based/heuristic Newman (2001); Jaccard (1901); Salton and McGill (1983); Adamic and Adar (2003); Ravasz et al. (2002); Zhou et al. (2009); Liben-Nowell and Kleinberg (2007); probabilistic-based Kashima and Abe (2006); and dimensionality reduction-based (e.g., embedding-based or other learning approaches; Grover and Leskovec (2016); Kipf and Welling (2017)).
For our task, we apply the similarity-based, embedding-based, and learning-based models. Similarity-based methods are the simplest and measure similarity between every pair of nodes using topology properties of the graph (e.g., common neighbors). The embedding-based link prediction models map the embedding of nodes to a lower dimension such that similar nodes have similar embeddings. The learning-based link prediction models can be cast using supervised classification models where a point corresponds to a node pair in the graph, and the point label represents the presence or absence of an edge/link between the pair.
Webly-Supervised Learning.In our work, we identify human action co-occurrence in the context of rich, virtually unlimited, constantly evolving online videos from YouTube, using the video transcripts as a web supervision signal. Large-scale video datasets on instructional videos Miech et al. (2019) and lifestyle vlogs Fouhey et al. (2018); Ignat et al. (2019, 2021, 2022) are other examples of web supervision. The latter is similar to our work as they analyze online vlogs, but unlike our work, their focus is on action detection or the reasons behind actions and not on action co-occurrence.
## 3 Dataset
To develop and test models for determining if two actions co-occur, we compile a novel dataset, which we refer to as ACE (Action Co-occurrencE).
### Data Collection
We start by compiling a set of lifestyle videos from YouTube, consisting of people performing their daily routine activities, such as cleaning, cooking,
studying, relaxing, and others. We build a data-gathering pipeline to automatically extract and filter videos and their transcripts.
We select 20 YouTube channels and download all the videos and their transcripts. The channels are selected to have good-quality videos with automatically generated transcripts containing detailed verbal descriptions of the actions depicted.
An analysis of the videos indicates that both the textual and visual information are rich sources for describing not only the actions but also in what order the actions are performed, making them a great source of data for developing action co-occurrence models. The routine nature of the videos means that the vloggers record and describe their actions in the order they normally occur in a day: e.g., "wake up", "make bed", "wash face", "make breakfast", "drive to work", and so on. They can also choose to focus on certain activities (e.g., often cooking) and enumerate more fine-grained actions related to those activities (e.g., "cut apple", "add peanut butter"). Therefore, our dataset contains both general and fine-grained actions. We present data analyses in Section 5.4.
Action extraction. Having a comprehensive list of actions is necessary for creating graphs that contain most of the actions in the videos. At the same time, not all the actions from the transcripts are useful, as many of them are not visible in the video or are hard to detect by computer vision systems (e.g., "feel", "talk", "thank", "hope", "need", "see").
Therefore, we first make sure that the actions we collect are mostly visible in the videos. Our strategy is to extract all the verbs from the transcripts and then filter them using a list of "visual verbs" collected from imSitu (Yatskar et al., 2016), COCO-a (Ronchi and Perona, 2015) and Levin (Levin, 1993).1 Verbs from imSitu and COCO-a are considered visual as the dataset collection pipelines include an explicit annotation step to determine if verbs are visual. We manually filter and check the verbs collected from Levin.
Footnote 1: Levin’s taxonomy provides a classification of 3,024 verbs (4,186 senses) into 48 broad and 192 fine-grained classes. We leave analyzing the Levin verb taxonomy impact on human action model performance as a future work direction.
Next, we extract all actions from the video transcripts using the dependency parser from spaCy (Honnibal et al., 2020) by extracting all the verbs and their corresponding verb phrase direct objects, prepositions, and objects of prepositions. We find that extracting only verbs and their corresponding direct objects does not always return comprehensive actions (e.g., "add teaspoon" versus "add a teaspoon of salt"). We also find that many verbs do not have informative direct objects (e.g., "write it", "clean them"), which makes the actions harder to differentiate and visually recognize. To address this, we apply co-reference resolution on the video transcripts using spaCy (Honnibal et al., 2020) NeuralCoref2 model, and re-extract the actions from the processed transcripts.
Footnote 2: [https://spacy.io/universe/project/neuralcoref](https://spacy.io/universe/project/neuralcoref)
Finally, we obtain our visible actions by filtering all the transcript-extracted actions that contain visual verbs.
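The following is a minimal illustration of this extraction step (not our exact implementation); the dependency labels kept (dobj, prep, pobj) and the en_core_web_sm model are chosen for illustration only.

```python
# requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_actions(transcript: str) -> list:
    """Extract verb phrases: verb + direct object + preposition + its object."""
    actions = []
    for token in nlp(transcript):
        if token.pos_ != "VERB":
            continue
        phrase = [token.lemma_]
        for child in token.children:
            if child.dep_ == "dobj":                     # direct object
                phrase.append(child.text)
            elif child.dep_ == "prep":                   # preposition ...
                phrase.append(child.text)
                phrase.extend(gc.text for gc in child.children
                              if gc.dep_ == "pobj")      # ... and its object
        if len(phrase) > 1:
            actions.append(" ".join(phrase))
    return actions

print(extract_actions("I add a teaspoon of salt to the bowl."))
# e.g. ['add teaspoon to bowl'] -- the exact output depends on the parser model
```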
Video extraction.As transcripts are temporally aligned with videos, we can obtain meaningful video clips related to the narration. We extract clips corresponding to the visual actions based on transcript timestamps. From 2,571 videos, we obtain 19,685 unique video clips and 25,057 (action, video-clip) pairs. Note that an action can be present in multiple video clips, and conversely, a video clip can contain multiple actions. To control the number of clips per action, we randomly sample up to 10 random video clips for each action and finally obtain 12,994 (action, video-clip) sampled pairs.
Quality Assurance. As described above, we perform multiple steps to ensure the actions appear in the videos. First, we manually select 20 YouTube channels from vloggers with high-quality filming styles, who usually provide detailed visual and textual descriptions of their actions. Second, we automatically extract actions that contain visual verbs. We manually check around 100 extracted actions to verify that they are parsed well and that they correctly match their corresponding video and transcript context. Third, we automatically filter out videos that do not contain any transcripts or that show no significant motion. We filter out the motionless videos by following the procedure from Ignat et al. (2019): we sample one out of every one hundred frames of the videos and compute the 2D correlation coefficient between these sampled frames. We filter out all the videos with a median correlation greater than a threshold (0.8). We manually check around 100 (action, video) pairs to see if they correctly match and find around 18 complete misalignments. Finally, to mitigate misalignments and obtain diverse filming perspectives, we randomly sample up to 10 video clips for each action, which increases the chances that the action is present in at least one video. Examples of actions and their corresponding video frames are found in sample-frames.
### Data Pre-processing
After collecting the videos, transcripts, and visual actions, the following data pre-processing steps are applied.
Action Co-occurrence Selection. From all the extracted visual actions, we automatically select all the action pairs that are co-occurring. We define two actions as co-occurring if they are less than 10 seconds away from each other. The 10-second window is an _intermediate value threshold_ we set after experimenting with other values. This threshold controls the time scale we choose to focus on when collecting co-occurring actions: e.g., mostly short actions (e.g., "open fridge", "get milk") are captured in a small interval of time (1-5 sec), while longer intervals allow for longer and more diverse actions to co-occur (e.g., "prepare meal"). We choose an intermediate value that allows for both shorter and longer actions to co-occur3. Our intuition is that modeling the relations between both shorter and longer actions results in learning more comprehensive information about human actions. We also consider the in-depth analysis of this threshold and its downstream effects as an interesting future work direction, and our framework allows for effortless threshold tuning.
Footnote 3: The captured actions also depend on the filming style (e.g., vloggers could increase the filming time of normally short actions).
For computing the distance in time between two actions, we use the transcript time stamps. This allows the data to scale with no constraints from an annotation budget. The transcript time stamps do not always match the time the action appears in the video; however, this hardly impacts our task because the actions mentioned in the transcript usually follow the order from the video. Furthermore, we mitigate misalignments by collecting multiple videos per action and through the filtering steps described in the previous section.
Action Clustering. We find that many actions are often very similar in meaning. This leads to many action repetitions: e.g., "use iron", "iron shirt", "iron cloth". To avoid such repetitions, we group similar actions by clustering all actions. We represent each action using the pre-trained Sentence-BERT model Reimers and Gurevych (2019) and apply Agglomerative Clustering Murtagh and Legendre (2014). We filter out the clusters with fewer than two actions, as they are likely to be outliers that were not well extracted. The actions in each cluster are then renamed to the most common action in the cluster: e.g., "iron shirt" and "iron cloth" are renamed to "use iron".
We observe that the clustering model introduces some level of noise, as it does not perfectly cluster all actions. We tried to mitigate this by testing different Sentence-BERT pre-trained models for sentence similarity4 and by fine-tuning our clustering model hyper-parameters5 based on automatic evaluation metrics for measuring the quality of clusters6.
Footnote 4: sbert.net/docs/pretrained_models.html
Footnote 5: linkage distance threshold (1.5), linkage criterion (ward)
Footnote 6: Silhouette Coefficient (Rousseeuw, 1987), Calinski-Harabasz Score (Calinski and Harabasz, 1974), and Davies-Bouldin Index (Davies and Bouldin, 1979)
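A minimal sketch of this clustering step is shown below; the Sentence-BERT checkpoint name is an illustrative choice, while the linkage criterion and distance threshold follow the footnotes above. In our pipeline the canonical name is the most frequent action in the corpus; the toy example simply takes the most frequent member of each cluster.

```python
from collections import Counter
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

actions = ["iron shirt", "iron cloth", "use iron", "chop potato", "cut potato"]

model = SentenceTransformer("all-MiniLM-L6-v2")          # illustrative checkpoint
embeddings = model.encode(actions)

clustering = AgglomerativeClustering(
    n_clusters=None, distance_threshold=1.5, linkage="ward"
).fit(embeddings)

# Rename every action in a cluster to the most frequent action of that cluster.
clusters = {}
for action, label in zip(actions, clustering.labels_):
    clusters.setdefault(label, []).append(action)
canonical = {label: Counter(members).most_common(1)[0][0]
             for label, members in clusters.items()}
renamed = [canonical[label] for label in clustering.labels_]
print(renamed)
```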
Action Graph Filtering. After we rename the actions based on clustering, we create a graph, where the graph nodes represent the actions and the graph edges represent the relations between two actions. Specifically, we create an undirected graph for each video, where the graph nodes are represented by the actions in the video and the co-occurring actions are connected by an edge. Each edge has a weight, which is equal to the number of times the corresponding actions co-occur in the video.
We combine all the video graphs to obtain a single large graph that contains all the co-occurring actions in our data. We filter out the action pairs that co-occur only once in the graph (their edge weight is equal to one), as their co-occurrence relation is not strong and might be random. We show the statistics before and after all the action filtering steps in Table 1. More information (e.g., action frequency distributions, action pairs) can be found in Appendix A.
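The graph construction and filtering can be sketched as follows with networkx; `video_actions` is a hypothetical input holding, for each video, the time-ordered list of (timestamp, action) pairs, and weights are accumulated directly across videos, which is equivalent to building one graph per video and summing edge weights when merging.

```python
import networkx as nx

def build_cooccurrence_graph(video_actions, window=10.0, min_weight=2):
    graph = nx.Graph()
    for actions in video_actions:                    # one list per video
        for i, (t_i, a_i) in enumerate(actions):
            for t_j, a_j in actions[i + 1:]:
                if t_j - t_i > window:               # actions are sorted by time
                    break
                if a_i == a_j:
                    continue
                # increment the weight each time the pair co-occurs
                w = graph.get_edge_data(a_i, a_j, default={"weight": 0})["weight"]
                graph.add_edge(a_i, a_j, weight=w + 1)
    # drop pairs that co-occur only once (weight < min_weight)
    weak = [(u, v) for u, v, d in graph.edges(data=True) if d["weight"] < min_weight]
    graph.remove_edges_from(weak)
    graph.remove_nodes_from(list(nx.isolates(graph)))
    return graph
```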
### ACE vs. current Human Action Datasets
Many publicly available visual action datasets Carreira and Zisserman (2017); Soomro et al. (2012); Kuehne et al. (2011) do not have video transcripts and do not have videos with multiple actions presented in their natural order, therefore we cannot leverage the textual information and the relations between actions, as we can do in our dataset.
The majority of human action datasets with transcripts are restricted to one or a few domains
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \#Verbs & \#Actions & \#Action pairs \\ \hline Initial & 608 & 20,718 & - \\ Co-occurrence & 439 & 18,939 & 80,776 \\ Clustering & 172 & 2,513 & 48,934 \\ Graph & 164 & 2,262 & 11,711 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics for the collected number of unique verbs, actions, and co-occurring action pairs at each stage of data pre-processing.
(e.g., cooking Zhou et al. (2018) or instructional videos Miech et al. (2019); Tang et al. (2019)), while ours contains diverse, mostly indoor everyday actions from five main domains: _cleaning_, _cooking_, _DIY_, _entertainment_, _personal care_. Note that due to the diversity of domains in ACE, our model learns not only the co-occurrence between in-domain actions (e.g., "cut potato" & "add onion" - _cooking_) but also cross-domain (e.g., "wash face" & "make breakfast" - _personal care_ & _cooking_).
## 4 Action Co-occurrence in Vlogs
We formulate our action co-occurrence identification task as a link prediction task. Link prediction aims to predict the existence of a link between two nodes in a graph. In our setup, nodes are represented by actions, and every two co-occurring actions are connected by a weighted edge, where the weight represents the number of times the two actions co-occur. Our goal is to determine if there exists an edge between two given actions.7
Footnote 7: At this point, we do not aim to also identify the strength of the link.
### Data Representation
Textual Representations. To represent the textual data - actions and their transcript context - we use sentence embeddings computed with the pre-trained Sentence-BERT model Reimers and Gurevych (2019), as well as textual embeddings obtained from CLIP Radford et al. (2021). When computing CLIP textual action embeddings, we concatenate the action with given prompts (e.g., "This is a photo of a person"), as described in the original paper Radford et al. (2021).
Video Representations. We use the CLIP model Radford et al. (2021) to represent all the actions and their corresponding video clips. One action can have multiple video clips: an action has at most 10 corresponding videos. From each video clip, we extract four equally spaced frames and pre-process them as done in Radford et al. (2021). We use the pre-trained Vision Transformer model ViT-B/16 Dosovitskiy et al. (2021) to encode the video frames and the textual information. We apply the model to each of the four frames and average their representations Luo et al. (2021).
Graph Representations. We also use the training graph topology information (node neighbors and edge weights) to compute action embeddings as the weighted average of all of their neighbor node embeddings, where the weights are the edge weights (i.e., how many times the two nodes co-occur). The neighbor node embeddings are represented using either textual embeddings (Sentence-BERT; Reimers and Gurevych, 2019) or visual embeddings (CLIP; Radford et al., 2021). All the graph-based models described in the next section use graph topology information from the validation graph (see Section 5.1).
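A minimal sketch of this graph-based representation is given below; `graph` is the weighted training graph and `base_emb` a mapping from actions to their textual or visual embeddings (both assumed to exist).

```python
import numpy as np
import networkx as nx

def graph_action_embedding(graph: nx.Graph, base_emb: dict, action: str) -> np.ndarray:
    """Edge-weight-weighted average of the neighbours' embeddings."""
    neighbours = list(graph.neighbors(action))
    if not neighbours:
        return base_emb[action]
    weights = np.array([graph[action][n]["weight"] for n in neighbours], dtype=float)
    vectors = np.stack([base_emb[n] for n in neighbours])
    # frequently co-occurring neighbours contribute more to the representation
    return (weights[:, None] * vectors).sum(axis=0) / weights.sum()
```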
We use the representations described above as input to different action co-occurrence models.
### Action Co-occurrence Models
We explore different models with different input representations. We group the models as described in the related work link prediction section: random baseline, heuristic-based models (graph topology models), embedding-based models (cosine similarity and graph neural networks), and learning-based models (SVM models). As described in Section 4.1, we run experiments with various types of data representations: Textual: Action and Action Transcript; Visual: Action, Video, and Multimodal (Action&Videos; the average between action and video visual embeddings); Graph: Action and Multi-modal (Action&Videos) using the graph topology.
#### 4.2.1 Random Baseline
The action pairs to be predicted as co-occurring or not are split into equal amounts, therefore a random baseline would have an accuracy score of 50%.
#### 4.2.2 Heuristic-based Graph Topology Models
We apply several popular node similarity methods that only use graph topology information in the prediction process: Common Neighbours Newman (2001), Salton Index Salton and McGill (1983), Adamic-Adar Index Adamic and Adar (2003), Hub Depressed Index, Hub Promoted Index Ravasz et al. (2002), Resource Allocation Zhou et al. (2009), and Shortest Path Liben-Nowell and Kleinberg (2007). Note that the heuristic-based methods do not use any of the data representations described in Section 4.1. We describe each of these methods below:
Notation.Let \(s_{xy}\) be the similarity between nodes \(x\) and \(y\), \(\Gamma(x)\) be the set of nodes connected to node \(x\) and \(k_{x}\) be the degree of node \(x\).
Common Neighbours.Two nodes are more likely to be connected if they have more common neighbors.
\[s_{xy}=|\Gamma(x)\cap\Gamma(y)| \tag{1}\]
Salton Index.Measures the cosine of the angle between columns of the adjacency matrix, corresponding to given nodes.
\[s_{xy}=\frac{|\Gamma(x)\cap\Gamma(y)|}{\sqrt{k_{x}k_{y}}} \tag{2}\]
Hub Promoted Index.This measure assigns higher scores to edges adjacent to hubs (high-degree nodes), as the denominator depends on the minimum of the degrees of the nodes of interest.
\[s_{xy}=\frac{|\Gamma(x)\cap\Gamma(y)|}{\min\{k_{x},k_{y}\}} \tag{3}\]
Hub Depressed Index.Unlike the Hub Promoted Index, this measure assigns lower scores to edges adjacent to hubs. It penalizes large neighborhoods.
\[s_{xy}=\frac{|\Gamma(x)\cap\Gamma(y)|}{\max\{k_{x},k_{y}\}} \tag{4}\]
Adamic-Adar Index.This measure counts common neighbors by assigning weights to nodes inversely proportional to their degrees. That means that a common neighbor, which is unique for a few nodes only, is more important than a hub.
\[s_{xy}=\sum_{z\in\Gamma(x)\cap\Gamma(y)}\frac{1}{\log k_{z}} \tag{5}\]
Resource Allocation.Measures how much resource is transmitted between two nodes.
\[s_{xy}=\sum_{z\in\Gamma(x)\cap\Gamma(y)}\frac{1}{k_{z}} \tag{6}\]
Shortest Path.The similarity score is inversely proportional to the length of the shortest path between two nodes.
\[s_{xy}=\frac{1}{\min\{l:\,\mathrm{path}_{xy}^{\langle l\rangle}\ \mathrm{exists}\}} \tag{7}\]
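For illustration, a few of these heuristics can be computed directly on the graph with networkx, as in the sketch below (not necessarily our exact implementation).

```python
import math
import networkx as nx

def heuristic_scores(graph: nx.Graph, x, y) -> dict:
    common = set(graph[x]) & set(graph[y])
    kx, ky = graph.degree(x), graph.degree(y)
    scores = {
        "common_neighbours": len(common),                                    # Eq. (1)
        "salton": len(common) / math.sqrt(kx * ky) if kx and ky else 0.0,    # Eq. (2)
        "hub_promoted": len(common) / min(kx, ky) if min(kx, ky) else 0.0,   # Eq. (3)
        # degree-1 common neighbours are skipped to avoid log(1) = 0
        "adamic_adar": sum(1.0 / math.log(graph.degree(z)) for z in common
                           if graph.degree(z) > 1),                          # Eq. (5)
        "resource_allocation": sum(1.0 / graph.degree(z) for z in common),   # Eq. (6)
    }
    try:
        scores["shortest_path"] = 1.0 / nx.shortest_path_length(graph, x, y)  # Eq. (7)
    except nx.NetworkXNoPath:
        scores["shortest_path"] = 0.0
    return scores
```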
Weighted Graph Models. Our graph is weighted; therefore, we also apply weighted graph models. We modify some of the above models (Common Neighbours, Adamic-Adar Index, and Resource Allocation) to use the link weight information, as proposed in Zhu and Xia (2016). We find that using link weights achieves results similar to those obtained without them.
#### 4.2.3 Embedding-based Models
Cosine Similarity. To determine if two given actions co-occur, we compute the cosine similarity between their embeddings. If the similarity score is greater than a threshold fine-tuned on the validation data, we predict the actions as co-occurring.
Graph Neural Networks. We also use Graph Neural Network (GNN) models. We choose three diverse and popular models Kumar et al. (2020): attri2vec Zhang et al. (2019), GraphSAGE Hamilton et al. (2017), and GCN Kipf and Welling (2017).
Graph Neural Network models can also be classified as learning-based models: they learn a new heuristic from a given network, as opposed to graph topology models, which use predefined heuristics, i.e., score functions. We create our graph based on a known heuristic: co-occurring actions are closely connected in the graph. Therefore, we hypothesize that heuristic models will perform better. Indeed, we observe that for our graph, the GNN methods do not perform better than the heuristic models: the best-performing GNN is GraphSAGE with 78.1% accuracy, while the best-performing topology model reaches 82.9% accuracy (see Table 2). Therefore, we conclude that our task does not benefit from these neural models.
#### 4.2.4 Learning-based Model
We run a support vector machine (SVM) Cortes and Vapnik (1995) classifier on each action pair to be classified as co-occurring or not. We concatenate all the input representations/embeddings and all the heuristic scores, and we standardize the features by removing the mean and scaling to unit variance. We fine-tune the model hyper-parameters (kernel type, C, gamma) on the validation data using a grid search.
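A minimal sketch of this learning-based model is shown below; the feature-extraction helpers `embed` and `heuristics` and the specific grid values are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def pair_features(a1, a2, embed, heuristics):
    """Concatenate the two action embeddings with their heuristic scores."""
    return np.concatenate([embed(a1), embed(a2), heuristics(a1, a2)])

def train_svm(X_train, y_train):
    pipeline = make_pipeline(StandardScaler(), SVC())   # standardize, then SVM
    param_grid = {
        "svc__kernel": ["rbf", "linear"],
        "svc__C": [0.1, 1, 10],
        "svc__gamma": ["scale", 0.01, 0.001],
    }
    search = GridSearchCV(pipeline, param_grid, cv=3)   # grid search on validation folds
    search.fit(X_train, y_train)
    return search.best_estimator_
```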
## 5 Evaluation
We conduct extensive experiments to evaluate the action pair co-occurrence identification task. The task can be represented as a graph link prediction task; therefore, we adopt the link prediction evaluation process.
### Evaluation Data Split
We split the original graph into train, validation, and test graphs. In link prediction, the goal is to predict which links will appear in the future of an evolving graph. Therefore, while keeping the same number of nodes as the original graph, the number of edges is changed as some of the edges are removed during each split and used as the positive samples for training, fine-tuning, and testing the link prediction models. The edges are split into the train, validation, and test sets using a transductive split, which is considered the default evaluation splitting technique for link prediction models Xu et al. (2018). More specifically, we randomly sample 10% of all existing edges from the original graph as positive testing data and the same number of nonexistent edges (unconnected node pairs) as negative testing data. The reduced graph becomes the test graph and together with the set of sampled edges is used for testing the models. We
repeat the same procedure to obtain the validation and the training data for the models. The validation graph is obtained by reducing the test graph, and the training graph is obtained by reducing the validation graph.
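The transductive split can be sketched as follows; the rejection-sampling loop for negative pairs and the random seed are illustrative choices.

```python
import random
import networkx as nx

def split_edges(graph: nx.Graph, frac=0.10, seed=0):
    """Remove frac of the edges as positives and sample equally many non-edges."""
    rng = random.Random(seed)
    pos = rng.sample(list(graph.edges()), int(frac * graph.number_of_edges()))
    reduced = graph.copy()
    reduced.remove_edges_from(pos)
    nodes, neg = list(graph.nodes()), set()
    while len(neg) < len(pos):                 # same number of unconnected node pairs
        u, v = rng.sample(nodes, 2)
        if not graph.has_edge(u, v):
            neg.add((u, v))
    pairs = [(u, v, 1) for u, v in pos] + [(u, v, 0) for u, v in neg]
    return reduced, pairs

# test split from the original graph, then validation from the test graph, etc.
# test_graph, test_pairs = split_edges(original_graph)
# val_graph, val_pairs = split_edges(test_graph)
# train_graph, train_pairs = split_edges(val_graph)
```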
### Results and Ablations
Table 2 contains the results, measured by accuracy, for each model type. The learning-based model, SVM, using all input representations (textual, visual, graph) and all graph heuristic scores, obtains the highest accuracy score. Therefore, using both graph topology information and textual embeddings leads to the best performance for our task. The results for each of the heuristic-based graph topology models are also shown in Table 2. Simple heuristics (common neighbors or shortest path) are enough to achieve good performance.
Modality Ablation. The ablation results, split by input representation, are shown in Table 3. We analyze how different input representations influence the model's performance: textual (Sentence-BERT and CLIP textual) vs. visual (CLIP visual) vs. multi-modal (CLIP textual and visual) vs. graph (Sentence-BERT and CLIP textual and visual). The input representations are described in Section 4.1. The textual embeddings are a strong signal for our task, even when not using any graph information: the SVM with only Action Sentence-BERT embeddings reaches 76.3% accuracy. Using graph representations or graph heuristic information leads to significantly better performance (80.9% and 91.1% accuracy, respectively). The visual and multi-modal embeddings are also valuable but perform worse than the textual embeddings. We hypothesize that the CLIP embeddings might be affected by the time misalignment between the transcript and the video. However, the visual modality offers important information about human actions and can be used in future work with more robust visual models.
### Downstream Task: Similar Action Retrieval
Similar to how word embeddings have been used for word similarity and for similar word and document retrieval [11, 10], our graph dataset enables _action similarity_ and _similar action retrieval_ leveraging action-specific properties in the multi-modal space.
To show the usefulness of our graph-based action embeddings, we test them on the _similar action retrieval_ downstream task. Specifically, we compare two action representations: textual (Action Sentence-BERT embeddings) and graph-based (the graph-weighted average of the neighbor nodes' Sentence-BERT embeddings). In Figure 2 we show the top three nearest neighbor actions, from each of the representations, for three random action queries from our dataset. We observe that _each representation captures different types of information_. The actions obtained with textual representations are more syntactically similar to the action query, sharing either the verb or the object. This can be undesirable, as many retrieved actions are too repetitive and not always relevant to the action query: e.g., "build desk": "build bookshelf", "build house". In contrast, the actions obtained with graph representations are more _diverse_ and capture _location information_, i.e., actions expected to be temporally close in a video: e.g., "build desk": "use knife", "add storage", "put piece of wood".
Novelty vs. Relevance in Action Retrieval.A major focus in the field of Information Retrieval has been the development of retrieval models that maximize both the relevance and the novelty among higher-ranked documents [10]. For the task of action retrieval, we can approximate _relevance_ through the _location_ relevance of an action, and _novelty_ through the _diversity_ of the actions retrieved.
Diversity in Action Representations.Similar to word or document retrieval, diversity in action retrieval is a reflection of novel results. To measure the _diversity_ captured by the action representations,
\begin{table}
\begin{tabular}{c|c} \hline \hline Model & Accuracy \\ \hline \hline Baseline & \\ \hline \hline Random & 50.0 \\ \hline \hline Heuristic-based & \\ \hline \hline Common Neighbours & 82.9 \\ Salton Index & 71.2 \\ Hub Promoted Index & 78.3 \\ Hub Depressed Index & 61.5 \\ Adamic-Adar Index & 82.9 \\ Resource Allocation & 67.3 \\ Shortest Path & 82.9 \\ \hline \hline Embedding-based & \\ \hline \hline Cosine similarity & 82.8 \\ \hline attri2vec & 65.7 \\ GCN & 77.2 \\ GraphSAGE & 78.1 \\ \hline \hline Learning-based & \\ \hline \hline SVM & **91.1** \\ \hline \end{tabular}
\end{table}
Table 2: Accuracy results for all the models.
we compute the _overlap score_ as the number of overlapping words between the action query and the retrieved top \(k\) action nearest neighbors, divided by the total number of words in the retrieved actions. For example, in Fig. 2, for the action query "chop potato" with \(k=3\), the action kNNs obtained with textual representations (in blue) have 3 overlapping words ("chop", "potato", "potato") out of a total of 8 words, resulting in an overlap score of 3/8. We average the overlap scores across all the action queries in our dataset (2,262 unique actions), for \(k\in\{3,5,10\}\). Table 4 shows that the actions retrieved using our graph representations have around three times fewer overlapping words with the action query, i.e., they are more diverse than the actions retrieved using the textual representation.
Location in Action Representations.To quantify how much _location_ information an action representation holds, we use three annotated action localization datasets: COIN Tang et al. (2019), which includes instructional videos; EPIC-KITCHENS Damen et al. (2018); and Breakfast Kuehne et al. (2014, 2016), consisting of a collection of cooking videos.
We use the training data to create an action co-occurrence graph and learn action graph representations, and the testing data to test our action representations. For each action query in the test set, we obtain the actions localized before and after as the gold standard action neighbors. We also calculate the predicted action kNNs (\(k=3\)) of the action query using textual and graph-based representations. To measure the location information, we compute the recall score between the gold standard action temporal neighbors and the predicted action kNNs. Table 4 shows that graph-based action representations hold more location information than textual representations.
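The recall computation we have in mind can be sketched as follows (gold temporal neighbors and predicted kNNs are assumed to be given as sets of action strings; the example values are invented):

```python
def location_recall(gold_neighbors, predicted_knn):
    """Fraction of gold temporal neighbors recovered among the predicted nearest neighbors."""
    if not gold_neighbors:
        return 0.0
    return len(set(gold_neighbors) & set(predicted_knn)) / len(set(gold_neighbors))

gold = {"crack egg", "pour milk", "whisk batter"}     # actions localized before/after the query
pred = {"crack egg", "pour milk", "add sugar"}        # predicted k=3 nearest neighbors
print(location_recall(gold, pred))                    # 2/3
```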
Action representations that capture location information would likely benefit models in many computer vision applications, such as action localization, segmentation, or detection. This leads to future research directions on how best to use graph-based action representations and action co-occurrence information.
### Data Analysis
We want to determine which actions co-occur the most in our dataset, as it may be valuable knowledge for action recognition and action prediction systems. Systems enriched with this knowledge can make more informed decisions when predicting or recognizing actions. Specifically, action recognition systems can discard actions that are unlikely to happen given a previous action and assign a higher probability to actions known to co-occur with the previous action (e.g., given a previously recognized action "wake up", a likely next action could be "wash face", and not "clean house").
| | Textual | Graph |
| --- | --- | --- |
| _Diversity / Overlap Score_ \(\downarrow\) | | |
| \(k=3\) | 0.35 | **0.12** |
| \(k=5\) | 0.31 | **0.11** |
| \(k=10\) | 0.26 | **0.10** |
| _Location / Recall Score_ \(\uparrow\) | | |
| Breakfast | 0.16 | **0.22** |
| COIN | 0.23 | **0.60** |
| EPIC-KITCHENS | 0.14 | **0.26** |

Table 4: Scores measuring the difference of information, diversity, and location, between the action kNNs using different types of embeddings: textual and graph-based.
| Model | Textual: Action | Textual: Transcript | Visual: Action | Visual: Video | Visual: Action&Video | Graph: Action | Graph: Action&Video |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Cosine Similarity | 60.6 | 65.2 | 62.7 | 57.0 | 65.4 | **82.8** | 50.6 |
| SVM | 76.3 | 71.1 | 73.1 | 76.2 | 76.1 | 80.9 | 74.6 |

Table 3: Ablations and accuracy results on test data. We compute the ablations for each input representation: textual, visual, and graph, for an embedding-based model (cosine similarity) and a learning-based model (SVM); the heuristic-based models do not depend on input representation type, therefore we do not ablate them.
Figure 2: Top three action neighbors, obtained from textual (blue) and graph-based (purple) representations, for three random action queries from our dataset: “rub stain”, “build desk”, “chop potato”.
Given two actions, we compute their co-occurrence score using the Positive Pointwise Mutual Information (PPMI) [10]. PMI is biased towards infrequent words, therefore we do not compute PMI for infrequent actions (that appear less than 10 times).
\[PPMI_{a_{i},a_{j}}=\max(\log\frac{P_{a_{i},a_{j}}}{P_{a_{i}}P_{a_{j}}},0) \tag{8}\]
\[P_{a_{i},a_{j}}=\frac{\#(a_{i},a_{j})}{\#action\ pairs},P_{a_{k}}=\frac{\#a_{k}}{\# actions} \tag{9}\]
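A small sketch of Eqs. (8)–(9) over toy counts (the frequency cutoff of 10 mentioned above is exposed as a parameter; the counts are invented):

```python
import math
from collections import Counter

def ppmi(pair_counts, action_counts, a_i, a_j, min_count=10):
    """PPMI of an action pair, skipping infrequent actions and unseen pairs."""
    if action_counts[a_i] < min_count or action_counts[a_j] < min_count:
        return 0.0
    if pair_counts[(a_i, a_j)] == 0:
        return 0.0
    p_ij = pair_counts[(a_i, a_j)] / sum(pair_counts.values())
    p_i = action_counts[a_i] / sum(action_counts.values())
    p_j = action_counts[a_j] / sum(action_counts.values())
    return max(math.log(p_ij / (p_i * p_j)), 0.0)

pairs = Counter({("add potato", "add to bowl"): 25, ("add potato", "clean house"): 1})
actions = Counter({"add potato": 40, "add to bowl": 60, "clean house": 15})
print(ppmi(pairs, actions, "add potato", "add to bowl"))
```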
Figure 3 shows the co-occurrence matrix for the top 20 most frequent actions. The most frequent actions are related to cooking and we can see how actions related to adding ingredients are co-occurring among themselves (e.g. "add potato" and "add avocado") or with actions related to adding something to a container (e.g."add potato" and "add to bowl"). Appendix A includes additional information: co-occurrence matrices of top 50 most frequent actions and verbs (Figures 4 and 5), top 20 actions and verb pairs co-occurring the most/least (Appendix A.1), actions and verbs distributions (Figures 6 and 7), top 10 most frequent clusters (Figure 8).
## 6 Conclusion
In this paper, we addressed the task of detecting co-occurring human actions in online videos. We explored the genre of lifestyle vlogs and constructed ACE, a new dataset of \(\sim\)12k pairs of visual actions and their corresponding video clips. We described and evaluated models that leverage textual, visual, multi-modal, and graph information.
We built ACE and action co-occurrence identification models to address the task of capturing human action relations, which in turn can lead to progress towards the broader goal of action understanding. We are the first to address this problem and to use graph representations in this setting. We show that graph representations are useful for our task and capture novel and relevant information about human actions, across diverse domains, which complements the representations learned from the current language and visual models. The ACE dataset and the code introduced in this paper are publicly available at [https://github.com/MichiganNLP/vlog_action_co-occurrence](https://github.com/MichiganNLP/vlog_action_co-occurrence).
## Limitations
Weak supervision from video transcripts.We use the weakly supervised time signal from automatically generated video transcripts and no manual annotations. This allows for no limits in scale at the cost of some noise. To reduce the noise, we use multiple (10) videos to obtain the temporal action information and perform various filtering steps as described in the Quality Assurance subsection. Furthermore, the time information is used only to find the co-occurrence information between actions, not the actual time location of the actions; therefore, it is not necessary to be very precise.
**Directed vs. Undirected graph representations.** A directed graph also captures the order between the actions, which can be used in a future work direction for action prediction applications. However, an undirected graph is sufficient to obtain co-occurrence information, which suits our goal for our paper. We looked into transforming our graph into a directed one, however, we could not do this reliably because the transcripts do not preserve the exact order of the actions. This is due to how vloggers choose to verbally describe their routines: e.g. from "during washing my face, I will wake up" - it is not trivial to automatically extract the correct/natural order of the actions, as in this case, the result would be incorrect (wash face, then wake up). We tried modeling this using time keywords (e.g. "during", "after", "before") but due to the complexity of natural language, we found exceptions and other complex scenarios that could not be modeled automatically.
Figure 3: Co-occurrence matrix for the top 20 most frequent actions in our dataset, ACE. The scores are computed using the PPMI measure: actions with higher scores have a stronger co-occurrence relation and vice-versa. For better visualization, we sort the rows of the matrix to highlight clusters. Best viewed in color.
## Ethics and Broad Impact Statement
Our dataset contains public YouTube vlogs, in which vloggers choose to share episodes of their daily life routine. We use the videos to detect co-occurring actions, without relying on any information about the identity of the person such as gender, age, or location.
The data can be used to better understand people's lives, by looking at their daily routine and in which order they choose to perform their actions. The data contains videos of men and women and sometimes children, but most videos come from women. The routine videos present mostly ideal routines and are not comprehensive about all people's daily lives. Most of the people represented in the videos are middle-class Americans and the language spoken in the videos is English.
In our data release, we only provide the YouTube URLs of the videos, so the creator of the videos can always have the option to remove them. YouTube videos are a frequent source of data in research papers [19, 17, 16], and we followed the typical process used by all this previous work of compiling the data through the official YouTube API and only sharing the URLs of the videos. We have the right to use our dataset in the way we are using it, and we bear responsibility in case of a violation of rights or terms of service.
|
2306.17591 | On the Effects of Quantum Decoherence in a Future Supernova Neutrino
Detection | Quantum decoherence effects in neutrinos, described by the open quantum
systems formalism, serve as a gateway to explore potential new physics,
including quantum gravity. Previous research extensively investigated these
effects across various neutrino sources, imposing stringent constraints on the
spontaneous loss of coherence. In this study, we demonstrate that even within
the Supernovae environment, where neutrinos are released as incoherent states,
quantum decoherence could influence the flavor equipartition of $3\nu$ mixing.
Additionally, we examine the potential energy dependence of quantum decoherence
parameters ($\Gamma = \Gamma_0 (E/E_0)^n$) with different power laws ($n = 0,
2, 5/2$). Our findings indicate that future-generation detectors (DUNE,
Hyper-K, and JUNO) can significantly constrain quantum decoherence effects
under different scenarios. For a Supernova located 10 kpc away from Earth, DUNE
could potentially establish $3\sigma$ bounds of $\Gamma \leq 6.2 \times
10^{-14}$ eV in the normal mass hierarchy (NH) scenario, while Hyper-K could
impose a $2\sigma$ limit of $\Gamma \leq 3.6 \times 10^{-14}$ eV for the
inverted mass hierarchy (IH) scenario with $n=0$ - assuming no energy exchange
between the neutrino subsystem and non-standard environment ($[H,V_p] = 0$).
These limits become even more restrictive for a closer Supernova. When we relax
the assumption of energy exchange ($[H,V_p] \neq 0$), for a 10 kpc SN, DUNE can
establish a $3\sigma$ limit of $\Gamma_8 \leq 4.2 \times 10^{-28}$ eV for NH,
while Hyper-K could constrain $\Gamma_8 \leq 1.3 \times 10^{-27}$ eV for IH
($n=0$) with $2\sigma$, representing the most stringent bounds reported to
date. Furthermore, we examine the impact of neutrino loss during propagation
for future Supernova detection. | Marcos V. dos Santos, Pedro C. de Holanda, Pedro Dedin Neto, Ernesto Kemp | 2023-06-30T12:12:24Z | http://arxiv.org/abs/2306.17591v2 | # On the Effects of Quantum Decoherence in a Future Supernova Neutrino Detection
###### Abstract
Quantum decoherence effects in neutrinos, described by the open quantum systems formalism, serve as a gateway to explore potential new physics, including quantum gravity. Previous research extensively investigated these effects across various neutrino sources, imposing stringent constraints on spontaneous loss of coherence. In this study, we demonstrate that even within the Supernovae environment, where neutrinos are released as incoherent states, quantum decoherence could influence the flavor equipartition of \(3\nu\) mixing. Additionally, we examine the potential energy dependence of quantum decoherence parameters (\(\Gamma=\Gamma_{0}(E/E_{0})^{n}\)) with different power laws (\(n=0,2,5/2\)). Our findings indicate that future-generation detectors (DUNE, Hyper-K, and JUNO) can significantly constrain quantum decoherence effects under different scenarios. For a Supernova located 10 kpc away from Earth, DUNE could potentially establish \(3\sigma\) bounds of \(\Gamma\leq 6.2\times 10^{-14}\) eV in the normal mass hierarchy (NH) scenario, while Hyper-K could impose a \(2\sigma\) limit of \(\Gamma\leq 3.6\times 10^{-14}\) eV for the inverted mass hierarchy (IH) scenario with \(n=0\) -- assuming no energy exchange between the neutrino subsystem and non-standard environment (\([H,V_{p}]=0\)). These limits become even more restrictive for a closer Supernova. When we relax the assumption of energy exchange (\([H,V_{p}]\neq 0\)), for a 10 kpc SN, DUNE can establish a \(3\sigma\) limit of \(\Gamma_{8}\leq 4.2\times 10^{-28}\) eV for NH, while Hyper-K could constrain \(\Gamma_{8}\leq 1.3\times 10^{-27}\) eV for IH (\(n=0\)) with \(2\sigma\), representing the most stringent bounds reported to date. Furthermore, we examine the impact of neutrino loss during propagation for future Supernova detection.
###### Contents
* I Introduction
* II Quantum decoherence effects in supernova neutrinos
* II.1 Formalism
* II.2 Selected Models
* III Methodology and simulation
* III.1 Factorization of the dynamics
* III.2 Exploring a future SN-\(\nu\) detection
* III.3 Role of Earth matter effects
* IV Future limits on quantum decoherence
* IV.1 MSC\({}^{\not{\epsilon}}\)
* IV.2 MSC\({}^{\epsilon}\)
* IV.3 Neutrino loss
* V Neutrino mass hierarchy measurement
* VI Conclusions
* A Decoherence inside the SN and matter effects
* B Tables with QD bounds
## I Introduction
Since the supernova SN1987A, the expectation for the next supernova (SN) neutrino detection has stimulated a number of works proposing tests on new physics in our galaxy, making this event a promising natural laboratory for neutrino physics.
Given that \(\sim 1\) galactic SN is expected per century [1], the next event holds the opportunity to advance many aspects of neutrino physics. The capabilities of next-generation detectors, such as DUNE [2; 3; 4], Hyper-Kamiokande (HK) [5] and JUNO [6], would enable a precise future measurement, increasing the number of neutrino events from the current few dozen to tens of thousands or more for a SN explosion 10 kpc away from Earth. A typical Core-Collapse SN (CCSN) undergoes three main emission phases (see [7] for a review): the neutronization burst, where a large amount of \(\nu_{e}\) is emitted through \(e^{-}\) capture in the first \(\sim 30\) ms after core bounce; accretion, where progenitor mass infall and a high luminosity are expected during roughly \(\sim 1\) s; and cooling, a thermal phase where the proto-neutron star cools down via neutrino emission, lasting \(\sim 10\) s.
With the possible future sensitivity and increasing sophistication in SN neutrino simulations [8; 9; 10], a precise description of standard neutrino evolution until Earth is being pursued. However, in a SN environment, collective oscillations led by \(\nu-\nu\) interactions are a source of large uncertainties, since a definitive solution for the \(\nu\) equation of motion has not been achieved, despite many ongoing developments in the topic [11]. One critical remark is that, among the three mentioned SN emission phases, collective oscillations are expected to play an important role only in accretion and cooling, with no significant impact on the neutronization burst, given the large excess of \(\nu_{e}\) over other flavors, making it a promising environment to test new physics.
In SN neutrino-mixing, if we disregard collective effects, with the only relevant neutrino interaction being the MSW matter effect, the neutrino flux that comes out of the SN can be treated as an incoherent sum of mass states, and no oscillation is expected1. Since \(\nu_{\alpha}\) is generated as a mass state in matter \(\nu_{i}^{m}\), it leaves the SN as a mass state in vacuum \(\nu_{i}\) (for an adiabatic conversion in the SN) until reaching Earth. Despite this expected incoherence, neutrinos coming from a SN could be affected by _quantum decoherence_. In this work, we show the impact of quantum decoherence, or the neutrino evolution from pure to mixed states given a coupling to the environment, in the case of a future SN neutrino detection.
Footnote 1: Given the indistinguishability of \(\nu_{\mu}\) and \(\nu_{\tau}\) (\(\bar{\nu}_{\mu}\) and \(\bar{\nu}_{\tau}\)) in the detection, they are generally classified as \(\nu_{x}\) (\(\bar{\nu}_{x}\)) in the literature.
There are different possible sources of decoherence in neutrino evolution, such as wave packet decoherence, that comes from different group velocities of neutrino mass states disentangling the respective wave packets [12; 13; 14], or even Gaussian averaged neutrino oscillation given by uncertainty in energy and path length [15]. The underlying physics in this work is of a different type and refers to effects induced by propagation in a non-standard environment generated by beyond Standard Model physics, and the term _decoherence_ used in this work refers to the latter.
The idea of inducing pure elementary quantum states into mixed ones was originally established by Hawking [16] and Bekenstein [17] and discussed by a number of subsequent works [18; 19; 20; 21; 22], being attributed to quantum (stochastic) fluctuations of space-time background given quantum gravity effects. Many authors have given a physical interpretation on the impact of such stochastic quantum gravitational background in neutrino oscillations [23; 24; 25; 26; 27; 28; 29; 30; 31; 32], with expected decoherence being well described by open quantum systems formalism through GKSL (Gorini-Kossakowski-Sudarshan-Lindblad) master equation. In particular, in [27], the authors provided a simple and interesting interpretation of physical scenarios for specific forms of GKSL equation, then we use a similar terminology along this manuscript to guide our choices in the analysis.
Phenomenological studies designed to impose bounds on neutrino coupling to the environment through open quantum systems formalism were investigated in atmospheric [33; 23], accelerator [34; 35; 36; 37; 38; 39; 40; 41; 42], reactor [43; 44], and solar [45; 33; 44] neutrinos with different approaches. Only upper limits over quantum decoherence parameters were obtained up to now.
This manuscript is structured as follows: in Section II we show the quantum decoherence formalism, introducing the models to be investigated. In Section III we discuss the methods to factorize the neutrino evolution and how to use them to impose bounds on quantum decoherence with a future SN detection. We also discuss the role of Earth matter effects. Our results are presented in Section IV and in Section V we discuss how quantum decoherence could affect the neutrino mass ordering determination. Finally, in Section VI we present our conclusions.
## II Quantum decoherence effects in supernova neutrinos
In this section, we revisit the quantum decoherence formalism in neutrino mixing and show its impact on the (already) incoherent SN neutrino fluxes.
### Formalism
Considering the effects of quantum decoherence, we can write the GKSL equation in propagation (mass) basis in vacuum [46; 47]
\[\frac{d\rho}{dt}=-i[H,\rho]+\mathcal{D}(\rho)\] (II.1)
where \(\mathcal{D}(\rho)=\sum_{p}^{N^{2}-1}(V_{p}\rho V_{p}^{\dagger}-\frac{1}{2}\{V_{p}^{\dagger}V_{p},\rho\})\) is a dissipation term, representing the coupling of the neutrino subsystem to the environment. If (II.1) is a general equation of motion describing \(\nu\) propagation and a non-standard effect induces a non-null \(\mathcal{D}(\rho)\), we require an increase of the von Neumann entropy in the process, which can be achieved by imposing \(V_{p}=V_{p}^{\dagger}\) [48]. It is also possible to write the dissipation term on the r.h.s. of (II.1) by expanding it in the appropriate group generators as \(\mathcal{D}(\rho)=\mathcal{D}(\rho)_{\mu}\lambda^{\mu}=D_{\mu\nu}\rho^{\nu}\lambda^{\mu}\), in which \(\lambda^{\mu}\) are the generators of SU(\(N\)) for a system of \(N\) neutrino families. In fact, the same procedure can be applied to the Hamiltonian term of (II.1) in order to get a Lindbladian operator \(\mathcal{L}=-2(H_{\mu\nu}+D_{\mu\nu})\), leading to:
\[\left|\dot{\rho}\right\rangle=-2\mathcal{L}\left|\rho\right\rangle\] (II.2)
that operates in a "vectorized" density matrix \(\left|\rho\right\rangle\) with dimension \(N^{2}\) (where \(N\) is the number of levels of the system). In 3 neutrino mixing, \(\left|\rho\right\rangle\) has dimension 9 and \(\mathcal{L}\) is a \(9\times 9\) matrix.
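As an illustration of this vectorization (a sketch in our own normalization, not code from the paper), a \(3\times 3\) density matrix can be expanded on the identity plus the eight Gell-Mann matrices, yielding the 9-component vector \(\left|\rho\right\rangle\) on which the \(9\times 9\) Lindbladian acts:

```python
import numpy as np

# Gell-Mann matrices λ_1..λ_8 (standard convention).
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
l8 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3)
basis = [np.eye(3, dtype=complex), l1, l2, l3, l4, l5, l6, l7, l8]

def vectorize(rho):
    """Components ρ_μ in ρ = ρ_0 I + Σ_μ ρ_μ λ_μ (so ρ_0 = Tr ρ / 3, ρ_μ = Tr(ρ λ_μ)/2)."""
    comps = [np.trace(rho).real / 3.0]
    comps += [np.trace(rho @ lam).real / 2.0 for lam in basis[1:]]
    return np.array(comps)

rho_nu1 = np.diag([1.0, 0.0, 0.0]).astype(complex)   # pure ν_1 mass state
print(vectorize(rho_nu1))   # ρ_0 = 1/3, ρ_3 = 1/2, ρ_8 = 1/(2√3), others 0
```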
One of the advantages of this formalism is that, despite a lack of understanding about the microscopic phenomena we are interested to model, we are able to infer the resulting damping effects by properly parameterizing \(\mathcal{D}(\rho)\) (or more specifically \(D_{\mu\nu}\)) in a generic way2
Footnote 2: For some forms of \(\mathcal{D}(\rho)\) derived from first principles, see [49; 50].
\[D=\begin{pmatrix}0&0&0&0&0&0&0&0&0\\ 0&-\gamma_{1}&\beta_{12}&\beta_{13}&\beta_{14}&\beta_{15}&\beta_{16}&\beta_{17 }&\beta_{18}\\ 0&\beta_{12}&-\gamma_{2}&\beta_{23}&\beta_{24}&\beta_{25}&\beta_{26}&\beta_{27 }&\beta_{28}\\ 0&\beta_{13}&\beta_{23}&-\gamma_{3}&\beta_{34}&\beta_{35}&\beta_{36}&\beta_{37 }&\beta_{38}\\ 0&\beta_{14}&\beta_{24}&\beta_{34}&-\gamma_{4}&\beta_{45}&\beta_{46}&\beta_{47 }&\beta_{48}\\ 0&\beta_{15}&\beta_{25}&\beta_{35}&\beta_{45}&-\gamma_{5}&\beta_{56}&\beta_{57 }&\beta_{58}\\ 0&\beta_{16}&\beta_{26}&\beta_{36}&\beta_{46}&\beta_{56}&-\gamma_{6}&\beta_{67 }&\beta_{68}\\ 0&\beta_{17}&\beta_{27}&\beta_{37}&\beta_{47}&\beta_{57}&\beta_{67}&-\gamma_{7} &\beta_{78}\\ 0&\beta_{18}&\beta_{28}&\beta_{38}&\beta_{48}&\beta_{58}&\beta_{68}&\beta_{78 }&-\gamma_{8}\\ \end{pmatrix},\] (II.3)
in 3 neutrino mixing. Although it is not explicit, the entries in matrix (II.3) can be directly related to the coefficients of expansion of \(V_{p}\) in the generators of SU(3), or \(\gamma,\beta=f(v_{p})\), with \(v_{p}\) coming from \(V_{p}=v_{p\mu}\lambda^{\mu}\). Note that the null entries in the first column of (II.3) are given by the hermiticity of \(V_{p}\), which also enables rewriting the dissipation term as \(\mathcal{D}(\rho)=\frac{1}{2}\sum_{p}^{N^{2}-1}[[V_{p},\rho],V_{p}]\), showing that terms proportional to identity in the SU(3) expansion vanish, making the first line of (II.3) also null. It is important to note that the parameters used to define \(D_{\mu\nu}\) are not all independent. They are related to each other in order to ensure complete positivity, which is a necessary condition for a quantum state to be physically realizable [39; 51; 52] (see [39] for a set of relations in a 3-level system).
However, it is not viable to investigate this general format of (II.3) given the number of parameters. Therefore, in this work, we restrict ourselves to cases in which \(D\) is diagonal as in [44], in order to capture the effects of interest arising from QD. We tested a non-diagonal version of \(D\) using complete positivity relations and our results are not significantly affected.
In the context of supernova neutrinos, the neutrino propagates a large distance inside the supernova (\(\sim 10^{8}\) km), so we also investigate the impact of QD combined with SN matter effects. A possible procedure to account for this is to rotate eq. (II.1) to the flavor basis, where the Hamiltonian can be supplemented by the MSW potential, i.e. \(H_{f}=H_{f}^{\rm vac}+V_{W}\). However, as will become clearer in Section III.1, the probabilities we are interested in are between mass eigenstates, both in matter and in vacuum, which can be obtained by diagonalizing the Hamiltonian in the flavor basis using a proper transformation.
### Selected Models
Since we analyse diagonal versions of (II.3), \(\beta_{\mu\omega}=0\) for all \(\mu\) and \(\omega\). In works such as [53; 44] it is shown that quantum decoherence can give rise to two disentangled effects when the evolution occurs in vacuum: pure decoherence, where a coherent state becomes incoherent along propagation; and the relaxation effect, responsible for driving the ensemble to maximal mixing. As pure decoherence effects on SN neutrinos are suppressed, due to matter effects on the mixing angle and the long propagation lengths3, we do not expect them to play any role in the propagation; the fluxes are only (possibly) affected by relaxation.
Footnote 3: If neutrinos are only affected by the MSW effect, it is possible for \(\nu_{\mu}\) and \(\nu_{\tau}\) to oscillate into each other. This generally does not affect the analysis of flavor conversion, since they are indistinguishable in the detection, and are therefore generally denoted as \(\nu_{x}\). However, as we will see in Section III, their creation in coherent states changes one of the QD models tested here.
Up to this date, and to the authors' best knowledge, there is no consistent theory from which the parameters of \(D\) can be derived from quantum gravity, or even an indication of whether the parameters are constant. Different works [54; 23; 27; 33] suggested the possibility of a dependency on energy as \(\gamma_{i}=\gamma_{0_{i}}(E/E_{0})^{n}\), motivated by quantum space-time phenomenology, where \(E_{0}\) is an arbitrary energy scale. In this work, we chose \(E_{0}=10\) MeV to match the energy scale of supernova neutrinos. As for the energy dependence, we explore the scenarios with \(n=0\) and \(n=2\), given that most works check these power-law exponents for \(\gamma_{i}\), which enables us to compare SN limits to other sources (and works), and \(n=5/2\), well motivated by the natural Planck scale for the SN energy range of \(0-100\) MeV. By natural scale, we refer to \(\gamma_{0_{i}}=\xi_{\rm Planck}/M_{\rm Planck}^{n-1}\) with \(\xi_{\rm Planck}\sim 1\) [55; 27], making \(\gamma_{0i}=\xi_{\rm Planck}M_{\rm Planck}\), \(\xi_{\rm Planck}M_{\rm Planck}^{-1}\), and \(\xi_{\rm Planck}M_{\rm Planck}^{-3/2}\) for our choices of \(n=0,2\) and \(5/2\).
With dimensional analysis (which can be further justified when solving the evolution equation), we expect that the effects of decoherence would show up for distances larger than a _coherence length_, defined by \(L_{\rm coh}=1/\gamma\). In Fig. 1 we show the expected coherence length for these values of \(n\). We see that if this "natural" scale holds, \(n=0\) and \(2\) would be possibly ruled out by terrestrial and solar experiments, whereas for \(n=3\), \(L_{\rm coh}\) is out of the observable universe for the expected SN-\(\nu\) energy scale. For the mentioned values of \(n\), we analyze the following models:
Figure 1: Coherence length (\(L_{\rm coh}=1/\gamma\)) for values of \(n\) in a power law of decoherence coefficients \(\gamma=\gamma_{0}(E/E_{0})^{n}\) for a “natural” scale of quantum gravity, with \(\xi_{\rm Planck}=1\). The yellow region corresponds to the solar system edge, the blue region to the Milky Way diameter, and the dashed grey line to the observable universe.
**Mass State Coupling (MSC)**: The neutrino mass basis is coupled to the environment and the relaxation effect leads to maximal mixing. In 3-\(\nu\) mixing, it means a 1/3 (equal) probability of detecting any state. In this model, we test two possible scenarios related to energy conservation in the neutrino subsystem:
* \(\text{MSC}^{\not{\epsilon}}\) (\([H,V_{p}]=0\)): Here, the neutrino energy is conserved for any non-standard mixing process in vacuum4. It means that \(V_{p}=\mathbf{v}_{3}\lambda_{3}+\mathbf{v}_{8}\lambda_{8}\), where \(\lambda_{\mu}\) are Gell-Mann matrices and \(\mathbf{v}_{\mu}=\sum_{p=1}^{8}v_{p_{\mu}}\), with \(\mu\) ranging from 0 to 8 in the SU(3) expansion of \(V_{p}\). To simplify the analysis we choose a diagonal version of the dissipation term in (II.3) with a single parameter \(\Gamma\). Additionally, using complete positivity relations [39], we can find the special case of \(D=-\text{diag}(0,\Gamma,\Gamma,0,\Gamma/4,\Gamma/4,\Gamma/4,\Gamma/4,0)\), with \(\Gamma=\Gamma_{0}(E/10\ \text{MeV})^{n}\). The transition probabilities amongst mass states in vacuum are null in this case. However, if we look at the propagation inside the supernova layers, in a diagonalized basis of the mass states in matter, the probability \(P_{ij}^{m(\text{SN})}\) could be non-null for \(i\neq j\), i.e. transitions between \(\nu_{i}^{m}\) and \(\nu_{j}^{m}\) are allowed and would change proportionally to \(e^{-\Gamma}\). Therefore, the coherence length to be investigated is the SN radius, and matter effects in addition to quantum decoherence would induce maximal mixing inside the SN. In Fig. 2 we show the transition probabilities in the basis of mass states in matter, calculated using the slab approach with a simulated SN density profile from the Garching group [56; 10], corresponding to a progenitor of 40 \(M_{\odot}\). More details about our solution are in Appendix A. When the neutrino is released to vacuum, it is no longer affected by quantum decoherence until detection. Since the length traveled inside the Earth by the neutrino is much smaller than \(L_{\text{coh}}^{\text{SN}}\), we do not take quantum decoherence in Earth matter into account in this specific case, although the standard non-adiabatic MSW effect could play a role. Note that this regime essentially depends on \(\nu\) matter effects in the SN. Footnote 4: In our notation, the superscript symbol \(\not{\epsilon}\) denotes no exchange of energy with the environment, while \(\epsilon\) has the opposite meaning.
* \(\text{MSC}^{\epsilon}\) (\([H,V_{p}]\neq 0\)): In this model, we relax the above assumption, allowing some exchange of \(\nu\) energy with the "non-standard" environment. We choose the most general diagonal version of the dissipation term from (II.3): \(D=-\text{diag}(0,\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\gamma_{5},\gamma_{6},\gamma_{7},\gamma_{8})\). In [27], this choice of \(D\) is intrinsically related to the _mass state selected_ scenario to be impacted by quantum gravitational effects. To quantify the effects of this model, we solve (II.1) analytically to get the probabilities of interest in the mass basis in vacuum5: Footnote 5: The expected (adiabatic MSW) solution for the probabilities is a Kronecker delta, i.e. \(P_{ij}=\delta_{ij}\). \[\begin{split} P_{11}&=\frac{1}{3}+\frac{1}{2}e^{-\gamma_{3}x}+\frac{1}{6}e^{-\gamma_{8}x}\qquad\quad P_{22}=P_{11}\\ P_{12}&=\frac{1}{3}-\frac{1}{2}e^{-\gamma_{3}x}+\frac{1}{6}e^{-\gamma_{8}x}\\ P_{13}&=\frac{1}{3}-\frac{1}{3}e^{-\gamma_{8}x}\qquad\qquad\qquad P_{23}=P_{13}\\ P_{33}&=\frac{1}{3}+\frac{2}{3}e^{-\gamma_{8}x}\end{split}\] (II.4)
Figure 2: Survival probabilities in the basis of mass states in matter inside the SN for the MSC\({}^{\not{\epsilon}}\) model (no exchange of energy between neutrinos and the environment in vacuum) and \(n=0\) (and then \(\Gamma=\Gamma_{0}\)). The SN matter density profile used is from a Garching simulation of a 40 \(M_{\odot}\) (LS180-s40.0) progenitor [56; 10], shown in Fig. 22 in Appendix A.
with \(x\) as the propagated distance. For other possible probabilities on this basis, we use \(P_{ij}=P_{ji}\). It should be noted that on this basis the probabilities depend only on \(\gamma_{3}\) and \(\gamma_{8}\). The reason is that when solving the set of differential equations in (II.2), the equations corresponding to \(\gamma_{3}\) and \(\gamma_{8}\), i.e. \(\mathcal{L}_{3\nu}\) and \(\mathcal{L}_{8\nu}\) are the only decoupled ones, independent of Hamiltonian terms.
If we look at \(\gamma_{i}\) parameters in terms of \(\mathbf{v}_{\mu}\) coefficients of the SU(3) expanded \(V_{p}\) we find
\[\begin{split}\gamma_{3}&=\mathbf{v}_{1}^{2}+\mathbf{v}_{2}^{2}+\frac{\mathbf{v}_{4}^{2}}{4}+\frac{\mathbf{v}_{5}^{2}}{4}+\frac{\mathbf{v}_{6}^{2}}{4}+\frac{\mathbf{v}_{7}^{2}}{4}\\ \gamma_{8}&=\frac{3\mathbf{v}_{4}^{2}}{4}+\frac{3\mathbf{v}_{5}^{2}}{4}+\frac{3\mathbf{v}_{6}^{2}}{4}+\frac{3\mathbf{v}_{7}^{2}}{4}\,.\end{split}\] (II.5)
Equation (II.5) shows that \(\gamma_{3}\) and \(\gamma_{8}\) are not independent. In order to compare our results to solar limits [44], we can use the same notation to define:
\[\begin{split}\Gamma_{3}&=\mathbf{v}_{1}^{2}+ \mathbf{v}_{2}^{2}\\ \Gamma_{8}&=\frac{3\mathbf{v}_{4}^{2}}{4}+\frac{3 \mathbf{v}_{5}^{2}}{4}+\frac{3\mathbf{v}_{6}^{2}}{4}+\frac{3\mathbf{v}_{7}^{2 }}{4}\end{split}\] (II.6)
leading to \(\gamma_{3}=\Gamma_{3}+\Gamma_{8}/3\) and \(\gamma_{8}=\Gamma_{8}\), resulting in pure (independent) relaxation \(\Gamma\) parameters, that will be the ones effectively inducing the maximal admixture in this scenario. The energy dependence is explicitly written as \(\Gamma_{i}=\Gamma_{0i}(E/10\text{ MeV})^{n}\) with \(i=\{3,8\}\). Note that the effective distance of this particular case is the total neutrino propagation, i.e. vacuum propagation is also affected and it can be split into the regime in the SN and outside its surface until Earth, or \(L=L^{\text{SN}}+L^{\text{Vac}}\). Similarly as in \(i)\), we solve the probabilities associated with possible transitions in supernova layers only numerically. However, as we discuss in Section III.1, given that \(L^{\text{Vac}}\gg L^{\text{SN}}\), the approximation of \(L\sim L^{\text{Vac}}\) is assumed in our calculations.
**Neutrino Loss**: As mentioned in [27], it is possible to have a scenario with neutrino loss, where neutrinos are captured by effects of quantum gravity during propagation and re-emitted in a different direction, never reaching the detector at Earth. In this picture, the authors made the choice \(D_{00}\neq 0\). Looking at the most general form of \(\mathcal{D}(\rho)\), this choice lies outside the open quantum systems formalism, i.e. naturally \(\mathcal{D}(\rho)_{0\mu}=0\) when the master equation (II.1) is assumed to describe the evolution of the reduced quantum system, with trace preservation at all times. Nevertheless, to explore such an interesting physical situation, we test this non-unitary case, which matches the choice \(\gamma_{i}=\gamma\) with \(i\) from 1 to 8, so that \(D=-\text{diag}(\gamma,\gamma,\gamma,\gamma,\gamma,\gamma,\gamma,\gamma,\gamma)\), with \(\gamma=\gamma_{0}(E/10\text{ MeV})^{n}\). The solution of (II.1) gives:
\[\begin{split} P_{ii}&=e^{-\gamma x}\\ P_{ij}&=0\end{split}\] (II.7)
for any \(i,j\) from 1 to 3 with \(i\neq j\). Note that in this result, in contrast to conventional unitary models, one state does not go into another, i.e. \(\sum_{i}P_{ij}\neq 1\), since neutrinos are lost along the way.
In the solutions of the equation of motion shown above, we absorbed a factor of 2 in the quantum decoherence parameters, i.e. \(-2\gamma_{i}\rightarrow-\gamma_{i}\), with no loss of generality, since what matters in our results is the intensity of a deviation from a standard scenario.
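For concreteness, the sketch below evaluates the solutions (II.4) and (II.7) numerically (our own illustration; the \(\gamma\) values and the 10 kpc distance are just examples, converted to natural units with \(\hbar c\simeq 1.97\times 10^{-7}\) eV m):

```python
import numpy as np

def msc_probabilities(gamma3, gamma8, x):
    """Mass-basis transition matrix P_ij of Eq. (II.4); rows and columns sum to 1."""
    e3, e8 = np.exp(-gamma3 * x), np.exp(-gamma8 * x)
    p11 = 1/3 + e3/2 + e8/6
    p12 = 1/3 - e3/2 + e8/6
    p13 = 1/3 - e8/3
    p33 = 1/3 + 2*e8/3
    return np.array([[p11, p12, p13],
                     [p12, p11, p13],
                     [p13, p13, p33]])

def nu_loss_probabilities(gamma, x):
    """Diagonal, non-unitary transition matrix of Eq. (II.7)."""
    return np.exp(-gamma * x) * np.eye(3)

# Distance in natural units (1/eV): 10 kpc ≈ 1.56e27 eV^-1.
x_10kpc = 10 * 3.086e19 / 1.973e-7
P = msc_probabilities(gamma3=4e-27, gamma8=1e-27, x=x_10kpc)
print(P.sum(axis=1))                                    # each row sums to 1 (flux conserved)
print(nu_loss_probabilities(1e-27, x_10kpc).trace())    # < 3: neutrinos are lost
```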
## III Methodology and Simulation
To test the QD models discussed in the context of a future SN detection, we use the neutrino flux coming from supernova simulations of the Garching group [10]. For the MSC\({}^{\not{\epsilon}}\) model described in item \(i)\) of MSC in Section II.2, we exploit a 40 \(M_{\odot}\) progenitor simulation (LS180-s40.0) [56], since it has detailed matter density profiles, essential to explore such a scenario. For all other cases investigated (MSC\({}^{\epsilon}\) and \(\nu\)-loss), we use simulations with 27 \(M_{\odot}\) (LS220s27.0c) and 11.2 \(M_{\odot}\) (LS220s11.2c) progenitor stars, detailed in [7].
To avoid the large uncertainties of collective effects, we only use the flux from the neutronization burst phase (first 30 ms) in our analysis, in which effects induced by \(\nu-\nu\) interaction are expected to not play a significant role. In Fig. 3 we show the luminosity of all flavors along the time window of this phase.
Next, we explain in more detail how to include non-standard physics of eqs. (II.4) and (II.7) in SN neutrino evolution and our methods to use a future SN detection to impose limits on QD parameters.
### Factorization of the dynamics
Our analysis only takes into account the MSW effect in the neutronization burst through the standard matter effect on \(\nu\) mixing. To combine QD effects and MSW through the \(\nu\) generation, propagation, crossing through Earth, and detection, it is possible to factorize the flavor probabilities as
\[P_{\alpha\beta}=\sum_{i,j,k=1}^{3}P_{\alpha i}^{m(\text{SN})}P_{ij}^{m(\text{ SN})}P_{jk}P_{k\beta}^{m(\text{Earth})}\hskip 28.452756pt\bar{P}_{\alpha\beta}=\sum_{i,j,k= 1}^{3}\bar{P}_{\alpha i}^{m(\text{SN})}\bar{P}_{ij}^{m(\text{SN})}\bar{P}_{jk} \bar{P}_{k\beta}^{m(\text{Earth})}\,\] (III.8)
where \(P_{\alpha\beta}\) (\(\bar{P}_{\alpha\beta}\)) are the transition probabilities from flavor \(\alpha\) to \(\beta\). The meaning of each term in (III.8) can be summarized as: \(P_{\alpha i}^{m(\text{SN})}\) is the probability of creating a \(\nu_{\alpha}\) as the \(i\)-th mass state in matter \(\nu_{i}^{m}\); \(P_{ij}^{m(\text{SN})}\) is the probability of converting \(\nu_{i}^{m}\rightarrow\nu_{j}^{m}\) inside the supernova layers; \(P_{jk}\) is the probability of converting \(\nu_{j}\rightarrow\nu_{k}\) during propagation in vacuum until Earth; and finally, \(P_{k\beta}^{m(\text{Earth})}\) is the probability of detecting a \(\nu_{\beta}\) given a \(\nu_{k}\) state, considering (or not) Earth matter effects. The index \(m\) indicates that the creation or propagation occurs in matter. It is worth remembering that \(\nu_{e}\) and \(\bar{\nu}_{e}\) are created as a single mass eigenstate in matter. In this scenario, the sum over \(i\) reduces to a single term, since we have \(P_{ei}^{m(\text{SN})}=\delta_{i3}\) and \(\bar{P}_{ei}^{m(\text{SN})}=\delta_{i1}\) for NH, and \(P_{ei}^{m(\text{SN})}=\delta_{i2}\) and \(\bar{P}_{ei}^{m(\text{SN})}=\delta_{i3}\) for IH. As for \(\nu_{x}\), although it is created in a coherent superposition of the other two mass eigenstates, the interference phase would be averaged out, and therefore eq. (III.8) is valid. In the context of SN flux conservation, the simplest flavor conversion scheme can be described by just \(P_{ee}\) and \(\bar{P}_{ee}\), and in standard neutrino mixing the factorized probabilities in (III.8) become \(P_{ij}^{m(\text{SN})}=\delta_{ij}\), \(P_{jk}=\delta_{jk}\) and \(\bar{P}_{ij}^{m(\text{SN})}=\delta_{ij}\), \(\bar{P}_{jk}=\delta_{jk}\) for adiabatic evolution. Such a scenario can be changed by quantum decoherence, allowing for the conversion among mass eigenstates in vacuum and matter.
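In matrix form, the factorization (III.8) is a product of transition matrices; the sketch below illustrates it for the standard adiabatic case in NH without QD or Earth effects (the mixing-angle values are only illustrative):

```python
import numpy as np

def flavor_probability(P_creation, P_SN, P_vac, P_earth):
    """P_{αβ} = Σ_{i,j,k} P^{m(SN)}_{αi} P^{m(SN)}_{ij} P_{jk} P^{m(Earth)}_{kβ}."""
    return P_creation @ P_SN @ P_vac @ P_earth

# Adiabatic MSW in NH without QD or Earth effects: ν_e is created as ν_3^m,
# the propagation matrices are identities, and detection projects ν_k onto ν_e via |U_ek|².
theta12, theta13 = np.deg2rad(33.4), np.deg2rad(8.6)   # illustrative mixing angles
Ue_sq = np.array([np.cos(theta12)**2 * np.cos(theta13)**2,
                  np.sin(theta12)**2 * np.cos(theta13)**2,
                  np.sin(theta13)**2])
P_creation = np.zeros((1, 3))
P_creation[0, 2] = 1.0                 # P_{e i}^{m(SN)} = δ_{i3} (NH)
P_detect = Ue_sq.reshape(3, 1)         # P_{k e} = |U_{ek}|²
P_ee = flavor_probability(P_creation, np.eye(3), np.eye(3), P_detect)
print(P_ee[0, 0])                      # ≈ sin²θ13 ≈ 0.022
```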
One can also note in (II.4), (II.5), (II.6) and (III.8) that for the MSC\({}^{\epsilon}\) model, \(P_{ee}\) is a function of \(\Gamma_{3}\) and \(\Gamma_{8}\) in IH but only of \(\Gamma_{8}\) for NH. \(\bar{P}_{ee}\) has the opposite dependency, and we can write:
\[P_{ee}^{\text{IH}}=P_{ee}^{\text{IH}}(\Gamma_{3},\Gamma_{8})\ ;\qquad P_{ee}^{\text{NH}}=P_{ee}^{\text{NH}}(\Gamma_{8})\,.\] \[\bar{P}_{ee}^{\text{IH}}=\bar{P}_{ee}^{\text{IH}}(\Gamma_{8})\ ;\qquad\bar{P}_{ee}^{\text{NH}}=\bar{P}_{ee}^{\text{NH}}(\Gamma_{3},\Gamma_{8})\,.\]
These remarks on the survival probabilities of \(\nu_{e}\) and \(\bar{\nu}_{e}\) are essential for our results, since the flavor conversion of MSC can be described using only \(P_{ee}\) and \(\bar{P}_{ee}\).
Particularly for the MSC\({}^{\not{\epsilon}}\) case, considering the propagation along the supernova layers, \(P_{ij}^{m(\text{SN})}\) and \(\bar{P}_{ij}^{m(\text{SN})}\) will be affected by QD, whereas \(P_{jk}=\delta_{jk}\) and \(\bar{P}_{jk}=\delta_{jk}\), since with no exchange of energy with the environment, quantum
Figure 3: Simulated \(\nu\) luminosity for neutronization burst phase of the emission models of 27 \(M_{\odot}\) (solid) and 11.2 \(M_{\odot}\) (dashed) progenitor stars from Garching group [7; 10].
decoherence would not play any role in the vacuum propagation. On the other hand, for MSC\({}^{\epsilon}\), both SN matter and vacuum would affect the neutrino mixing. However, as shown in Fig. 23 in Appendix A, a \(\Gamma_{3,8}\gtrsim 10^{-18}\) eV or even beyond would be needed to significantly change \(P_{ij}^{m(\text{SN})}\). As will become clear in Section IV, this value is much higher than the possible sensitivity of a future SN detection with only vacuum effects (given the large coherence length between the SN and Earth), so we take \(P_{ij}^{m(\text{SN})}\) and \(\bar{P}_{ij}^{m(\text{SN})}\) as \(\delta_{ij}\) for MSC\({}^{\epsilon}\) from now on.
In order to put bounds on QD effects, we statistically analyze it in two scenarios: without Earth matter effects in neutrino (antineutrino) propagation, or \(P_{ke}^{m(\text{Earth})}=P_{ke}\) (\(\bar{P}_{ke}^{m(\text{Earth})}=\bar{P}_{ke}\)) in (III.8); and then we check how Earth matter effects would impact our results.
Figure 4 shows both scenarios of \(P_{ee}\) and \(\bar{P}_{ee}\) as a function of quantum decoherence parameters for neutrinos and antineutrinos, where neutrino hierarchy plays a relevant role in the considered scenarios. It is possible to see that Earth regeneration could enhance or decrease the sensitivity of standard physics on QD parameters for very specific energies and zenith angles \(\theta_{z}\). However, as we will see later, regeneration becomes more relevant for higher energies, generally at the end of the SN-\(\nu\) simulated spectrum, limiting its impact on SN flavor conversion.
It is worth mentioning that for the MSC model asymptotically we expect more sensitivity on \(P_{ee}\) in NH than IH, since for IH the standard probability is about the maximal admixture (1/3). In contrast, for \(\bar{P}_{ee}\), both hierarchy scenarios are almost equally sensitive to a maximal admixture scenario. In the case of \(\nu\)-loss we see the opposite picture for \(P_{ee}\), i.e. IH would be more impacted by an asymptotically null probability, and for \(\bar{P}_{ee}\) NH would be highly affected, with low impact on IH.
As we will see later, the most general scheme of SN-\(\nu\) fluxes at Earth cannot be parameterized with just \(P_{ee}\) and \(\bar{P}_{ee}\) in the \(\nu\)-loss scenario, given the non-conservation of the total flux. Therefore we also need to work out \(P_{\alpha\beta}\) for \(\alpha,\beta=\mu,\tau\) (not shown in Fig. 4 for simplicity). We clarify this in the next section.
Figure 4: Survival probability for electron neutrinos (left) and antineutrinos (right) as a function of decoherence parameters for \(n=0\) (energy independent) and a 10 kpc propagation, without (upper plots) and with (lower plots) Earth matter effects. Solid lines represent the MSC\({}^{\epsilon}\) scenario (\(\Gamma_{8}\)) with \(\Gamma_{3}=10^{-27}\) eV and the dashed ones the neutrino loss (\(\gamma\)). In the upper plots, quantum decoherence is taken into account only in vacuum between the SN surface and the detection at Earth, with no regeneration considered. In the lower ones, we set a zenith angle of \(\theta_{z}=180^{\circ}\) and \(E_{\nu}=30\) MeV.
### Exploring a future SN-\(\nu\) detection
Since the detection of SN1987A through neutrinos, a galactic SN has been anticipated by the community as a powerful natural \(\nu\) laboratory. The SN1987A neutrino detection greatly impacted what we know about SN physics, but the low statistics of the available data make predictions on the standard \(\nu\) admixture extremely challenging. On the other hand, the next generation of neutrino detectors promises a precise measurement of a galactic SN, greatly increasing our knowledge of SN-\(\nu\) flavor conversion, with different detector technologies and capabilities. Here, we show the sensitivity of DUNE, HK, and JUNO to QD. These detectors have the following properties:
1. DUNE will be a 40 kt Liquid-Argon TPC in the USA. We consider only the most promising detection channel, \(\nu_{e}+\mathrm{Ar}\to e^{-}+\mathrm{K}^{*}\) [4], in our analysis; it is sensitive to electron neutrinos and consequently to most of the neutronization burst flux6. We set an energy threshold of \(E_{\mathrm{th}}=4.5\) MeV and use the most conservative reconstruction efficiency reported in [4]. Footnote 6: Actually, it depends on the neutrino mass hierarchy, since for MSW-NH the \(\nu_{e}\) flux is highly suppressed.
2. Hyper-Kamiokande will be a water Cherenkov detector in Japan with a fiducial mass of \(\sim 374\) kt, whose main detection channel is inverse beta decay (IBD), sensitive to electron antineutrinos: \(\bar{\nu}_{e}+p\to e^{+}+n\). Several events are also expected from elastic scattering with electrons, with the advantage of sensitivity to all flavors: \(\nu+e^{-}\rightarrow\nu+e^{-}\). We consider both channels in our analysis. We set a 60% overall detector efficiency and \(E_{\mathrm{th}}=3\) MeV.
3. JUNO will be a liquid scintillator detector with a fiducial mass of 17 kt situated in China [60]. Despite the interesting multi-channel detection technology reported by the collaboration, we take into account only IBD events. We set an overall efficiency of 50% and \(E_{\mathrm{th}}=3\) MeV in our analysis.
In order to compare the examined scenarios, we will consider only the energy information, calculating the number of events in the \(j\)-th energy bin as
\[N_{j}=n_{d}^{c}\int_{0}^{\infty}dt\int_{0}^{\infty}dE_{\nu}\frac{d^{2}\phi_{ \nu}}{dtdE_{\nu}}\eta(E_{\nu})\int_{E_{i}}^{E_{f}}d\bar{E}_{\nu}R_{j}(\bar{E}_ {\nu},E_{\nu})\sigma(E_{\nu}),\] (III.9)
where \(n_{d}^{c}\) is the number of targets for each detector \(d\), with \(c\) accounting for each specific channel, \(\phi_{\nu}\) is the neutrino flux, \(\eta(E_{\nu})\) is the efficiency that can eventually depend on \(\nu\) energy, \(\sigma\) is the neutrino cross-section (with each channel shown in Fig. 5), \(R_{j}\) is the detector resolution. We analyze the \(\nu\) energy from the threshold of each detector up to 60 MeV. The \(\nu\) mixing is encoded in the flux \(\phi_{\nu}\), that can be written as
\[\begin{split}\phi_{\nu_{e}}&=\phi_{\nu_{e}}^{0}P_{ee}+\phi_{\nu_{x}}^{0}(1-P_{ee})\\ \phi_{\bar{\nu}_{e}}&=\phi_{\bar{\nu}_{e}}^{0}\bar{P}_{ee}+\phi_{\nu_{x}}^{0}(1-\bar{P}_{ee})\\ \phi_{\nu_{x}}&=\phi_{\nu_{e}}^{0}(1-P_{ee})+\phi_{\nu_{x}}^{0}(2+P_{ee}+\bar{P}_{ee})+\phi_{\bar{\nu}_{e}}^{0}(1-\bar{P}_{ee})\end{split}\] (III.10)
Figure 5: \(\nu\) total cross-sections for inverse beta decay (IBD) [57], \(\nu_{e}-\)Ar charge current interaction (from SNOwGLoBES) [58] and elastic \(\nu-e^{-}\) interaction [59].
for the standard MSW (widely found in the literature, see [61; 7] for a review), where \(\phi^{0}_{\nu_{\alpha}}\) refers to the initial SN neutrino fluxes and the non-standard QD effects are hidden in \(P_{ee}\) and \(\bar{P}_{ee}\). In Fig. 6, the expected numbers of events for the three detectors are reported in the energy spectrum of the simulated progenitors (11.2 \(M_{\odot}\) and 27 \(M_{\odot}\)) for both hierarchies and are compared to the MSC\({}^{\epsilon}\) model. The results translate what is shown in Fig. 4, weighted by the detector capabilities. Expected changes in the spectrum look more prominent for DUNE when NH is assumed as the standard solution, with an increase of \(\nu_{e}\) events for both hierarchies. On the other hand, for HK and JUNO the MSC\({}^{\epsilon}\) effect results in a decrease of events in IH and an increase in NH, and it is not so clear which hierarchy would be more sensitive to the MSC\({}^{\epsilon}\) effect, since the number of QD parameters entering \(P_{ee}\) and \(\bar{P}_{ee}\) differs between the two hierarchies. For instance, for \(\bar{P}_{ee}^{\rm NH}\), fixing \(\Gamma_{3}\), an increase in \(\Gamma_{8}\) is weighted by the factor 1/3 in the exponential terms, while \(\bar{P}_{ee}^{\rm IH}\) is more sensitive to \(\Gamma_{8}\), since the same change is multiplied by a factor 1, but it is also independent of \(\Gamma_{3}\).
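A schematic numerical version of the binned event rate in eq. (III.9), assuming a time-integrated flux and a Gaussian resolution function, is sketched below (the flux, efficiency, cross-section and resolution are toy placeholders, not the detector models used in the analysis):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def events_in_bin(e_lo, e_hi, n_targets, flux, efficiency, xsec, resolution_sigma):
    """N_j of Eq. (III.9) for a time-integrated flux (per unit energy, in MeV)."""
    def integrand(e_true):
        sigma_e = resolution_sigma(e_true)
        # ∫_{E_lo}^{E_hi} dĒ R_j(Ē, E) for a Gaussian resolution = difference of CDFs
        smearing = norm.cdf(e_hi, loc=e_true, scale=sigma_e) - norm.cdf(e_lo, loc=e_true, scale=sigma_e)
        return flux(e_true) * efficiency(e_true) * smearing * xsec(e_true)
    value, _ = quad(integrand, 1.0, 100.0)
    return n_targets * value

# Toy ingredients (illustration only).
flux = lambda e: np.exp(-e / 12.0)                 # arbitrary falling spectrum
efficiency = lambda e: 0.6 if e > 3.0 else 0.0     # flat efficiency above threshold
xsec = lambda e: 1e-43 * e**2                      # quadratic rise, IBD-like scaling
res = lambda e: 0.1 * np.sqrt(e)                   # ~10%/sqrt(E) smearing

print(events_in_bin(10.0, 15.0, n_targets=1e33, flux=flux,
                    efficiency=efficiency, xsec=xsec, resolution_sigma=res))
```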
Note that eq. (III.10) is valid for a conserved total flux, which does not hold in the \(\nu\)-loss scenario. To get around this issue we propose a more generalized form of (III.10):
\[\begin{split}\phi_{\nu_{e}}&=\phi^{0}_{\nu_{e}}P_{ee}+\phi^{0}_{\nu_{x}}(P_{\mu e}+P_{\tau e})\\ \phi_{\bar{\nu}_{e}}&=\phi^{0}_{\bar{\nu}_{e}}\bar{P}_{ee}+\phi^{0}_{\bar{\nu}_{x}}(\bar{P}_{\mu e}+\bar{P}_{\tau e})\\ \phi^{\prime}_{\nu_{x}}&=\phi^{0}_{\nu_{e}}(P_{e\mu}+P_{e\tau})+\phi^{0}_{\nu_{x}}(P_{\mu\mu}+P_{\mu\tau}+P_{\tau\tau}+P_{\tau\mu})\\ \phi^{\prime}_{\bar{\nu}_{x}}&=\phi^{0}_{\bar{\nu}_{e}}(\bar{P}_{e\mu}+\bar{P}_{e\tau})+\phi^{0}_{\bar{\nu}_{x}}(\bar{P}_{\mu\mu}+\bar{P}_{\mu\tau}+\bar{P}_{\tau\tau}+\bar{P}_{\tau\mu})\\ \phi_{\nu_{x}}&=\phi^{\prime}_{\nu_{x}}+\phi^{\prime}_{\bar{\nu}_{x}}\end{split}\] (III.11)
where each probability can be factorized as described in (III.8). For the ones where \(\alpha=\mu,\tau\), since these flavors are generated in a superposition of mass states in matter, the \(\nu_{\mu}-\nu_{\tau}\) mixing should be taken into account, where \(P^{m\rm SN}_{\alpha i}\) and \(\bar{P}^{m\rm SN}_{\alpha i}\) would correspond to the proper squared modulus of elements of the \(U_{\mu\tau}\) mixing matrix7. In Fig. 7 we show each probability \(P_{\alpha\beta}\) for a 10 kpc SN in the \(\nu\)-loss scenario. In Fig. 8 we show the expected spectrum of events for the \(\nu\)-loss model.
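A sketch of the flux combination in (III.11), written for \(3\times 3\) flavor transition matrices with rows indexed by the initial flavor and columns by the final one (our own convention for illustration); it also covers the non-unitary \(\nu\)-loss case, since no flux conservation is assumed:

```python
import numpy as np

def fluxes_at_earth(P, Pbar, phi0_e, phi0_ebar, phi0_x, phi0_xbar):
    """Eq. (III.11): detected fluxes from flavor transition matrices P[alpha, beta]."""
    phi_e    = phi0_e * P[0, 0] + phi0_x * (P[1, 0] + P[2, 0])
    phi_ebar = phi0_ebar * Pbar[0, 0] + phi0_xbar * (Pbar[1, 0] + Pbar[2, 0])
    phi_x    = (phi0_e * (P[0, 1] + P[0, 2])
                + phi0_x * (P[1, 1] + P[1, 2] + P[2, 1] + P[2, 2]))
    phi_xbar = (phi0_ebar * (Pbar[0, 1] + Pbar[0, 2])
                + phi0_xbar * (Pbar[1, 1] + Pbar[1, 2] + Pbar[2, 1] + Pbar[2, 2]))
    return phi_e, phi_ebar, phi_x + phi_xbar

# ν-loss example: every channel is damped by the same factor e^{-γL}, so all fluxes shrink.
damping = 0.6
P = Pbar = damping * np.eye(3)
print(fluxes_at_earth(P, Pbar, phi0_e=1.0, phi0_ebar=0.8, phi0_x=0.5, phi0_xbar=0.5))
```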
Figure 6: Spectrum of events for DUNE, HK and JUNO for NH (solid lines) and IH (dashed), with \(n=0\), for a 10 kpc SN with the 11.2 \(M_{\odot}\) and 27 \(M_{\odot}\) progenitor mass simulations. Each column concerns a detector, while the rows are related to progenitor masses. The bin size is at least twice the energy resolution at the corresponding energy, subject to the minimum number of events per bin established in our analysis. The bands correspond to the 40% flux uncertainty applied to the standard NH and IH predictions, with details in the text. For the QD parameters, we used the values \(\Gamma_{8}=10^{-27}\) eV and \(\Gamma_{3}=4\Gamma_{8}\).
### Role of Earth matter effects
Since a galactic SN detection can be impacted by Earth matter effects, we also calculate \(P_{ee}\) and \(\bar{P}_{ee}\) for each detector given the position of the SN in the sky. However, as shown in [62], these effects are not expected to play an important role for the neutronization burst. The reason is that regeneration starts to be important only for \(E_{\nu}\gtrsim 50\) MeV or even higher energies, which is close to the end of the expected spectrum. In Fig. 9 we show the impact of Earth matter effects on \(P_{ee}\) for a SN flux of \(\nu_{e}\) in IH and on \(\bar{P}_{ee}\) for \(\bar{\nu}_{e}\) in NH, in a range of zenith angles, for the non-adiabatic MSW effect only (no quantum decoherence effects), using the PREM density profile available in [63], where \(90^{\circ}\) corresponds to the horizon of an observer at Earth (with no matter effects) and \(180^{\circ}\) represents propagation along the full Earth diameter. Note that for \(P_{ee}\) in NH and \(\bar{P}_{ee}\) in IH, regeneration does not play an important role.
In Fig. 10 we also show the QD effects (MSC\({}^{\epsilon}\) with \(n=0\)) combined with Earth matter effects for a specific energy (similarly to Fig. 4, but for a wide range of \(\theta_{z}\) and of the QD parameter). The asymptotic maximal mixing suppresses regeneration effects beyond \(\Gamma_{8}\sim 10^{-27}\) eV, becoming the leading effect. Since regeneration is a second-order effect, we impose bounds on QD in the next section without considering Earth matter effects, and by the end of
Figure 8: Spectrum of events for DUNE, HK and JUNO for NH (solid lines) and IH (dashed) compared to \(\nu\)-loss model, with \(n=0\) for a 10 kpc SN with 11.2 \(M_{\odot}\) and 27 \(M_{\odot}\) progenitor mass simulations. For \(\nu\)-loss we use different bin sizes in order to achieve the requirement of minimum number of events per bin of \(\sim 5\). Given the lack of events in this scenario, we decided to use a single bin for JUNO.
Figure 7: Probabilities with impact of \(\nu\)-loss with \(n=0\) considering a 10 kpc SN for NH (left) and IH (right).
Section IV.2, we show its impact on results.
## IV Future limits on quantum decoherence
In order to impose bounds on QD using simulated data, we perform a binned \(\chi^{2}\) analysis through the pull method [64] over the QD parameters for the MSC and \(\nu\)-loss scenarios:
\[\chi^{2}=\sum_{d}\sum_{j=1}^{m}\frac{(N_{j,d}^{\rm true}-(1+a)N_{j,d}^{\rm th})^ {2}}{N_{j,d}^{\rm th}}+\frac{a^{2}}{\sigma_{a}^{2}}\] (IV.12)
where \(m\) indicates the number of energy bins, \(d\) represents each detector, \(N_{j,d}^{\rm true}\) represents the events predicted by the MSW solution, and \(N_{j,d}^{\rm th}\) is the theoretical number of events of the marginalized model in our analysis, i.e. MSW + quantum decoherence; the second term on the right-hand side takes into account our estimate of 40% for the flux uncertainties [65].
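A minimal sketch of eq. (IV.12) for a single detector, minimizing numerically over the pull parameter \(a\) (the spectra are toy values; \(\sigma_{a}=0.4\) stands for the 40% flux uncertainty assumed above):

```python
import numpy as np

def chi2_pull(n_true, n_th, sigma_a=0.4):
    """Binned chi-square of Eq. (IV.12), minimized over the flux pull parameter a."""
    a_grid = np.linspace(-1.0, 1.0, 4001)
    values = [np.sum((n_true - (1 + a) * n_th) ** 2 / n_th) + a**2 / sigma_a**2
              for a in a_grid]
    return min(values)

# Toy spectra: "true" = MSW prediction, "th" = MSW + QD hypothesis being tested.
n_true = np.array([120.0, 90.0, 60.0, 30.0])
n_th = np.array([110.0, 85.0, 58.0, 31.0])
print(chi2_pull(n_true, n_th))
```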
We can note in Fig. 7 that all probabilities vanish for high values of \(\gamma\), so \(N\to 0\) for \(\nu\)-loss. However, in order to avoid a bias in our analysis, we marginalize over \(\gamma\) only in a range where the requirement of at least \(\sim 5\) events per bin
Figure 10: \(P_{ee}\) in IH (left) and \(\bar{P}_{ee}\) in NH (right) under Earth matter effects as a function of QD parameter for \(E_{\nu}=30\) MeV, considering a SN 10 kpc away from Earth and \(n=0\). It is possible to see that QD suppresses regeneration effects for \(\Gamma_{8}\gtrsim 10^{-27}\) eV, where \(\Gamma_{3}=10^{-32}\) eV was set. The white line on the color bar represents maximal mixing.
Figure 9: \(P_{2e}\) (left) and \(\bar{P}_{1e}\) (right) under Earth matter effects as a function of neutrino energy and zenith angle. In standard MSW in supernova mixing, \(P_{2e}\) and \(\bar{P}_{1e}\) can be used to calculate the survival probabilities of \(\nu_{e}\) (IH) and \(\bar{\nu}_{e}\) (NH) respectively. The lines on the color bar are the adiabatic solutions for \(P_{2e}\) (yellow) and \(\bar{P}_{1e}\) (black) without regeneration effects.
is achieved (we use the same rule for MSC). We also take the size of the bins to be twice the detector energy resolution. Using these requirements, JUNO allows a single bin for \(\nu\)-loss, becoming a counting experiment for this analysis. The binning scheme for DUNE and HK is also changed for \(\nu\)-loss compared to MSC in order to match the established minimum number of events per bin in the tested range of \(\gamma\).
Before imposing limits on MSC and \(\nu\)-loss with eq. (IV.12), we can treat \(P_{ee}\) and \(\bar{P}_{ee}\) as free parameters, which is a reasonable approximation for an adiabatic propagation in the SN, since these probabilities are then energy independent (see [66] for a more detailed discussion in the context of SN1987A). We perform a marginalization of \(\chi^{2}(P_{ee},\bar{P}_{ee})\) in eq. (IV.12) to understand how far, asymptotically, the QD scenarios are from the standard \(\nu\) mixing and also to see how sensitive a combined measurement (DUNE+HK+JUNO) could be, using only the neutronization burst. Fig. 11 shows how a 10 kpc SN can impose limits on \(P_{ee}\) and \(\bar{P}_{ee}\), with NH and IH taken in turn as the true MSW model. The black dot represents maximal mixing, or the asymptotic limit of MSC, which is closer to the IH solution (given by the corresponding best-fit value) than to NH for \(P_{ee}\), but lies at an intermediate point between the hierarchies with respect to \(\bar{P}_{ee}\). In the \(\nu\)-loss scenario it is not so clear from Fig. 11 which hierarchy would lead to stronger constraints, given the presence of other probabilities, such as the ones in Fig. 7.
Using eq. (IV.12) and the procedures described in Sections II and III, we treat the QD parameters as free and perform a \(\chi^{2}\) analysis in order to impose statistical bounds on this effect using a future SN detection. Since the neutrino mass hierarchy is not yet established, we include both scenarios in our analysis.
We test both MSW-NH versus the marginalized MSW-NH + QD and also the MSW-IH versus the marginalized MSW-IH + QD in order to understand how restrictive future detectors will be. The results will show that if QD plays any role in SN neutrinos, both possible \(\nu\) hierarchies could be affected.
### MSC\({}^{\not{\epsilon}}\)
For the MSC\({}^{\not{\epsilon}}\) model, we calculate the \(\sqrt{\Delta\chi^{2}}\) bounds on the parameter \(\Gamma\), where \(\Delta\chi^{2}=\chi^{2}-\chi^{2}_{\rm min}\) (since we are not including statistical and systematic uncertainties when producing the "true" data, we always have \(\chi^{2}_{\rm min}=0\)). The results for the 3 experiments are summarized in Fig. 12, where the true scenario is NH and we marginalize over NH+QD. Note that the bounds reach different significance levels for each SN distance, with shorter distances being more restrictive.
Since in this model the relevant propagation distance (the SN radius) is fixed, the SN distance from Earth only affects the number of events detected. Following Fig. 12, the best performance in NH is for DUNE, with possible \(3\sigma\) limits for a SN 10 kpc away from Earth of:
Figure 11: Limits on \(P_{ee}\) and \(\bar{P}_{ee}\) for the 27 \(M_{\odot}\) (solid) and 11.2 \(M_{\odot}\) (dashed) progenitor stars from simulations, considering only the neutronization burst. No quantum decoherence effects are taken into account in this Figure. The distance from Earth considered was 10 kpc. The probabilities are assumed to be free parameters, as recently proposed in [66]. The assumption of a standard adiabatic MSW conversion in the SN is made (as throughout the manuscript), removing the energy dependence of \(P_{ee}\) and \(\bar{P}_{ee}\). The black dot is the maximal mixing scenario (1/3). Note that the 11.2 \(M_{\odot}\) line for IH matches the 27 \(M_{\odot}\) one, showing that the sensitivity for the simulated progenitors tested is similar.
\[\Gamma_{0}\leq\left\{\begin{array}{ll}6.2\times 10^{-14}\ {\rm eV}&(n=0)\\ 5.2\times 10^{-14}\ {\rm eV}&(n=2)\\ 1.4\times 10^{-13}\ {\rm eV}&(n=5/2)\end{array}\right.\] (IV.13)
For a SN at a distance of 1 kpc, limits of \({\cal O}(10^{-16})\) eV can be reached. HK also performs well and achieves \(2\sigma\) bounds for a 10 kpc SN. JUNO is not capable of individually achieving reasonable bounds on QD for SN distances \(\gtrsim 1\) kpc, although it would still have a strong signal for a galactic SN as close as 1 kpc from Earth; its weaker performance can be attributed to the small fiducial mass compared to HK and to the single IBD channel considered in this work (with a significantly lower cross-section than \(\nu_{e}\)-Ar for energies above \(\sim 15\) MeV). Other channels, such as \(\nu\)-p elastic scattering, could possibly improve the results, but given the associated detection challenges we decided not to include them here.
We also performed the same analysis using IH as the true theory and marginalizing over IH+QD. The results are shown in Fig. 13. The best performance is clearly for HK, with a \(2\sigma\) bound of:
Figure 12: Limits on \(\Gamma\) for various SN distances from Earth for DUNE (left), HK (middle), and JUNO (right) for the 40 \(M_{\odot}\) progenitor star simulation. The true scenario taken into account was NH, and we marginalize the parameters over the theoretical NH+QD (MSC\({}^{\ell}\)) model. No Earth matter effect was considered. Each row corresponds to a different value of \(n\) in the parameterization \(\Gamma=\Gamma_{0}(E/E_{0})^{n}\).
\[\Gamma_{0}\leq\left\{\begin{array}{ll}3.6\times 10^{-14}\ \mathrm{eV}&(n=0)\\ 8.0\times 10^{-14}\ \mathrm{eV}&(n=2)\\ 2.4\times 10^{-13}\ \mathrm{eV}&(n=5/2)\end{array}\right.\] (IV.14)
for a SN 10 kpc from Earth. DUNE is not capable of imposing strong bounds in an IH scenario. JUNO's performance improves for distances \(\lesssim 1\) kpc compared to NH. The results are summarized in Table 1 in Appendix B.
A 20 kpc SN would not impose strong bounds for the individual experiments. Distances as large as 50 kpc (e.g. the Large Magellanic Cloud) were not investigated in this work: the small number of events per bin would require a more refined unbinned statistical analysis, which is not strongly motivated since the expected limits are below \(2\sigma\).
The bounds and sensitivity of each detector in a given hierarchy shown above can be associated with the sensitivity to \(P_{ee}\) and \(\bar{P}_{ee}\) shown in Fig. 11. In NH (left plot), limits on \(P_{ee}\) are more restrictive than those on \(\bar{P}_{ee}\) with respect to maximal mixing, represented by the black dot. For IH (right plot), the sensitivity is reversed: \(P_{ee}\sim 1/3\), while for \(\bar{P}_{ee}\) there is a gap between the best fit and the 1/3 probability, allowing limits of some significance to be imposed. Since DUNE is most sensitive to \(\nu_{e}\), via the \(\nu_{e}\)-Ar interaction, it is more sensitive to \(P_{ee}\) and thus more relevant in the NH scenario. HK and JUNO, in turn, are more sensitive to \(\bar{\nu}_{e}\) and therefore to \(\bar{P}_{ee}\), which is reflected in a better performance in the IH scenario. In our calculations, the elastic scattering channel considered in HK does not contribute much to the total \(\chi^{2}\).
Figure 13: Same as Fig. 12 but with IH as the true theory, marginalized over the parameters of the IH+QD model.
### MSC\({}^{\epsilon}\)
The same procedure described in the section above was applied to the MSC\({}^{\epsilon}\) model, with bounds on the parameter \(\Gamma_{8}\). The results are summarized in Fig. 14 for NH vs NH+QD. The SN distance also plays an important role in this scenario, and the results are qualitatively similar to those for MSC\({}^{\ell}\) described in the last section. DUNE has the best performance for the tested SN distances: even for a 10 kpc SN, \(3\sigma\) bounds can be achieved for \(n=0\), 2 and 5/2. Although MSC produces stronger effects at larger distances, the number of events decreases as \(1/L^{2}\), so stronger limits can be imposed for a SN occurring at shorter distances, reflecting that the larger number of neutrinos arriving at the detector is the crucial aspect.
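For orientation only, the sketch below reproduces the logic of such a \(\Gamma_{8}\) scan with a deliberately simplified event model: a toy \(\nu_{e}\) spectrum whose survival probability relaxes exponentially towards maximal mixing (1/3) over the SN-Earth distance. The spectrum shape, normalization, the MSW value \(P_{ee}=0.02\) and the single-probability treatment are placeholders, not the full three-flavour calculation behind eq. (IV.12); the exercise only shows why the scale \(\Gamma_{8}\sim 10^{-27}\) eV is the natural one for a 10 kpc baseline.

```python
import numpy as np

HBARC_INV = 5.068e6           # 1 eV expressed in inverse metres (1 / hbar c)
KPC = 3.086e19                # metres per kpc

def events(gamma, n, L_kpc=10.0, p_msw=0.02):
    """Toy binned nu_e prediction: survival probability relaxes towards 1/3."""
    e = np.linspace(6.0, 30.0, 12)                          # bin centres [MeV]
    flux = 300.0 * np.exp(-0.5 * ((e - 11.0) / 4.0) ** 2)   # toy spectrum
    g = gamma * (e / 10.0) ** n                             # Gamma(E) [eV]
    damping = np.exp(-g * HBARC_INV * L_kpc * KPC)
    p_ee = 1.0 / 3.0 + (p_msw - 1.0 / 3.0) * damping
    return flux * p_ee

def delta_chi2(gamma, n):
    n_true, n_fit = events(0.0, n), events(gamma, n)
    return np.sum((n_fit - n_true) ** 2 / n_true)           # Gaussian chi^2

for n in (0, 2, 2.5):
    scan = np.logspace(-29, -26, 200)
    sig = np.sqrt([delta_chi2(g, n) for g in scan])         # monotone in gamma
    print(f"n={n}: toy 2-sigma bound ~ {scan[np.searchsorted(sig, 2.0)]:.1e} eV")
```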
From Fig. 14, taking the result of a 10 kpc SN (27 \(M_{\odot}\)), DUNE would potentially impose \(\Gamma_{8}\leq 4.2\times 10^{-28}\) eV at \(2\sigma\) and \(\Gamma_{8}\leq 1.7\times 10^{-27}\) eV at \(3\sigma\) for \(n=0\), whereas the HK bound is \(\Gamma_{8}\leq 4.2\times 10^{-27}\) eV at \(2\sigma\). Comparing with the limits from various works [23; 33; 34; 35; 36; 37; 38; 39; 40; 41; 43; 44], to the best knowledge of the authors this is an unprecedented level of sensitivity for testing quantum decoherence, orders of magnitude more restrictive than any other work on the subject. Fig. 15 shows bounds from works with different sources and places the limits from this work for both hierarchy scenarios.
Note that for \(n=2\) and 5/2 the bounds are over \(\Gamma_{08}\) in \(\Gamma_{8}=\Gamma_{08}(E/10\) MeV\()^{n}\). For a 10 kpc SN (\(27M_{\odot}\)), DUNE \(3\sigma\) bounds reach:
\[\Gamma_{08}\leq\left\{\begin{array}{ll}7.0\times 10^{-28}\ {\rm eV}&(n=2)\\ 6.2\times 10^{-28}\ {\rm eV}&(n=5/2)\end{array}.\right.\] (IV.15)
Figure 14: Same as Fig. 12 but for MSC\({}^{\epsilon}\), with simulations of the 27 \(M_{\odot}\) (solid) and 11.2 \(M_{\odot}\) (dashed) progenitor masses. The bounds are orders of magnitude more restrictive than for MSC\({}^{\ell}\).
HK is able to achieve \(2\sigma\) bounds as restrictive as \(\Gamma_{08}\leq 2.7\times 10^{-28}\) eV and \(\Gamma_{08}\leq 1.2\times 10^{-28}\) eV for \(n=2\) and \(5/2\) respectively. All mentioned results are summarized in Table 2 in the Appendix B.
We also performed a combined fit for the three detectors using the same \(\nu\) hierarchy scheme, shown in Fig. 16, where the \(3\sigma\) limits for a 10 kpc SN reach:
\[\Gamma_{08}\leq\left\{\begin{array}{ccc}6.2\times 10^{-28}\ \text{eV}&(n=0) \\ 1.2\times 10^{-28}\ \text{eV}&(n=2)\\ 0.72\times 10^{-28}\ \text{eV}&(n=5/2)\end{array}\right..\] (IV.16)
Even a \(4\sigma\) exclusion of maximal mixing can be achieved for all values of \(n\), but such significance is reached only with the \(27\)\(M_{\odot}\) simulated progenitor. Although a combined analysis reaches high significance, it should be taken with a grain of salt, since there is no guarantee that all experiments would be in operation simultaneously.
Using the same procedure as in NH, we perform the analysis assuming IH as the true mixing and marginalizing over IH+QD. The results are shown in Fig. 17. HK has the strongest bounds in this scenario but does not reach \(3\sigma\) for a 10 kpc SN, even though the potential \(2\sigma\) limits are:
\[\Gamma_{08}\lesssim\left\{\begin{array}{ccc}1.3\times 10^{-27}\ \text{eV}&(n=0) \\ 1.4\times 10^{-28}\ \text{eV}&(n=2)\\ 4.9\times 10^{-28}\ \text{eV}&(n=5/2)\end{array}\right..\] (IV.17)
DUNE has a very poor performance in this scenario for any distance \(\gtrsim 1\) kpc. JUNO's sensitivity is similar to that of the NH marginalization discussed above. In a combined fit in IH, shown in Fig. 18, the following \(3\sigma\) limits can be obtained:
\[\Gamma_{08}\lesssim\left\{\begin{array}{ccc}5.4\times 10^{-27}\ \text{eV}&(n=0) \\ 3.5\times 10^{-27}\ \text{eV}&(n=2)\\ 3.3\times 10^{-27}\ \text{eV}&(n=5/2)\end{array}\right..\] (IV.18)
To check the impact of regeneration on the above results, we calculated the bounds for a combined detection by DUNE, HK, and JUNO including this effect. We test different values of \(\theta_{z}\), the zenith angle with respect to DUNE, under the assumption that
Figure 15: Current bounds on quantum decoherence from a number of works using different neutrino sources, together with the SN limits presented here (\(n=0\)). Arrows with longer horizontal bases correspond to current experimental bounds, whereas shorter bases refer to possible future limits. Numbers on the arrows indicate the reference in which the limits were obtained. Thin arrows indicate bounds equivalent to MSC\({}^{\ell}\), while thick filled ones refer to MSC\({}^{\epsilon}\). White-filled thick arrows correspond to \(\nu\)-loss bounds. The supernova limits described in this work are shown in red and refer to a distance of 10 kpc from Earth unless another distance is indicated, with more restrictive bounds being possible for closer SNe.
the SN flux comes from the direction of DUNE's longitude. The results are shown in Fig. 19. The left plot shows that the impact of the Earth matter effect is small but enhances the QD bounds for a 10 kpc detection, and the limits could be pushed beyond \(4\sigma\). The right plot shows the situation where the IH scenario is assumed to be true and NH+QD is marginalized. We discuss such a scenario in Section V, but we already see that regeneration does not significantly change the results.
Figure 16: Combined fit for the true MSW-NH, marginalizing over MSW-NH with QD (MSC\({}^{\epsilon}\)) effects.
Figure 17: Same as Fig. 14 but for IH versus IH + QD.
### Neutrino loss
Since in \(\nu\)-loss the event spectrum decreases asymptotically to zero, the bounds in this scenario are expected to be as significant as, or even more significant than, those for MSC in all experiments. Since the calculated number of events for NH is low (mainly for DUNE and JUNO) and \(\nu\)-loss would decrease it further, failing our requirement of \(\gtrsim 5\) events per bin, we perform here only the IH (true) versus IH+QD analysis. Fig. 20 shows the \(\sqrt{\Delta\chi^{2}}\) for each individual detector. We see that high values of \(\gamma\) are strongly bounded, even for JUNO. For a SN 10 kpc away from Earth, DUNE, HK and JUNO are capable of imposing \(\gamma\leq 5.2\times 10^{-28}\) eV, \(\gamma\leq 4.9\times 10^{-28}\) eV and \(\gamma\leq 5.9\times 10^{-28}\) eV, respectively, with \(3\sigma\) significance (\(n=0\)). Note that beyond 10 kpc the number of events per bin would be too small in a \(\nu\)-loss scenario, so we do not consider such distances in this analysis.
HK is capable of achieving the best (\(3\sigma\)) bounds, with \(\gamma_{0}\leq 2.1\times 10^{-29}\) eV and \(\gamma_{0}\leq 1.2\times 10^{-29}\) eV for \(n=2\) and \(5/2\), respectively, for a 10 kpc SN. Although not shown in the plots, it is worth mentioning that HK would impose bounds on \(\gamma\) even for NH, given the high statistics associated with this experiment, making it the most sensitive one for the \(\nu\)-loss model. We detail the bounds and all results mentioned here in Table 3.
## V Neutrino mass hierarchy measurement
In a future supernova detection, the neutronization burst arises as a robust test of the neutrino mass hierarchy, with \(\nu\)-Ar in DUNE capable of determining the correct scenario with relatively high confidence. However, despite the strong bounds that can be imposed on quantum decoherence, if QD plays a significant role in \(\nu\) mixing the IH could be mimicked by NH combined with QD effects (particularly in the MSC models). A similar analysis was performed in the context of \(\nu\)-decay in [67]. Therefore, the question that arises is how well NH and IH can be distinguished if we
Figure 19: Limits on MSC including the impact of Earth matter effects for a SN 10 kpc from Earth and the 27 \(M_{\odot}\) simulation, for different zenith angles \(\theta_{z}\) (\(n=0\)). The limits correspond to a combined detection by DUNE, HK, and JUNO, but \(\theta_{z}\) is defined with respect to DUNE, with the SN beam in the direction of DUNE's longitude. \(\theta_{z}=320^{\circ}\) means that regeneration effects at HK and JUNO are expected, even if the SN beam does not cross the Earth before reaching DUNE.
Figure 18: Same as Fig. 16 but now considering the IH scenario.
compare both hierarchies, superposing QD on the standard NH. Fig. 21 shows the statistical bounds for the scenario where IH is taken as the true theory and NH+QD is marginalized in a combined detection for \(n=0,2,5/2\). The results show that the significance of the hierarchy determination weakens considerably for the tested SN distances, and even a combined detection could not disentangle the hierarchies if MSC plays an important role.
To check this statement we compare the values of \(\sqrt{\Delta\chi^{2}}\) for \(\Gamma_{8}\to 0\) and \(\Gamma_{8}\to\infty\) in Fig. 21. We can assume that \(\sqrt{\Delta\chi^{2}}|_{\Gamma_{8}\to 0}\) corresponds to the hierarchy distinguishability in the standard scenario, since \(\Gamma_{8}\) is then small enough to neglect QD effects. The plateau in the limit \(\sqrt{\Delta\chi^{2}}|_{\Gamma_{8}\to\infty}\) shows how NH+QD would differ from IH in a future combined detection; it has lower values of \(\sqrt{\Delta\chi^{2}}\), resulting in a less significant hierarchy discrimination. Taking as a reference a SN distance of 10 kpc for the 27 \(M_{\odot}\) simulation, a combined detection by DUNE, HK and JUNO goes from \(\sqrt{\Delta\chi^{2}}|_{\Gamma_{8}\to 0}=6.89\) to \(\sqrt{\Delta\chi^{2}}|_{\Gamma_{8}\to\infty}=3.13\). For an individual detection at the same SN distance, DUNE would change from \(\sqrt{\Delta\chi^{2}}|_{\Gamma_{8}\to 0}=5.70\), which is statistically significant for determining the hierarchy, to a mere \(\sqrt{\Delta\chi^{2}}|_{\Gamma_{8}\to\infty}=0.37\). HK would also be affected, with \(\sqrt{\Delta\chi^{2}}|_{\Gamma_{8}\to 0}=3.36\) going to \(\sqrt{\Delta\chi^{2}}|_{\Gamma_{8}\to\infty}=2.65\). JUNO cannot distinguish the neutrino hierarchies significantly at 10 kpc. It is important to mention that for 1 kpc and 5 kpc DUNE could be highly affected by this hierarchy misidentification, but HK would still provide a distinction of \(\gtrsim 5\sigma\) even with QD effects. For SN distances \(>5\) kpc, the neutrino hierarchies would hardly be disentangled by the tested experiments if QD effects are significant. As far as we tested, the \(\nu\)-loss model does not lead to the same potential hierarchy misidentification found for MSC.
Figure 20: Limits on \(\gamma\) for various SN distances from Earth for all detectors in the \(\nu\)-loss scenario with true IH marginalized over the parameters of the IH+QD model.
## VI Conclusions
In this paper, we have explored the capability of a future SN neutrino detection to impose limits on quantum decoherence scenarios. As the neutrinos are already treated as an incoherent mixture of mass eigenstates inside the SN, damping effects are not expected, so we explore secondary quantum decoherence scenarios, such as the relaxation mechanism, which can potentially be observed in a SN neutrino signal. We limit ourselves to scenarios where the decoherence matrix \(D\) is diagonal in the neutrino vacuum mass basis. Among the possible models to be investigated, we consider the ones we denote as Mass State Coupling (MSC), leading to maximal mixing of states, and neutrino loss (\(\nu\)-loss), associated with the loss of neutrino flux along propagation. These scenarios are well motivated by quantum gravity, where a possible energy dependence is expected in the form \(\gamma=\gamma_{0}(E/E_{0})^{n}\); we therefore explore the limits on the decoherence parameters for different \(n\).
The analysis was done considering DUNE, HK, and JUNO as possible detectors. For the neutrino flux data, three progenitor stars were considered, a 40 \(M_{\odot}\) (LS180-s40.0), 27 \(M_{\odot}\) (LS220s27.0c) and 11.2 \(M_{\odot}\) (LS220s11.2c), using the SN simulation data from the Garching group [7; 10; 56]. To get around the unsolved problem of neutrino collective effects, only the neutronization burst was considered, given that collective effects are expected to not play a significant role in this emission phase.
When considering the neutrino propagation inside the supernova, the relaxation effect could affect the neutrino flavor conversion even under the assumption of no exchange of neutrino energy with the environment, i.e. \([H,V_{p}]=0\) (MSC\({}^{\prime}\)). We show that in this regime it is possible to obtain competitive limits on the QD parameters. However, the required values of the decoherence parameters need to be much larger than in the scenario where \([H,V_{p}]\neq 0\) (MSC\({}^{\epsilon}\)) (see Appendix A), which would provide the most restrictive bounds on QD to date. For MSC\({}^{\epsilon}\), we only consider the decoherence/relaxation acting on the neutrino propagation in vacuum from the SN until it reaches the detectors at Earth, for which the propagation length is orders of magnitude larger than the SN size and which is therefore more sensitive to the relaxation effects. We also explore the possible effects of Earth regeneration due to neutrino propagation inside the Earth, which has only a minor impact on the bounds for the relaxation parameters, the vacuum propagation being the most relevant coherence length.
With all these considerations, we show that the detectors used in the analysis are capable of imposing the limits listed in Tables 1 and 2 for the MSC scenario, depending on the distance considered and on the neutrino mass hierarchy. For NH, the DUNE detector is the most promising one, while HK is the most sensitive in the case of IH. The possible limits on the decoherence parameters are orders of magnitude stronger than the ones imposed by current terrestrial and solar experiments, as shown in Fig. 15. For the \(\nu\)-loss scenario, the limits are shown in Table 3. Due to the neutrino disappearance, extra care had to be taken in this scenario so that the requirement of at least 5 events per bin is fulfilled and the \(\chi^{2}\) analysis can be applied.
Finally, we explored the possible degeneracy between the different standard scenarios of unknown mass hierarchy (NH and IH) without QD and the ones with QD effects included. As we saw, the IH scenario could be easily mimicked by NH combined with QD-MSC effects.
Figure 21: Statistically comparing the inverted hierarchy (IH) to normal hierarchy (NH) with the impact of quantum decoherence for a combined detection using the 11.2 \(M_{\odot}\) (dashed) and 27 \(M_{\odot}\) (solid) simulations. No regeneration effects were taken into account.
## Acknowledgements
We thank Hans-Thomas Janka and the Garching group for providing the SN simulations used in this work. MVS is thankful to Alberto Gago for pointing out complete positivity relations. EK is very grateful for the hospitality of GSSI during the preparation of this manuscript. This study was financed by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001, and partially by the Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP) grants no. 2019/08956-2, no. 14/19164-6, and no. 2022/01568-0.
|
2309.13172 | Walking-by-Logic: Signal Temporal Logic-Guided Model Predictive Control
for Bipedal Locomotion Resilient to External Perturbations | This study proposes a novel planning framework based on a model predictive
control formulation that incorporates signal temporal logic (STL)
specifications for task completion guarantees and robustness quantification.
This marks the first-ever study to apply STL-guided trajectory optimization for
bipedal locomotion push recovery, where the robot experiences unexpected
disturbances. Existing recovery strategies often struggle with complex task
logic reasoning and locomotion robustness evaluation, making them susceptible
to failures caused by inappropriate recovery strategies or insufficient
robustness. To address this issue, the STL-guided framework generates optimal
and safe recovery trajectories that simultaneously satisfy the task
specification and maximize the locomotion robustness. Our framework outperforms
a state-of-the-art locomotion controller in a high-fidelity dynamic simulation,
especially in scenarios involving crossed-leg maneuvers. Furthermore, it
demonstrates versatility in tasks such as locomotion on stepping stones, where
the robot must select from a set of disjointed footholds to maneuver
successfully. | Zhaoyuan Gu, Rongming Guo, William Yates, Yipu Chen, Ye Zhao | 2023-09-22T20:25:48Z | http://arxiv.org/abs/2309.13172v1 | Walking-by-Logic: Signal Temporal Logic-Guided Model Predictive Control for Bipedal Locomotion Resilient to External Perturbations
###### Abstract
This study proposes a novel planning framework based on a model predictive control formulation that incorporates signal temporal logic (STL) specifications for task completion guarantees and robustness quantification. This marks the first-ever study to apply STL-guided trajectory optimization for bipedal locomotion push recovery, where the robot experiences unexpected disturbances. Existing recovery strategies often struggle with complex task logic reasoning and locomotion robustness evaluation, making them susceptible to failures caused by inappropriate recovery strategies or insufficient robustness. To address this issue, the STL-guided framework generates optimal and safe recovery trajectories that simultaneously satisfy the task specification and maximize the locomotion robustness. Our framework outperforms a state-of-the-art locomotion controller in a high-fidelity dynamic simulation, especially in scenarios involving crossed-leg maneuvers. Furthermore, it demonstrates versatility in tasks such as locomotion on stepping stones, where the robot must select from a set of disjointed footholds to maneuver successfully.
## I Introduction
This study investigates signal temporal logic (STL) based formal methods for robust bipedal locomotion, with a specific focus on circumstances where a robot encounters environmental perturbations at unforeseen times.
Robust bipedal locomotion has been a long-standing challenge in the field of robotics. While existing works have achieved impressive performance using reactive regulation of angular momentum [1, 2] or predictive control of foot placement [3, 4], few offer formal guarantees on a robot's ability to recover from perturbations, a feature considered crucial for the safe deployment of bipedal robots. To this end, our research centers around designing task specifications for bipedal locomotion push recovery, and employing trajectory optimization that assures task correctness and guarantees system robustness.
Formal methods for bipedal systems have gained significant attention in recent years [5, 6]. The prevailing approach in existing works often relies on abstraction-based methods such as linear temporal logic (LTL) [7] with relatively simple verification processes, which abstract complex continuous behaviors into discrete events and low-dimensional states. However, challenges arise when addressing continuous, high-dimensional systems like bipedal robots. As a distinguished formal logic, STL [8] offers mathematical guarantees of specifications on dense-time, real-valued signals, making it suitable for reasoning about task logic correctness and quantifying robustness in complex robotic systems.
Self-collision avoidance is another crucial component for ensuring restabilization from disturbances, especially for scenarios involving crossed-leg maneuvers [3, 4, 9] where the distance between the robot's legs diminishes, as shown in Fig. 1(b). Several previous studies [2, 10] relied on inverted pendulum models to plan foot placements for recovery but often overlooked the risk of potential self-collisions during the execution of the foot placement plan. On the other hand, swing-leg trajectory planning that considers full-body kinematics and collision checking is prohibitively expensive for online computation.
In order to address these challenges, we design an optimization-based planning framework, illustrated in Fig. 1. As a core component of the framework, a model predictive controller (MPC) encodes a series of STL specifications (e.g., stability and foot placements) as an objective function to enhance task satisfaction and locomotion robustness. Furthermore, this MPC ensures safety against leg self-collision via a set of data-driven kinematic constraints.
Solving the MPC generates a reduced-order optimal plan that describes the center of mass (CoM) and swing-foot trajectories, including the walking-step durations. From this MPC trajectory, a low-level controller derives a full-body
Fig. 1: Block diagram of the proposed framework. (a) The signal temporal logic specification \(\varphi^{\mathrm{loco}}\) specifies the locomotion task. (b) A set of data-driven kinematic constraints enforce the leg self-collision avoidance. (c) The model predictive control-based trajectory optimization solves a stable locomotion trajectory. (d) A whole-body controller tracks the desired trajectory. (e) Perturbed walking experiments on our bipedal robot Cassie.
motion through inverse kinematics and then uses a passivity-based technique for motion tracking. We summarize our core contributions as follows:
* This work represents the first-ever step towards incorporating STL-based formal methods into trajectory optimization (TO) for dynamic legged locomotion. We design a series of STL task specifications that guide the planning of bipedal locomotion under perturbations.
* We propose a Riemannian robustness metric that evaluates the walking trajectory robustness based on reduced-order locomotion dynamics. The Riemannian robustness is seamlessly encoded as an STL specification and is therefore optimized in the TO for robust locomotion.
* We conduct extensive push recovery experiments with perturbations of varying magnitudes, directions, and timings. We compare the robustness of our framework with that of a foot placement controller baseline [2].
This work is distinct from our previous study [11] in the following aspects. (i) Instead of a hierarchical task and motion planning (TAMP) framework using abstraction-based LTL [11], this study employs an optimization-based MPC that integrates STL specifications to allow real-valued signals. This property eliminates the mismatch between high-level discrete action sequences and low-level continuous motion plans. (ii) The degree to which STL specifications are satisfied is quantifiable, enabling the MPC to provide a least-violating solution when the STL specification cannot be strictly satisfied. The LTL-based planner in [11], on the other hand, makes decisions only inside the robustness region, which is more vulnerable in real-system implementation.
## II Non-periodic Locomotion Modeling
### _Hybrid Reduced-Order Model for Bipedal Walking_
We propose a new reduced-order model (ROM) that extends the traditional linear inverted pendulum model (LIPM) [12, 13]. The LIPM features a point mass denoted as the center-of-mass (CoM), and a massless telescopic leg that maintains the CoM at a constant height. The LIPM has a system state \(\mathbf{x}\coloneqq[\mathbf{p}_{\mathrm{CoM}};\mathbf{v}_{\mathrm{CoM}}]\), where \(\mathbf{p}_{\mathrm{CoM}}=[p_{\mathrm{CoM},x};\quad p_{\mathrm{CoM},y};\quad p_{ \mathrm{CoM},z}]\) and \(\mathbf{v}_{\mathrm{CoM}}=[v_{\mathrm{CoM},x};\quad v_{\mathrm{CoM},y};\quad v_{ \mathrm{CoM},z}]\) are the position and velocity of the CoM in the local stance-foot frame, as shown in Fig. 2(a). The LIPM dynamics are expressed as follows:
\[\begin{bmatrix}\ddot{p}_{\mathrm{CoM},x}\\ \ddot{p}_{\mathrm{CoM},y}\end{bmatrix}=\omega^{2}\begin{bmatrix}p_{\mathrm{CoM },x}\\ p_{\mathrm{CoM},y}\end{bmatrix} \tag{1}\]
where \(\omega=\sqrt{g/p_{\mathrm{CoM},z}}\) and \(g\) is the acceleration due to gravity. The subscripts \(x\) and \(y\) indicate the sagittal and lateral components of a vector, respectively.
We design a variant of the traditional LIPM that additionally models the swing-foot position and velocity (Fig. 2(a)). In effect, the state vector is augmented as \(\mathbf{\bar{x}}\coloneqq[\mathbf{p}_{\mathrm{CoM}};\mathbf{v}_{\mathrm{CoM}};\mathbf{p}_{ \mathrm{swing}}],\mathbf{p}_{\mathrm{swing}}\in\mathbb{R}^{3}\), and the control input \(\mathbf{\bar{u}}\) sets the swing foot velocity \(\mathbf{\dot{p}}_{\mathrm{swing}}\). Moreover, we define \(\mathbf{y}=[\mathbf{\bar{x}};\mathbf{\bar{u}}]\in\mathbb{R}^{12}\) as the system output, which will be used in Sec. III for signal temporal logic (STL) definitions.
At contact time, a _reset map_\(\mathbf{\bar{x}}^{+}=\Delta_{j\to j+1}(\mathbf{\bar{x}}^{-})\) uses the swing foot location to transition to the next walking step:
\[\begin{bmatrix}\mathbf{p}_{\mathrm{CoM}}^{+}\\ \mathbf{v}_{\mathrm{CoM}}^{+}\\ \mathbf{p}_{\mathrm{swing}}^{+}\end{bmatrix}=\begin{bmatrix}\mathbf{p}_{\mathrm{CoM}}^ {-}-\mathbf{p}_{\mathrm{swing}}^{-}\\ \mathbf{v}_{\mathrm{CoM}}\\ -\mathbf{p}_{\mathrm{swing}}^{-}\end{bmatrix} \tag{2}\]
This occurs when the system state reaches the switching condition \(\mathcal{S}\coloneqq\{\bar{x}|p_{\mathrm{swing},z}=h_{\mathrm{terrain}}\}\), where \(h_{\mathrm{terrain}}\) is the terrain height. Note that the aforementioned position and velocity parameters are expressed in a local coordinate frame attached to the stance foot. The swing foot becomes the stance foot immediately after it touches the ground.
**Remark**.: _Our addition of the swing-foot position \(\mathbf{p}_{\mathrm{swing}}\), together with \(\mathbf{p}_{\mathrm{CoM}}\), uniquely determines the leg configuration of the Cassie robot (e.g., via inverse kinematics), allowing us to plan a collision-free trajectory using only the ROM in Sec. IV-B._
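A minimal numerical sketch of this hybrid model is given below; the CoM height, input, initial state and the plain explicit-Euler integrator are illustrative choices, not the discretization used later in the MPC.

```python
import numpy as np

# Augmented LIPM of Sec. II-A: state x_bar = [p_com(3); v_com(3); p_swing(3)],
# input u_bar = swing-foot velocity.  Numbers below are illustrative only.
G, Z0 = 9.81, 0.9
W2 = G / Z0                                      # omega^2 = g / p_com_z

def flow(x, u, dt):
    p, v, s = x[0:3], x[3:6], x[6:9]
    a = np.array([W2 * p[0], W2 * p[1], 0.0])    # eq. (1); constant CoM height
    return np.concatenate([p + dt * v, v + dt * a, s + dt * u])

def reset(x):
    p, v, s = x[0:3], x[3:6], x[6:9]
    return np.concatenate([p - s, v, -s])        # eq. (2): stance-foot switch

x = np.array([-0.1, 0.05, Z0, 0.4, 0.0, 0.0, -0.3, 0.2, 0.05])
u = np.array([1.5, -0.5, -0.15])                 # drives the swing foot down
for _ in range(400):                             # 0.4 s at dt = 1 ms
    x = flow(x, u, 1e-3)
    if x[8] <= 0.0:                              # guard: p_swing_z = h_terrain
        x = reset(x)
        break
```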
### _Keyframe-Based Non-Periodic Locomotion and Riemannian Robustness_
To enable robust locomotion that adapts to unexpected perturbations or rough terrains, we employ the concept of a _keyframe_ (proposed in our previous work [14]) as a critical locomotion state. The keyframe summarizes a non-periodic walking step in a reduced-order space, and it addresses the robot's complex interaction with the environment. The keyframe allows for the quantification of locomotion robustness, which will be integrated as a cost function within the trajectory optimization in Sec. IV.
**Definition II.1** (Locomotion keyframe).: _Locomotion keyframe is defined as the robot's CoM state \((\mathbf{p}_{\mathrm{CoM}},\mathbf{v}_{\mathrm{CoM}})\) at the apex, i.e., when the CoM is over the stance foot in the sagittal direction (\(p_{\mathrm{CoM},x}=0\)), as shown in Fig. 2(a)._
To quantify the robustness of a non-periodic walking step, we design a robust region centered around a nominal keyframe state in a Riemannian space. The Riemannian space [14] is a reparameterization of the Euclidean CoM phase space using tangent and cotangent locomotion manifolds, represented by a pair \((\sigma,\zeta)\). \(\sigma\) represents the tangent manifold along which the CoM dynamics evolve, while \(\zeta\) represents the cotangent manifold orthogonal to \(\sigma\). These manifolds can be derived analytically from the LIPM dynamics in (1); the detailed derivation is in [14]. Within the Riemannian space, we define a robust keyframe region that enables stable walking. This region is referred to as the Riemannian region.
**Definition II.2** (Riemannian region).: _The Riemannian region \(\mathcal{R}\) is the area centered around a nominal keyframe state \((\sigma_{\mathrm{nom}},\zeta_{\mathrm{nom}})\): \(\mathcal{R}_{d}\coloneqq\{(p_{\mathrm{CoM},d},v_{\mathrm{CoM},d})\quad|\quad \sigma(p_{\mathrm{CoM},d},v_{\mathrm{CoM},d})\in\Sigma_{d},\ \zeta(p_{\mathrm{CoM},d},v_{\mathrm{CoM},d})\in \mathbb{Z}_{d}\}\), where \(d\in\{x,y\}\) indicates sagittal and lateral directions, respectively. \(\Sigma_{d}=[\sigma_{\mathrm{nom},d}-\delta\sigma_{d},\sigma_{\mathrm{nom},d} +\delta\sigma_{d}]\) and \(\mathbb{Z}_{d}=[\zeta_{\mathrm{nom},d}-\delta\zeta_{d},\zeta_{\mathrm{nom},d} +\delta\zeta_{d}]\) are the ranges of the manifold values for \(\sigma\) and \(\zeta\), where \(\delta\sigma_{d},\delta\zeta_{d}\) are robustness margins._
The sagittal and lateral Riemannian regions in the phase space are illustrated in Fig. 2(b) as shaded areas. The bounds of these Riemannian regions are curved in the phase space because they obey the LIPM locomotion dynamics. Notably, while two Riemannian regions exist in the lateral phase space, only one is active at any given time, corresponding with the stance leg labeled in Fig. 2(b).
**Definition II.3** (Riemannian robustness).: _The Riemannian robustness is the minimum signed distance of an actual keyframe CoM state \(\mathbf{x}\) to all the bounds of the Riemannian regions. Namely, \(\rho_{\mathrm{riem}}:=\min_{l=1}^{8}(r_{l}(\mathbf{x}))\), where \(r_{l}(\mathbf{x})\) is the signed distance to the \(l^{\mathrm{th}}\) bound of the Riemannian regions, as illustrated in Fig. 2(b). We have a total of \(8\) bounds as the sagittal and lateral Riemannian regions each have \(4\) bounds._
Riemannian robustness represents the locomotion robustness in the form of Riemannian regions. Any keyframe inside the Riemannian region has a positive robustness value, which indicates a stable walking step. In the next section, our goal is to leverage Riemannian robustness as an objective function and use STL-based optimization to plan robust trajectories for locomotion recovery.
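The snippet below evaluates this quantity once the keyframe has been expressed in manifold coordinates \((\sigma,\zeta)\); the analytic map \((p,v)\mapsto(\sigma,\zeta)\) from [14] is left abstract, and all nominal values and margins are placeholders.

```python
# Def. II.3: Riemannian robustness = smallest signed distance to the 8 bounds
# of the sagittal and lateral Riemannian regions, with the keyframe already
# given in manifold coordinates (sigma, zeta).  Numbers below are placeholders.
def riemannian_robustness(keyframe, nominal, margins):
    """keyframe/nominal/margins: dicts d -> (sigma, zeta), d in {'x', 'y'}."""
    dists = []
    for d in ("x", "y"):
        for k in range(2):                        # k=0: sigma, k=1: zeta
            lo = nominal[d][k] - margins[d][k]
            hi = nominal[d][k] + margins[d][k]
            dists += [keyframe[d][k] - lo, hi - keyframe[d][k]]
    return min(dists)                             # > 0 inside both regions

nominal = {"x": (0.30, 0.05), "y": (0.10, 0.02)}
margins = {"x": (0.10, 0.04), "y": (0.06, 0.03)}
print(riemannian_robustness({"x": (0.33, 0.06), "y": (0.08, 0.02)},
                            nominal, margins))    # positive => stable step
```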
## III Signal Temporal Logic and Task Specification for Locomotion
Signal temporal logic (STL) [15] uses logical symbols of negation (-), conjunction (\(\wedge\)), and disjunction (\(\vee\)), as well as temporal operators such as eventually (\(\Diamond\)), always (\(\square\)), and until (\(\mathcal{U}\)) to construct specifications. A specification is defined with the following syntax:
\[\begin{split}\varphi&\coloneqq\pi\mid\neg\varphi\mid \varphi_{1}\wedge\varphi_{2}\mid\varphi_{1}\vee\varphi_{2}\mid\\ &\Diamond_{[t_{1},t_{2}]}\varphi\mid\Box_{[t_{1},t_{2}]}\varphi \mid\varphi_{1}\ \mathcal{U}_{[t_{1},t_{2}]}\varphi_{2}\end{split} \tag{3}\]
where \(\varphi\), \(\varphi_{1}\), and \(\varphi_{2}\) are STL specifications. \(\pi:=(\mu^{\pi}(\mathbf{y})-c\geq 0)\) is a boolean predicate, where \(\mu^{\pi}:\mathbb{R}^{p}\rightarrow\mathbb{R}\) is a vector-valued function, \(c\in\mathbb{R}\), and the signal \(\mathbf{y}(t):\mathbb{R}_{+}\rightarrow\mathbb{R}^{p}\) is a \(p\)-dimensional vector at time \(t\). For a dynamical system, the signal \(\mathbf{y}(t)\) is the system output (in our study, \(\mathbf{y}=[\mathbf{\bar{x}};\mathbf{\bar{u}}]\in\mathbb{R}^{12}\)). The time bounds of an STL formula are denoted with \(t_{1}\) and \(t_{2}\), where \(0\leq t_{1}\leq t_{2}\leq t_{\mathrm{end}}\) and \(t_{\mathrm{end}}\) is the end of a planning horizon. The validity of an STL specification is inductively defined using the rules in Table I.
STL provides the capability of quantifying _robustness degree_[16][17]. A positive robustness degree indicates specification satisfaction, and its magnitude represents the resilience to disturbances without violating this specification. When incorporated into trajectory optimization as a cost, the robustness degree allows for a minimally specification-violating trajectory if the task specification cannot be satisfied strictly [18]. Table II shows the semantics of the robustness degree.
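To make the quantitative semantics concrete, the following snippet evaluates robustness degrees for a sampled signal with the usual min/max rules; the predicate, time window and signal are illustrative, not part of the locomotion specification defined below.

```python
import numpy as np

# Robustness degree on a sampled signal y[0..T] using the standard min/max
# semantics (Table II).  Predicates have the form mu(y) >= 0.
def rho_pred(mu, y, t):                 # rho(pi, t) = mu(y(t))
    return mu(y[t])

def rho_and(*rhos):                     # conjunction -> min
    return min(rhos)

def rho_always(mu, y, t1, t2):          # always_[t1,t2] -> min over the window
    return min(mu(y[t]) for t in range(t1, t2 + 1))

def rho_eventually(mu, y, t1, t2):      # eventually_[t1,t2] -> max over window
    return max(mu(y[t]) for t in range(t1, t2 + 1))

# Toy 1-D signal and predicate "|y| <= 0.28".
y = 0.3 * np.sin(np.linspace(0.0, np.pi, 30))
inside_band = lambda v: 0.28 - abs(v)
print(rho_eventually(inside_band, y, 10, 20),   # > 0: satisfied at some instant
      rho_always(inside_band, y, 10, 20))       # < 0: violated, and by how much
```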
The rest of this section introduces the locomotion specification \(\varphi_{\mathrm{loco}}\), designed to guarantee stable walking trajectories. We interpret locomotion stability as a _liveness_ property in the sense that a keyframe with a positive Riemannian robustness will _eventually_ occur in the planning horizon.
_Keyframe specification_\(\varphi_{\mathrm{keyframe}}\): To enforce properties on a keyframe, we first describe it using an STL formula \(\varphi_{\mathrm{keyframe}}\), checking whether or not a signal \(\mathbf{y}\) is a keyframe. According to Def. II.1, the keyframe occurs when the CoM is over the foot contact in the sagittal direction. Illustrated in Fig. 2(a), this definition is formally specified as \(\varphi_{\mathrm{keyframe}}:=(\mu^{\pi}_{\mathrm{CoM,}x}(\mathbf{y})=0)\), where the predicate denotes the sagittal CoM position \(\mu^{\pi}_{\mathrm{CoM,}x}(\mathbf{y})=p_{\mathrm{CoM,}x}\).
Fig. 2: Illustration of the locomotion specifications. (a) The highlighted state in the middle is the keyframe of a walking step. (b) The grey areas are the Riemannian regions in the sagittal and lateral phase spaces. The signed distances to the bounds of the Riemannian regions are indicated by the arrows. (c) Cassie’s foot is specified to step inside the lateral bounds.
_Riemannian robustness_\(\varphi_{\rm{riem}}\): A stable walking step has a keyframe with positive Riemannian robustness; i.e., the keyframe resides in the Riemannian region, as defined in Def. II.3. As shown in Fig. 2(b), we encode the Riemannian robustness specification \(\varphi_{\rm{riem}}\) such that it is True when a CoM state \(\mathbf{x}\) of a signal is inside the Riemannian region: \(\varphi_{\rm{riem}}:=\bigwedge_{l=1}^{8}(r_{l}(\mathbf{x})\geq 0)\), where \(r_{l}(\mathbf{x})\) is the signed distance from \(\mathbf{x}\) to the \(l^{\rm{th}}\) bound of the Riemannian region in the Riemannian space.
_Locomotion stability_\(\varphi_{\rm{stable}}\): To encode this property using STL, we specify that the keyframe of the last walking step falls inside the corresponding Riemannian region. This stability property is encoded as \(\varphi_{\rm{stable}}:=\Diamond_{\{T_{\rm{contact}}^{N+1},T_{\rm{contact}}^{N+1 }\}}(\varphi_{\rm{keyframe}}\wedge\varphi_{\rm{riem}})\), where \(T_{\rm{contact}}^{N}\) and \(T_{\rm{contact}}^{N+1}\) are the \(N^{\rm{th}}\) and \(N+1^{\rm{th}}\) contact times and represent the time bounds of the last walking step in the planning horizon.
_Swing foot bound_\(\varphi_{\rm{foot}}\): For locomotion in a narrow space (e.g., a treadmill, as shown in Fig. 2(c)), we use a _safety_ specification \(\Box\varphi_{\rm{foot}}\) to ensure the foothold lands inside of the treadmill's edges. The operator \(\Box\) without a time bound means the specification should hold for the entire planning horizon. We define \(\varphi_{\rm{foot}}:=(\mu_{\rm{left}}^{\pi}(\mathbf{y})\geq 0)\wedge(\mu_{\rm{right}}^ {\pi}(\mathbf{y})\geq 0)\), where \(\mu_{\rm{left}}^{\pi}=-p_{\rm{swing},y}+e_{\rm{left}}\) and \(\mu_{\rm{right}}^{\pi}=p_{\rm{swing},y}-e_{\rm{right}}\) are the predicates for limiting the lateral foot location against the left edge \(e_{\rm{left}}\) and the right edge \(e_{\rm{right}}\) of the treadmill.
_Overall locomotion specification_\(\varphi_{\rm{loco}}\): The compounded locomotion specification is \(\varphi_{\rm{loco}}=\varphi_{\rm{stable}}\wedge(\Box\varphi_{\rm{foot}})\). Satisfying the specification \(\varphi_{\rm{loco}}\) is equivalent to having a positive robustness degree: \((\mathbf{y},t)\models\varphi_{\rm{loco}}\Leftrightarrow\rho^{\varphi_{\rm{loco}} }(\mathbf{y},t)\geq 0\). In order to maximize the locomotion robustness, we use the robustness degree \(\rho^{\varphi_{\rm{loco}}}\) as an objective function in the trajectory optimization in the following section.
## IV Model Predictive Control for Push Recovery
### _Optimization Formulation_
We design a model predictive controller (MPC) to solve a sequence of optimal states and controls (i.e., signals) that simultaneously satisfy specification \(\varphi_{\rm{loco}}\), system dynamics, and kinematic constraints within an \(N\)-step horizon.
The MPC functions as the primary motion planner of the framework and operates in both normal and perturbed locomotion conditions. Our MPC is formulated as the following nonlinear program:
\[\min_{\mathbf{X},\mathbf{U},\mathbf{T}} w\mathcal{L}(\mathbf{U})-\tilde{\rho}^{\varphi_{\rm{loco}}}(\mathbf{X}, \mathbf{U})\] (4) s.t. \[\mathbf{\bar{x}}_{i+1}^{j}=f(\mathbf{\bar{x}}_{i}^{j},\mathbf{\bar{u}}_{i}^{ j},T^{j}),\qquad i\in\mathbb{H}\setminus\mathbb{S}, j\in\mathbb{J} \tag{5}\] \[\mathbf{\bar{x}}^{+,j+1}=\bar{\Delta}_{j\to j+1}(\mathbf{\bar{x}}^{-,j}), \qquad j\in\mathbb{J}\] (6) \[g_{\rm{collision}}(\mathbf{\bar{x}}_{i})\geq\epsilon_{d}, i\in\mathbb{H}\] (7) \[g_{\rm{duration}}(T^{j})\geq 0, j\in\mathbb{J}\] (8) \[h_{\rm{initial}}(\mathbf{\bar{x}}_{0})=0,h_{\rm{transition}}(\mathbf{\bar{x }}_{i})=0, i\in\mathbb{S} \tag{9}\]
where \(\mathbb{H}\) is a set of indices that includes all time steps in the horizon. We design \(\mathbb{H}\) to span from the acquisition of the latest measured states till the end of the next \(N\) walking steps, with a total of \(M\) time steps. Fig. 3 illustrates a horizon with \(N=2\). \(\mathbb{S}\) is the set of indices containing the time steps of all contact switch events, \(\mathbb{S}\subset\mathbb{H}\). \(\mathbb{J}=\{0,\ldots,N\}\) is the set of walking step indices. The decision variables include \(\mathbf{X}=\{\mathbf{\bar{x}}_{1},\ldots,\mathbf{\bar{x}}_{M}\}\), \(\mathbf{U}=\{\mathbf{\bar{u}}_{1},\ldots,\mathbf{\bar{u}}_{M}\}\), and \(\mathbf{T}=\{T^{0},\ldots,T^{N}\}\). \(\mathbf{T}\) is a vector defining the individual step durations for all walking steps.
\(\mathcal{L}(\mathbf{U})=\sum_{i=1}^{M}\|\mathbf{\bar{u}}_{i}\|^{2}\) is a cost function penalizing the control with a weight coefficient \(w\). The robustness degree \(\tilde{\rho}^{\varphi_{\rm{loco}}}(\mathbf{X},\mathbf{U})\) represents the degree of satisfaction of the signal \((\mathbf{X},\mathbf{U})\) with respect to the locomotion specification \(\varphi_{\rm{loco}}\). \(\tilde{\rho}^{\varphi_{\rm{loco}}}\) is a smooth approximation of \(\rho^{\varphi_{\rm{loco}}}\) using smooth operators [19]. The exact, non-smooth version \(\rho^{\varphi_{\rm{loco}}}\) has discontinuous gradients, which can cause the optimization problem to be ill-conditioned. Maximizing \(\tilde{\rho}^{\varphi_{\rm{loco}}}(\mathbf{X},\mathbf{U})\) encourages the keyframe towards the center of the Riemannian region, as discussed in Sec. III.
To satisfy the LIPM dynamics (1) while adapting step durations \(\mathbf{T}\), we use a second-order Taylor expansion to derive the approximated discrete dynamics (5). (6) defines the reset map from the foot-ground contact switch. (7) represents a set of self-collision avoidance constraints, which ensures a collision-free swing-foot trajectory. The threshold \(\epsilon_{d}\) is the minimum allowable distance for collision avoidance. The \(g_{\rm{collision}}\) is a set of multilayer perceptrons (MLPs) learned from leg configuration data, as detailed in Sec. IV-B. (8) clamps step durations \(\mathbf{T}\) within a feasible range. By allowing variations in step durations, we enhance the perturbation recovery capability of the bipedal system [20]. (9) are the equality constraints of the MPC: \(h_{\rm{initial}}\) denotes the initial state constraint; \(h_{\rm{transition}}\) is the guard function posing kinematic constraints between the swing foot height and the terrain height, \(p_{\rm{swing},z}=h_{\rm{terrain}}\), for walking step transitions at contact-switching indices in \(\mathbb{S}\).
Upon the successful completion of an MPC optimization, the solution is immediately sent to the low-level passivity-based controller [21] for tracking and execution. The MPC then reinitializes the same problem based on the latest state measurements.
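A heavily simplified transcription of this program is sketched below: a single walking step with a fixed duration, planar explicit-Euler dynamics, no reset map, no collision constraints and no robustness term, solved with CasADi/Ipopt as an arbitrary toolchain choice. It only illustrates how the decision variables and constraints of (4)-(9) fit together, not the authors' implementation.

```python
import casadi as ca
import numpy as np

g, z0 = 9.81, 0.9                 # gravity, constant CoM height (illustrative)
w2 = g / z0                       # omega^2 in eq. (1)
K, dt = 10, 0.04                  # knot points and fixed time step

opti = ca.Opti()
X = opti.variable(6, K + 1)       # [p_com(2); v_com(2); p_swing(2)] per knot
U = opti.variable(2, K)           # swing-foot velocity input

for i in range(K):
    p_com, v_com = X[0:2, i], X[2:4, i]
    xdot = ca.vertcat(v_com, w2 * p_com, U[:, i])        # LIP + swing kinematics
    opti.subject_to(X[:, i + 1] == X[:, i] + dt * xdot)  # explicit Euler

opti.subject_to(X[:, 0] == np.array([0.0, 0.05, 0.3, 0.0, -0.2, 0.2]))
opti.subject_to(X[4:6, K] == ca.vertcat(0.25, -0.2))     # desired foothold
opti.minimize(ca.sumsqr(U))       # control cost; a robustness term would be added here
opti.solver('ipopt')
sol = opti.solve()
print(sol.value(X[4:6, K]))
```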
### _Data-Driven Self-Collision Avoidance Constraints_
We design a set of MLPs to approximate the mapping from reduced-order linear inverted pendulum model (LIPM) states to the distances between geometry pairs that pose critical
Fig. 3: The planning horizon starts from the current measured state (pink). An example of \(N=2\) walking steps and \(8\) knot points per walking step is illustrated for simplicity (our actual implementation has \(10\) knot points).
collision risks. According to Cassie's kinematic configuration depicted in Fig. 4(a), these pairs include left shin to right shin (LSRS), left shin to right tarsus (LSRT), left shin to right Achilles rod (LSRA), left tarsus to right shin (LTRS), left tarsus to right tarsus (LTRT), and left Achilles rod to right shin (LARS). A total of \(6\) MLPs are constructed, each approximating the distance between one geometry pair. The MLPs are then encoded as constraints in the MPC to ensure collision-free trajectories.
Each MLP consists of \(2\) hidden layers of \(24\) neurons and is trained on a dataset with \(10^{6}\) entries obtained through an extensive exploration of leg configurations. The MLPs achieved an accurate prediction performance with an average absolute error of \(0.002\) m, and an impressive evaluation speed of over \(1000\) kHz, compared to \(1\) kHz using full-body kinematics-based approaches for collision checking.
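As a rough illustration of this regression setup (not the authors' training pipeline), such a distance predictor can be fitted with an off-the-shelf MLP; the "ground-truth" distance below is a synthetic stand-in for the signed distance of one geometry pair computed from the full kinematics.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of fitting one collision-distance MLP (2 hidden layers of 24 units).
rng = np.random.default_rng(0)
X = rng.uniform(-0.6, 0.6, size=(20_000, 6))   # [p_com(3); p_swing(3)] samples

def synthetic_pair_distance(x):
    # Placeholder geometry: distance shrinks as the swing foot nears the stance leg.
    return np.linalg.norm(x[:, 3:5], axis=1) - 0.08 * np.abs(x[:, 5])

y = synthetic_pair_distance(X)
model = MLPRegressor(hidden_layer_sizes=(24, 24), max_iter=200, tol=1e-4)
model.fit(X, y)
err = np.abs(model.predict(X[:2000]) - y[:2000])
print("mean abs error [m]:", err.mean())
```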
We illustrate the effectiveness of the MLPs through kinematic analysis of the collision-free range of motion of Cassie's swing leg during crossed-leg maneuvers. Specifically, we consider a representative crossed-leg scenario where Cassie's left foot is designated as the stance foot and affixed directly beneath its pelvis. We move Cassie's right leg within the \(xy\) plane at the same height as the stance foot while recording the minimum value among all \(6\) MLP-approximated distances.
The result is plotted as a heat map in Fig. 4(b), where the coordinate indicates the location of the swing foot with respect to the pelvis. As expected, the plot reveals a trend of decreasing distance as the swing foot approaches the stance foot. A contour line drawn at \(\epsilon_{d}=0.03\) m indicates the MLP-enforced boundary between collision-free and collision-prone regions for foot placement. The collision-prone region to the left of the plane exhibits a cluster of red zones, each indicating a different active collision pair.
## V Results
### _Self-Collision Avoidance during Leg Crossing_
We demonstrate the ability of the signal temporal logic-based model predictive controller (STL-MPC) to avoid leg collisions in a critical push recovery setting, where a perturbation forces the robot to execute a crossed-leg maneuver.
The MPC with collision constraints generates a trajectory as shown in Fig. 5(a), where the swing leg adeptly maneuvers around the stance leg and lands at a safe crossed-leg recovery point. Similarly, the robot extricates itself from the crossed-leg state in the subsequent step, following a curved trajectory that actively avoids self-collisions. An overhead view comparing the perturbed and unperturbed trajectories is shown in Fig. 5(c). Fig. 5(b) shows that the multilayer perceptron (MLP)-approximated collision distances are accurate and that the planned trajectory is safe against the threshold \(\epsilon_{d}=0.03\).
### _Comprehensive, Omnidirectional Perturbation Recovery_
We examine the robustness of the STL-MPC framework through an ensemble of push-recovery tests conducted in simulation, where horizontal impulses are systematically applied to Cassie's pelvis. The impulses are exerted for a fixed duration of \(0.1\) s but vary in magnitude, direction, and timing. Specifically, impulses have: \(9\) magnitudes evenly distributed between \(80\) N and \(400\) N; \(12\) directions evenly distributed between \(0^{\circ}\) and \(330^{\circ}\); and \(4\) locomotion phases at a percentage \(s\) through a walking step, where \(s=0\%\), \(25\%\), \(50\%\), \(75\%\). Collectively, this experimental design encompasses a total of 432 distinct trials. For a baseline comparison, the same perturbation procedure is applied to an angular-momentum-based reactive controller (ALIP controller) [2].
In Fig. 6, we compare the maximum impulse the STL-MPC can withstand to that of the baseline ALIP controller. The STL-MPC demonstrates superior perturbation recovery performance across the vast majority of directions and phases, as reflected by the blue region encompassing the red region. The improvement is particularly evident for directions between \(30^{\circ}\) and \(150^{\circ}\), wherein crossed-leg
Fig. 4: (a) The robot kinematic anatomy for collision pair definitions. (b) The MLP prediction of the minimum distance between Cassie’s two legs with the left foot affixed to \((0,0)\) and the right foot moving in the \(xy\) plane.
Fig. 5: (a) Snapshots of Cassie performing a crossed-leg maneuver for push recovery. (b) The MLP-approximated collision distances are accurate compared with the ground truth, and the planned leg trajectory is safe against the threshold \(\epsilon_{d}=0.03\). (c) An overhead view of the CoM trajectory and foot placements when a lateral perturbation induces a crossed-leg maneuver.
maneuvers are induced for recovery, and active self-collision avoidance plays a critical role. This highlights the STL-MPC's capability to generate safe crossed-leg behaviors, thereby significantly enhancing its robustness against lateral perturbations. On the other hand, for perturbations between \(210^{\circ}\) and \(330^{\circ}\), both frameworks exhibit comparable performance, generating wide side-steps for recovery. Note that we use \(N=2\) walking steps as the MPC horizon, as existing studies [22, 23, 24] indicate that a two-step motion is sufficient for recovery to a periodic orbit.
Additionally, we observe the STL-MPC struggles most when the perturbation happens close to the end of a walking step at \(s=75\%\), as indicated by the smaller region than the baseline in the bottom right of Fig. 6. This is due to the reduced flexibility to adjust the contact location and time within the short remaining duration of the perturbed step.
### _Stepping Stone Maneuvering_
To demonstrate the STL-MPC's ability to handle a broad set of task specifications, we study locomotion in a stepping-stone scenario as shown in Fig. 7. To restrict the foot location to the stepping stones, we augment the locomotion specification \(\varphi_{\mathrm{loco}}\) with an additional specification \(\varphi_{\mathrm{stones}}\) that encodes stepping stone locations. For each rectangular stone, the presence of a stance foot \(\mathbf{p}_{\mathrm{stance}}\) inside its four edges is specified as \(\varphi_{\mathrm{stone}}^{s}=\bigwedge_{i=1}^{4}(\mu_{i}^{s}(\mathbf{p}_{ \mathrm{stance}})\geq 0)\), where \(s\in\{1,\ldots,S\}\), \(S\) is the total number of stepping stones, and \(\mu_{i}^{s}\) is the signed distance from the stance foot to the \(i^{\mathrm{th}}\) edge of the \(s^{\mathrm{th}}\) stone. Then the combined foot location specification for \(N\) walking steps is:
\[\varphi_{\mathrm{stones}}=\bigwedge_{j=1}^{N}(\square_{[T^{j},T^{j}]}\bigvee_{s=1}^{S}\varphi_{\mathrm{stone}}^{s})\]
The augmented specification is the compound of the original locomotion specification \(\varphi_{\mathrm{loco}}\) and the newly-added stepping stone specification: \(\varphi_{\mathrm{loco}}^{\prime}=\varphi_{\mathrm{loco}}\land\varphi_{ \mathrm{stones}}\).
We test STL-MPC using \(\varphi_{\mathrm{loco}}^{\prime}\) in two scenarios. The first scenario has stepping stones generated at ground level with random offsets and yaw rotations, as shown in Fig. 7(a). The STL-MPC advances Cassie forward successfully. In the second scenario, the STL-MPC demonstrates the ability to cross legs in response to a lateral perturbation in Fig. 7(b).
### _Computation Speed Comparison between Smooth Encoding Method and Mixed-Integer Program_
To encode the robustness degree (as discussed in Sec. III) of STL specifications into our gradient-based trajectory optimization (TO) formulation, we adopt a smooth-operator method [25] that allows a smooth gradient for efficient computation. Specifically, we replace the non-smooth \(\min\) and \(\max\) operators in the robustness degree (as defined in Table II) with their smooth counterparts \(\widetilde{\min}\) and \(\widetilde{\max}\), as detailed in [25].
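One standard smooth surrogate of this kind is the log-sum-exp soft-max/soft-min pair sketched below; whether [25] uses exactly this form is not restated here, the point is only that the surrogate is differentiable and approaches the exact operators as the sharpness grows.

```python
import numpy as np

# Log-sum-exp soft max/min with sharpness k: differentiable, and converging to
# the exact max/min as k -> infinity (an illustrative choice of smooth operator).
def smooth_max(v, k=20.0):
    v = np.asarray(v, dtype=float)
    return v.max() + np.log(np.sum(np.exp(k * (v - v.max())))) / k

def smooth_min(v, k=20.0):
    return -smooth_max(-np.asarray(v, dtype=float), k)

vals = [0.12, -0.03, 0.40]
print(min(vals), smooth_min(vals), max(vals), smooth_max(vals))
```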
We benchmark the solving speed of the smooth method with the traditional mixed-integer programming (MIP) method [8]. The smooth method demonstrates a faster and more consistent solving speed, and its time consumption is nearer to linear with respect to the walking steps \(N\).
## VI Conclusion
This study presents a model predictive controller using signal temporal logic (STL) for bipedal locomotion push recovery. Our main contribution is the design of STL specifications that quantify the locomotion robustness and guarantee stable walking. Our framework increased Cassie's impulse tolerance by \(81\)% in critical crossed-leg scenarios. Further research will be focused on hardware verification and extensions to rough, dynamic terrain.
Fig. 8: A comparison of the traditional MIP method and our smooth method shows the planning time to solve trajectories for \(N\)-walking-step horizons. The smooth method is faster and more consistent over all horizons.
Fig. 6: The maximum force exerted on the pelvis from which the robot can safely recover within two steps in all \(12\) directions. The perturbations happen at different phases \(s\) during a left leg stance. Values on the left half result in crossed-leg maneuvers, and values on the right half correspond to wide-step recoveries.
Fig. 7: Illustration of maneuvering over two stepping-stone scenarios. (a) STL-MPC solves dynamically feasible trajectories that satisfy an additional foot-on-stones specification. (b) STL-MPC successfully plans crossed-leg maneuvers to recover from perturbation. |
2309.03361 | Linear Optimization by Conical Projection | This article focuses on numerical efficiency of projection algorithms for
solving linear optimization problems. The theoretical foundation for this
approach is provided by the basic result that bounded finite dimensional linear
optimization problem can be solved by single projection operation on the
feasible polyhedron. The further simplification transforms this problem into
projection of a special point onto a convex polyhedral cone generated basically
by inequalities of the original linear optimization problem. | Evgeni Nurminski, Roman Tarasov | 2023-09-06T21:08:23Z | http://arxiv.org/abs/2309.03361v1 | # Linear Optimization by Conical Projection+
###### Abstract
This article focuses on numerical efficiency of projection algorithms for solving linear optimization problems. The theoretical foundation for this approach is provided by the basic result that bounded finite dimensional linear optimization problem can be solved by single projection operation on the feasible polyhedron. The further simplification transforms this problem into projection of a special point onto a convex polyhedral cone generated basically by inequalities of the original linear optimization problem.
Keywords: linear optimization, orthogonal projection, polyhedral cone
## Introduction
Linear optimization remains a workhorse of many practical applications, and modern general-purpose, industrial-quality simplex-based algorithms and interior point methods have demonstrated remarkable success in the area. Nevertheless, there is an intellectual challenge in developing new approaches which may find their applications in particular situations. In this article, we consider a projection-based algorithm for solving linear optimization problems in the standard form
\[\min_{Ax\leq b}cx\ =\ cx^{\star}=\ \max_{uA=c,\ u\leq 0}ub\ =\ u^{\star}b \tag{1}\]
which is written here with its dual.
Here, in more or less standard notation, \(x\), \(c\) and \(x^{\star}\) are vectors of the finite (\(n\)) dimensional Euclidean space \(E\) with the inner product \(xy\) and norm \(\|x\|^{2}=xx\). The right-hand side \(b\) of the constraints in (1) belongs to the \(m\)-dimensional space \(E^{\prime}\), and \(A\) is a linear operator (\(m\times n\) matrix) from \(E\) to \(E^{\prime}\) -- \(A:E\to E^{\prime}\). The dual part of this problem has an \(m\)-dimensional vector \(u\) of dual variables, and its optimal solution is denoted as \(u^{\star}\).
The solution of (1) can be reduced to solving a primal-dual system of linear inequalities or, equivalently, a convex and even polyhedral feasibility problem (PFP):
\[cx\geq bu,\ Ax\leq b,\ uA=c,\ u\leq 0. \tag{2}\]
The PFP and the Convex Feasibility Problem (CFP) for general convex sets have been in the gun sights of many mathematicians since the middle of the 20th century, and projection methods have been among the most popular for solving them since the pioneering works [1], [2], [3] (see the extensive review of H. Bauschke and J.M. Borwein [4]). However, in the area of linear optimization, projection methods have not been very successful in a practical sense, mainly due to slow convergence and the computational difficulty of solving multiple high-dimensional projection problems for polyhedrons of the general type [11].
We hope nevertheless that our experiments inspire new interest in the projection agenda and algorithms. These hopes are based on certain intrinsic properties of the projection operation, which can operate on reduced basis sets and offers additional decomposition possibilities, see f.i. [12].
## 1 Notations and Preliminaries
As defined in the Introduction, let \(E\) be a finite-dimensional vector space of the primal variables with the standard inner product \(xy\) and the norm \(\|x\|^{2}=xx\). This space is then self-conjugate, with the duality relation induced by the inner product. The dimensionality of this space, if needed, is denoted as \(\dim(E)\), and the space of dimensionality \(n\), when necessary, is denoted as \(E^{n}\). The non-negative part of a space \(E\) will be denoted as \(E_{+}\).
Among other special vectors and sets we mention the null vector \(\mathbf{0}\), the vector of ones \(\mathbf{1}=(1,1,\ldots,1)\), and the standard simplex \(\Delta=E_{+}\cap\{x:\mathbf{1}x=1\}\). The linear envelope, convex hull and conical hull of a set \(X\) are denoted as \(\operatorname{lin}(X)\), \(\mathrm{co}(X)\) and \(\mathrm{Co}(X)\), respectively.
We define linear operators acting from \(E\) into \(E^{\prime}\) with \(\dim(E^{\prime})=m\) as collections of vectors \(\mathcal{A}=\{a^{1},a^{2},\ldots,a^{m}\}\) with \(a^{i}\in E\), which produce the vector \(y=(y_{1},y_{2},\ldots,y_{m})\in E^{m}\) according to the relations \(y_{i}=a^{i}x,\ i=1,2,\ldots,m\). In the classical matrix-vector notation the vectors of \(\mathcal{A}\) form the rows of the matrix \(A\) and \(y=Ax\). At the same time we will consider the row subspace \(E^{\prime}\) as the linear envelope of \(\mathcal{A}\):
\[E^{\prime}=\operatorname{lin}(\mathcal{A})=\{x=\sum_{i=1}^{m}a^{i}z_{i}=A^{T}z,\ z\in E^{m}\}\subset E.\]
The projection operator of a point \(p\) onto a closed convex set \(X\) in \(E\) is defined as
\[p\!\downarrow\!X=\operatorname{argmin}_{x\in X}\|p-x\|,\]
that is, \(\min_{x\in X}\|p-x\|=\|p-p\!\downarrow\!X\|\). For a closed convex \(X\), this operator is well-defined and Lipschitz-continuous with Lipschitz constant less than or equal to \(1\). The point-to-set projection operation is naturally generalized to sets: \(X\!\downarrow\!A=\{z=x\!\downarrow\!A,\ x\in X\}\).
We also notice that this operator is idempotent, \((p\!\downarrow\!X)\!\downarrow\!X=p\!\downarrow\!X\), and linear for projection onto a linear subspace \(L\) of \(E\): \(\alpha p\!\downarrow\!L=\alpha(p\!\downarrow\!L)\) for \(\alpha\in\mathbb{R}\) and \((p+q)\!\downarrow\!L=p\!\downarrow\!L+q\!\downarrow\!L\). Of course, \(p=p\!\downarrow\!L+p\!\downarrow\!L^{\perp}\).
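These properties are easy to verify numerically; a minimal sketch follows (Python/NumPy, with an arbitrary randomly generated subspace \(L=\operatorname{lin}(\mathcal{A})\); all data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3
A = rng.standard_normal((m, n))      # rows a^1,...,a^m span L = lin(A)
p = rng.standard_normal(n)

# Projection onto the row space L, computed through least squares:
# p|L = A^T z with z minimizing ||A^T z - p||.
z, *_ = np.linalg.lstsq(A.T, p, rcond=None)
pL = A.T @ z                          # p projected onto L
pLperp = p - pL                       # component in the orthogonal complement

print(np.allclose(pL + pLperp, p))    # p = p|L + p|L^perp
print(np.allclose(A @ pLperp, 0))     # residual is orthogonal to every a^i

# Idempotence: projecting pL onto L again changes nothing.
z2, *_ = np.linalg.lstsq(A.T, pL, rcond=None)
print(np.allclose(A.T @ z2, pL))
```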
For a closed convex set \(X\) denote as \(\left(X\right)_{z}\) its support function
\[(X)_{z}=\min_{x\in X}xz. \tag{3}\]
In this notation the standard linear optimization problem
\[\min_{Ax\leq b}cx\ =\ \min_{x\in X}cx \tag{4}\]
becomes just \((X)_{c}\).
Basically the same holds for nonlinear problems
\[\min_{x\in X}f(x)\ =\ \min_{\bar{x}\in\bar{X}}\bar{c}\bar{x}=\left(\bar{X}\right)_{\bar{c}} \tag{5}\]
for \(\bar{x}=(x,\xi)\), \(\bar{X}=\{\bar{x}:x\in X,\ f(x)\leq\xi\}\), \(\bar{c}=(\mathbf{0},1)\).
There is a general result which connects support functions with projection [5].
Theorem 3.1: _Let \(X\) be a closed bounded subset of \(E\) and \(c\in E\). Then for any \(x^{0}\)_
\[\left(X\right)_{c}=\lim_{\tau\to\infty}\ c((x^{0}+\tau c)\!\downarrow\!X). \tag{6}\]
For the formal correctness of the application of Theorem 3.1 to the set \(\bar{X}\) it is necessary to ensure boundedness of \(\bar{X}\). This, generally speaking, formal requirement can easily be satisfied by adding an arbitrary upper bound \(\bar{f}\geq\inf_{x\in X}f(x)\) for \(\xi\). Toward this purpose any \(x^{0}\in X\) provides the trivial upper bound \(\bar{f}=f(x^{0})\).
For linear optimization problems (4) where \(X\) is a bounded polyhedron, an exact equivalence can be proved [6]:
Theorem 3.2: _If (4) has a unique solution \(x^{\star}\), then for any \(x^{0}\) there exists \(\theta_{c}>0\) such that_
\[(x^{0}-\theta c)\!\downarrow\!X=x^{\star} \tag{7}\]
_for any \(\theta\geq\theta_{c}\)._
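As a quick illustration of Theorem 3.2, consider the box \(X=\{x:0\leq x\leq 1\}\), for which the projection reduces to componentwise clipping; a minimal sketch (Python/NumPy, with illustrative data and an illustrative value of \(\theta\)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
c = rng.uniform(-5, 5, n)
x0 = rng.standard_normal(n)

# X = [0,1]^n, so projection onto X is componentwise clipping.
theta = 1e6                               # any sufficiently large theta works
x_proj = np.clip(x0 - theta * c, 0.0, 1.0)

# The exact minimizer of c x over the box: x_i = 1 where c_i < 0, else 0.
x_star = (c < 0).astype(float)
print(np.allclose(x_proj, x_star))        # True once theta exceeds theta_c
```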
In more detail, the problem (7) can be written down as:
\[\min_{x\in X}\|x-x^{0}+\theta c\|^{2}=\min_{y\in X_{\theta,c}}\|y\|^{2}, \tag{8}\]
where \(X_{\theta,c}=X-x^{0}+\theta c\) is the original feasible set \(X\), shifted by \(x^{c}=\theta c-x^{0}\).
If the polyhedron \(X\) is described by a system of linear inequalities
\[X=\{x:Ax\leq b\} \tag{9}\]
then
\[X_{\theta,c}=\{x:Ax\leq b^{c}\}, \tag{10}\]
\(b^{c}=b-A(x^{0}-\theta c)\).
The latter problem (8) does not look essentially different; however, it can be transformed into the conical projection problem
\[\min_{\bar{x}\in\mathrm{Co}(\bar{A})}\|\bar{x}-p\|^{2} \tag{11}\]
where \(\bar{x}=(x,\xi)\) is a vector of the extended space \(\bar{E}=E\times\mathbb{R}\), and \(\bar{A}=|A,-b^{c}|\). The rows of this matrix can be considered as vectors of \(\bar{E}\). Then \(\mathrm{Co}(\bar{A})\) is the conical envelope of these vectors, which can be represented as
\[\mathrm{Co}(\bar{A})=\{\bar{x}=\bar{A}^{T}z,z\in E_{+}^{r}\},\]
where \(E_{+}^{r}\) is the non-negative orthant of the corresponding dimensionality.
Finally, the algorithm for the solution of (4) can be represented by Algorithm 1.
```
Data: the data \((A,b,c)\) of the original problem and a scaling constant \(\theta>0\).
Result: the solution \(x^{\star}\) of the linear optimization problem (4).

Step 1. Data preparation for the projection problem:
\[x^{c}=x^{0}-\theta c;\quad b^{c}=b-Ax^{c};\quad\bar{A}=\left[A,-b^{c}\right];\quad\bar{p}=\left[\mathbf{0}_{n},1\right]. \tag{12}\]
Step 2. Solution of the projection problem:
\[\min_{\bar{x}\in\mathrm{Co}(\bar{A})}\|\bar{x}-\bar{p}\|^{2}=\|\bar{p}\!\downarrow\!\mathrm{Co}(\bar{A})-\bar{p}\|^{2}. \tag{13}\]
Step 3. Getting back to (4): representing the solution of problem (13) as \(\bar{p}\!\downarrow\!\mathrm{Co}(\bar{A})=(y^{c},\xi)\), where \(y^{c}\in E\) and \(\xi\in\mathbb{R}\), compute
\[x^{\star}=y^{c}/\xi+\theta c.\]
```
**Algorithm 1** Solving (4) by projection.
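The conical projection in Step 2 is a non-negative least-squares problem, since \(\bar{p}\!\downarrow\!\mathrm{Co}(\bar{A})=\bar{A}^{T}z^{\star}\) with \(z^{\star}=\operatorname{argmin}_{z\geq 0}\|\bar{A}^{T}z-\bar{p}\|\). A minimal sketch of Steps 1-2 (Python with `scipy.optimize.nnls`; the random instance, the choice \(x^{0}=\mathbf{0}\) and the value of \(\theta\) are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
m, n = 20, 5
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + rng.uniform(0.1, 1.0, m)   # keeps X nonempty
c = rng.uniform(-5, 5, n)
x0, theta = np.zeros(n), 1e2                                 # illustrative choices

# Step 1: shifted right-hand side and extended data.
xc = x0 - theta * c
bc = b - A @ xc
A_bar = np.hstack([A, -bc[:, None]])          # rows generate Co(A_bar)
p_bar = np.zeros(n + 1); p_bar[-1] = 1.0      # p_bar = (0,...,0,1)

# Step 2: conical projection as non-negative least squares,
#   min_{z >= 0} || A_bar^T z - p_bar ||.
z, residual = nnls(A_bar.T, p_bar)
proj = A_bar.T @ z                            # = p_bar projected onto Co(A_bar)
print("projection residual:", residual)
```

The recovery of \(x^{\star}\) then follows Step 3.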
## 2 Inside out
The subject of this section is the least-norm problem \(\min_{x\in X}\|x\|^{2}\) in an \(n\)-dimensional euclidean space \(E\) for a bounded closed convex polyhedron \(X\). Here we do not make a great distinction between row and column vectors, which are assumed to be of either type depending on the context. The polyhedron \(X\) is most commonly described as the intersection of half-spaces
\[X=\{x:a^{i}x\leq\beta_{i},i=1,2,\ldots,m\}=\{x:Ax\leq b\} \tag{14}\]
where the vectors \(a^{i},i=1,2,\ldots,m\) of dimensionality \(n\) can be considered as rows of the matrix \(A\), and the \(m\)-vector \(b=(\beta_{1},\beta_{2},\ldots,\beta_{m})\) is the corresponding right-hand side vector. This can be considered as the "outer" description of \(X\), in contrast with the "inner" description
\[X=\operatorname{co}(\hat{x}^{j},\hat{x}^{j}\in\operatorname{Ext}(X),j=1,2, \ldots,J) \tag{15}\]
as the convex hull of the set \(\operatorname{Ext}(X)\) of extreme points of the same set \(X\). The latter is often considered as the "polytope" description. These are equivalent descriptions for this class of polyhedrons/polytopes, but direct conversion between them is complicated, as either of them may be exponentially long even when its counterpart is polynomially long in \(n,m\).
The polyhedron description is more common, so the vast majority of computational algorithms are developed for this description of \(X\). The notable exceptions are possibly game problems with probability simplexes and nondifferentiable optimization algorithms in which subdifferentials are approximated by convex hulls of known subgradients. However, the convex hull-like description has its own computational advantages: for instance, a linear optimization problem over a convex hull has low \(nm\) complexity for the trivial direct algorithm, which can be reduced to logarithmic complexity if parallel computations are allowed. In [7] we considered the transformation of the least-norm problem with the polyhedral description (14) into a close relative of (15) with practically the same data size as (14). The original version of this transformation was rather convoluted, and here we present an alternative derivation which uses basically only standard duality arguments.
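For instance, with the inner description (15) the value \((X)_{c}\) is obtained by a single pass over the generating points; a minimal sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(3)
V = rng.standard_normal((50, 4))   # rows are the points x^j generating X = co(V)
c = rng.standard_normal(4)

# The minimum of the linear function c x over co(V) is attained at a
# generating point, so a single pass of O(n m) work suffices.
values = V @ c
j = int(np.argmin(values))
print(values[j], V[j])             # optimal value and an optimal vertex
```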
To begin with, we expand our basic space \(E\) with one additional variable into \(\bar{E}=E\times\mathbb{R}\) and transform the initial least-norm problem into something which is almost homogeneous:
\[\min_{Ax\leq b}\frac{1}{2}(\|x\|^{2}+1)=\min_{\begin{array}{c}\bar{A}\bar{x} \leq 0\\ \bar{e}\bar{x}=1\end{array}}\frac{1}{2}\|\bar{x}\|^{2} \tag{16}\]
with \(\bar{A}=|A,-b|\), \(\bar{x}=(x,\xi)\), and vector \(\bar{e}=(0,0,\ldots,0,1)\in\bar{E}\). The saddle point reformulation of this problem goes as follows:
\[\min_{\begin{array}{c}\bar{A}\bar{x}\leq 0\\ \bar{e}\bar{x}=1\end{array}}\frac{1}{2}\|\bar{x}\|^{2}=\max_{u\geq 0,\ \theta}\ \min_{\bar{x}}\left\{\frac{1}{2}\|\bar{x}\|^{2}+\theta(1-\bar{e}\bar{x})+u\bar{A}\bar{x}\right\}=\max_{u\geq 0,\ \theta}\left\{\theta+\min_{\bar{x}}\left\{\frac{1}{2}\|\bar{x}\|^{2}+(u\bar{A}-\theta\bar{e})\bar{x}\right\}\right\}=\max_{u\geq 0,\ \theta}\left\{\theta-\frac{1}{2}\|u\bar{A}-\theta\bar{e}\|^{2}\right\}\]
Introducing the cone \(\mathcal{K}=\{z:z=u\bar{A},u\geq 0\}\) we can rewrite the last problem as
\[-\min_{\theta}\left\{\frac{1}{2}\min_{z\in\mathcal{K}}\|z-\theta\bar{e}\|^{2}-\theta\right\}=-\min_{\theta}\left\{\frac{1}{2}\theta^{2}\min_{z\in\mathcal{K}}\|z-\bar{e}\|^{2}-\theta\right\}=-\min_{\theta}\left\{\frac{1}{2}\gamma^{2}\theta^{2}-\theta\right\}=\frac{1}{2\gamma^{2}},\]
where we made use of \(\alpha\mathcal{K}=\mathcal{K}\) for any \(\alpha>0\) and denoted \(\gamma^{2}=\min_{z\in\mathcal{K}}\|z-\bar{e}\|^{2}\). The minimum over \(\theta\) is attained at \(\theta^{\star}=1/\gamma^{2}\). As the solution of \(\min_{z\in\mathcal{K}}\|z-\bar{e}\|^{2}=\|z^{\star}-\bar{e}\|^{2}\) is unique, we obtain \(\bar{x}^{\star}=\theta^{\star}(z^{\star}-\bar{e})\).
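The identity between the optimal value of (16) and \(1/2\gamma^{2}\) can also be checked numerically: \(\gamma\) is the residual of a non-negative least-squares problem, while the primal side can be solved by any generic constrained solver. A minimal sketch (Python/SciPy, with an illustrative random instance):

```python
import numpy as np
from scipy.optimize import nnls, minimize

rng = np.random.default_rng(4)
m, n = 15, 4
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + rng.uniform(0.1, 1.0, m)   # X is nonempty

# gamma^2 = min_{z in K} ||z - e_bar||^2, K generated by the rows of [A, -b].
A_bar = np.hstack([A, -b[:, None]])
e_bar = np.zeros(n + 1); e_bar[-1] = 1.0
_, gamma = nnls(A_bar.T, e_bar)                 # nnls returns the residual norm
dual_value = 1.0 / (2.0 * gamma ** 2)

# Primal value min_{Ax <= b} (||x||^2 + 1)/2, solved here only for comparison.
res = minimize(lambda x: 0.5 * (x @ x + 1.0), np.zeros(n),
               constraints={'type': 'ineq', 'fun': lambda x: b - A @ x})
print(dual_value, res.fun)        # the two values agree up to solver tolerance
```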
## 3 Numerical experiments
For the numerical experiments we implemented the projection algorithm in the OCTAVE open-source MATLAB-like system [9]. For comparison, we used the GLPK linear programming kit, implemented in the C programming language and also built into OCTAVE as an internal function. This function, however, has limited functionality in comparison with the stand-alone GLPK solver [10], but it still allows for a basic comparison of the projection algorithm with a contemporary simplex method.
We considered the following two types of linear optimization problems. The first one consists of linear optimization problems of different dimensions with the following structure
\[\min_{\begin{array}{c}Ax\leq\vec{0}\\ -g\leq x\leq f\end{array}}cx\ =\ \min_{\bar{A}x\leq\bar{b}}cx \tag{17}\]
where \(\bar{A}=[A;\ I;\ -I],\ \bar{b}=[\vec{0};\ f;\ g]\). Here \(I\) is the identity matrix, the elements of the matrix \(A\) and the vectors \(f,g\) are generated randomly, independently and uniformly from the segment \([0,1]\), and the elements of the vector \(c\) are generated from the segment \([-5,5]\). Also, for matrices \(A,B\) of matching dimensions, \([A;B]\) denotes (following the MATLAB/OCTAVE convention) \(A\) stacked on top of \(B\), that is \([A^{T}B^{T}]^{T}\). This form of problem, on one hand, ensures feasibility (\(\vec{0}\) is a feasible point) and boundedness (due to the simple two-sided bounds on the variables), while the intersection of the cone and the box produces a sufficiently demanding feasible polyhedron.
However, this form provides a one-sided advantage to GLPK, as it immediately makes use of the built-in presolver, which exploits sparsity and the specific structure of the box constraints. Of course, our experimental implementation of the projection algorithm is missing such advanced features, so our second type of test problem is a simple modification of (17) which consists in replacing \(x\) with a new variable \(z\), such that \(x=Qz\) with a 100%-dense random unitary matrix \(Q\). Then the test problems become
\[\min_{\bar{A}_{Q}z\leq\bar{b}}c_{Q}z \tag{18}\]
where \(\bar{A}_{Q}=\bar{A}Q,\ c_{Q}=cQ\). After this change the problem constraints become fully dense and the GLPK presolver does not interfere with the optimization.
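A minimal sketch of how such test instances can be generated (Python/NumPy; the dimensions and the construction of the orthogonal factor \(Q\) via a QR decomposition are illustrative choices):

```python
import numpy as np

def make_instance(m, n, rng, dense=False):
    # Problem (17): A, f, g uniform on [0,1], c uniform on [-5,5].
    A = rng.uniform(0.0, 1.0, (m, n))
    f = rng.uniform(0.0, 1.0, n)
    g = rng.uniform(0.0, 1.0, n)
    c = rng.uniform(-5.0, 5.0, n)
    A_bar = np.vstack([A, np.eye(n), -np.eye(n)])
    b_bar = np.concatenate([np.zeros(m), f, g])
    if dense:
        # Problem (18): substitute x = Q z with a random orthogonal (unitary) Q.
        Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
        return A_bar @ Q, b_bar, c @ Q
    return A_bar, b_bar, c

rng = np.random.default_rng(5)
A1, b1, c1 = make_instance(100, 30, rng)               # structured form (17)
A2, b2, c2 = make_instance(100, 30, rng, dense=True)   # fully dense form (18)
```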
Fig. 1 shows that the presolver keeps the GLPK solver noticeably faster, but the projection algorithm demonstrates at least similar dynamics as the problem dimensions increase.
For the dense problems (18), however, the projection algorithm demonstrates some speed-up (Fig. 2), which slightly increases as the problem size grows. It was a pleasant surprise that, despite the very different levels of implementation, the projection algorithm was faster than GLPK.
An important characteristic of the quality of a solution is the relative accuracy which the projection algorithm manages to attain. Fig. 3 demonstrates the general trend in the relative deviation of the objective values obtained by the projection algorithm from the optimal values obtained by GLPK, together with the random oscillations in these deviations. It is worth noticing that we see very little growth in the deviations despite the significant growth in the size of the problems. Secondly, despite the random oscillations, the deviations remain quite small, of the order of hundredths of a percent.
Finally, we provide experimental data which give a plausible explanation of why projection algorithms may compete with traditional simplex-like algorithms. Fig. 4 shows the growth of the number of active generators in the conical projection. Quite naturally, it grows practically linearly with the algorithm iterations, and it is clear that for the major part of the run the basis size remains well below the theoretical limit. The computing time for the update of the projection operator also follows the growth of the basis size (Fig. 5), which implies that for the best part of the run this time is essentially smaller than its maximal value.
Figure 1: Dependence of the running time of the algorithm on the dimensionality: (17).
Figs. 1 and 2 demonstrate a generally polynomial growth of computing time for both GLPK and the projection algorithm as a function of problem size. We provide the additional Fig. 6 to demonstrate more explicitly the difference in computing time between GLPK and the projection algorithm. It can be seen from Fig. 6 that, despite rather large fluctuations in the ratio between the computing times of GLPK and the projection algorithm, the general tendency is in favor of projection when the problems become larger.
Figure 2: Dependence of the running time of the algorithm on the dimensionality: (18).
Figure 4: Analysis of the projection algorithm: Basis size over iterations.
Figure 3: Deviations from the optimums
Figure 5: Analysis of the projection algorithm: The time it takes to recalculate the basis depending on its size.
Figure 6: The ratio of the running times of GLPK to copr on each sample.
#### Acknowledgements
This research was initiated and supported at the Sirius supplementary educational program "Actual methods of information theory and optimization", November 2022.
|